Managers in all businesses are driven to measure key elements of their performance, because we all know that old maxim:

‘If you can’t measure it, you can’t manage it’.

In blogs and white papers you will find a raft of must-have KPIs that will give you the control you crave, but is it really that easy?

This saying on measuring performance is often attributed to the management guru Peter Drucker. In fact, Drucker knew when, and when not, to measure: according to a Drucker Institute blog on ‘Measurement Myopia’, his views were far more nuanced. Although he believed that measuring results is crucial to performance, he also believed that the relationship between managers and their people is just as important.

“Work implies not only that somebody is supposed to do the job, but also accountability, a deadline and, finally, the measurement of results —that is, feedback from results on the work and on the planning process itself,” Drucker wrote in Management: Tasks, Responsibilities, Practices.

But for all that, Drucker also knew that not everything could or should be held to this standard.

“Your first role . . . is the personal one,” Drucker told Bob Buford, a consulting client then running a cable TV business, in 1990. “It is the relationship with people, the development of mutual confidence, the identification of people, the creation of a community. This is something only you can do.” Drucker went on: “It cannot be measured or easily defined. But it is not only a key function. It is one only you can perform.”

Experienced managers will naturally relate to this need for balance. But without neglecting Drucker’s wisdom, I know many managers (including myself) who have dreamed of the possibility of having one Key Performance Indicator (KPI) that could reliably predict how customers would experience their service provision. One simple measure that their teams could use as a focus for their primary mission: to ensure customers remain satisfied, loyal and profitable. The limitation of most measures of customer satisfaction and loyalty is that they look in the rear-view mirror: they ask questions after the fact. Far better to create a leading indicator, but how?

To get a better feel for customer satisfaction, many managers spend time in the field talking to customers and their teams. Some will create rafts of measures to monitor and improve their operations, their logic being that a great performing team is more likely to have loyal customers. However, there is a temptation to measure everything, which can confuse team members. To overcome this, managers bring focus by introducing KPIs and dashboards that make it easier to see the issues and take action. More sophisticated businesses look towards the Balanced Scorecard methodology, in which a more holistic view is taken of the operation. They not only examine financial performance and process efficiency, but also consider organisational capacity and customers in relation to their strategic goals. This balanced approach is pretty sensible, but can be too ‘management speak’ for the people at the sharp end of the business. The key challenge is to create measures that drive the right behaviours and culture, not ones that people find ways of working around. So it is not quite as simple as many make out. From my own experience, I always felt it would be extremely beneficial to develop a simple measure that was:

  1. Easily understood by everyone.
  2. Able to give us a forward view that, for example, a particular piece of equipment was likely to lead to severe customer irritation and dissatisfaction.

Our business was injection moulding systems, and we knew that something was going wrong at a customer when the spare parts spend for a particular machine increased, fault reporting was high and the same problem recurred over a 12-month period. We created a ratio of these three indicators and found that, at a machine level, we could start to rank problem systems and identify those that were likely to turn into an irate customer. Our thinking was that not only could this be used by the local teams to bring focus to a specific customer issue, it also gave an indication of how well teams were managing their installed base.
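As a rough illustration of how such a ratio might work (the data, the fleet-maximum normalisation and the equal weighting below are my own assumptions, not the original formula), the three indicators can be combined into a single per-machine score and the fleet ranked by it:

```python
# Illustrative sketch: combine three indicators into one per-machine
# risk score by normalising each against the fleet maximum, then rank.
from dataclasses import dataclass

@dataclass
class MachineStats:
    machine_id: str
    parts_spend: float   # spare parts spend over the last 12 months
    fault_reports: int   # number of fault reports in the period
    repeat_faults: int   # same problem recurring within 12 months

def risk_score(m: MachineStats, fleet: list) -> float:
    """Average of the three indicators, each scaled to 0..1 across the fleet."""
    def norm(value, values):
        peak = max(values)
        return value / peak if peak else 0.0
    return (
        norm(m.parts_spend, [f.parts_spend for f in fleet])
        + norm(m.fault_reports, [f.fault_reports for f in fleet])
        + norm(m.repeat_faults, [f.repeat_faults for f in fleet])
    ) / 3

# Hypothetical fleet data for three machines
fleet = [
    MachineStats("M-01", parts_spend=1200.0, fault_reports=3, repeat_faults=0),
    MachineStats("M-02", parts_spend=5400.0, fault_reports=11, repeat_faults=4),
    MachineStats("M-03", parts_spend=800.0, fault_reports=1, repeat_faults=0),
]
ranked = sorted(fleet, key=lambda m: risk_score(m, fleet), reverse=True)
for m in ranked:
    print(f"{m.machine_id}: {risk_score(m, fleet):.2f}")
```

Here M-02, with high spend, frequent faults and recurring problems, rises to the top of the list before the customer has been asked anything.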

Recently I heard Mark Noble, Customer Support Director at Inca, speak at a Service Community meeting in the UK. Inca design and manufacture digital printers, and gave themselves the goal of improving installed equipment productivity, and hence the satisfaction of their customer base. In their technology, the performance of the print head, which controls up to 256 ink delivery nozzles, is critical to uptime. By combining three key performance parameters of the machine (alarms, nozzle deviations and productivity), Inca could rank their equipment by the likelihood of causing customer dissatisfaction. They created simple dashboards that clearly identified the priority machines to be working on. This allowed them to identify faults before they became critical, reducing costs, while enabling their customer support centre to identify issues more effectively. In turn this allowed them to maintain better print quality and improve the planning of engineer visits around customers’ production runs. As they developed their management process, two interesting themes started to come out:

  1. The temptation to over-complicate the dashboard: The possibility of easily bringing together different measures into an ‘easy to read’ dashboard came from an investment in ‘Business Intelligence’ software. However, the temptation to add lots of interesting, but not necessarily essential, measures proved too great, and before long a great idea became too complicated to use effectively. The ‘Keep It Simple, Stupid’ or KISS principle is very important when thinking about how to make KPIs effective.
  2. A focus on customer outcomes improves loyalty: Because the programme had an intense focus on the most important customer outcome, print quality, Inca found that they had important information that could help improve their customers’ businesses. When they detected a problem, they started to call customers to inform them of the issue and the resulting actions that needed to be taken. Rather than provide this feedback at the operator/supervisor level, the key account manager would contact the senior operations director directly and inform them of the situation and action plan. They saw two unexpected benefits:
    1. Customers loved the personal service which added value to the bottom line of their business. Because it was a senior management business discussion, recommendations were quickly actioned.
    2. Inca’s own sales force became enthusiastic about the power of service in helping them drive revenues and loyalty.

I came across a second example of this approach at the Field Service Europe conference, presented by Alec Pinto of Peak-Service, part of the Qiagen corporation, a €1Bn technical services provider for medical, analytical and industrial equipment. As part of their transformation journey, they created a customer experience indicator which aggregated measures of machine utilisation, revisits, call response time and call completion time. They made it very transparent how this indicator was driven by their four KPIs, and beneath this their teams could easily drill down to the next level of measures required for more detailed problem solving.

The Customer Experience Indicator helped bring individuals and teams a focus on the drivers of customer experience as they moved through a significant change. This gave them a single measure that teams could use to highlight potential customer experience issues, with the ability to drill down to the detailed drivers in order to develop solutions. It enabled the management team to move through the internal changes in their service organisation, whilst keeping an eye on the real impact of those changes on the customers.
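A minimal sketch of how such an aggregated indicator might be built; the weights, targets and scoring rules here are hypothetical, not Peak-Service’s actual method. The point is that the per-KPI contributions stay visible, so anyone can see what is driving the headline number:

```python
# Hypothetical aggregation of four KPIs into one customer experience
# indicator; weights, targets and scoring rules are illustrative only.
KPI_WEIGHTS = {
    "machine_utilisation": 0.4,   # fraction of planned uptime achieved (0-1)
    "revisit_rate": 0.2,          # fraction of jobs needing a repeat visit
    "call_response_time": 0.2,    # hours to respond, scored against a target
    "call_completion_time": 0.2,  # hours to complete, scored against a target
}

def score_towards_target(value: float, target: float) -> float:
    """Score 1.0 at or below target, falling to 0 as the value doubles it."""
    if value <= target:
        return 1.0
    return max(0.0, 1.0 - (value - target) / target)

def experience_indicator(kpis: dict):
    """Return the aggregate score plus per-KPI contributions for drill-down."""
    contributions = {
        "machine_utilisation": kpis["machine_utilisation"],
        "revisit_rate": 1.0 - kpis["revisit_rate"],
        "call_response_time": score_towards_target(kpis["call_response_time"], 4.0),
        "call_completion_time": score_towards_target(kpis["call_completion_time"], 24.0),
    }
    total = sum(KPI_WEIGHTS[k] * v for k, v in contributions.items())
    return total, contributions

score, drivers = experience_indicator({
    "machine_utilisation": 0.92,
    "revisit_rate": 0.10,
    "call_response_time": 6.0,     # hours, above the assumed 4-hour target
    "call_completion_time": 20.0,  # hours, within the assumed 24-hour target
})
print(f"Customer experience indicator: {score:.2f}")
for kpi, value in drivers.items():
    print(f"  {kpi}: {value:.2f}")
```

Keeping the `drivers` breakdown alongside the single score is what makes the transparency the article describes possible: the headline figure flags a problem, and the contributions show where to drill down.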

Both of the above are examples of moving away from a rear-view mirror perspective towards focusing on outcomes that will impact future customer loyalty. Inca are using machine performance to manage customer experience for each piece of equipment in a proactive manner. Peak-Service are using existing service measures to highlight when their organisation might not be functioning as well as it should.

I came across a third example of this type of thinking when talking to a colleague who has worked for over 20 years in customer support for one of the world’s leading IT hardware/solutions companies. In the early 2000s, this business started to combine operational metrics to predict potential customer dissatisfaction, so that they could get ‘in front of the customer satisfaction scores’ for customer support. They mapped out the key stages in the lifecycle of the service support process, such as the time taken to answer a call, the time to diagnose a problem or the first-time fix rate. From their operational experience they knew when customers would become dissatisfied, and began to develop algorithms to monitor and analyse key measures through the lifecycle. If a combination of measures went outside set thresholds, alerts were raised. Thresholds could be set depending on the context of the customer, such as the SLAs in place or the mission-critical nature of the application. At first they required additional analysts to monitor what was a clunky process. But they persevered, because service had become critical to their survival. Over time, with improvements in analytics solutions, this business process became much smoother and they experienced a number of benefits:

  1. Improved customer satisfaction: When the business was later acquired, it was found to have very high satisfaction rates compared to other parts of the acquiring company.
  2. Upgrading of skills: As potential customer experience issues were identified in real time within the support chain, the engineers received immediate back-up and support. Often tricky problems would be ‘swarmed’ by experts, helping raise the technical competence of all the support staff.
  3. Potential for automating the support process: As equipment health data was fed into the process, the potential for automating support started to become clear. Service requests would automatically be raised, standard actions identified, the customer informed that an action was required and then asked when they would like the field visit. Obviously there is a limit to the complexity of problem that can be solved in this way, but the areas where automated support was most prevalent also proved to have the best customer experience scores. When you consider the potential time saving for organisations with literally thousands of pieces of equipment, this is not a major surprise.
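The threshold-alerting idea can be sketched roughly as follows; the lifecycle stages, the limits and the two customer contexts are illustrative assumptions, not the company’s actual algorithms:

```python
# Illustrative threshold alerting across the support lifecycle.
# Stage names, limits and customer contexts are assumed for the example:
# the same timings can be fine for one customer and a breach for another.
THRESHOLDS = {
    # hours allowed per lifecycle stage, by customer context
    "standard":         {"answer_call": 1.0,  "diagnose": 8.0, "fix": 48.0},
    "mission_critical": {"answer_call": 0.25, "diagnose": 2.0, "fix": 8.0},
}

def raise_alerts(context: str, measures: dict) -> list:
    """Return one alert per lifecycle stage that breached its threshold."""
    limits = THRESHOLDS[context]
    return [
        f"ALERT: {stage} took {measures[stage]:.2f}h (limit {limit:.2f}h)"
        for stage, limit in limits.items()
        if measures.get(stage, 0.0) > limit
    ]

# A slow diagnosis triggers an alert for a mission-critical customer,
# while the identical timings pass cleanly in the standard context.
timings = {"answer_call": 0.2, "diagnose": 3.5, "fix": 6.0}
for alert in raise_alerts("mission_critical", timings):
    print(alert)
print("standard context alerts:", raise_alerts("standard", timings))
```

In practice the early, ‘clunky’ version of this process the article describes would have been analysts applying rules like these by hand; the analytics tooling simply made the evaluation continuous.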

The danger in combining metrics in this way is that the process becomes too complex to understand. In this case, algorithms and specialists were used to turn knowledge into forward-thinking action. As with nearly all global businesses, they deployed a standard set of cross-business KPIs, and to manage the complexity challenge they broke these KPIs down into levels:

Level 1: High-level business measures such as Customer Satisfaction

Level 2: Operational Metrics such as utilisation

Level 3: Detailed Operational metrics

With the advent of sophisticated business intelligence tools, this approach allows managers to drill down to the detail required to solve problems and improve. How they share this data and motivate their staff to take action: these are the management skills that Peter Drucker alluded to.
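The three-level structure might be represented as a simple tree (the measure names below are assumed for illustration), so that a dashboard can expose the next level of measures beneath any KPI:

```python
# Hypothetical three-level KPI hierarchy: a Level 1 business measure,
# its Level 2 operational metrics, and their Level 3 detailed metrics.
KPI_TREE = {
    "customer_satisfaction": {            # Level 1: high-level business measure
        "machine_utilisation": [          # Level 2: operational metric
            "planned_downtime_hours",     # Level 3: detailed operational metrics
            "unplanned_downtime_hours",
        ],
        "first_time_fix_rate": [
            "parts_availability",
            "engineer_skill_match",
        ],
    },
}

def drill_down(tree: dict, level1: str, level2=None):
    """Return the next level of measures beneath the given KPI."""
    if level2 is None:
        return list(tree[level1].keys())
    return tree[level1][level2]

print(drill_down(KPI_TREE, "customer_satisfaction"))
print(drill_down(KPI_TREE, "customer_satisfaction", "machine_utilisation"))
```

The value of the structure is that each audience sees its own level: executives watch Level 1, while teams work with the Level 3 detail that actually drives improvement.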

In trying to summarise this exploration of the impact of metrics on performance, there are perhaps seven key messages that we can take away:

  1. Always have your people and users in mind when designing a performance management process, as:
    • It needs to be at a level that people can action
    • Not everything is measurable, and some elements of performance need old-fashioned personal feedback
  2. Wherever possible, focus measures on the outcomes experienced by the customer and those factors that influence the outcomes.
  3. Using operational data that already exists in most businesses, it is possible to create leading measures that drive action.
  4. Combining measures to create forward-looking lead indicators of customer experience is possible and can be very effective in creating a simple measure that drives action. However, the drivers of such a measure need to be transparent to all stakeholders.
  5. The balance between simplicity for clarity and detail for action is an ongoing challenge. Constantly evaluate your performance management process in relation to your customers and the business priorities.
  6. With the IoT changing our perspective on data, and the availability of easy-to-use business intelligence solutions, technology is not the barrier to better measures. The real challenge is having the mind-set to try different approaches.
  7. All these case studies are examples of companies applying their deep know-how of their equipment and customers to identify problems before they happen.

         Forewarned is forearmed!

 

 

Nick Frank is Managing Partner at Si2 Partners, a consultancy helping clients leverage services to win in industrial markets

Further articles on this and other topics can be found in the Si2 Partners Resources Page and the Si2 Knowledge Center

If you are a Service professional (manager, practitioner, consultant or academic) in an industrial setting join our group Service in Industry on Linkedin