Many years ago, I worked for a steel company that established quality circles of union employees. We did a lot of things right, so the quality circles initiative was generally effective; any number of ideas that improved productivity, safety, quality, and employee engagement were proposed by the teams and implemented by the company. One thing that we did not do was measure the impact of any of the implemented ideas. It was assumed that measurement would inhibit ideas and participation by the team members.
Several years later, I was employed by a hotel company seeking to engage its employees in problem-solving. The company did a wonderful job of collecting and organizing customer satisfaction data derived from guest comment cards. Detailed monthly reports full of useful information about guest satisfaction were forwarded to individual hotels by the corporate office. Management of the particular hotel where I worked was reluctant to release the monthly report to the various departments. It was felt that, if the data showed a drop in guest satisfaction from one month to the next, employee morale would suffer.
A few years after that, I found myself in a fairly heated argument with the directors of tool repair and maintenance at a metal stamping operation. We were trying to develop something of a homegrown overall equipment effectiveness (OEE) measure and were discussing how long a stamping press should be down before it was considered “equipment downtime.” While I recognized that there might be any number of reasons that a press might be down for a few minutes, I advocated for a fairly short period of time: 30 minutes. After that, the operator would record “press down.”
The two directors were adamant that 30 minutes was far too short a time. I offered 60 minutes. Nothing doing, was their reply. I eventually asked about the reason for their resistance. Their answer: “Whatever duration we agree to, the operators will make sure that downtime lasts that long, and we’ll be blamed for it.”
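For context on what was at stake in that argument, the conventional OEE calculation multiplies three factors—availability, performance, and quality—and recorded downtime feeds directly into the availability term, which is why the threshold mattered so much to those directors. Here is a minimal sketch of that standard calculation in Python; the function and every figure in the example are illustrative, not numbers from the stamping plant described above.

```python
def oee(planned_minutes, downtime_minutes, ideal_cycle_time, total_count, good_count):
    """Return (availability, performance, quality, oee), each as a fraction.

    Conventional OEE definition:
      availability = run time / planned production time
      performance  = (ideal cycle time x total count) / run time
      quality      = good count / total count
      OEE          = availability x performance x quality
    """
    run_time = planned_minutes - downtime_minutes
    availability = run_time / planned_minutes
    performance = (ideal_cycle_time * total_count) / run_time
    quality = good_count / total_count
    return availability, performance, quality, availability * performance * quality

# Hypothetical shift: 480 planned minutes, 60 minutes recorded as downtime,
# ideal cycle time of 0.5 minutes per part, 700 parts stamped, 665 of them good.
a, p, q, o = oee(480, 60, 0.5, 700, 665)
```

Note how the dispute over the downtime threshold plays out here: every minute that does or doesn't get recorded as downtime shifts the availability factor, and with it the whole OEE number.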
The thread that runs through all these examples is the fact that measures of performance too often are seen as demotivators. Such metrics cause pain and problems for those to whom they apply. No good can come from “being measured.”
And yet, we can call to mind several instances in which measures of performance are expected, even desired. A sports event without a scoreboard that is visible to all participants and spectators is unimaginable. Every sports participant wants to know his or her batting average, rebounds, assists, yards per carry, best personal time, and so on, updated with respect to their most recent effort or competition.
Why is it, then, that managers and their associates seek so energetically to avoid being measured? The reluctance to be measured is rational because too many organizations have created cultures in which measures of performance are used to punish rather than to motivate. In each of the cases above, managers resisted the deployment of performance metrics because they had ample experience with inappropriate use of such measures: being berated for “poor performance” while good performance went unacknowledged.
In my experience, this reluctance to deploy operating performance measures is deeply ingrained in the psyches of most managers and workers. The senior leaders of another client operation refused to organize and post operating performance charts that would show the progress of the lean manufacturing initiative. The notion that measures can be created and used in a way that improves morale—and increases their own motivation and that of their associates—didn’t occur to them.
In fact, the creation and development of a culture in which performance measures motivate high levels of performance isn’t difficult. Managers simply need to attend to these principles in their measures:
- Measures must be seen as relevant and controllable by those who are measured.
- Measures must be readily accessible by all who are measured.
- Measures must be regularly reviewed and discussed.
- Measures and their use must be utterly free of anything resembling “fixing the blame” or “finger-pointing.”
Relevant and Controllable
Thirty years ago, I trained administrative managers at a large hospital here in Cleveland. A group of occupational therapists told me about their unhappiness with some of the metrics by which their performance was evaluated. One of those metrics was “revenue per square foot.” As they saw it, they were being held accountable for a measure over which they had no control. And they were right. “Revenue per square foot” was only distantly related to their work as therapists and caregivers.
Metrics that aren’t relevant and controllable are ignored. They don’t motivate. Often, as was the case with the occupational therapists, they simply annoy and frustrate.
My son once worked for a sandwich shop in town that provided delivery service. His manager decided that he would post the average daily delivery time of each of the drivers. My son and several of the other drivers complained about this posting of their individual performance. They correctly claimed that delivery time was affected by factors outside of their control: weather, number of orders taken on a run, time of day, distance, and so on. (They also correctly claimed that the supervisor himself tended to fudge his own numbers by picking orders that were easy to deliver.)
The best way to ensure that performance metrics are relevant and controllable is to get input from the people whose performance those metrics will measure.
Readily Accessible
I was reviewing, with a plant manager, a set of performance indicators that he was preparing to communicate to his union workers. Among the indicators was a measure of scrap. He told me that, not too long before our conversation, he would have been fired for releasing the performance data—even to supervisors, much less to union members. I expressed surprise that even scrap was a forbidden topic of discussion. After all, how could the folks responsible for scrap reduce it if they didn’t know how much they were producing?
Just as the scoreboard is visible to all participants and spectators at a sports event, performance indicators need to be accessible and visible to everyone in the organization. Relevant and controllable metrics must be posted, updated, and communicated widely and regularly.
Regularly Reviewed and Discussed
I’ve always been surprised at how little managers discuss performance indicators. You might disagree, thinking that budget and other financial indicators are discussed ad nauseam where you work. But budget isn’t operations. Looking at budget tells me nothing about how well operations is fulfilling its mission: to make products that delight the customer and to do so without waste.
I’ll admit that some operating KPIs are reviewed and discussed when managers feel there’s a problem. In that case, they look at performance data in a “How bad is the fire?” manner.
There is no improvement of processes without regular review and discussion of the measures of those processes. Managers need to regularly ask, “Are we getting better, staying the same or getting worse?”
Get Rid of the ‘Fix the Blame’ Approach
At one client of mine, two plant managers were fired for fudging their operating performance numbers. In other words, two well-paid, influential managers ruined their own careers rather than be honest about their plants’ performance.
W. Edwards Deming is well known for his admonition to drive fear out of any organization’s culture. He frequently told an anecdote about a foreman who didn’t stop production to repair a worn-out piece of equipment because he feared stopping production would mean missing his daily quota. Instead, he let production continue. When the machine failed, it forced the line to shut down for four days.
I tell a similar story: I worked for a coal company four decades ago. Engineers at one of our largest mines found that all the sections averaged exactly 20 feet of advancement of the coal face each shift. The “productivity standard” for the crews was, you won’t be surprised to hear, 20 feet of advancement of the coal face per shift. Supervisors feared that, if their crews fell short of that standard, they’d be raked over the coals. Why wouldn’t they strive to beat the standard, then? Because they feared it would be raised and they’d end up being raked over the coals for getting 20 feet per shift.
Managers, supervisors, and operators resist performance measurements because they have little experience of measurements being used to their advantage. In fact, most of their experience tells them that numbers will be used against them. The supervisors in the mines I worked with weren’t lazy or unethical. They were doing their jobs in a rational way, given that the culture within which they found themselves was punitive and blaming.
Changing that culture isn’t easy, but it starts with asking different questions about performance. And it starts with asking those same questions about good performance. Conversations about performance below and above expectations should start with, “Let’s talk about how we got those results.” If fixing the blame worked, all manufacturing organizations would already be perfect.
Metrics, then, can motivate high levels of performance. But they do so only in a culture where the metrics are used to engage those responsible for the work that’s actually being measured.
Rick Bohan, principal, Chagrin River Consulting LLC, has more than 25 years of experience in designing and implementing performance improvement initiatives in a variety of industrial and service sectors. He is also co-author of People Make the Difference: Prescriptions and Profiles for High Performance.