Metrics Management Lessons Illustrated in a True Story
This series of posts has been all about the use of metrics in business. We have looked at the purpose of metrics, and why it is so important to distinguish between Reporting Metrics (used primarily by leadership) and Operational Metrics (used primarily by front-line managers and individual contributors). We also looked at a method of organizing metrics via root cause trees, to help differentiate Reporting from Operational Metrics, define relationships between metrics, and improve the speed of analysis.
In this post we will explore three other issues relating to metrics, which I believe represent some of the most common and most serious traps into which leadership can fall. These issues have incredibly boring names which cause most people’s eyes to roll back in their heads. I think this is why they get so little consideration, but they are absolutely critical. If you are measuring processes or business performance and you ignore these three issues, you are in for a long, drawn-out slide into mediocrity. So instead of naming them (initially) I’ll tell a story.
To borrow a line from Dave Barry – I swear I’m not making this up.
A Real-World Tale
A leadership team in a retail business is looking at a critical metric relating to customer satisfaction – On-Time Delivery. The leadership team has recently had lengthy and thoroughly unpleasant conversations with customers, the vast majority of whom are so unhappy they are voicing plans to switch to competitors. Their main grievance is that the company’s ability to deliver product on time is abysmal. This news came as quite a surprise to company leaders. On-Time Delivery has been in great shape for months, and has been trending up for over a year. Prior to a year ago there were issues, sure. But improvements were made. The score came up – it’s been “green” for three quarters in a row. Surely by now that is starting to change the minds of customers?
The CEO starts to ask some questions. The first one is, “What is the target for On-Time Delivery?”
Without blinking an eye, the head of Supply Chain replies, “Which target?”
Uh oh. More than one target? For a single metric?
The CEO frowns deeply and asks, “What do you mean, which target?”
“We have two. We have one that is reported to the customer and one that we track internally. The target we track and report to the customer is 80%. The one that we track internally is 90%. We do it this way because we never hit our 90% goal and we don’t want the customer to see us fail. We report the 80% target (which we mostly hit) to the customer.”
The head of Supply Chain says all of this very matter-of-factly. The CEO looks stunned.
Let’s pause the story here. What we have here is a single metric with two targets. One target is essentially a gimme target that they hit consistently. That gets reported to the customer. The second target is tracked “internally” and is almost never achieved. That one is reported “internally” behind a hot mess of a metrics dashboard.
Guess which target Supply Chain thought was most important? And upon which their bonuses were based? If you answered “the gimme target,” you get a gold star. That’s exactly what they did. Alright, let’s continue the story – we aren’t even at the best part yet:
The CEO finds his voice and replies with, “Interesting. But the customer doesn’t expect 80% On-Time Delivery, they expect 100%, and they are willing to settle for consistent performance at 95%. Neither of the targets you described drives us to meet customer expectations. So who set these targets?”
“I did,” replies the head of Supply Chain. “But of course you agreed,” he quickly adds, suddenly shifting a bit uneasily.
Through clenched teeth the CEO manages to get out, “I think we are going to have to address this.”
Alright, let’s fast-forward this story a bit and really highlight our three metrics issues-which-shall-not-be-named. The CEO asked a lot more questions over the course of a few days, and got some really interesting responses.
Here were the highlights:
- All parties involved knew what the customer’s expectations were. However, because hitting their expectations was considered “unrealistic” they didn’t try. The decision was consciously made to report a color (red, yellow, green) rather than a number to the customer. Anything over 80% was considered “green” even though that wasn’t the minimum standard acceptable to the customer. When shown a “green” metric totally at odds with their experience and expectations, customers responded with utter disbelief and frustration.
- The definition of On-Time Delivery had changed at least twice in the two years prior. Note that in any organization which has uncontrolled metrics, this is EXTREMELY common. From my experience, I’d say that uncontrolled metrics change definitions about once a year. If there is a leadership change, uncontrolled metrics will ALMOST CERTAINLY change definitions from what was being reported under previous management.
- Despite the changes, On-Time Delivery was still being reported as a single metric, with multi-year trends. This meant that comparing trends over time was comparing apples to oranges.
- Whenever the definitions of the metric changed, the reported performance increased dramatically. The definition of On-Time Delivery was stretched to the breaking point to make this happen. When a customer placed an order with multiple line items, the order wasn’t considered as a whole. If any individual line item was delivered on time, the order was considered to have been delivered on time, despite the fact that the customer was still waiting for parts of their order. Defining the metric this way had serious and substantial knock-on effects, negatively impacting order entry, sales forecasting, revenue projections, and ultimately even customer behavior.
- The person in charge of defining the metric was the head of the group being measured, and their bonus was based (in part) on performance to this metric. This wouldn’t be as bad as it sounds, except that nobody else was involved in redefining On-Time Delivery, or how and when to communicate those changes.
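To see just how much the stretched definition flatters the number, here is a quick sketch in Python. The orders and figures are purely made up for illustration – the point is the mechanism, not the data:

```python
# Hypothetical orders: each order is a list of line items,
# each flagged on time or late. Data is purely illustrative.
orders = [
    ["on_time", "late", "late"],
    ["late", "on_time"],
    ["on_time", "on_time"],
]

def otd_any_line(orders):
    """Stretched definition: an order counts as on time if ANY line item was."""
    return sum(1 for o in orders if "on_time" in o) / len(orders)

def otd_whole_order(orders):
    """Customer's definition: an order is on time only if EVERY line item was."""
    return sum(1 for o in orders if all(i == "on_time" for i in o)) / len(orders)

print(f"Any-line OTD:    {otd_any_line(orders):.0%}")     # 100%
print(f"Whole-order OTD: {otd_whole_order(orders):.0%}")  # 33%
```

Same deliveries, same customers – and the reported number triples depending on which definition you pick. The customer, of course, experiences the whole-order number.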
Three Big Lessons Learned
Alright, so what are the three issues-which-shall-not-be-named? Now that we have seen them in an extreme example, let’s go ahead and name them off.
1. Metrics Definitions
In the example above, very few people knew the exact definition of On-Time Delivery, and it led to major trouble. All metrics require a documented definition, including their exact calculation. This is especially true if the metric seems obvious.
The example above looked at On-Time Delivery. That is about as simple as it gets, right? Wrong. There are a million ways to define On-Time Delivery: time from order to the communicated arrival estimate, time from order to the customer-requested receipt date, time from order to handover to the shipping vendor.
Every metric measured can be defined and calculated in many ways.
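To make that concrete, here is a tiny sketch with one hypothetical order measured against three of the definitions above. All dates are invented:

```python
from datetime import date

# One hypothetical order; all dates are invented for illustration.
order = {
    "promised":  date(2024, 3, 10),  # arrival estimate we communicated
    "requested": date(2024, 3, 5),   # receipt date the customer asked for
    "shipped":   date(2024, 3, 8),   # handover to the shipping vendor
    "delivered": date(2024, 3, 9),   # actual receipt by the customer
}

# The same order is "on time" or "late" depending on the definition chosen.
on_time_vs_promise = order["delivered"] <= order["promised"]   # True
on_time_vs_request = order["delivered"] <= order["requested"]  # False
shipped_by_promise = order["shipped"]   <= order["promised"]   # True
```

One order, three definitions, two different answers – which is exactly why the definition has to be written down and agreed.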
Belief that we know what is being measured based solely on the name of a metric is a MASSIVE leadership trap. We can never really know without a documented and agreed definition.
Speaking of massive leadership traps – defining metrics from the company perspective rather than the customer perspective is another great way to fail slowly. You can see that clearly in the example above. Top-level metrics in particular need to be related back to the customer, more or less directly (see part 2 in this blog series).
2. Metrics Governance
Governance – the mere utterance of the word puts leaders to sleep. Nothing is less exciting.
However, in the case of metrics, nothing is more critical. In the example above, nobody was controlling the definition of On-Time Delivery. The head of Supply Chain was free to define it to best suit Supply Chain, without thought to the customer experience.
Governance goes hand-in-glove with metrics definitions. Defining metrics and documenting them is meaningless unless you do three things: limit the changes to metrics, communicate changes that are allowed to occur, and indicate changes in metric calculations on existing reports/dashboards.
Limiting changes to metrics definitions is critical. Changing the definition of a metric will likely result in one of two things:
- The need to recalculate past performance according to the new definition, in which case all interested parties need to be informed. The metric is now measuring something different (or the same thing but in a different way) which will influence both how to analyze the metric itself as well as decisions based on the metric’s performance.
- The need to give up past history and start fresh with the new definition. When you consider that it takes months to establish reasonable data trends, this is not a decision to be taken lightly.
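One simple safeguard, sketched below with invented data, is to tag every reported data point with the version of the definition it was calculated under. A dashboard can then break the trend line at a definition change instead of silently mixing apples and oranges:

```python
# Each reported data point carries the definition version it was
# computed under. All names and numbers here are illustrative.
history = [
    {"month": "2023-11", "otd": 0.72, "definition": "v1"},
    {"month": "2023-12", "otd": 0.74, "definition": "v1"},
    {"month": "2024-01", "otd": 0.91, "definition": "v2"},  # metric redefined
]

def trend_segments(history):
    """Split history into runs that share a single definition version."""
    segments, current = [], []
    for point in history:
        if current and point["definition"] != current[-1]["definition"]:
            segments.append(current)
            current = []
        current.append(point)
    if current:
        segments.append(current)
    return segments

# Two separate trends: the v1 run and the v2 run, never compared directly.
print(len(trend_segments(history)))  # 2
```

The apparent jump from 74% to 91% is then clearly visible as a definition change, not a performance improvement.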
Establishing centralized or third-party governance of metrics is also key. This prevents managers from redefining metrics to show their area in a better light to the detriment of the company. It also helps to communicate changes to metrics systematically when they occur.
As an aside – my go-to department for company-wide metrics governance is Finance. They understand data, numbers, and the integrity of historical data. They are also the most trusted department in the company.
3. Regular Metrics Review
The third and last metrics issue-which-shall-not-be-named is regular review (once or twice a year). There are two key areas to review:
- Are the metrics still valid? Do they still measure what is important to company goals, and do they do it in the best way?
- What behaviors have arisen in the business as a result of your metrics?
The first is relatively obvious, but still critical and often overlooked. At a minimum, limiting changes to metrics means you must provide some mechanism by which leaders and managers can make their case for change. Aside from that, metrics may become irrelevant, or there may be a need to measure something new.
The second point is really tough, and requires a very open, honest leadership team. The single biggest leadership trap relating to metrics is unintended consequences. The examples are innumerable. There might be one department whose bonus rests on a metric that conflicts with another department’s. Internal company metrics could cause behavior in sales or customer service people which adversely affects the customer. If leadership can’t talk openly and honestly about these things, mediocrity (or worse) will be the result.
Lack of metrics definitions, governance and regular review are probably the most common, most stealthy drags on business that I’ve seen. When leaders ignore these aspects of management the company at its core will be inefficient at best and misdirected at worst. It’s absolutely true that what isn’t measured isn’t managed. But it is also true that what is measured poorly is managed poorly.
My next post will attempt to unify the last three posts and present some ideas on building a culture of variance analysis within your teams. Until then, let me know what you think and what your experiences have been with metrics in processes or functional areas you’ve dealt with!