Business Process Management Metrics - Part 1
Measuring the performance of a process is deceptively simple. Processes are made up of three basic components:
- Inputs, which come into the process
- Actions taken on those inputs to change, add to, and modify them
- Outputs, which are created as a result
To measure the performance of a process, a leader has only to take the key areas of these three process components and measure them over time.
My experience is that most functional leaders and executives understand this high-level concept. However, there are critical areas just under the surface that are tragically ignored, often to the detriment of business performance. In this four-part series of posts, I’ll highlight these areas and give real-world examples showing just how serious a concern this is for anyone who designs, implements, or manages business processes.
At a high level, we can consider two very different categories of process metrics, each with a very different purpose – Operational Metrics and Reporting Metrics.
Operational metrics are those used to measure the process at a relatively low level. These metrics show process health, but more importantly they allow for analysis, early warning of potential issues, and troubleshooting when things go wrong. These metrics are often measured in very small time increments – weekly, daily, sub-daily or even minute-by-minute.
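To make the early-warning role concrete, here is a minimal sketch in Python. The daily cycle-time readings and the two-sigma band are hypothetical assumptions for illustration, not a prescribed method:

```python
from statistics import mean, stdev

def early_warning(daily_values, window=7, sigma=2.0):
    """Flag the latest daily reading if it falls outside the band
    (mean +/- sigma * stdev) of the prior `window` readings."""
    history, latest = daily_values[-window - 1:-1], daily_values[-1]
    mu, sd = mean(history), stdev(history)
    return abs(latest - mu) > sigma * sd

# Hypothetical daily process cycle times (hours); the last day spikes.
cycle_times = [4.1, 3.9, 4.0, 4.2, 3.8, 4.0, 4.1, 6.5]
print(early_warning(cycle_times))  # True – the spike trips the warning
```

The point is that a check like this only makes sense at daily or sub-daily granularity; rolled up to a quarter, the spike would disappear into the average.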
Reporting metrics are those used to measure a process at a higher level, over a longer time horizon: monthly, quarterly, yearly, year-over-year, etc. Importantly, these metrics should not be delivered or reported on as points-in-time, but rather as trends.
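The trend-over-point-in-time idea can be sketched as a simple trailing average; the monthly on-time-delivery figures below are hypothetical:

```python
def trailing_trend(series, window=3):
    """Report each period as the trailing average of up to `window`
    periods, so leaders see a trend rather than a single point."""
    return [round(sum(series[max(0, i - window + 1):i + 1])
                  / len(series[max(0, i - window + 1):i + 1]), 2)
            for i in range(len(series))]

# Hypothetical monthly on-time-delivery percentages.
monthly_otd = [92, 95, 90, 93, 94, 96]
print(trailing_trend(monthly_otd))
# [92.0, 93.5, 92.33, 92.67, 92.33, 94.33]
```

Notice how the smoothed series makes the underlying direction visible: the single-month dip to 90 looks alarming in isolation, but the trend barely moves.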
Understanding Relationships Between Operational and Reporting Metrics
It is easiest to show how these categories of metrics work by looking at how they relate to each other. Operational metrics tend to roll up into reporting metrics – in many cases they literally roll up mathematically. For example, a reporting metric of Total Fleet Downtime might mathematically break down into Truck Model X Downtime plus Truck Model Y Downtime. In other cases, though, the metrics roll up more intuitively. For example, the retail metric of Brand Health could break down into On-Time Delivery and Display Peg Count, among other things. In this case, there is no numerical relationship between On-Time Delivery and Brand Health. However, we intuitively understand that On-Time Delivery is a major customer concern, and performing to customer expectations is critical to Brand Health.
In essence, the relationship between the two sets of metrics is this: when there is variance in a Reporting Metric, the analysis of that variance always takes place using Operational Metrics.
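The mathematical roll-up and the variance drill-down can be sketched in a few lines of Python; the downtime figures here are hypothetical:

```python
# Operational metrics roll up mathematically into the reporting metric.
operational = {
    "Truck Model X Downtime": 120,  # hours this month
    "Truck Model Y Downtime": 45,
}
total_fleet_downtime = sum(operational.values())  # reporting metric

def explain_variance(current, prior):
    """When the reporting metric moves, drill into the operational
    metrics to see which component drove the change."""
    return {name: current[name] - prior[name] for name in current}

prior = {"Truck Model X Downtime": 70, "Truck Model Y Downtime": 50}
print(total_fleet_downtime)                   # 165
print(explain_variance(operational, prior))
# Model X is up 50 hours while Model Y is down 5:
# the variance analysis points squarely at Model X.
```

The reporting metric tells you *that* downtime rose; only the operational components tell you *where*.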
Notice how I haven’t mentioned anything yet about Key Performance Indicators (KPIs). The term KPI is all over the map, with people using the term to describe both Operational and Reporting Metrics. I’ve stopped using the term “KPI” and instead use terms that explicitly define how a metric should be used.
Avoiding Leadership Traps
Why does the difference between metrics matter? Because leaders who do not clearly understand the difference between Operational and Reporting Metrics inevitably fall into a serious trap.
Do you ever wonder why your management team feels the need to dip down and manage day-to-day activity that is totally inappropriate for the scope and span of their role? Do you ever wonder how management teams get stuck making repeated, circular and contradictory knee-jerk decisions? Very often, the answer is that they believe that their Operational Metrics are Reporting Metrics.
When leaders believe Operational Metrics are Reporting Metrics, terrible things begin to happen. Operational Metrics tend to change frequently and have a wide variation. It is extremely difficult to understand Operational Metrics without a sophisticated understanding of trends and natural variation, as well as the intimate process details behind the metric. Lacking this understanding, leadership decisions made using Operational Metrics are inevitably short-sighted.
Even worse, influencing an Operational Metric requires day-to-day action and management – which is the role of individual contributors and front-line managers. A senior leader who believes an Operational Metric is a Reporting Metric will naturally dip down to this level, undercutting front-line managers and inevitably micro-managing them. This situation devolves into a spiral of lost productivity. Front-line managers are less productive because they are constantly reporting status or taking direct orders from the senior leader. The senior leader, on the other hand, loses their ability to be strategic or forward-thinking.
You can probably think of many other leadership traps where metrics are concerned. Whether you are designing processes, managing them, or leading a functional group or organization, avoiding metrics traps will mean putting serious effort into the following areas:
- Agree on and map the relationship between Reporting Metrics and Operational Metrics
- Define and govern your BPM process metrics
- Educate all managers on the metrics (how they are rolled up, reported and governed)
- Establish a regular review (both of the metrics and the behaviors they are generating in the business)
- Establish a culture of variance explanation
In the remaining posts in this four-part series, I’ll explore ideas around each of these points. In part 2, I show how to build root cause trees to support your analysis of Operational and Reporting Metrics. In part 4, I discuss how to establish a culture that helps avoid common leadership traps.
Until next time, please let me know what your experiences have been, and what you think in the comments below!