Benchmarking, formally defined by Wikipedia as the process of comparing one’s business processes and performance metrics to industry bests and best practices from other companies, is typically presented by consultants as a boon for business managers and a reason to buy their services and/or solutions. After all, if you can’t benchmark, not only do you not know how well you are doing (compared to the industry), but you do not know whether you are improving or deteriorating, at what rate, or what the potential is.
And all this is true, provided the benchmarks are accurate, apples-to-apples, and actionable. This is not always the case, and when the benchmarks are poorly designed and implemented, it is definitely not the case. In fact, if the benchmarks are not accurate, they can cost the organization precious time, money, and resources and result in worse, instead of better, performance. And even though you don’t hear about it (as the last thing a Big 6 consultancy wants to do is scare you away from one of their most profitable service offerings, since it takes a long time to design the scorecard, collect the data, and interpret the findings, which translates into a huge number of top-dollar billable hours for the House of Lies), it happens more often than you think, and if you end up being one of the unlucky, you will be cursing benchmarks until the end of your Procurement career (and beyond, if the word ever again arises).
the doctor is being dead serious here. Benchmarks (like dashboards) hide at least six serious dangers that can severely hinder productivity, savings, and innovation. Three of these are very common to internal benchmarks, and three are very common to external benchmarks.
One of the most significant dangers of internal benchmarks is hidden opportunities due to false negatives. This often arises when monitoring best-price contracts. A classic example is that of enterprise desktop systems. Considering that technology depreciates from the time it hits the market, just like a car depreciates from the time it leaves the lot, the price of these systems should decrease over time. If the benchmark says that the contracted configuration decreased over the 12-month contract by an average of 0.5% a month, for a total decrease of 6%, the buying organization might believe that the vendor is honouring the best-price clause. But if the buying organization isn’t aware that the average annual depreciation of these systems is 12% to 18% and doesn’t monitor market pricing, the buyer might not know that the pricing should have decreased an average of 1.25% a month (the midpoint of 15% a year), and would have lost 0.75% a month on purchases. If the organization was buying 500 systems a month as part of a phased replacement at $1.5K each, or spending $750,000 a month, that’s a loss of $5,625 a month for a total loss of over $60K, or another help desk resource! (And if all hidden opportunities were this small, it might not be too bad. But this is more of a best-case loss example.)
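The arithmetic above can be sketched in a few lines. This is a minimal illustration, not a real best-price monitor: all the figures (0.5% contracted monthly decrease, ~15% annual market depreciation, 500 systems a month at $1.5K each) come straight from the example, and it deliberately uses a flat monthly gap rather than compounding the price decline.

```python
# Sketch of the best-price gap from the desktop example in the text.
# Assumption: a flat 0.75%-per-month overpayment, matching the
# simplified arithmetic in the article (no compounding).

MONTHLY_SPEND = 500 * 1_500      # 500 systems at $1.5K each = $750,000/month
CONTRACT_DECREASE = 0.005        # 0.5% per month under the contract
MARKET_DECREASE = 0.15 / 12      # ~15%/year depreciation => 1.25%/month

monthly_gap = MARKET_DECREASE - CONTRACT_DECREASE    # 0.75% overpayment
monthly_loss = MONTHLY_SPEND * monthly_gap           # dollars lost per month
annual_loss = monthly_loss * 12                      # dollars lost over the contract

print(f"Monthly loss: ${monthly_loss:,.0f}")
print(f"Annual loss:  ${annual_loss:,.0f}")
```

Run it and the monthly loss comes out to $5,625 and the contract-term loss to $67,500, which is the "over $60K" (one help desk resource) in the example.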
One of the most significant dangers of external benchmarks is wasted years due to lack of validation. One common example is that of contingent or manual labour spend analysis. For example, consider the analysis of warehouse (contingent) labour across the enterprise. An enterprise could quickly find that it’s paying, on average, a fully burdened rate of $17 an hour for workers to stuff boxes while its competitors are paying, on average, a fully burdened rate of $14 an hour for the same work. This might lead an analyst to believe that the organization is paying over 20% more than it should be and that it should seek out a new contingent labour provider to get costs down, and waste months on RFX and analysis only to find out that the most it can lower its costs from the quotes is 10%. At this point, the analyst might go back and do an analysis of what it would cost to take the labour management back in house (which would require building a Contingent Labour CoE, staffing it, etc.) and still not see a savings once the outsourced management cost is replaced with the internal management costs applied to the total wages paid out. At this point the analyst would give up, or spend even more time investigating the reason, only to find out that the organization’s main warehouses are in California, New York, and Massachusetts, the states with the highest minimum wages in the nation, while most of its competitors keep their warehouses in the mid-west / south-west states that only mandate the federal minimum wage of $7.25 (vs. state minimum wages north of $10). Benchmarks only capture price and performance tiers, not the realities that led to them.
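The gap between the apparent and the achievable savings above can be sketched as follows. This is an illustrative calculation only, using the $17 vs. $14 fully burdened rates and the 10% best-quote result from the example; it shows why the naive benchmark gap overstates what the RFX can actually deliver.

```python
# Sketch of the labour rate-gap check from the warehouse example.
# Rates and the 10% best-quote savings are the example's figures,
# not real market data.

OUR_RATE = 17.00        # our average fully burdened hourly rate
BENCHMARK_RATE = 14.00  # competitors' average fully burdened hourly rate

# What the benchmark alone suggests we are overpaying.
apparent_gap = (OUR_RATE - BENCHMARK_RATE) / BENCHMARK_RATE
print(f"Apparent overpayment: {apparent_gap:.1%}")

# But the best RFX quote only takes 10% off, because the benchmark
# ignores a structural driver: warehouses in high-minimum-wage states.
BEST_QUOTE_SAVINGS = 0.10
achievable_rate = OUR_RATE * (1 - BEST_QUOTE_SAVINGS)
print(f"Best achievable rate: ${achievable_rate:.2f}/hr")
```

The apparent gap works out to roughly 21%, yet the best achievable rate is $15.30 an hour, still well above the $14 benchmark, which is exactly the validation gap the paragraph describes.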
But these are only two of the six major hidden dangers that can ruin any benchmarking project (and the efforts it will kick off, for better or worse). For a detailed insight into the other four, download the doctor’s latest white paper (sponsored by Trade Extensions) on The Dangers of Benchmarks and Trend Analysis (registration required) today. You need to know these inside out before even looking at a benchmark (which, when improperly constructed and improperly interpreted, can be just as deadly and dangerous as a dashboard).