Category Archives: Manufacturing Metrics

Analytics is for Industrial Manufacturers Too

I’ve written dozens upon dozens of posts on analytics and why every organization should be using it. So I’m not going to go into the details again in this post, but I will make it crystal clear for you business types who have yet to sign the cheque:

      High-performance businesses — those that substantially outperform competitors over the long term and across economic cycles — are five times more likely than their peers to use analytics strategically.
    Using More Analytics Can Help Industrial Manufacturers.

Now, it’s true that correlation is not causation, as Pinky and the Brain skillfully informed you in their lesson in statistics, but a multiplier of five is very significant. It means that the use of advanced analytics tools is definitely a common trait of industry leaders, and if you’re not sure how to become an industry leader, the best way to start is to emulate what the leaders do.

Share This on Linked In

PPV is a Bad Measure of Procurement Performance

As noted in a recent brief from ChainLink Research, PPV (Purchase Price Variance) is a bad metric for Procurement, especially if your buyers’ performance is being measured on it. Not only does this kind of metric encourage behaviour that may lower PPV while raising total cost, but it can cost your organization a bundle, and this goes for commodities that usually have low volatility as well as those that have high volatility. Here’s why.

Let’s say you were buying 10,000 barrels of crude oil in 2009 on a monthly basis. The OPEC basket price, which started the year at $40.44 on January 2 and ended the year at $77.16 on December 31, and which reached a low of $38.10 on February 18 and a high of $77.88 on December 1, varied, on average, by $7.20 a month, with a minimum variance of $2.91 in November and a maximum variance of $13.30 in May. If your buyers are being measured on PPV, and they are good at predicting annual pricing trends, chances are they are going to pay as close to $65.04 as possible, as this amount (and any amount between $64.00 and $66.08, to be precise) minimizes the average monthly PPV. (The PPV varies from 0 in July and September to $21.14 in February and averages out to $7.46.)
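For reference, PPV is just the gap between what you actually paid and the standard (budgeted) price, multiplied by quantity. A minimal sketch, using the $65.04 target from above as the standard price (the figures are illustrative only):

```python
def purchase_price_variance(actual_price, standard_price, quantity):
    """PPV = (actual price - standard price) x quantity.
    Positive is unfavourable (paid above standard); negative is favourable."""
    return round((actual_price - standard_price) * quantity, 2)

# 10,000 barrels bought at $70.64 against the $65.04 standard:
print(purchase_price_variance(70.64, 65.04, 10_000))  # 56000.0
```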

In this situation, your buyer would spend $7.34 Million trying to minimize PPV, which would cost your organization $467,200 more than necessary. This is what your buyer would pay each month (buying on the day that was closest to the price point target):

Month   Price       Cost    PPV
Jan     46.32    463,200  14.82
Feb     43.90    439,000  17.24
Mar     50.77    507,700  10.37
Apr     52.26    522,600   8.88
May     63.71    637,100   2.57
Jun     66.08    660,800   4.94
Jul     65.04    650,400   3.90
Aug     68.04    680,400   6.90
Sep     65.12    651,200   3.98
Oct     66.81    668,100   5.67
Nov     74.95    749,500  13.81
Dec     70.64    706,400   9.50
AVG     61.14    611,367   8.55
SUM            7,336,400

But if your buyer was focussed on cost avoidance, your buyer would spend only $6.87 Million trying to minimize cost, saving your organization $467,200. If you ignored PPV, this is what your buyer would pay each month (buying on the day that allowed for the lowest purchase price):

Month   Price       Cost    PPV
Jan     39.29    392,900  21.85
Feb     38.10    381,000  23.04
Mar     41.79    417,900  19.35
Apr     47.15    471,500  13.99
May     50.41    504,100  10.73
Jun     66.08    660,800   4.94
Jul     59.66    596,600   1.48
Aug     68.04    680,400   6.90
Sep     64.00    640,000   2.86
Oct     66.81    668,100   5.67
Nov     74.95    749,500  13.81
Dec     70.64    706,400   9.50
AVG     57.24    572,433  11.18
SUM            6,869,200
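The arithmetic is easy to check; a quick sketch that reproduces the two totals from the monthly prices in the tables above:

```python
# Monthly per-barrel prices from the two tables above (10,000 barrels/month)
ppv_minimizing = [46.32, 43.90, 50.77, 52.26, 63.71, 66.08,
                  65.04, 68.04, 65.12, 66.81, 74.95, 70.64]
cost_minimizing = [39.29, 38.10, 41.79, 47.15, 50.41, 66.08,
                   59.66, 68.04, 64.00, 66.81, 74.95, 70.64]

barrels = 10_000
ppv_total = round(sum(p * barrels for p in ppv_minimizing))
cost_total = round(sum(p * barrels for p in cost_minimizing))

print(ppv_total)               # 7336400
print(cost_total)              # 6869200
print(ppv_total - cost_total)  # 467200 saved by ignoring PPV
```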

Still think minimization of PPV is a good idea?

Share This on Linked In

Key Performance Indicators, A Book Review

Today I’m going to briefly review Key Performance Indicators, 2nd Edition, by David Parmenter, a big supporter of The Balanced Scorecard by Kaplan and Norton and of Stephen Few‘s work on Visual Business Intelligence.

The book is very informative on the subject and presents a step-by-step how-to guide, complete with checklists, surveys, workshop outlines, reporting templates, and key task descriptions that will guide you through a successful project execution. It spends over 50 pages detailing a 12-step model, complete with key tasks that need to be completed at each phase, that, when properly applied, will almost guarantee the success of any KPI project that is appropriately undertaken with the right support. It outlines the critical success factors of any KPI initiative and even provides 23 pages of performance measures that you can review when searching for the right performance metrics for your organization.

But most importantly, it clearly outlines the difference between key performance indicators, performance indicators, results indicators, and key results indicators. Many organizations mix up performance and results indicators, and mixing up key performance indicators with non-key result indicators can do more harm than good. Consider the example of a UK hospital that decided the most important metric was the time between patient registration and patient review by a house doctor. The nurses realized they could not stop patients with minor injuries, which did not require immediate treatment, from registering, but that they could delay the registration of patients in ambulances, who were receiving quality care from the paramedics. So the nursing staff started asking paramedics to leave the patients in the ambulances until a house doctor could see them, as this improved the “average time to see patients”. It wasn’t long before there was a parking lot full of ambulances, and on some days there were even ambulances circling the hospital because the parking lot was full. This not only created a major problem for the ambulance service, which was unable to deliver an efficient emergency service, but put patients’ lives at risk, as they couldn’t be effectively triaged until they were registered. In business terms, the classification of a minor result indicator as a key performance indicator put lives at risk!

So what’s the difference? A result indicator tells you what you did, while a performance indicator tells you what you should do. Not all results or performance indicators are important. In the case of the hospital, it doesn’t always matter that some patients with minor sports injuries or flus have to wait three hours to see a doctor; it matters that car crash victims, heart attack patients, and patients with other life-threatening injuries see a house doctor as fast as possible. While the average wait time should be tracked (because if the average wait time for patients with minor injuries is consistently three hours, it means you probably need more doctors), it’s not critical. On the other hand, the wait times for the top three triage levels (code, critical, and urgent) are critical. Code patients need to be seen immediately, critical patients within a few minutes, and urgent patients within an hour at most. These are critical success factors, and, as such, key performance indicators.

Parmenter provides some easy classifications early on to help you distinguish key & non-key results indicators from key & non-key performance indicators, as well as a hierarchy to help you understand them. KRIs, at the top of the hierarchy, are influenced by RIs and PIs, which are driven by KPIs. Basic classifications include:

Key Result Indicators (KRIs)

  • customer satisfaction
  • net profit
  • customer profitability
  • employee satisfaction
  • return on capital employed

Result Indicators (RIs)

  • net profit on key product lines
  • sales made yesterday or last week
  • customer complaints from key customers
  • hospital bed utilization in a week

Performance Indicators (PIs)

  • percentage increase in sales on the top ten percent of customers
  • number of employee suggestions implemented in the last thirty days
  • sales calls for the next week or two weeks
  • late deliveries to customers

Key Performance Indicators (KPIs)

  • late planes
  • number of trucks leaving not at capacity
  • average time to treatment for code, critical, and urgent patients

Basically, as per Parmenter, KPIs must have the following characteristics:

  • non-financial
  • measured frequently (at least weekly, if not daily or hourly)
  • acted on by the CEO
  • clearly indicate required action(s)
  • tied to a team
  • significant impact
  • encourage appropriate action

In addition, they must be current- or future-oriented, must make a difference, and must result in the CEO picking up the phone and calling the team leader when performance slips out of the acceptable zone. Finally, there must be no more than 10 of them. While you can have up to 80 performance and results indicators, and up to 10 additional key results indicators, you should never have more than 10 KPIs; many organizations can get away with only 5 KPIs, and some industries can see dramatic performance improvements by focussing on only 1 KPI. And regardless of which KPIs you settle on, make sure they can be understood by a 14-year-old. You want them to be abundantly clear to everyone in the organization, and this is the one way that’s guaranteed to achieve that.
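As a purely hypothetical illustration (the KPI names and acceptable zones below are my own invention, not Parmenter’s), the “no more than 10 KPIs, acted on immediately when one slips out of the acceptable zone” rule could be sketched as:

```python
# Hypothetical sketch: at most 10 KPIs, each with an acceptable zone,
# and an immediate alert for the CEO when a KPI slips out of its zone.
MAX_KPIS = 10

kpis = {  # KPI name: (low, high) acceptable zone -- illustrative values only
    "late_planes": (0, 2),
    "trucks_leaving_below_capacity": (0, 5),
    "minutes_to_treat_critical_patients": (0, 10),
}
assert len(kpis) <= MAX_KPIS, "never more than 10 KPIs"

def out_of_zone(name, value):
    low, high = kpis[name]
    return not (low <= value <= high)

readings = {
    "late_planes": 4,
    "trucks_leaving_below_capacity": 3,
    "minutes_to_treat_critical_patients": 8,
}
alerts = [name for name, value in readings.items() if out_of_zone(name, value)]
print(alerts)  # ['late_planes'] -> time for the CEO to pick up the phone
```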

Finally, the book presents an expanded balanced scorecard that Parmenter believes is more relevant to the success of a KPI program, which he insists should not be undertaken until you have the support of the C-suite and the CEO who will allocate adequate time and resources to the project.

All in all, it’s a very well done and informative book that could easily be used as a text in a university-level course. The only bad thing I have to say about it is that parts of it, like the chapters on the KPI Team Resource Kit and the Facilitator’s Resource Kit, are as dry as the desert. Done right, the interesting insights should materialize in the workshops, but the planning for them is probably not going to be very exciting.

Share This on Linked In

Managing Alliances with the Balanced Scorecard

In the first Harvard Business Review of 2010, Robert S. Kaplan and David P. Norton, the original developers of the balanced scorecard, are back with an article (co-authored with Bjarne Rugelsjoen) on Managing Alliances with the Balanced Scorecard. Quoting a recent study by McKinsey & Company that found that half of all joint ventures fail to yield returns to each partner above the cost of capital, they argue that a methodology is needed that will dramatically improve the odds of success.

Not surprisingly, the authors recommend the adoption of the balanced scorecard (BSC) management system, a technique that can help companies switch their focus from operations and contractual obligations to strategy and commitment, which the authors argue is the key to success. Proper application of BSC techniques should clarify strategy, drive behavioural change, and provide a governance system for strategy execution.

As an example, the authors present a detailed case study based on Solvay, a top-40 pharmaceutical company, and Quintiles, a contract research company providing a wide range of clinical research and trial services for pharmaceuticals, which Solvay selected in 2001 to manage all stages of its trial processes across all of its pharmaceuticals under development. After an initial five-year partnership, which worked well (but not as well as each side felt it could), the companies wanted to move up to an alliance, but needed a way to accomplish it successfully. They chose a variation of the BSC process (which they called JSC), formed an alliance management team led by an impartial external consultant, and got to work. And while it took some time to make things happen, the alliance based on the new JSC (BSC) approach reduced total cycle time for clinical trials by approximately 40% (which not only considerably reduces costs but also accelerates profits, as the products hit the market faster) and generated a new methodology for managing non-performing sites (those that don’t recruit enough patients) that halved their number and saved 25,000 to 35,000 Euros per site. (Considering that a study can have up to 150 sites, this new methodology can save up to 5.25M Euros per study. And while that might only be 5% of the cost of bringing a new pharmaceutical to market, it’s not pocket change!)

The methodology is based on the collaboration theme scorecard that captures metrics that allow you to track your progress on objectives under each theme. The general format for each scorecard is the definition of:

  • the process objective,
  • joint wins,
  • metrics, and
  • initiatives.

The scorecards are the tools of an alliance strategy map that defines the intended collaboration, business processes, and expected values. The article presents the strategy map used by Solvay Pharmaceuticals and Quintiles in the definition and execution of their alliance.
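A theme scorecard of this shape could be represented as a simple record; the field names below paraphrase the four elements listed above, and the example values are illustrative, not taken from the article:

```python
from dataclasses import dataclass, field

@dataclass
class ThemeScorecard:
    """One collaboration theme from an alliance strategy map."""
    process_objective: str
    joint_wins: list = field(default_factory=list)
    metrics: list = field(default_factory=list)      # how progress is tracked
    initiatives: list = field(default_factory=list)  # actions that move the metrics

# Illustrative example only (not the Solvay/Quintiles template):
card = ThemeScorecard(
    process_objective="Reduce clinical-trial cycle time",
    joint_wins=["faster time to market", "lower trial cost"],
    metrics=["total cycle time", "share of non-performing sites"],
    initiatives=["joint site-selection process"],
)
print(card.process_objective)  # Reduce clinical-trial cycle time
```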

The article also mentions Infosys’ success with its relationship scorecard and LagasseSweet’s success with its own modification of the balanced scorecard, which has helped it to identify 150M in new revenue opportunities. Thus, if you are willing to take the extra time required to jointly build the alliance strategy map and theme scorecards, you might just see a much bigger ROI than you might expect with your own implementation of the BSC.

Share This on Linked In

What Metrics Should Your Manufacturing Organization Be Using?


In The Metrics-Driven Organization from Exact Software, the authors reprint a 2007 table from AMR that listed the top 19 manufacturing metrics in use, 13 of which were used by over 50% of organizations. Metrics are important, because you really can’t manage what you can’t measure, but you need the right metrics, and the right number of metrics, to achieve success.

The top 13 metrics are:

  • inventory levels
  • fixed manufacturing costs
  • average cycle times
  • scrap and rework
  • variable manufacturing costs
  • profitability of products
  • raw material quality
  • finished goods quality
  • demand / demand variance
  • manufacturing line-schedule visibility
  • transportation/logistics schedules and costs
  • manufacturing line capacity visibility
  • KPIs / performance of key production assets

But which are the right ones for you?

The article notes that you need to focus on metrics that drive productivity, metrics that are demand-driven, and metrics that can be automated with technology. In addition, it’s also important to focus on metrics that impact the perfect order, metrics that impact sustainability, and metrics that impact working capital.

I’d recommend the following five from the above list:

  • inventory levels (working capital)
  • average cycle times (productivity)
  • profitability of products (sustainability)
  • demand / demand variance (demand driven)
  • KPIs / performance of key production assets (automated)

Plus these two metrics from AMR’s list, used by only 48% and 35% of organizations, respectively:

  • supplier on-time delivery (perfect order)
  • variability of cycle times (productivity, perfect order, sustainability, & working capital)
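Since cycle-time variability made my shortlist, here is a minimal sketch (with made-up sample data) of one common way to track it, as a coefficient of variation:

```python
from statistics import mean, stdev

def cycle_time_variability(cycle_times):
    """Coefficient of variation: stdev / mean. Lower is more predictable,
    which supports perfect-order, productivity, and working-capital goals."""
    return stdev(cycle_times) / mean(cycle_times)

# Hypothetical cycle times (hours) for two production lines
line_a = [10.1, 9.9, 10.0, 10.2, 9.8]
line_b = [8.0, 12.0, 9.5, 14.0, 6.5]

print(round(cycle_time_variability(line_a), 3))  # 0.016 -- predictable
print(round(cycle_time_variability(line_b), 3))  # 0.302 -- erratic
```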