
There’s More to Cost than Cost Analysis

While it was exceptionally well written, I was a little disappointed with this article on uncovering hidden costs over on SupplyManagement.com.

The article made two acute observations: that there is no fixed arithmetic formula linking the cost of producing the goods and services sold to us and the price charged for them (sellers charge what the market will bear), and that to break down a supplier’s figures, we need to know the proportion of each area of cost that the goods or services are likely to attract. It did a great job of describing the different types of analysis one could bring to bear on costs, but a poor job of actually indicating how to reduce the “hidden” costs once found. The advice boiled down to “collaboration is key”, which, while correct, doesn’t help you answer the important questions that will arise during the collaboration.

While questions like:

  • Can we give our suppliers access to our contracts to reduce cost?
  • Does our supplier have contracts we could access to cut cost?

are theoretically easy to answer (though not always easy in practice, given the poor state of contract management in many organizations),

questions like:

  • What is the thing made of and what is happening in the market for that material?
  • Where are the cost drivers in these goods or services?
  • How can you buy that material most effectively?

are a little harder to answer.

You need to understand how to gather market intelligence, you need to understand how the goods are assembled (using virtual modelling environments like those offered by aPriori) or how the services are delivered, and you need to understand what innovative new technologies or processes could be applied to reduce those costs. And that’s more than you’re going to get from a purchase-price, open-book, or total absorption cost analysis. You have to start with a true Total Cost of Ownership calculation and then dive into each cost.
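
To make the “dive into each cost” step concrete, here’s a minimal sketch in Python. The cost components and figures are entirely hypothetical placeholders, stand-ins for the drivers you’d actually uncover in your own category.

```python
# A minimal, illustrative Total Cost of Ownership breakdown.
# All component names and figures are hypothetical placeholders --
# substitute the cost drivers you uncover for your own category.

unit_costs = {
    "purchase_price":    100.00,  # what the supplier invoices
    "freight_and_duty":    8.50,  # inbound logistics, tariffs
    "inventory_carrying":  4.20,  # warehousing, working capital
    "quality_and_rework":  2.75,  # inspection, returns, scrap
    "maintenance":         6.00,  # spares, service contracts
    "disposal":            1.10,  # end-of-life handling
}

tco = sum(unit_costs.values())
print(f"Total Cost of Ownership per unit: {tco:.2f}")

# "Diving into each cost" then means ranking the components
# and asking what drives each one.
for component, cost in sorted(unit_costs.items(), key=lambda kv: -kv[1]):
    print(f"  {component:20s} {cost:6.2f}  ({cost / tco:5.1%} of TCO)")
```

Ranking the components by their share of TCO tells you where market intelligence and process innovation are likely to pay off first.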


Spend Analysis III: Crosstabs Aren’t “Analysis”

Today’s post is from Eric Strovink of BIQ.

Pivot tables are great, no question about it. They’re also a pain in the neck to build, so any tool that builds a crosstab automatically is helpful. But crosstabs are where many “spend analysis” systems declare victory and stop.

Pivot tables are most useful when built in large clusters. Hundreds of them, for example, booked automatically down some dimension of interest (like Cost Center by Vendor, booked by Commodity). They’re also best when they’re created dynamically, inserted into existing Excel models, with the raw data readily available for secondary and tertiary analyses.
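
For readers who want to see what “booked automatically down a dimension of interest” looks like in practice, here is a rough sketch in Python with pandas; the column names and sample rows are assumptions for illustration, not BIQ’s actual schema or output.

```python
import pandas as pd

# Hypothetical A/P extract; the column names are illustrative assumptions.
spend = pd.DataFrame({
    "Commodity":   ["IT Hardware", "IT Hardware", "Office Supplies", "Office Supplies"],
    "Cost Center": ["CC-100", "CC-200", "CC-100", "CC-300"],
    "Vendor":      ["Acme", "Beta Corp", "Acme", "Gamma Inc"],
    "Amount":      [12500.0, 8300.0, 950.0, 1200.0],
})

# "Booked down a dimension of interest": one Cost Center x Vendor
# crosstab per Commodity, built automatically rather than by hand.
crosstabs = {
    commodity: group.pivot_table(index="Cost Center", columns="Vendor",
                                 values="Amount", aggfunc="sum", fill_value=0)
    for commodity, group in spend.groupby("Commodity")
}

for commodity, tab in crosstabs.items():
    print(f"\n== {commodity} ==")
    print(tab)
```

With a real dataset the loop produces hundreds of crosstabs in one pass, and the raw rows stay available for the secondary and tertiary analyses mentioned above.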

It’s also useful to see a breakdown of all dimensions by a single dimension — i.e., hundreds of thousands, even millions, of pivot table cells calculated automatically on every drill. For example, here’s MWBE (minority- and women-owned business enterprise) spend, broken down against each of 30,000 nodes, re-calculated on every drill.


[Screenshot: MWBE spend broken down against 30,000 dimension nodes]
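
As a rough illustration of the idea (not BIQ’s implementation), the sketch below recomputes a single-dimension breakdown, MWBE share per node, over whatever scope the current “drill” leaves in view; the flag, node labels, and amounts are made up.

```python
import pandas as pd

# Hypothetical transaction extract; the MWBE flag and node labels are
# placeholders standing in for a real dimension hierarchy.
txns = pd.DataFrame({
    "Node":   ["Facilities", "Facilities", "IT", "IT", "Marketing"],
    "MWBE":   [True, False, False, True, False],
    "Amount": [4000.0, 16000.0, 25000.0, 5000.0, 9000.0],
})

def mwbe_breakdown(df: pd.DataFrame) -> pd.DataFrame:
    """Spend and MWBE share for every node in the current drill scope."""
    total = df.groupby("Node")["Amount"].sum()
    mwbe = df[df["MWBE"]].groupby("Node")["Amount"].sum().reindex(total.index, fill_value=0)
    return pd.DataFrame({"Total": total, "MWBE": mwbe, "MWBE %": mwbe / total})

# Full dataset...
print(mwbe_breakdown(txns))
# ...and the same measure recomputed after a "drill" that narrows the scope.
print(mwbe_breakdown(txns[txns["Node"] != "Marketing"]))
```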

Not to belabor the point, but there’s a big difference between (1) dumping a single crosstab to an HTML page, and (2) inserting hundreds of pivot tables into an existing Excel model, or calculating 120,000 crosstab cells automatically on every drill. The former is interesting. The latter supports serious analysis.

Are pivot tables the most useful way to present multidimensional data? Often, they aren’t. The Mitchell Madison Group’s Commodity Spend Report books top GL accounts, top Cost Centers, and top Vendors by Commodity, with a monthly split of spend, showing share of total category spend at each display line. Is a static report like this one “analysis”? No, of course not. But in this case the multi-page “report” isn’t static at all. It was built with a simple extract from the dataset, inserted directly into a user-defined Excel model. The output is trivially alterable by the user, putting analysis power directly into his or her hands. For example, with a simple tweak the report could just as easily be booked by Vendor, showing Commodity, Cost Center, and so on — or adapted to an entirely different purpose.
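
Here’s a hedged approximation, in Python/pandas, of the kind of extract that feeds such a report: top vendors per commodity with a monthly split and share of category spend. The column names and figures are illustrative assumptions, and the layout only loosely follows the Mitchell Madison Group’s actual template.

```python
import pandas as pd

# Hypothetical A/P extract with a posting month; values are illustrative.
ap = pd.DataFrame({
    "Commodity": ["MRO", "MRO", "MRO", "MRO", "Telecom", "Telecom"],
    "Vendor":    ["Acme", "Acme", "Beta Corp", "Gamma Inc", "TelcoOne", "TelcoOne"],
    "Month":     ["2009-11", "2009-12", "2009-12", "2009-11", "2009-11", "2009-12"],
    "Amount":    [5000.0, 7000.0, 3000.0, 1500.0, 9000.0, 9500.0],
})

for commodity, group in ap.groupby("Commodity"):
    report = group.pivot_table(index="Vendor", columns="Month",
                               values="Amount", aggfunc="sum", fill_value=0)
    report["Total"] = report.sum(axis=1)
    report["Share of Commodity"] = report["Total"] / report["Total"].sum()
    # Re-booking by Vendor instead (showing Commodity per line) is just a
    # matter of swapping the groupby and index columns.
    print(f"\n== {commodity}: top vendors by monthly spend ==")
    print(report.sort_values("Total", ascending=False))
```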

What about matching externally-derived benchmarks to internal data? Is it useful to force-fit generic commodity benchmark data into an A/P dataset, as some spend analysis vendors try to do, and pretend that actionable information will result? Or is it more productive to load relevant and specific benchmark data into a flexible Excel model that you control, and into which you insert actuals from the dataset? The former approach impresses pie-in-the-sky analysts and bloggers. The latter approach produces concrete multi-page analyses, like this, that demonstrate how “best price” charged for an SKU, per contract, might not be “best price” after all (who ever heard of a PC whose price was flat for 12 months?)1


[Screenshot: multi-page SKU price vs. benchmark analysis]
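
To illustrate the “flat price for 12 months” red flag, here’s a small sketch that loads a benchmark curve you control and compares it against invoiced actuals for a single SKU; all numbers are invented for illustration.

```python
import pandas as pd

# Hypothetical SKU price check: contract "best price" actuals vs. a
# benchmark curve you load yourself. Figures are illustrative only.
months = pd.period_range("2009-01", periods=12, freq="M").astype(str)
prices = pd.DataFrame({
    "Month":     months,
    "Actual":    [899.0] * 12,                       # flat all year -- suspicious for a PC
    "Benchmark": [899 - 15 * i for i in range(12)],  # market price drifting down
})

prices["Overcharge"] = prices["Actual"] - prices["Benchmark"]
flagged = prices[prices["Overcharge"] > 0]

print(prices.to_string(index=False))
print(f"\nPotential recoverable overcharge per unit: {flagged['Overcharge'].sum():.2f}")
```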

Next installment: User-Defined Measures

Previous installment: Why Data Analysis is Avoided

1 This example is based on disguised but directionally accurate data. A similar analysis on actual data identified hundreds of thousands of dollars in recoverable overcharges.
