Spend Analysis III: Crosstabs Aren’t “Analysis”

Today’s post is from Eric Strovink of BIQ.

Pivot tables are great, no question about it. They’re also a pain in the neck to build, so any tool that builds a crosstab automatically is helpful. But crosstabs are where many “spend analysis” systems declare victory and stop.

Pivot tables are most useful when built in large clusters: hundreds of them, booked automatically down some dimension of interest (Cost Center by Vendor, booked by Commodity, for example). They’re also at their best when created dynamically and inserted into existing Excel models, with the raw data readily available for secondary and tertiary analyses.
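For readers who want to see the mechanics, here is a rough sketch of “booking” one pivot per commodity into a workbook, written in Python with pandas. The column names (Commodity, CostCenter, Vendor, Amount) are placeholders for illustration, not any particular tool’s schema.

```python
import pandas as pd

def book_pivots_by_commodity(transactions: pd.DataFrame, out_path: str) -> None:
    """Write one Cost Center x Vendor pivot per Commodity, each on its own sheet."""
    with pd.ExcelWriter(out_path, engine="openpyxl") as writer:
        for commodity, rows in transactions.groupby("Commodity"):
            pivot = pd.pivot_table(
                rows,
                index="CostCenter",
                columns="Vendor",
                values="Amount",
                aggfunc="sum",
                fill_value=0,
                margins=True,  # row and column totals
            )
            # Excel caps sheet names at 31 characters
            pivot.to_excel(writer, sheet_name=str(commodity)[:31])
```

The point is less the particular library than the pattern: one loop, one dimension of interest, and hundreds of pivots land in a workbook automatically.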

It’s also useful to see a breakdown of all dimensions against a single dimension: hundreds of thousands, even millions, of pivot table cells calculated automatically on every drill. For example, here’s MWBE spend broken down against each of 30,000 nodes, recalculated on every drill.


[Screenshot: MWBE spend broken down against each of 30,000 nodes]
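A rough sketch of the underlying computation, again in pandas: sum one flagged measure (MWBE spend here) against every node of every dimension and take its share of the total. The column names, the dimension list, and the boolean MWBE flag are assumptions for illustration.

```python
import pandas as pd

def breakdown_against_flag(txns: pd.DataFrame, dims: list[str],
                           flag_col: str = "MWBE") -> pd.DataFrame:
    """For every node of every dimension, compute total spend, flagged spend, and share."""
    # Pre-split spend into a flagged amount (assumes flag_col is boolean)
    work = txns.assign(flag_amount=txns["Amount"].where(txns[flag_col], 0.0))
    pieces = []
    for dim in dims:
        g = work.groupby(dim)[["Amount", "flag_amount"]].sum()
        g["flag_share"] = g["flag_amount"] / g["Amount"]
        g = g.reset_index().rename(columns={dim: "node"})
        g.insert(0, "dimension", dim)
        pieces.append(g)
    return pd.concat(pieces, ignore_index=True)

# e.g. breakdown_against_flag(txns, ["Vendor", "Commodity", "CostCenter", "GLAccount"])
```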

Not to belabor the point, but there’s a big difference between (1) dumping a single crosstab to an HTML page, and (2) inserting hundreds of pivot tables into an existing Excel model, or calculating 120,000 crosstab cells automatically on every drill. The former is interesting. The latter supports serious analysis.

Are pivot tables the most useful way to present multidimensional data? Often, they aren’t. The Mitchell Madison Group’s Commodity Spend Report books top GL accounts, top Cost Centers, and top Vendors by Commodity, with a monthly split of spend, showing share of total category spend at each display line. Is a static report like this one “analysis”? No, of course not. But in this case the multi-page “report” isn’t static at all. It was built with a simple extract from the dataset, inserted directly into a user-defined Excel model. The output is trivially alterable by the user, putting analysis power directly into his or her hands. For example, with a simple tweak the report could just as easily be booked by Vendor, showing Commodity, Cost Center, and so on — or adapted to an entirely different purpose.
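As a concrete illustration of the “simple extract” idea, here is a pandas sketch of one block of such a report: the top vendors for a commodity, split by month, with each line’s share of total category spend. The names (Commodity, Vendor, Month, Amount) and the layout are assumptions for the example, not the actual MMG template.

```python
import pandas as pd

def top_n_with_monthly_split(txns: pd.DataFrame, commodity: str,
                             by: str = "Vendor", n: int = 10) -> pd.DataFrame:
    """Top-n rows of one dimension for a commodity: monthly split plus share of category spend."""
    rows = txns[txns["Commodity"] == commodity]
    block = pd.pivot_table(rows, index=by, columns="Month",
                           values="Amount", aggfunc="sum", fill_value=0)
    block["Total"] = block.sum(axis=1)
    block["Share"] = block["Total"] / block["Total"].sum()
    return block.sort_values("Total", ascending=False).head(n)

# Swapping by="GLAccount" or by="CostCenter", or filtering on a different
# column entirely, re-books the same extract -- the kind of one-line tweak
# described above.
```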

What about matching externally derived benchmarks to internal data? Is it useful to force-fit generic commodity benchmark data into an A/P dataset, as some spend analysis vendors try to do, and pretend that actionable information will result? Or is it more productive to load relevant and specific benchmark data into a flexible Excel model that you control, and into which you insert actuals from the dataset? The former approach impresses pie-in-the-sky analysts and bloggers. The latter approach produces concrete multi-page analyses, like this, that demonstrate how “best price” charged for an SKU, per contract, might not be “best price” after all (who ever heard of a PC whose price was flat for 12 months?).¹


[Screenshot: per-SKU contract “best price” vs. actuals analysis]
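A sketch of the actuals-versus-benchmark comparison that sits behind an analysis like this one, again in pandas. The tables and columns (SKU, Month, Qty, UnitPrice, ContractPrice) are illustrative assumptions; the point is that the benchmark data lives in a structure you control, and the actuals are merged into it.

```python
import pandas as pd

def price_variance(invoices: pd.DataFrame, contract: pd.DataFrame) -> pd.DataFrame:
    """Flag months where the price actually paid per SKU exceeds the contracted 'best price'."""
    # invoices: SKU, Month, Qty, UnitPrice    contract: SKU, ContractPrice
    merged = invoices.merge(contract, on="SKU", how="left")
    merged["Overcharge"] = (
        (merged["UnitPrice"] - merged["ContractPrice"]).clip(lower=0) * merged["Qty"]
    )
    return (merged.groupby(["SKU", "Month"])
                  .agg(avg_paid=("UnitPrice", "mean"),
                       contract_price=("ContractPrice", "first"),
                       overcharge=("Overcharge", "sum"))
                  .reset_index())
```

Charting avg_paid against contract_price over twelve months makes the “flat price” question in the footnote easy to answer at a glance.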

Next installment: User-Defined Measures

Previous installment: Why Data Analysis is Avoided

¹ This example is based on disguised but directionally accurate data. A similar analysis on actual data identified hundreds of thousands of dollars in recoverable overcharges.
