
So You Want To Do Spend Analysis?

Now that you’ve read my pieces on The Future of Sourcing and Spend Analysis Today you know that spend analysis is key to your continued success when it comes to year-over-year savings. You want to get on with it, but you’re not sure where, or how, to start. In this post, I’ll attempt to answer that question by providing you with a step-by-step process you can use to get your spend analysis effort under way and keep more of those corporate dollars in the corporate coffers, where they belong.

Before I continue, I’d like to point out that this blog isn’t your only source of great information on spend analysis (even though it does have over 20 posts on the subject). There’s also the Spend Analysis and Opportunity Assessment: There’s Gold in Them There Hills … Of Data wiki-paper over on the eSourcing Wiki which, in full disclosure, counts yours truly and Eric Strovink (of BIQ) among its co-authors. The bibliography it maintains also, in the spirit of openness, has links to public white papers by Ketera and Zycus, among others.

Step 1: Locate Your Data
The first thing you need to do is figure out where all of your procurement data resides. This is harder than you think, even if you have an ERP system, because chances are that a significant quantity of the data you need is NOT in the ERP system. Some of it will be in the ERP system, some of it will be in other accounting systems (most large organizations have more than one ERP system, or at least more than one instance – a lot more in some cases), some in the A/P system, some in your P-card systems, some in your T&E systems, etc. … you get the point. If you don’t know where your data is, you can’t extract what you need – and without the right data, your effort will fail.

Step 2: Adopt a Taxonomy
Once you have located your data, you need to figure out how you are going to integrate it. The commodity structure that is going to be the foundation of your spend analysis efforts should be based on a standard taxonomy. This can be a universal standard such as UNSPSC or your own custom taxonomy (there is considerable debate as to which is preferable, but there is general agreement that the taxonomy should be modified to your particular considerations and needs, not the other way around). As long as you can easily map all of the data in your disparate systems to the taxonomy, and as long as the taxonomy doesn’t interfere with the proper grouping of spend for sourcing and procurement purposes (which is sometimes the case with unmodified UNSPSC), that’s what counts.
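
To make this concrete, here is a minimal sketch of a custom taxonomy in Python. The category names and UNSPSC-style segment codes are illustrative only, not an official mapping:

```python
# A minimal sketch of a custom two-level taxonomy.
# Category names and UNSPSC-style segment codes are illustrative only.
TAXONOMY = {
    "44000000": {"name": "Office Supplies",
                 "children": ["Paper", "Writing Instruments", "Toner"]},
    "80000000": {"name": "Contract Labor",
                 "children": ["IT Contractors", "Administrative Temps"]},
}

def bucket_name(code: str) -> str:
    """Resolve a taxonomy code to its human-readable category name."""
    return TAXONOMY.get(code, {"name": "Unmapped"})["name"]

print(bucket_name("44000000"))  # Office Supplies
print(bucket_name("99999999"))  # Unmapped
```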

[Note: Steps 3, 4, and 5 — and usually most of step 2 as well — are often performed by Spend Analysis vendors on your behalf. You would be wise not to overpay for those services; and if you do avail yourself of them, you should understand what is being done, even if you don’t do it yourself.]

Step 3: Centralize the Data in a Single Repository

Step 3A: Define a Master Transaction Record
Across all the various systems that you are integrating into your spend cube, there are some data fields that are similar, and some that have no equivalents. You’ll need to define a “master” transaction format that can accommodate the data from all of your disparate systems. It will have some fields to which most of the feeds will contribute; but it will have other fields to which only one feed contributes. Thus, there will be many more fields in your “master” record than in any of the individual transaction sources that you are including. Note that when you combine “like” fields together from disparate systems, you have to ensure that their values are unique — which means concatenating a system ID to the field. For example, GL code #37 in System A might mean something entirely different in System B; thus, the System A GL code should be changed to “SysA-37”, and the System B GL code should be changed to “SysB-37”. That way, the two (different) 37 codes won’t be erroneously grouped together.
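
Here is a minimal sketch of that disambiguation step in Python; the field names and system IDs are purely illustrative:

```python
# A minimal sketch of disambiguating "like" fields across source systems.
# Field names ("gl_code", "amount") and system IDs are illustrative.

def to_master_record(txn: dict, system_id: str) -> dict:
    """Map a raw transaction into the master format, qualifying shared codes."""
    master = dict(txn)
    # Qualify the GL code so SysA's "37" and SysB's "37" stay distinct.
    master["gl_code"] = f"{system_id}-{txn['gl_code']}"
    return master

sys_a_txn = {"gl_code": "37", "amount": 1250.00}
sys_b_txn = {"gl_code": "37", "amount": 87.50}

print(to_master_record(sys_a_txn, "SysA")["gl_code"])  # SysA-37
print(to_master_record(sys_b_txn, "SysB")["gl_code"])  # SysB-37
```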

Note that this effort requires a sophisticated data transformation tool — the “T” of the (in)famous “ETL” that everyone always talks about. The transformation tool should be capable of column (field) manipulation, computation of new columns as a function of existing columns, addition and deletion of columns, and so on. And the tool needs to produce a script that can be re-run again and again, since this transformation will need to occur on every data refresh.

Note also that the Master Transaction Record will contain an interesting new field that isn’t present in any of the data feeds — a “source” field. This will enable you to dimensionalize (slice) the data by a single system, or by all the source systems, or by any subset of the systems.
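
Putting 3A together, here is a minimal sketch of a re-runnable transformation script. It uses pandas purely as an assumed stand-in for whatever ETL tool you actually have, and the file and column names are illustrative:

```python
# A minimal, re-runnable transform sketch. pandas is an assumption here;
# any scriptable ETL tool would do. Column and file names are illustrative.
import pandas as pd

def transform_feed(path: str, system_id: str) -> pd.DataFrame:
    """Load one feed and reshape it into the master transaction format."""
    df = pd.read_csv(path)
    df["gl_code"] = system_id + "-" + df["gl_code"].astype(str)
    df["source"] = system_id  # the new field: which system this row came from
    return df

def refresh(feeds: dict) -> pd.DataFrame:
    """Re-run the same script on every data refresh to rebuild the master set."""
    return pd.concat(
        [transform_feed(path, sys_id) for sys_id, path in feeds.items()],
        ignore_index=True,
    )

# master = refresh({"SysA": "sysa_export.csv", "SysB": "sysb_export.csv"})
# master[master["source"] == "SysA"]   # slice the cube by a single source system
```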

Step 3B: Collect Related Information
This is typically a Vendor Master, or a GL description table, or a Cost Center breakdown, either maintained by one or more of the ERP systems, or maintained independently somewhere else in the enterprise. Just as with the Master Transaction Record, duplicate tables from multiple sources must be merged into a “master” table using the same methodology as in 3A.
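
As a quick sketch, here is how two hypothetical vendor masters might be merged into a single table, qualifying the IDs exactly as in 3A:

```python
# A sketch of merging vendor masters from two systems into one table,
# qualifying vendor IDs the same way as the transaction GL codes in 3A.
import pandas as pd

sys_a_vendors = pd.DataFrame({"vendor_id": ["101", "102"],
                              "vendor_name": ["Acme Corp", "Globex"]})
sys_b_vendors = pd.DataFrame({"vendor_id": ["101"],
                              "vendor_name": ["Acme Corporation"]})

sys_a_vendors["vendor_id"] = "SysA-" + sys_a_vendors["vendor_id"]
sys_b_vendors["vendor_id"] = "SysB-" + sys_b_vendors["vendor_id"]

master_vendors = pd.concat([sys_a_vendors, sys_b_vendors], ignore_index=True)
print(master_vendors)   # SysA-101 and SysB-101 remain distinct rows
```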

Step 3C: Build the Initial Cube
In this step, all of the data that you’ve assembled and translated is loaded into the Spend Analysis system. Files containing related information are related to the Master Transaction Record. One or more groups of Master Transaction Records are loaded into the system. Then, data dimensions (columns) within your data are defined, and hierarchies (implicit and explicit) created. Measures (quantities that are rolled up in the cube) are also identified. Finally, the cube is put together in some initial fashion (this varies by spend analysis vendor; sometimes this initial cube is available to you immediately for preliminary analysis, sometimes it is not).
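
To make the cube concept concrete, here is a minimal sketch in which dimensions are columns and measures are the quantities rolled up, again with pandas standing in for a real spend analysis engine:

```python
# A minimal "cube" sketch: dimensions are columns, measures are rolled-up
# quantities. Real spend analysis tools do far more; the idea is the same.
import pandas as pd

txns = pd.DataFrame({
    "source":  ["SysA", "SysA", "SysB"],
    "gl_code": ["SysA-37", "SysA-42", "SysB-37"],
    "vendor":  ["Acme Corp", "Globex", "Acme Corporation"],
    "amount":  [1250.00, 430.00, 87.50],     # the measure to roll up
})

cube = txns.pivot_table(index="vendor", columns="source",
                        values="amount", aggfunc="sum", fill_value=0)
print(cube)
```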

The tools made available to users for step 3 processes vary; it is fairly unusual for all of them to be accessible to business users, although that is an absolute requirement for power users (see Step 7).

Step 4: Family the Data
If there are multiple ERP systems being combined, then the “GL” column in the Master Transaction Record (as well as others) will contain multiple instantiations of pretty much the same thing — for example, several different varieties of “office supplies” or “contract labor” and so on. It’s necessary, therefore, to create an “uber” GL — a grouping of like GL codes into logical categories, so as to avoid redundancies later. Similarly, the Vendor dimension must also be “familied” so that the (typically many) entries made across all the ERP systems for a particular vendor are grouped together. Familying should be done for any dimension that needs it; cost center is another candidate.
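
A minimal sketch of what such family tables might look like, with illustrative groupings:

```python
# A sketch of "familying": group like GL codes and vendor entries from
# different systems under one logical category. Groupings are illustrative.
GL_FAMILY = {
    "SysA-37": "Office Supplies",
    "SysB-37": "Office Supplies",   # different code, same logical family
    "SysA-42": "Contract Labor",
    "SysB-88": "Contract Labor",
}

VENDOR_FAMILY = {
    "Acme Corp": "Acme",
    "Acme Corporation": "Acme",     # many entries, one real vendor
    "ACME CO.": "Acme",
}

def family(value: str, table: dict) -> str:
    """Return the family for a raw value, or the value itself if unmapped."""
    return table.get(value, value)

print(family("SysB-37", GL_FAMILY))               # Office Supplies
print(family("Acme Corporation", VENDOR_FAMILY))  # Acme
```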

Step 5: Map the Data
Once the key dimensions are familied, it’s time to map spending to the taxonomy you chose in Step 2. The result of the mapping phase is a set of “mapping rules” which constitute a knowledge base for how to assign spending to the taxonomy. Once the mapping rules are created, when new spend is added to the dataset, the rules are applied to each new transaction, and that transaction is moved to the appropriate taxonomy bucket automatically. Spend Analysis vendors vary on their approaches to creating the mapping rules; some sport automated rules generation systems commingled with manual correction, and perform this service for their customers; others allow customers or third parties to build their own rules. In any event, the end result is a spend cube in which it is finally possible to determine how much was really spent in a particular commodity area (a key piece of information that is not available from ERP systems, and certainly not available when there are multiple ERP systems in the enterprise).
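
Here is a minimal sketch of what such a rules knowledge base might look like; the rules themselves are illustrative, not from any vendor's rulebook:

```python
# A sketch of rule-based mapping: each rule is a predicate plus a taxonomy
# bucket, applied automatically to every new transaction. Rules are
# illustrative only.
RULES = [
    (lambda t: t["gl_family"] == "Office Supplies",        "Office Supplies"),
    (lambda t: "temp" in t.get("description", "").lower(), "Contract Labor"),
]

def map_transaction(txn: dict, default: str = "Unmapped") -> str:
    """Apply the first matching rule; unmatched spend lands in a review bucket."""
    for predicate, bucket in RULES:
        if predicate(txn):
            return bucket
    return default

new_txn = {"gl_family": "Office Supplies", "description": "toner cartridges"}
print(map_transaction(new_txn))   # Office Supplies
```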

Step 6: Pick the Low Hanging Fruit
These days, everyone wants a quick win – and there’s no quicker way to get a win than to drill around a spend cube for the very first time and look for obvious indications of trouble. These include incorrect vendor density (too many suppliers, or too few suppliers for a particular commodity); high spend rates for certain commodities (such as office supplies) compared to similar companies; and so-called “bypass” spending — that is, spending that is not with approved vendors, or that is clearly off-contract. Some Spend Analysis vendors and consultants provide standard reports to assist with this process.
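
As a sketch of one such check, here is how you might compute vendor density per commodity on an illustrative data set:

```python
# A sketch of one quick win: vendor density per commodity. Too many (or too
# few) suppliers in a category is an obvious flag to drill into.
import pandas as pd

spend = pd.DataFrame({
    "commodity": ["Office Supplies"] * 4 + ["Contract Labor"],
    "vendor":    ["Acme", "Globex", "Initech", "Umbrella", "Acme"],
    "amount":    [1250.0, 430.0, 87.5, 610.0, 9800.0],
})

density = spend.groupby("commodity").agg(
    vendors=("vendor", "nunique"),
    total_spend=("amount", "sum"),
)
print(density.sort_values("vendors", ascending=False))
```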

Step 7: Acquire a Real Spend Analysis Tool and Let Your Power Users Loose!
As I pointed out in Spend Analysis Today, a real spend analysis tool is one that truly gives the user the ability to “play” with the underlying OLAP database. For starters, a user should be able to define their own cubes that consist of any dimensions they want, re-order and re-structure the dimensions of the cube at any time, dynamically create their own reports, analyze multiple dimensions simultaneously using multidimensional extracts, define and re-define the classification rules dynamically, and populate their own models with data from the spend cube.
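
As a sketch of that flexibility, again with pandas standing in for a real tool on illustrative data:

```python
# A sketch of what "playing" with the cube means: re-ordering and
# re-structuring dimensions on the fly instead of waiting for a canned report.
import pandas as pd

spend = pd.DataFrame({
    "commodity": ["Office Supplies", "Office Supplies", "Contract Labor"],
    "vendor":    ["Acme", "Globex", "Acme"],
    "amount":    [1250.0, 430.0, 9800.0],
})

# Same data, two cube structures -- the user decides, not the vendor.
by_commodity = spend.pivot_table(index="commodity", columns="vendor",
                                 values="amount", aggfunc="sum", fill_value=0)
by_vendor = spend.pivot_table(index="vendor", columns="commodity",
                              values="amount", aggfunc="sum", fill_value=0)
print(by_commodity, by_vendor, sep="\n\n")
```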

Step X: Get a Good Consultant
At any point during the process, if you are unsure about what to do, you should find a consultant who specializes in spend analysis to help you out. This doesn’t mean calling up your favorite Big-5 consulting firm and asking them to send over their best guy. Rather, it means seeking out the boutiques who do this for a living, day-in and day-out, and have truly mastered the process (like The Buying Triangle). The real masters will be able to analyze your spend, compare it to current contracts and benchmarks, and find money owed to you – money you can recover right now!

Up Next: How To Get The Most From Your Spend Analysis System


It’s your data. Use it!