Monthly Archives: September 2007

The 5th International Supply Chain Management Symposium is Almost Upon Us

As I indicated back in March and again in August, the 5th PMAC / MeRC / ORNEC International Supply Chain Management Symposium is coming up next month in Toronto, Ontario. With keynotes from David C. Swiggum from IBM, Jim Mikell from Everest Group, Kevin Costello from Ariba, and Robert C. Johnson from Purolator, as well as talks from Christopher J. Carter from A.T. Kearney, Mark Gallant from Accenture, Jon Hansen from e-Procure, John Keogh from Hewlett-Packard, and Nicholas Selersen from KPMG, this conference is quickly becoming the little conference that could. If you’re in Canada or the northern US (or even as far away as Brazil, India, and Australia – just look at the speaker list), I would strongly urge you to check this event out. It’s sad to say, but there aren’t a lot of good Supply Chain / Sourcing / Procurement conferences north of the border, so it’s important to take advantage of the few good ones we have!

There’s More to Ketera than Connect

The big news this month with Ketera was their recent Connect conference in California, but back in July they put out a good white paper, Supplier Catalog Management: Avoiding an Expensive SAP SRM Migration, on supplier enablement.

The white paper starts by noting that SAP SRM customers face a real dilemma with regards to their current Requisite implementation (which is no longer supported): either they migrate to CCM, which will in turn require another migration when the customer upgrades to SRM 6.X down the line, or they migrate to MDM Catalog, which is young, buggy, unproven in large deployments, and carries a non-trivial migration cost. However, there is a third option: migrate to a third-party solution. Of course, the solution proposed is Ketera’s Supplier Content Management (KSCM) solution, but the central idea is valuable: why rely on an inefficient and costly solution with a poor migration path when you can instead use an efficient, cost-effective, and extensible third-party solution that can meet your needs?

The white paper also outlines what such a solution should look like. It should be on-demand; streamline the content/catalog development and update process; allow suppliers to easily upload, validate, and manage catalogs and related content using tools they are familiar with (such as MS Excel templates); enable multi-party workflows that bring together suppliers, buyers, and external service providers; and make all catalogs immediately available for use by SAP SRM once they are created.

Furthermore, the solution should support at least two deployment modes: Supplier Managed / Vendor Hosted and Supplier Managed / Client Hosted. In both cases, the supplier provides all the product data and is responsible for keeping it up to date; but in the first case the vendor manages the implementation and IT support and integrates into SAP SRM via punch-out, while in the second case the buyer manages the implementation and the buyer’s IT team handles the bulk of support. And, if the supplier or buyer wishes, the vendor should also be capable of managing the catalog on behalf of the supplier.

Now, I know this isn’t as glamorous as the financial supply chain solutions discussed by Jason Busch over on Spend Matters, as innovative as the cost-baselining and modeling solutions I suggested back in a July post, or as appealing to a CFO – but it’s important nonetheless, since the more efficient a procurement professional is, the more time they have to seek out, find, and capture true savings.


When it comes to data migration, there’s no need to be a sap.

So You Want To Do Spend Analysis?

Now that you’ve read my pieces on The Future of Sourcing and Spend Analysis Today, you know that spend analysis is key to your continued success when it comes to year-over-year savings. You want to get on with it, but you’re not sure where, or how, to start. In this post, I’ll attempt to answer that question by providing you with a step-by-step process you can use to get your spend analysis effort under way and keep more of those corporate dollars in the corporate coffers, where they belong.

Before I continue, I’d like to point out that this blog isn’t your only source of great information on spend analysis (even though it does have over 20 posts on the subject). There’s also the Spend Analysis and Opportunity Assessment: There’s Gold in Them There Hills … Of Data wiki-paper over on the eSourcing Wiki (which, in full disclosure, counts yours truly and Eric Strovink of BIQ among its co-authors), along with the bibliography it maintains (which, in the spirit of openness, links to public white papers by Ketera and Zycus, among others).

Step 1: Locate Your Data
The first thing you need to do is figure out where all of your procurement data resides. This is harder than you think, even if you have an ERP system, because chances are that a significant quantity of the data you need is NOT in the ERP system. Some of it will be there, some of it will be in other accounting systems (most large organizations have more than one ERP system, or at least more than one instance, and in some cases a lot more), some in the AP system, some in your P-card systems, some in your T&E systems, etc. … you get the point. If you don’t know where your data is, you can’t extract what you need, and without the right data, your effort will fail.

Step 2: Adopt a Taxonomy
Once you have located your data, you need to figure out how you are going to integrate it. The commodity structure that is going to be the foundation of your spend analysis efforts should be based on a standard taxonomy. This can be a universal standard such as UNSPSC or your own custom taxonomy (there is considerable debate as to which is preferable, but there is general agreement that the taxonomy should be modified to your particular considerations and needs, not the other way around). What counts is that you can easily map all of the data in your disparate systems to the taxonomy, and that the taxonomy doesn’t interfere with the proper grouping of spend for sourcing and procurement purposes (which is sometimes a problem with unmodified UNSPSC).
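
To make the idea concrete, here’s a minimal sketch, in Python, of what a custom taxonomy might look like as a simple data structure. Every category name below is an invented placeholder, not a recommendation:

```python
# A custom commodity taxonomy held as a nested dict. All category names
# here are hypothetical; your own taxonomy (or a tailored UNSPSC subset)
# would replace them.
taxonomy = {
    "Indirect": {
        "Office Supplies": ["Paper", "Toner", "Writing Instruments"],
        "IT": ["Hardware", "Software", "Telecom"],
    },
    "Direct": {
        "Raw Materials": ["Resins", "Metals"],
        "Packaging": ["Corrugate", "Film"],
    },
}

def leaf_categories(tree, path=()):
    """Yield every leaf category as a full path,
    e.g. ('Indirect', 'IT', 'Software')."""
    for key, value in tree.items():
        if isinstance(value, dict):
            yield from leaf_categories(value, path + (key,))
        else:
            for leaf in value:
                yield path + (key, leaf)

# Every transaction must ultimately map to one of these leaves.
valid_buckets = set(leaf_categories(taxonomy))
print(len(valid_buckets), "leaf categories")
```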

[Note: Steps 3, 4, and 5 — and usually most of step 2 as well — are often performed by Spend Analysis vendors on your behalf. You would be wise not to overpay for those services; and if you do avail yourself of them, you should understand what is being done, even if you don’t do it yourself.]

Step 3: Centralize the Data in a Single Repository

Step 3A: Define a Master Transaction Record
Across all the various systems that you are integrating into your spend cube, there are some data fields that are similar, and some that have no equivalents. You’ll need to define a “master” transaction format that can accommodate the data from all of your disparate systems. It will have some fields to which most of the feeds contribute, and other fields to which only one feed contributes. Thus, there will be many more fields in your “master” record than in any of the individual transaction sources that you are including. Note that when you combine “like” fields from disparate systems, you have to ensure that their values are unique, which means concatenating a system ID to the field. For example, GL code #37 in System A might mean something entirely different in System B; thus, the System A GL code should be changed to “SysA-37”, and the System B GL code to “SysB-37”. That way, the two (different) 37 codes won’t be erroneously grouped together.
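
Here’s a minimal Python sketch of that namespacing idea; the field names and feeds are invented for illustration, and your master layout will certainly differ:

```python
def to_master(txn, system_id):
    """Map one source transaction onto a hypothetical master layout."""
    return {
        # Namespace the GL code so "37" from System A and "37" from
        # System B can never be grouped together by accident.
        "gl_code": f"{system_id}-{txn['gl']}",
        "vendor": txn.get("vendor"),
        "amount": txn["amount"],
        # The master record also carries a field no feed has: its source.
        "source": system_id,
        # Fields only one feed supplies stay None for the others.
        "cost_center": txn.get("cost_center"),
    }

feed_a = [{"gl": "37", "vendor": "Acme", "amount": 120.00, "cost_center": "CC9"}]
feed_b = [{"gl": "37", "vendor": "ACME Corp.", "amount": 75.50}]

master = [to_master(t, "SysA") for t in feed_a] + \
         [to_master(t, "SysB") for t in feed_b]
# master[0]["gl_code"] == "SysA-37", master[1]["gl_code"] == "SysB-37"
```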

Note that this effort requires a sophisticated “data translation” tool — the “T” of the (in)famous “ETL” that everyone always talks about. The data translation tool should be capable of column (field) manipulation, computation of new columns as a function of existing columns, addition and deletion of columns, and so on. And, the translation tool needs to produce a script that can be re-run again and again, since this translation will need to occur on every data refresh.
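
As a rough illustration of a re-runnable translation step, here’s a sketch using Python and pandas; the source column names are assumptions, not a standard:

```python
import pandas as pd

def translate(raw: pd.DataFrame, system_id: str) -> pd.DataFrame:
    """Re-runnable translation step: the same column manipulations
    are applied on every refresh of this feed."""
    # Rename (manipulate) columns to the master layout.
    out = raw.rename(columns={"VENDOR_NM": "vendor", "GL_CD": "gl_code"})
    out["gl_code"] = system_id + "-" + out["gl_code"].astype(str)
    # Compute a new column as a function of existing columns.
    out["amount"] = out["qty"] * out["unit_price"]
    out["source"] = system_id
    # Delete columns the master record doesn't carry.
    return out.drop(columns=["qty", "unit_price"])

raw = pd.DataFrame({"VENDOR_NM": ["Acme"], "GL_CD": [37],
                    "qty": [10], "unit_price": [12.0]})
print(translate(raw, "SysA"))
```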

Note also that the Master Transaction Record will contain an interesting new field that isn’t present in any of the data feeds — a “source” field. This will enable you to dimensionalize (slice) the data by a single system, or by all the source systems, or by any subset of the systems.

Step 3B: Collect Related Information
This is typically a Vendor Master, or a GL description table, or a Cost Center breakdown, either maintained by one or more of the ERP systems, or maintained independently somewhere else in the enterprise. Just as with the Master Transaction Record, duplicate tables from multiple sources must be merged into a “master” table using the same methodology as in 3A.
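
A minimal sketch of that merge, again in Python/pandas with hypothetical vendor masters from two ERP instances:

```python
import pandas as pd

# Invented vendor master tables from two systems.
vm_a = pd.DataFrame({"vendor_id": ["V1", "V2"], "name": ["Acme", "Globex"]})
vm_b = pd.DataFrame({"vendor_id": ["V1"], "name": ["ACME Corporation"]})

def namespace(table: pd.DataFrame, system_id: str) -> pd.DataFrame:
    """Same methodology as 3A: namespace the keys, tag the source."""
    out = table.copy()
    out["vendor_id"] = system_id + "-" + out["vendor_id"]
    out["source"] = system_id
    return out

# Stack the namespaced tables into one "master" vendor table.
vendor_master = pd.concat(
    [namespace(vm_a, "SysA"), namespace(vm_b, "SysB")],
    ignore_index=True,
)
print(vendor_master)
```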

Step 3C: Build the Initial Cube
In this step, all of the data that you’ve assembled and translated is loaded into the Spend Analysis system. Files containing related information are linked to the Master Transaction Record. One or more groups of Master Transaction Records are loaded into the system. Then, data dimensions (columns) within your data are defined, and hierarchies (implicit and explicit) are created. Measures (quantities that are rolled up in the cube) are also identified. Finally, the cube is put together in some initial fashion (this varies by spend analysis vendor; sometimes this initial cube is available to you immediately for preliminary analysis, sometimes it is not).
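
To give a rough feel for what “dimensions” and “measures” mean in practice, here’s a toy cube built with a pandas pivot table; this is a stand-in for a real spend analysis engine, with invented data:

```python
import pandas as pd

txns = pd.DataFrame({
    "commodity": ["Office Supplies", "Office Supplies", "IT"],
    "vendor":    ["Acme", "Globex", "Initech"],
    "source":    ["SysA", "SysB", "SysA"],
    "amount":    [120.0, 75.5, 9800.0],
})

# Dimensions become index/column groupings; measures are the quantities
# rolled up -- here, total spend and transaction count.
cube = pd.pivot_table(
    txns,
    index=["commodity", "vendor"],   # dimension hierarchy
    columns="source",                # slice by source system
    values="amount",
    aggfunc=["sum", "count"],
    fill_value=0,
)
print(cube)
```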

The tools made available to users for step 3 processes vary; it is fairly unusual for all of them to be accessible to business users, although that is an absolute requirement for power users (see Step 7).

Step 4: Family the Data
If there are multiple ERP systems being combined, then the “GL” column in the Master Transaction Record (as well as others) will contain multiple instantiations of pretty much the same thing — for example, several different varieties of “office supplies” or “contract labor” and so on. It’s necessary, therefore, to create an “uber” GL — a grouping of like GL codes into logical categories, so as to avoid redundancies later. Similarly, the Vendor dimension must also be “familied” so that the (typically many) entries made across all the ERP systems for a particular vendor are grouped together. Familying should be done for any dimension that needs it; cost center is another candidate.
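
A toy illustration of familying, with an entirely hand-built (and hypothetical) family map; real tools mix automated matching with manual review:

```python
# Collapse the many raw entries for one real-world vendor (or GL code)
# into a single family. The mappings below are invented.
family_map = {
    "SysA-Acme": "Acme Corp",
    "SysA-Acme Corp.": "Acme Corp",
    "SysB-ACME Corporation": "Acme Corp",
    "SysA-Globex": "Globex",
}

def family_of(raw_vendor: str) -> str:
    # Unmapped entries fall through unchanged, flagged for later review.
    return family_map.get(raw_vendor, raw_vendor)

assert family_of("SysB-ACME Corporation") == "Acme Corp"
```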

Step 5: Map the Data
Once the key dimensions are familied, it’s time to map spending to the taxonomy you chose in Step 2. The result of the mapping phase is a set of “mapping rules” which constitute a knowledge base for how to assign spending to the taxonomy. Once the mapping rules are created, when new spend is added to the dataset, the rules are applied to each new transaction, and that transaction is moved to the appropriate taxonomy bucket automatically. Spend Analysis vendors vary on their approaches to creating the mapping rules; some sport automated rules generation systems commingled with manual correction, and perform this service for their customers; others allow customers or third parties to build their own rules. In any event, the end result is a spend cube in which it is finally possible to determine how much was really spent in a particular commodity area (a key piece of information that is not available from ERP systems, and certainly not available when there are multiple ERP systems in the enterprise).
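
Here’s a minimal sketch of what such a rule knowledge base might look like in Python; the rules and taxonomy buckets are invented for illustration:

```python
# Ordered (predicate, bucket) pairs: the first matching rule wins.
rules = [
    (lambda t: t["vendor_family"] == "Acme Corp",
     ("Indirect", "Office Supplies")),
    (lambda t: t["gl_code"] == "SysA-37",
     ("Indirect", "IT")),
]

def map_transaction(txn):
    """Apply the rule base to one new transaction."""
    for predicate, bucket in rules:
        if predicate(txn):
            return bucket
    return ("Unmapped",)  # surfaced for manual rule-writing

txn = {"vendor_family": "Acme Corp", "gl_code": "SysB-37"}
print(map_transaction(txn))  # -> ('Indirect', 'Office Supplies')
```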

Step 6: Pick the Low Hanging Fruit
These days, everyone wants a quick win – and there’s no quicker way to get a win than to drill around a spend cube for the very first time and look for obvious indications of trouble. These include incorrect vendor density (too many suppliers, or too few suppliers for a particular commodity); high spend rates for certain commodities (such as office supplies) compared to similar companies; and so-called “bypass” spending — that is, spending that is not with approved vendors, or that is clearly off-contract. Some Spend Analysis vendors and consultants provide standard reports to assist with this process.
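
Two of those checks, sketched against a toy dataset; the “on_contract” flag is an assumption about what your data might carry:

```python
import pandas as pd

spend = pd.DataFrame({
    "commodity":   ["Office Supplies"] * 4 + ["IT"],
    "vendor":      ["Acme", "Globex", "Initech", "Umbrella", "Initech"],
    "amount":      [120.0, 75.5, 300.0, 42.0, 9800.0],
    "on_contract": [True, False, False, False, True],
})

# Vendor density: too many (or too few) suppliers per commodity.
density = spend.groupby("commodity")["vendor"].nunique()

# "Bypass" spending: dollars flowing outside approved contracts.
bypass = (spend[~spend["on_contract"]]
          .groupby("commodity")["amount"].sum())

print(density, bypass, sep="\n\n")
```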

Step 7: Acquire a Real Spend Analysis Tool and Let Your Power Users Loose!
As I pointed out in Spend Analysis Today, a real spend analysis tool is one that truly gives the user the ability to “play” with the underlying OLAP database. For starters, a user should be able to define their own cubes that consist of any dimensions they want, re-order and re-structure the dimensions of the cube at any time, dynamically create their own reports, analyze multiple dimensions simultaneously using multidimensional extracts, define and re-define the classification rules dynamically, and populate their own models with data from the spend cube.
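
To give a rough feel for that kind of “play”, here’s a toy Python/pandas analogy (with invented data): re-dimensioning is essentially regrouping the same transactions on the fly, without rebuilding the dataset:

```python
import pandas as pd

txns = pd.DataFrame({
    "commodity": ["IT", "IT", "Office Supplies"],
    "vendor":    ["Initech", "Initech", "Acme"],
    "source":    ["SysA", "SysB", "SysA"],
    "amount":    [9800.0, 1200.0, 120.0],
})

# Re-order and re-structure dimensions at any time: the same data,
# first cut by vendor then source, then re-cut by source alone.
print(txns.groupby(["vendor", "source"])["amount"].sum())
print(txns.groupby("source")["amount"].sum())
```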

Step X: Get a Good Consultant
At any point during the process, if you are unsure about what to do, you should find a consultant who specializes in spend analysis to help you out. This doesn’t mean calling up your favorite Big 5 consulting firm and asking them to send over their best guy. Rather, it means seeking out the boutiques who do it for a living, day in and day out, and have truly mastered the process (like The Buying Triangle). The real masters will be able to analyze your spend, compare it to current contracts and benchmarks, and find money owed to you, money you are eligible to recover right now!

Up Next: How To Get The Most From Your Spend Analysis System


It’s your data. Use it!

The 2nd Sourcing Innovation Series – The Ents Awaken!

If you haven’t guessed already, calling bloggers to action is a bit like herding cats … except the cats move faster … much faster … and use real claws if you tick them off. And even though they recognize that Web 2.0 time goes by faster than dog years, when it comes to the big issues, they like to think about them … they really like to think about them! And sometimes they think so deeply, you wonder if they’ve succumbed to the slumber of the Ents.

Fortunately, they haven’t … and some of them have finally picked up the virtual pen and committed their thoughts to the virtual paper. Even though Charles was quick to enter the fray, logging his thoughts the day my pre-announcement post went up, it was only late last week that the other bloggers began to make themselves known. Tim gave us a preamble post, inspired by a post of Brian’s, Jon Hansen gave us his preamble post and his main contribution, and Jason gave us his first post.

And more are coming. Both David and Dave were contemplating theirs last week, JP is keen as well (though he might be a bit late with the new job), and a couple of the usual suspects (including Kevin Brooks, and maybe even Eric Strovink, who’s also lined up for a guest post on spend analysis and contract management) should be chiming in sometime in the near future. There’s even a good chance I might get one of our favorite analysts from AMR to chime in!

It’s taking longer than usual, but I assure you that I’m committed to doing this, and that I take the words of Cmdr. Peter Quincy Taggart to heart when it comes to innovation.

Never give up, never surrender! The future depends on it!

“Supply Chain” Does Not Have to be a Dirty Word!

In a recent discussion with Kevin Brooks, the former marketing guru of Apexon, and of Ariba before that, who is now the Vice President of Marketing for TrueDemand, what struck me most was not how useful the right technology can be in addressing the 7 Deadly Sales Suppressors, but how TrueDemand has noticed that sales folks – even the sales execution folks at big CPG companies whose careers depend on how many units they can move in a given timeframe – still view “supply chain” as such a low priority that it’s almost a dirty word, the domain of “procurement”, which many still believe has nothing to do with them. To them, it’s all about revenue … which is … well … wrong. Business is about profit, and profit = revenue – costs. Thus, keeping costs down is just as important as driving revenue up in the pursuit of profit.

Furthermore, this apparently still holds true in the merchandising and sales execution teams at some of the world’s largest CPG companies … even those that have adopted the latest sourcing and procurement technologies and, at least within their supply chain divisions, understand just how important a smoothly operating supply chain is. When it comes right down to it, if you don’t have the right product, at the right place, at the right time, of the right quality, at the right price point – all the marketing and promotion in the world is not going to help you in the least.

This is where supply chain comes into play. Even if you call it logistics and distribution, operations, or sales execution, it’s still supply chain, and it’s still the proper application and use of supply chain technology that’s going to make the difference in a market where product lifetimes are shorter every year and even a few days, or hours, can make a significant difference to the impact of a newly launched promotion.

However, I will have to admit that it is still a bit of a toss-up as to what the right set of systems is, since there is no one system that tackles everything involved, and since success depends on a system that you can actually use. Forecasting; Sourcing; Procurement; Logistics & Distribution; Inventory, Warehouse Management & Replenishment; Merchandising; Product Lifecycle Management (PLM); and Sales Execution, to name a few, are all important. And there is no one system that tackles more than a couple of these areas. Now, it’s true that some of the bigger players, like Oracle and SAP, have modules that work off a central data store to tackle all of these issues to some degree, but no system tackles all of them to the extent that today’s best-of-breed players do, nor do any of the systems that support more than a handful of these modules have tight integration.

However, that’s not the point. These systems exist, and dozens of players in each area keep improving them, because they have value. They not only help supply chain professionals do their jobs better, but they are also capable of surfacing, in near-real time, exceptions that need to be acted on in days, or hours, to keep a product launch from going off-track. They’re effective, and for many companies, day-to-day operations would be severely threatened, if not impossible, without them. So don’t bicker about the terminology, or its value. You need the technology, and as globalization increases, you’re only going to need it more. Without it, you won’t be able to make sense of the data fast enough to make a difference. Supply Chain drives your company, its profits, and, ultimately, your compensation. Get used to it.


If you don’t accept reality, don’t expect reality to accept you.