Category Archives: Spend Analysis

Spend Analysis Meme Busting, Part II

Today’s post is again courtesy of Eric Strovink of BIQ (acquired by Opera Solutions, rebranded ElectrifAI).

9. Spend analysis is a “big iron” problem requiring large servers and databases.
Nonsense. Global 100 datasets fit easily onto an ordinary laptop, and a modern laptop can deliver near-instantaneous drill-down times.

10. Static reports can deliver profound insight, replace procurement analysts and sourcing consultants, and even direct the course of entire sourcing programs.
This idea surfaces every now and then in the writings of pie-in-the-sky analysts and on blogs like Spend Matters. Most recently, it is being promulgated by spend analysis vendors who have run out of new marketing ideas. Of course, if it were really true, those reports would have existed long ago, and we’d all be out of a job.

11. Extracting data from accounting/ERP systems is difficult.
In fact, it’s easy, and usually doesn’t require IT resources to accomplish. The only difficulty I’ve ever experienced was a situation where a customer’s head of IT folded his arms and swore up and down that it simply wasn’t possible to dump data from their accounting system. One telephone call to the accounting system vendor (a helpful Canadian firm) provided the command necessary to extract the data trivially.

12. Accounts payable data is all you need for spend analysis.
This meme is on the decline. By now, many practitioners know that A/P spend visibility will find low-hanging fruit, and that it will identify buckets of spend that might be worth a closer look. But they also realize that in order to get to the next level of savings, it’s necessary to build commodity-specific analyses that take into account existing contract terms and invoice-level data, as well as balance-of-trade and demand-side considerations. And this is only the tip of the iceberg when one considers the analysis possibilities buried in HR data and in ops data such as service repair records. Experienced practitioners build data analysis cubes whenever necessary, throwing them away when done, or retaining them if it is useful to do so.

13. Real data analysis requires statistics and applied mathematics.
Some consultants argue that unless rigorous statistical analysis is applied to a dataset, no meaningful conclusions can be drawn. For sourcing, though, the data visibility provided by a spend analysis system is usually more than sufficient. For example, try loading invoice data for a particular SKU over an extended time frame, and scatter-plotting the price. Is the price the same? Many times it isn’t, even if you have a contract that should have locked it down. When you find this pattern — and chances are you will — you’ve just written yourself a check, with no applied mathematics required.
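If you want to try this at home, here is a minimal sketch of that exercise in Python; the file name and columns (invoice_date, sku, unit_price) are hypothetical stand-ins for whatever your invoice extract actually contains:

```python
# Minimal sketch: scatter-plot unit price over time for a single SKU.
# The file and column names below are hypothetical placeholders.
import pandas as pd
import matplotlib.pyplot as plt

invoices = pd.read_csv("invoices.csv", parse_dates=["invoice_date"])
one_sku = invoices[invoices["sku"] == "SKU-12345"]

plt.scatter(one_sku["invoice_date"], one_sku["unit_price"], s=10)
plt.xlabel("Invoice date")
plt.ylabel("Unit price")
plt.title("Unit price over time for SKU-12345")
plt.show()

# If a contract locked the price down, every point should sit on one
# horizontal line; scatter around that line is the check you just wrote yourself.
```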

14. Spend analysis systems should react in real time to real-time data.
This meme is becoming popular amongst armchair analysts and bloggers, but it’s scoffed at by practitioners. What does real time mean? When the requisition arrives? When the PO is issued? When the invoice arrives? When partial payment is made? When full payment is made? And what is anyone going to do about it, anyway, in “real time?” This ties into the notion of the “executive dashboard,” where the idea seems to be that an errant transaction will set off some sort of red alert. Even intrusion detection systems, which actually can justify a real-time component, have largely given up on this idea; the “fact” of an alert is not necessarily indicative of anything, and only in-depth after-the-fact pattern analysis can distinguish signal from noise.

15. “We already purchased an enterprise license for [xyz BI system or OLAP database], so we don’t need yet another analysis capability.”
This is the classic argument that the head of IT feeds to the CFO, which enables the CFO to kill the incipient spend analysis project, as she often does when fed such an argument (CFOs being people who really dislike spending money). Everyone feels good about saving money, and they move on to the next item on the agenda. Procurement doesn’t dare argue with IT, typically, so that’s the end of the story. Unfortunately, IT and the CFO have just doomed their company to another N years of zero spend visibility, because (a) nobody in Procurement knows how to configure either a BI system or an OLAP database, and (b) nobody in IT is going to take any time out to help them — assuming, that is, that IT themselves understand the BI system or OLAP database, which in many cases they do not.

16. “It’s important to deploy a spend analysis solution enterprise-wide, immediately.”
Here are three reasons why this can be a poor idea.

  • Procurement is a tricky business with lots of opportunities to overlook things even when work has been done competently and professionally. A spend analysis system, unfortunately, does a great job of pointing out those things. Why should Procurement hang its dirty laundry out the window for the whole company to see? Wouldn’t it be a better idea to find (and correct) obvious errors quietly and privately, before publishing a company-wide spend cube — if a company-wide cube is even necessary?
  • Company-wide initiatives create company-wide inertia, starting with IT sticking its thumbs in its belt and making trouble about how much time/money/effort it’s going to take to “set up a server” or, worse, how much time and effort it’s going to take to “evaluate the vendor’s remote hosting site for security compliance.” Wouldn’t you rather be running the spend cube immediately on your laptop, rather than waiting months for IT to get its act together? You can always deploy the cube company-wide, later, without any pressure.
  • Company-wide spend analysis initiatives create company-wide opportunities to kill the initiative. These days, spend analysis can be obtained inexpensively and set up quickly, within your discretionary budget and without asking anyone’s permission.

17. It will cost a lot to outsource cube construction, or to train internal resources on the spend analysis system.
This situation has improved substantially, and it continues to improve. New service offerings from some vendors have cut this price substantially from the levels of a few years ago; and contingency-based sourcing consulting firms will not only build your spending cube for free, but also return solid savings to your bottom line.

In summary:

9. Spend analysis can be done on a modern laptop computer.

10. A static report is not capable of providing much in the way of insight.

11. Data extraction from the vast majority of accounting/ERP systems is quite easy. (It might take some mapping to get it into cube form, but that’s why spend analysis tools come with good E”T”L tools.)

12. Accounts payable data is just the beginning. Every database is its own gold mine!

13. Data analysis starts with well-mapped and relatively complete data – not advanced statistics or applied mathematics. Visibility is key.

14. The spend analysis system should be capable of accepting updated data when the data – and the analyst – are ready for it. The answer does not lie in “real-time” or “monthly” updates, because every organization, and every data set, is different.

15. BI & OLAP is not spend analysis.

16. As with any solution, initial deployment should be limited in scope to that which is controllable while the learning curve is overcome.

17. Training and consulting are a lot more affordable than you think, and a solution deployed on invoice data first will return immediate ROI.

Thanks again, Eric!

Spend Analysis Meme Busting, Part I

Today’s post is courtesy of Eric Strovink of BIQ (acquired by Opera Solutions, rebranded ElectrifAI). (It is also a good summary of some of the more critical assumptions that were wrong in the book I reviewed last Friday in this post.) 

The following memes are circulating through the space. Needless to say, they are all false!

1. Vendor classification by spend is the secret sauce.
For direct materials, the key is said to be automatic UNSPSC classification. For indirect materials, the key is said to be D&B-style “who owns whom” plus SIC code. This ignores the fact that who owns whom is often irrelevant (Hilton hotels, for example, are all franchise-owned), and that many large vendors sell multiple, disparate products and services, so vendor mapping by itself is often ineffective.

2. Direct linkage to existing accounting systems or e-commerce trading platforms is the “holy grail”.
The logic is that the transactions from the e-commerce platform or ERP system should feed the spend analysis system transparently and directly. The confusion here is between “business intelligence” and “spend analysis,” a distinction that is often incorrectly blurred. To do its job properly, a spend analysis system must re-cast and re-map transactions into a useful form, often in a very different manner for each individual analysis, not just report blindly on fixed input data. A direct linkage to accounting is therefore a bad idea.

3. Spend analysis means the propagation of spend data to a large audience via a data warehouse.
This is the idea that a shared data warehouse is a “spend analysis” system, when in fact no useful analysis can be done without alteration of the schema (something that is impossible in the data warehouse model).

4. Conventional relational reporting tools are useful on spend datasets.
In fact, reporting on spend data can’t be relational, because only OLAP queries will function reasonably on large spend data sets. And, conventional reporting isn’t useful, because useful analyses always involve modeling and data outside the scope of the spend analysis system.

5. Spend analysis is only useful within the context of an e-sourcing suite.
This is a fallacy promoted by suite vendors. Armed with good visibility into spend, any procurement department can improve performance dramatically without a multi-million dollar commitment to an e-sourcing suite. Furthermore, the proportion of spend actually traveling through the suite is typically less than 20% of total spend. Thus, “integration” with a minority of the spend is a step backward, not a step forward.

6. “Point” or “best of breed” solutions should be avoided; rather, they should be part of an ERP system or e-sourcing suite.
This might be true if the point solution is expensive or complex to implement. It is not true if the solution is inexpensive and easy to use, especially given that suite vendors promote warehouse solutions that offer little or no analytical capability. In that case, the point solution can add significant capability that is lacking.

7. Vendor familying and spend mapping is difficult and/or requires special technology and tools.
This is a marketing myth, used by spend analysis vendors to aggrandize their own tools and/or expertise. Procurement professionals and sourcing consultants are far better at classifying spend than the “10 guys in Bangalore” behind most vendors’ claims. The familying/mapping process can be accomplished easily and quickly in most cases, even on large datasets.

8. It’s critically important to get spend mapping “right” the first time.
Wrong. No spend map survives first contact with a procurement professional who knows his commodities well. The best strategy is to empower that procurement professional to correct errors as he finds them, quickly, easily, and with real-time results. Nothing else will satisfy him.

In summary:

1. The “secret sauce” is being able to build the cubes you need.

2. There is no “holy grail” of integration.

3. Spend Analysis and Data Warehouse technologies are two completely different things.

4. Spend Analysis and Spend Reporting are two completely different things.

5. Spend Analysis can be done on its own and does not need to be part of a sourcing suite. 

6. Best-of-Breed solutions are just fine for spend analysis.  

7. Spend Mapping is easy.  The real secret sauce is “map the GL-codes, map the vendors, map the GL-code and vendor combinations for vendors who supply more than one GL-code”.  This gets you at least 90% and can be done by an accounts payable clerk in a day in most organizations with a proper solution that supports rules-based mapping and rule (group) priorities.

8. No one gets spend mapping “right” the first time, so there should be no assumption of such.  That’s why a real spend analysis solution where you can continue to build cubes and throw them away until you’re happy is critical. 

Spend Analysis: Another Book Review … And This One’s NOT Positive!

Pandit and Marmanis recently published a book titled Spend Analysis: The Window into Strategic Sourcing that has received a fair amount of praise from prince and pauper alike. Since I am currently in the process of co-authoring a text on the subject (now that my first book, The e-Sourcing Handbook [free e-book version on request], is almost ready to go to press), I figured that I should do proper due diligence, obtain their book, and read it cover to cover. I did – and I was disappointed.

Although the book would have been interesting ten years ago, good seven years ago, and still somewhat relevant five years ago, today it adds very little insight. In fact, the book is filled with fallacies, incorrect definitions, and poor advice.

Problems start to surface as early as the third paragraph (page 5) where the authors attempt to ‘simplify’ the definition of Spend Analysis, stating that “spend analysis is a process of systematically analyzing the historical spend (purchasing) data of an organization in order to answer the following types of questions”. There are at least three problems with this ‘simplification’:

  • Spend analysis is NOT systematic. Sure, each analysis starts out the same … build a cube … run some basic reports to analyze spend distribution by common dimensions … dive in. However, after this point, each analysis diverges. Good analysts chase the data, look for anomalies, and try to identify patterns that haven’t been identified before. If a pattern isn’t known, it can’t be systematized. Every category sourcing expert will tell you that real savings levers vary by commodity, by vendor, by contract, and by procuring organization — to name just a few parameters.
  • Good spend analysis analyzes more than A/P spend. It also analyzes demand, costs, related metrics, contracts, invoices, and any other data that could lead to a cost saving opportunity.
  • The questions the authors provide are narrow, focused, and only cover low hanging fruit opportunities. You don’t know a priori where savings are going to come from, and no static list of questions will ever permit you to identify more than a small fraction of opportunities.

From here, problems quickly multiply. But I’m going to jump ahead to the middle of the book (page 101) where the authors (finally) present their thesis to us, which they summarize as follows:

A complete and successful spend analysis implementation requires four modules:

  • DDL : Data Definition and Loading
  • DE : Data Enrichment
  • KB : Knowledge Base
  • SA : Spend Analytics

Huh? I don’t know about you, but I always thought that spend analysis was about, well, THE ANALYSIS! A colleague of mine likes to say, when aggravated, “it’s the analysis, stupid”. And I agree. A machine can only be programmed to detect previously identified opportunities. And guess what? Once you’ve identified and fixed a problem, it’s taken care of. Unless your personnel are incompetent, the same problem isn’t going to crop up again next month … and if it does, you need a pink slip, not a spend analysis package.

DDL? Okay – you need to load the data into the tool – but if you don’t know what you’re loading, or you can’t come up with coherent spend data from your ERP system, you have a different problem entirely (again, you’re in pink slip territory). Enrichment? It’s nice – and can often help you identify additional opportunities – but if you can’t analyze the data you already have, you have problems that spend analysis alone isn’t going to solve. Knowledge base? Are the authors trying to claim that the process of opportunity assessment can be fully automated, and that sourcing consultants and commodity managers should pack their bags and head for the hills? Last time I checked, sourcing consultants and commodity managers seem to have no difficulty finding work.

So let’s focus on the analysis. According to the authors,

an application that addresses the SA stage of spend analysis must be able to perform the following functions:

  • web-centric application with Admin & restricted access privileges
  • specialized visualization tools
  • reporting and dashboards
  • feedback submission for suggesting changes to dimensional hierarchy
  • feedback submission for suggesting changes to classification
  • immediate update of new data
  • ‘what-if’ analysis capability

I guess I’ll just take these one-by-one.

  • Web-centric? If the authors meant that users should be able to share data over the web, then I’d give them this one … but the rest of the book strongly implies that they are referring to their preferred model, which is web-based access to a central cube. I’m sorry, that is not analysis. That is simply viewing standardized reports on a central, inflexible warehouse. We’ll get back to this point later.
  • They got this one right. However, the most specialized “visualization tool” they discuss in their book is a first-generation tree-map … so maybe it was just luck they got this one right.
  • Reporting is a definite must – as long as it includes ad-hoc and user-driven analyses and models. Dashboards? How many times do I have to repeat that today’s dashboards are dangerous and dysfunctional?
  • Feedback submission for suggesting changes? There’s a big “oops!” Where’s the analysis if you can’t adjust the data organization yourself, right now, in real time? And if you have to give “feedback” which goes to a “committee” where everyone else has to agree on the change, which typically negates or generalizes the desired change – guess what? That’s right! The change never actually happens, or if it does happen, the delay has caused it to become irrelevant.
  • Feedback submission for suggesting fixes to the data? How can you do a valid analysis if you can’t fix classification errors, on the fly, in real time?
  • If the authors meant immediate update of new data as soon as it was available, then I’d give them this one. But what they really seem to mean is that “the analysis cube should be updated as soon as the underlying data warehouse is updated”. Considering that they state on page 182 that “in our opinion, there is no need for a frequent update of the cube” (note the singular, which I’ll return to later), and then go on to state that quarterly warehouse updates are usually sufficient, I can’t give them this one either.
  • I agree that what-if analysis capability is a must – but how can you do “what if” analysis if you can’t change the data organization or the data classification, or even build an entirely new cube, on the fly?

The authors then dive into the required capabilities of the analytics module, which, in their view, should be:

  • OLAP tool capable of answering questions with respect to several, if not all, of the dimensions of your data
  • a reporting tool that allows for the creation of reports in various formats; cross-tabulation is very important in the context of reporting
  • search enabled interface

Which, at first glance, seems to be on the mark — except for the fact that the authors’ world-view does not include real-time dimension and data re-classification, which means that any cross-tabs that are not supported by the current data organization of the warehouse are impossible. Furthermore, it’s not the format of the reports that matters, but the data the user can include in them. Users should be able to create and populate any model they need, whether it’s cross-tabular or not. Finally, we’re talking about spend analysis, not a web search engine. Search is important in any good BI tool, but if it’s one of the three fundamental properties that is supposed to make the tool ‘unique’, I’m afraid that’s a pretty ordinary tool indeed.
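To make the contrast concrete, here is an illustrative sketch (in Python, with toy data and hypothetical column names) of the kind of ad-hoc cross-tab an analyst should be able to cut in seconds from raw transactions, with no warehouse re-organization or approval committee in sight:

```python
# Illustrative only: an ad-hoc cross-tab over raw spend transactions.
# The columns (commodity, business_unit, amount) are hypothetical.
import pandas as pd

spend = pd.DataFrame({
    "commodity":     ["IT Hardware", "IT Hardware", "Office Supplies", "Office Supplies"],
    "business_unit": ["Retail", "Technology", "Retail", "Technology"],
    "amount":        [2_300_000, 4_500_000, 120_000, 90_000],
})

# Spend by commodity (rows) against business unit (columns), with totals.
xtab = pd.crosstab(
    index=spend["commodity"],
    columns=spend["business_unit"],
    values=spend["amount"],
    aggfunc="sum",
    margins=True,
)
print(xtab)
```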

The authors apparently don’t understand that spend analysis is separate from, and does not need to be based on, a data warehouse. Specifically, they state (on page 12) that “data warehousing involves pulling periodic transactional data into a dedicated database and running analytical reports on that database … it seems logical to think that this approach can be used effectively to capture and analyze purchasing data … indeed … using this approach is possible“.

It’s possible to build a warehouse, but it’s not a good idea for spend analysis. The goal of warehousing is to centralize and normalize all of the data in your organization in one, and only one, common format that is supposed to be useful to everyone. Unfortunately, and this is the dirty little secret with data warehouses, this process ends up being useful to no one in the organization, which is why most analysts simply download raw transactions to their desktops for private analysis, and ignore the warehouse. But the authors don’t stop there. In a later chapter, they go on to imply that the schema is very important and that selection of the target schema for spend analysis should be carefully chosen based on several considerations (page 177), namely:

  1. are your domains adequately represented?
  2. will your schema be evolving to support a centralized PIM system?
  3. is your company global? is internationalization an important requirement?
  4. is any taxonomy already implemented at a division level?
  5. has the schema been maintained in recent months?

To this, all I can say is:

  1. Doesn’t matter. What matters is that the analyst has the data she needs for the analysis she is currently conducting.
  2. Who cares? There should be no link between your PIM and your SA system. PIM is just another potential data source to use, or ignore, as your analysts see fit.
  3. Whatever. If you have a good ETL tool, you can define a few rules to do language and currency mapping on import (see the sketch after this list).
  4. Irrelevant. We’re talking SA, not ERP.
  5. I would think it would have been, since the only way in the authors’ worldview to change spend data representations is to change the underlying schema of the warehouse!
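As point 3 above suggests, currency and language normalization really can be a few rules applied on import. A minimal sketch, with a hypothetical rate table and glossary standing in for whatever your ETL tool maintains:

```python
# Minimal sketch: normalize currency and description language on import.
# The rate table and glossary are hypothetical stand-ins.
import pandas as pd

TO_USD = {"USD": 1.0, "EUR": 1.08, "CAD": 0.74}   # assumed conversion rates
GLOSSARY = {"Bürobedarf": "Office Supplies"}       # assumed term mapping

def normalize(row: pd.Series) -> pd.Series:
    row["amount_usd"] = row["amount"] * TO_USD[row["currency"]]
    row["category"] = GLOSSARY.get(row["category"], row["category"])
    return row

raw = pd.DataFrame({
    "amount":   [100.0, 250.0],
    "currency": ["EUR", "CAD"],
    "category": ["Bürobedarf", "Office Supplies"],
})
clean = raw.apply(normalize, axis=1)
print(clean)
```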

The authors cheerily state (on page 14) that “a good commodity schema is at the heart of a good spend analysis program because most opportunities are found by commodity managers“. But hold on just a minute! If most of your opportunities are being found by your commodity managers using a basic A/P spend cube, then they’re limiting themselves to very simple low hanging fruit – which is picked clean in the first few months in a typical organization that makes a commitment to spend analysis. That’s why the traditional spend analysis value curve drops to almost zero within a year – meaning that if you don’t recover the cost of the effort in the first three months, you’ll never recover it. An A/P cube is just the beginning of the discovery process, not the endpoint.

The authors also make a strong argument for auto-classification, stating that (on page 100) “the reader must note that classifying millions of transactions is a task that should be done by using auto-classification and rules-based classification technology” and that “unless you license spend analysis applications, data scrubbing can be a very manual time consuming activity which requires a team of content specialists“.

Actually, nothing about rules-based classification mandates that the rules must be built by a robot, and there are many reasons why that can be a bad idea (not the least of which is the fact that robots are far from infallible). Classification rules can be built easily and effectively by hand … by a clerk … even in a very large organization with many disparate business units. Once built, this set of rules can then be applied in a fully automated way to every new transaction added to the system. So let’s not confuse “automation of creation” with “automation of application,” please. Of course, you do need a good, modern, spend analysis tool that allows for the creation of rules groups of different priorities, and you need a rules creation mechanism that’s easy to use and easy to understand.

Have you ever wondered why skilled consultants can build and map a spend cube to 90% accuracy very quickly? Well, here’s one tried-and-true “manual” methodology that builds terrific “automated” rules:

  1. map the top 200 GL codes
  2. map the top 200 vendors
  3. map the GL code + Vendor for vendors who sell you more than one item, or items in more than one category, depending on the level of detail you need

If you want to, you can get to 95-97% accuracy by extending to the top 1000 GL codes and the top 1000 vendors — if you really believe you are going to source 1000 vendors (and of course you’re not). To check your work, you’ll need to run reports that show you:

  • top GL’s and top commodities by vendor
  • top vendors and top GL’s by commodity
  • top vendors and top commodities by GL

Simply keep mapping until all three reports are consistent, and you are as accurate as you’ll need to be — and you’ll have the advantage of having built your own mapping rules, that you understand. The alternative, which is error-checking the work of an automaton (a process that must be done, because no robot is perfect), is difficult, tedious, and error-prone — and it must be repeated on every data refresh.
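For illustration, here is a minimal sketch of those three rule tiers in Python. The rule tables are toy stand-ins for the few hundred rows the recipe above would actually produce, and the exact tier ordering is my assumption; a real tool manages rule-group priorities for you:

```python
# Sketch of the GL / vendor / GL+vendor mapping recipe with rule priorities.
# Rule tables are hypothetical; a real exercise yields hundreds of entries.

GL_RULES = {"6510": "Office Supplies", "6720": "IT Hardware"}
VENDOR_RULES = {"STAPLES": "Office Supplies", "DELL": "IT Hardware"}
GL_VENDOR_RULES = {("6510", "DELL"): "IT Hardware"}  # hardware bought on an office-supplies GL

def map_commodity(gl_code: str, vendor: str) -> str:
    # Priority 1: the GL + vendor combination rule wins.
    if (gl_code, vendor) in GL_VENDOR_RULES:
        return GL_VENDOR_RULES[(gl_code, vendor)]
    # Priority 2: vendor rule.
    if vendor in VENDOR_RULES:
        return VENDOR_RULES[vendor]
    # Priority 3: GL rule.
    if gl_code in GL_RULES:
        return GL_RULES[gl_code]
    # Never drop a record; leave it visibly unmapped for the next pass.
    return "Unmapped"

print(map_commodity("6510", "DELL"))   # IT Hardware (combination rule wins)
print(map_commodity("6510", "ACME"))   # Office Supplies (GL fallback)
```

Run the three consistency reports, add or correct rules, and repeat until the reports agree.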

When the authors state (on page 116) that “manual editing is sufficient, but it is also extremely inefficient … it is not scalable with respect to the size of the data“, this is flatly untrue. The creation of dimensional mapping rules is wholly unrelated to the volume of the transactions — the same effort is required for 1M transactions as is required for 100M, and most spend datasets can be mapped very effectively with dimensional rules only. The only exception is datasets whose only component is a text description; and here, too, the authors’ “scalability” argument falls apart, since human-directed phrase mapping can divide-and-conquer quite effectively.

To top it all off, the authors go on to violate the first rule of spend analysis, which is “NEVER, EVER, EVER EXCLUDE DATA”. They take great pains to classify all of the errors that can occur in the ETL process and then bluntly state that (on page 109) “if you have errors in category iv (root cause is undocumented and cannot be inferred), then you have two alternatives … the first alternative, if possible, is to exclude these data from your sources … errors of category iv are unacceptable and could jeopardize your entire analysis … so they should be eliminated“.

No, NO, NO! YOU MUST ACCEPT ALL OF THE RECORDS and YOU MUST DO SOMETHING SENSIBLE with the records that don’t fit into your notion of reality. For example — create a new Vendor ID, and family it automatically under Not Found, or Missing. Dropping data jeopardizes your analysis much more than creating an “Uncategorized” or “Missing” data node. What if errors represent 15% of your spend? Then you’d be reporting that you are spending $85M on a category when you are actually spending $100M. Your numbers won’t add up … and when the CFO makes an SEC filing on data that is later found to be incomplete by the auditors, guess whose head is going to roll?
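In code, “something sensible” is a couple of lines, not a reason to drop records. A minimal sketch, with hypothetical field names:

```python
# Sketch: bucket unmatchable records under a visible placeholder instead of
# dropping them, so the totals still tie out. Field names are hypothetical.
transactions = [
    {"vendor_id": "V001", "commodity": "IT Hardware", "amount": 85_000_000},
    {"vendor_id": None,   "commodity": None,          "amount": 15_000_000},  # root cause unknown
]

total = 0
for t in transactions:
    if not t["vendor_id"]:
        t["vendor_id"] = "VENDOR-NOT-FOUND"   # family under Not Found / Missing
        t["commodity"] = "Uncategorized"
    total += t["amount"]

# Dropping the bad record would report $85M; keeping it reports the true $100M,
# with $15M flagged for follow-up instead of silently vanishing.
print(f"Total spend: ${total:,}")
```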

And before I forget, let’s get back to that web-centric requirement where the authors imply that all of this means web-based access to a central cube (singular case). Throughout the entire book they refer to “the cube” (such as when they state that “in our opinion, there is no need for a frequent update of the cube“) as if there’s only ever one cube to be built. Turns out there isn’t just one cube to be built — there are dozens of cubes to be built. Some power analysts build 30 or 40 commodity-specific invoice-level cubes (what are those? you won’t learn that from Pandit and Marmanis), and regularly maintain a dozen of these every month — not every quarter (as the authors recommend).

The only real hint that the authors give that multiple cubes might be useful is where they state (on page 51) that “some companies are taking the approach of creating different cubes for different uses, rather than packing all possible information in a single cube for all users … for example, all users might not be interested in auditing P-Card information … rather than include all of the details related to P-Card transactions in the main cube, you can simply model the top-level info (supplier, cost center) in the main cube … then … create a separate ‘microcube’ that has all of the detailed transactional information … the two cubes can be linked, and the audit team can be granted access to the microcube … the microcube approach can be rolled out in a phased manner“. Or, in short form, you can have multiple cubes if you have too much data, and the way you do it is to create ONE main cube, and then micro-cube drill-downs for relatively unimportant data. I don’t even know how to verbalize how wrong this is — it completely inverts the value proposition. (Now, to be fair, they also state that “ideally, the cubes should be replicated on the user machine for the purposes of making any data changes“, but they give no definition as to what form these cubes should take or what changes are to be permitted, so we are left assuming their previous definition, which is secondary micro-cubes and only minor, meaningless alterations, since the dimensional and rule-based classifications require “approval”).

At this point, you’re probably asking yourself – did the authors get anything right? Sure they did! Specifically:

  • Chapter 4 on opportunity identification had a good list of opportunities to start with. Too bad most of them are low-hanging-fruit opportunities easily identified with out-of-the-box reporting, and that there’s no real insight on how to identify serious untapped opportunities when a pre-canned report isn’t available.
  • Chapter 5 on the anatomy of spend transactions had a good overview of the formats used in various systems … but if you’re a real analyst, you probably know all this stuff anyway.
  • Chapter 7 on taxonomy considerations had good, direct, simple introductions to UNSPSC, eOTD, eCl@ss, and RUS. It’s too bad these schemas are relatively useless when it comes to sourcing-based opportunity identification.
  • When the authors point out (on page 8) that adoption of spend analysis is still low, they are correct … but when they state that it’s because we’re talking tens or hundreds of millions of transactions, that’s irrelevant and wrong. For any specific analysis, there are probably only a few million, or at most tens of millions, of transactions that are relevant, and a real spend analysis tool on today’s desktops and laptops can operate on that number of transactions without issues. There is no need for a mainframe.
  • When they state that the categorization of errors is critical because not all errors are equally costly to fix, they’re right … but the data warehouse is irrelevant. Just add a new mapping rule and you’re done. Two minutes, tops. What’s the big deal? Oh, I forgot — in the authors’ world, you can’t add a new mapping rule on the fly.

To sum up, when the authors state in their preface (on page xv) that “if implemented incorrectly, a spend analysis implementation program can become easily misdirected and fall short of delivering all of the potential savings“, I wholeheartedly agree. Unfortunately, the authors themselves provide a road map for falling short.

Do You Have A (Cost Reduction) Plan?

Today’s guest post is by Bernard Gunther (bgunther <at> lexingtonanalytics <dot> com) of Lexington Analytics.

Financial Services companies buy almost all “indirect” goods and services. This is exactly the type of procurement that every company does. One might imagine that Financial Services companies would be able to leverage the large amount of work done in all these other companies to become best in class. It turns out that they don’t. Purchasing organizations at most financial institutions have not historically had the best reputation for delivering results. If you are leading the procurement organization, you need to change this general impression. The easiest way to do this is to deliver results. To deliver results, you need to have a plan.

Your plan needs to tell people where you plan to start, what you are going to do, and how everything is going to be done. The easiest way to create this plan is by reviewing your basic spend cube information. The spend cube takes all your AP spending from one or more systems (cash out the door), groups vendors together (when they appear multiple times), and assigns a commodity code to each transaction (based on a series of rules, generally keyed on GL code or vendor). From this, you can get reports by commodity on the top vendors, the top organizational units, and the total volume of activity.
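As a rough illustration, those basic cube reports reduce to a few grouping operations. A minimal sketch in Python; the file, columns, and rule tables are hypothetical stand-ins for a real AP extract and real rule sets:

```python
# Minimal sketch of basic spend-cube reports from an AP extract.
# File name, columns (vendor, gl_code, org_unit, amount), and rule tables
# are hypothetical placeholders.
import pandas as pd

ap = pd.read_csv("ap_extract.csv")

VENDOR_FAMILY = {"DELL INC": "DELL", "DELL COMPUTER": "DELL"}
GL_TO_COMMODITY = {"6720": "IT Hardware", "6510": "Office Supplies"}

# Family vendors and assign commodities with simple rule tables.
ap["vendor_family"] = ap["vendor"].map(VENDOR_FAMILY).fillna(ap["vendor"])
ap["commodity"] = ap["gl_code"].astype(str).map(GL_TO_COMMODITY).fillna("Unmapped")

# Top vendors by commodity, and total volume by organizational unit.
top_vendors = (
    ap.groupby(["commodity", "vendor_family"])["amount"]
      .sum()
      .sort_values(ascending=False)
)
by_org = ap.groupby("org_unit")["amount"].sum().sort_values(ascending=False)

print(top_vendors.head(10))
print(by_org)
```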

Using the spend cube data, you can develop an accurate and meaningful plan. A way to start the plan is to take each category and assign it to an action group (below). The action could be to source a category, to do a demand review, to do an invoice review, or any other type of savings activity your team is capable of delivering. The category could be the full spend in a category or could be a sub-segment (e.g. geography / business unit). For each category, you need to tag spending as:

  • Completed.
    This category was recently done and no further work is required at this time
  • In Process.
    There is a project underway
  • Wave I.
    What you plan to start immediately
  • Wave II.
    What you plan to do after the first wave
  • Wave III.
    Other categories that you know need to get done
  • Further research.
    Spend that needs more investigation before it can be assigned to a wave

Now that you have a “strawman” plan, you need to see if you have the resources to get this done and if key stakeholders agree with your “strawman”. Gaining stakeholder buy-in will help you understand the true situation “in the field” and will likely get you key resources to address the spending.

Having good spend data will enhance your credibility. Without good data, your first meeting with the Retail group could be “We’re from Procurement and we’d like to help. We think there might be an opportunity to save money. Can we do something for you?”

With good data, your first meeting with this key stakeholder could be “We think there is an opportunity for sourcing PCs. You’re spending $2.3 million with two VARs. The rest of the bank is spending $4.5 million, using an additional VAR, and buying direct from a manufacturer. Looking at your pricing on your most frequently bought laptops, the Technology group is getting 7.3% better pricing. This means you’re looking at over $150,000 in annual savings. We want your help in doing the following [insert plan here – with details on who should be involved and what it means for them].”

Without a plan, the best you can hope for is another meeting. With a plan and good data, you can get a stakeholder fully bought into your idea; they can give you authorization to proceed, and many times they will give you the support and resources you ask for.

For one bank with about $750 million in spending, we created a plan for savings. This plan targeted savings of $85 million in 3 waves across 113 initiatives. For each of the 10 major department heads, we could tell them how much spending was involved in each initiative, which vendors might be impacted and which budget centers we wanted resources from. Over the next 15 months, we conducted the initiatives and generated $94 million in annual savings on 80% of the baseline. As part of the program, we involved finance to sign off on each of the results so the savings could be measured and tracked. The results were incorporated into the spend cube to support ongoing monitoring of the spending.

Is such a plan hard or expensive to create? The short answer is “no”. New tools have made this process faster and much less expensive to do. For an organization with less than $500 million in spending, a good plan (including building the initial spend cube and conducting the initial syndication) can be put together, with a focused effort, from scratch, in 6 to 8 weeks. And some organizations that have done much of the preliminary work can get it done faster.

A good plan, with good execution, can lead to significant results. In the world of procurement, the data you need is there for the taking. All you need to do is use it.

Thanks Bernard!

The e-Sourcing Handbook (Free e-Book)

The e-book edition of the e-Sourcing Handbook, co-authored and edited by yours truly, and sponsored by Iasta [acquired by Selectica, merged with b-Pack, rebranded Determine, acquired by Corcentric] (an e-Sourcing solution provider), is now available on request (through e-mail).

The e-Sourcing Handbook is your modern guide to Supply and Spend Management success, one that utilizes and enhances strategic sourcing technology and best practices. Covering the full spectrum of the e-Sourcing cycle, the handbook helps you understand not only what spend analysis, e-RFx, e-Auction, decision optimization, and contract management are, but where and when to apply these technologies for maximum benefit.

Building on the resounding success of the e-Sourcing Wiki [WayBackMachine] and the e-Sourcing Forum [WayBackMachine] and Sourcing Innovation blogs, the handbook takes the concept of open access to knowledge and best practices one step further by compiling the best information on e-Sourcing to appear on all three public information sources into one definitive source. Furthermore, it innovatively mixes factual and informative wiki articles with blog postings that are both controversial and opinionated; the juxtaposition of the two allows the reader to see where the boundary lies between information and advocacy. It is the goal of the authors that, through this ground-breaking effort, the reader will gain a better understanding of e-Sourcing and how to take their supply and spend management efforts to the next level.

And, most importantly, unlike some of the recent e-books to pop up, this is a real book – not a glorified marketing white paper doubled (or tripled) in size with a fancy (spaced-out) layout that contains dozens of colorful, yet useless, images. An exact mirror of the forthcoming print book, it’s 220 pages of solid content backed up by a 4-page resource section, an 8-page glossary, and a 22-page bibliography for those who thirst for knowledge. The full table of contents and index are also included to help the reader quickly find what she is looking for.

But perhaps the foreword by co-author Eric Strovink of BIQ (acquired by Opera Solutions, rebranded ElectrifAI) says it best.

The e-Sourcing space has undergone a major transformation since 2000. Vendors who were once dominant or cutting-edge have failed. Many have undergone asset fire sales, become part of the walking-dead, or been absorbed into larger companies; and still others have been forced by their investors into mergers that make little sense to the outside observer.


These consolidations have brought about a dangerous commoditization of ideas, along with a slowdown of innovation. Even worse has been the obscuring – by over-enthusiastic and under-educated vendor marketing departments – of deeply important issues that sourcing practitioners must consider and understand in order to be successful.

In response to this, my co-author, Dr. Michael Lamoureux, launched the Sourcing Innovation blog with the specific purpose of educating practitioners and cutting through the marketing babble that had begun to dominate the discussion. Another co-author, David Bush, started the e-Sourcing Wiki (from which the bulk of this Handbook is taken) in a similar attempt to put fundamental e-Sourcing ideas and concepts into a publicly accessible forum. Over the years, David has also built Iasta’s e-Sourcing Forum blog into a credible and useful resource.

These efforts are laudable, but blogs and wikis are sometimes hard to navigate, and effort is often required to extract related information in a useful way. This Handbook is an effort to draw together the knowledge base of the Wiki, along with relevant blog postings, into a coherent and readable framework. Of course, one might argue that none of the authors are readable or coherent – and that may be a fair criticism – but we’ve made a best effort.

Because Michael is a strong and independent voice in the space, it’s appropriate that he is the editor of this Handbook. He has taken an interesting and unorthodox approach, choosing to mix factual and informative wiki articles with blog postings that are both controversial and opinionated. The juxtaposition of the two allows the reader to see where the boundary lies between information and advocacy. This is perhaps the first effort of its kind where two very different resources are interlinked in a constructive, and hopefully interesting, way.


I trust that this edition of the Handbook will be the first of many similar efforts, and that together we can collectively energize our space with accurate information and useful insights. Remember, the e-Sourcing Wiki is a public resource – anyone can contribute – so everyone should consider “sharing the wealth” and do so.