Category Archives: Spend Analysis

Aberdeen on Spend Analysis: Lost in the Trees

Today I’d like to welcome back Eric Strovink of BIQ [acquired by Opera Solutions, rebranded ElectrifAI].

Aberdeen’s report on Spend Analysis (“Spend Analysis: Working Too Hard for the Money,” available free from Iasta [acquired by Selectica, merged with b-Pack, rebranded Determine, acquired by Corcentric] and others) draws useful conclusions about the forest, but then, like many other studies in this space, loses its way in the trees. Consider this reasoning:

  • Wealthy people tend to be successful.
  • Wealthy people typically drive luxury automobiles.
  • Therefore, wealthy people are successful because they drive luxury automobiles.

Aberdeen seems to reason about spend analysis in the same way:

  • Best-in-class purchasing organizations tend to have bought spend analysis systems.
  • Best-in-class organizations typically have bought spend analysis from one of the “Big 3” SA vendors.
  • Therefore, organizations are best-in-class because they bought spend analysis from the “Big 3.”

Thus, when the report uses survey data to uncover the underlying reasons why best-in-class organizations are successful, it loses its way. Using circular reasoning, it offers up precisely what one would expect: the standard “Big 3” marketing messages. Those messages, most of which haven’t changed in years, are these:

  1. Automated spend classification. “Only we can classify your spend, with automated algorithms and Bayesian analysis and special databases and… and… well, the point is, you can’t do it yourself, you have to hire us.”
  2. Standard reports. “Our suite of standard reports is better than anyone’s. Why, you don’t even need a sourcing consultant or a sourcing expert on staff, our reports tell you exactly what to do.”
  3. Integration with RFx. “Buy our suite, because it’s all ‘integrated.’ Just ignore the fact that spend analysis doesn’t ‘integrate’ with RFx, that’s not important now.”
  4. Integration with Contract Management for ‘compliance.’ “Let’s fail to mention that you can create a rules-based Contracts dimension yourself in just a few hours, whether you have a CM system or not.”

I’ve addressed these points at length elsewhere (see, for example, Common Sense Cleansing and What Purchasing.com Got Wrong), so I won’t repeat them here, except to point out that every SA application worth its salt creates a rules system that automatically classifies and maps spend. That’s the whole point, after all. Aberdeen’s report confuses the process of rules system creation with the results of rules system creation. Once a rules system is built, by whatever methods, the end result is a system that automatically maps and classifies both current and future spending.
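
For the record, here’s a minimal sketch of what such a rules system amounts to, assuming a simple ordered-rule model; the fields, patterns, and categories are purely illustrative, not any vendor’s schema:

```python
# Minimal sketch of a rules-based spend classifier: an ordered list of
# rules matched against transaction fields, first match wins. The field
# names, patterns, and categories are illustrative only.
import re

RULES = [
    # (field to test, regex pattern, category to assign)
    ("vendor",      r"\bDELL\b|\bLENOVO\b", "IT Hardware"),
    ("gl_code",     r"^61\d\d$",            "Office Supplies"),
    ("description", r"freight|shipping",    "Logistics"),
]

def classify(txn: dict) -> str:
    """Apply rules in order; unmatched spend is flagged for review,
    and each review decision becomes a new rule for future data."""
    for field, pattern, category in RULES:
        if re.search(pattern, str(txn.get(field, "")), re.IGNORECASE):
            return category
    return "UNCLASSIFIED"

# Once the rules exist, they classify current *and* future transactions:
print(classify({"vendor": "Dell Marketing LP", "gl_code": "4500"}))  # IT Hardware
```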

The above notwithstanding, the report contains a fascinating chart that shows survey respondents’ opinions of the “importance” of data analysis, data management, reporting, and supplier content, plotted against those same respondents’ assessments of their “current ability” in those four areas. It appears that “current ability” deeply lags “importance” in all four of them. As Aberdeen says,

While organizations recognize the advantages that can be gained from technology deployment for spend analysis, they have still not bridged the gap between theory and practice. Across the primary steps in the spend analysis process, enterprises are generally unable to fully leverage their spend analysis solutions… [emphasis added]

Aberdeen fails to draw the obvious conclusion from this — namely, that legacy approaches to spend analysis are disappointing their users across the board, despite causing an uptick in procurement efficiency. This result ought to be a key conclusion of the study. Spend analysis is about analysis, after all, not about the mechanics of data preparation. In fact, the four key components of spend analysis are:

  • Powerful analysis and ad hoc reporting tools (“data analysis” and “reporting”)
  • Flexible and ultra-fast dataset creation (“data management” and “supplier content”)
  • Real-time dataset modification (“data management,” “data analysis,” and “reporting”)
  • Flexible deployment (Aberdeen doesn’t address this, but the SA space has changed: powerful spend analysis is now deployable for small dollars, on individual analysts’ desktops, without an organization-wide commitment).

All of these components are interdependent — for example, you can’t perform ad hoc analysis if you can’t quickly change the structure of a dataset. And, you can’t change the structure of a dataset if it’s shared with others, because the other users certainly won’t appreciate you changing things out from under them.

It really should be old news by now: data extraction, transformation, loading, familying, and mapping are processes that are easily automated by in-house personnel using modern tools, or by outsourced resources using those same tools. It’s a shame that Aberdeen chose to focus on the “old think” of cleansing — only the very first step of a spend analysis effort — rather than pursuing the most interesting of its own survey results.
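
Familying, for instance, is often sold as black magic, but at its core it’s mechanical name normalization. A deliberately naive sketch, with placeholder normalization rules (real tools use much richer matching):

```python
# Illustrative sketch of vendor "familying": collapsing name variants
# ("IBM Corp.", "I.B.M.", "IBM Global Services") under one family key.
# The normalization rules here are deliberately naive placeholders.
import re
from collections import defaultdict

SUFFIXES = re.compile(r"\b(inc|corp|co|llc|ltd|lp|global services)\b\.?")

def family_key(vendor_name: str) -> str:
    name = re.sub(r"[.,]", "", vendor_name.lower())  # "I.B.M." -> "ibm"
    name = SUFFIXES.sub("", name)                    # drop legal suffixes
    return " ".join(name.split())                    # squeeze whitespace

families = defaultdict(list)
for v in ["IBM Corp.", "I.B.M.", "IBM Global Services", "Dell Inc"]:
    families[family_key(v)].append(v)

print(dict(families))
# {'ibm': ['IBM Corp.', 'I.B.M.', 'IBM Global Services'], 'dell': ['Dell Inc']}
```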

Spend Analysis: What Purchasing.com Got Wrong

Purchasing’s recent article on “How to Select a Sourcing Strategy” wasn’t the only article that just didn’t make the cut in my book. Their “ABCs of Spend Analysis” article missed the point as well. However, knowing that Eric Strovink of BIQ (acquired by Opera Solutions, rebranded ElectrifAI) would also be taken aback by this article, I invited him to shed some light on what it missed. So, without further ado, here’s Eric’s guest post on What Purchasing.com Got Wrong.

Occasionally an article crosses my desk that seems well-written and insightful, such as Purchasing.com’s “The ABCs of Spend Analysis” — but only if I’m willing to accept assumptions with which I can’t agree.

“A: Acquire the data skills”

Wait, stop right there. In my view, it shouldn’t be necessary to “acquire data skills” in order to manipulate and report on spend data. This limits usage to a fraction of the business users who otherwise could deeply improve their understanding of what’s going on. Any requirement to dump data out of a spend analysis system and import it into Microsoft Access or any other database management system is a glaring indictment of the spend analysis system itself. It’s supposed to be “spend analysis,” for goodness sake, so where’s the “analysis”? Similarly, a requirement for SQL skills or other IT expertise in order to construct a report is equally an indictment of the spend analysis system.

We should not allow tool limitations to dictate that business users must become IT experts in order to analyze their data. That’s like saying drivers must become mechanics before they can use the interstate highway system.

“B: Bring the data together”

One could hope that “ETL (Extract, Transform, Load)” would not be the theme of this page, but of course it is. In fact, the only letter that’s relevant in this tired acronym is “T” (for “Transform”). If you can’t load data into your spend analysis system, then find one that makes it easy. If you can’t dump data out of your ERP or accounting system(s) in some reasonable flat file format (like .csv), you didn’t try very hard. Every accounting system I’ve dealt with in the last four years, old or new, has a perfectly reasonable and simple method for dumping its data, almost always a method that requires no IT assistance at all.

Transformation is necessary in order to coerce data from incompatible systems into a common format. A good transformation tool should be able to move any field from one column to any other; create new fields; eliminate fields; and create any output field as a function (including string, math, and logical functions like “IF”) of any number of input fields. It should be able to save the transform and apply it on refresh.

And the transformation tool should be — you guessed it — operable by ordinary business users, not just by IT types.
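
To make these requirements concrete, here’s a minimal sketch of a saveable transform, with hypothetical input column names: each output field is a function of the input row, so the same spec re-applies on every refresh, and fields not listed are eliminated.

```python
# Minimal sketch of a saveable transform: every output field is a function
# of the input row, so the same spec re-applies on each refresh. Input
# column names are hypothetical; unlisted fields are eliminated.
import csv

TRANSFORM = {
    "vendor":  lambda r: r["Supplier Name"].strip().upper(),       # move + clean
    "amount":  lambda r: float(r["Net"]) + float(r["Tax"]),        # math function
    "period":  lambda r: r["Invoice Date"][:7],                    # derived field
    "flagged": lambda r: "Y" if float(r["Net"]) > 10000 else "N",  # logical "IF"
}

def apply_transform(in_path, out_path):
    with open(in_path, newline="") as fin, open(out_path, "w", newline="") as fout:
        writer = csv.DictWriter(fout, fieldnames=list(TRANSFORM))
        writer.writeheader()
        for row in csv.DictReader(fin):
            writer.writerow({name: fn(row) for name, fn in TRANSFORM.items()})
```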

“C: Change the way you source”

Wait, should we just plow ahead and start sourcing? It turns out that accounts payable-level spend analysis doesn’t really show you very much, and this section of the article reinforces the point. “We realized we were spending more with Supplier K than we had previously thought, and this gave us more leverage in negotiations.” OK, but do you know whether contract terms were met? Was the supplier over-charging? What were the exact buy points and quantities with supplier Q for commodity X? Why didn’t a contract with supplier Y result in the savings we projected?

Problem is, an A/P-level cube only peels back the first layer of the onion. You’ve achieved a reasonable idea of what was bought, who bought it, and who supplied it, and that’s important. However, spend analysis is an iterative process of first identifying macro behaviors, and then zooming in using micro analysis on a commodity-specific basis. The high-level cube gives you an indication of what might be wrong — too many suppliers, or too few; too high a spend rate given [number of employees] or [size of business]; too much off-contract spend. But that’s all it gives you. You don’t really know, for example, whether off-contract spend (Fred down in Order Entry buying a Dimension 800 from Dell) is an inferior price point to the company’s VAR contract for ThinkPads — or whether Fred actually got a better deal.
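
Here’s a rough sketch of the kind of macro indicators a high-level cube can surface; the row fields and thresholds are purely illustrative:

```python
# Rough sketch of "macro" indicators from an A/P-level cube: supplier
# count and off-contract share per commodity. Row fields and thresholds
# are illustrative placeholders.
from collections import defaultdict

def macro_flags(rows, max_suppliers=10, max_off_share=0.20):
    suppliers = defaultdict(set)
    spend = defaultdict(float)
    off = defaultdict(float)
    for r in rows:  # r: {"commodity", "supplier", "amount", "on_contract"}
        suppliers[r["commodity"]].add(r["supplier"])
        spend[r["commodity"]] += r["amount"]
        if not r["on_contract"]:
            off[r["commodity"]] += r["amount"]
    for c in spend:  # flag commodities worth a commodity-specific micro look
        if len(suppliers[c]) > max_suppliers or off[c] / spend[c] > max_off_share:
            yield c, len(suppliers[c]), round(off[c] / spend[c], 2)
```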

If the high-level cube hints at a possibility for cost reduction, should you run right out and start running sourcing events? Maybe not. For one thing, it’s sometimes difficult to determine whether high spend is a demand issue or a supply issue, and sourcing won’t touch the demand side. Sourcing events are politically disruptive, can take many months to implement, and can upset long-term supplier relationships unnecessarily. It’s entirely possible that quietly confronting an incumbent supplier with a detailed analysis of buy patterns from invoice-level data can not only change behavior immediately, but also return money to the bottom line from overcharges. And, it certainly will tell you if you have a demand problem. If, at the end of the day, sourcing is still required, fine; but in many cases it’s not. Few incumbent vendors want to go through a sourcing exercise and potentially lose the business; they’d rather meet Fred’s price point.
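
And here’s a rough sketch of the invoice-level buy-pattern analysis I’m describing, assuming line-item data with unit prices and a (hypothetical) table of contracted rates:

```python
# Rough sketch of invoice-level buy-pattern analysis: compare each line's
# unit price against a (hypothetical) contracted rate to surface both
# overcharges and off-contract buys that beat the contract.
def price_exceptions(lines, contract_rates, tolerance=0.02):
    for ln in lines:  # ln: {"item", "supplier", "unit_price", "qty"}
        rate = contract_rates.get(ln["item"])
        if rate is None:
            continue  # no contract to compare against
        gap = (ln["unit_price"] - rate) / rate
        if gap > tolerance:
            yield ln["item"], ln["supplier"], ln["unit_price"], "overcharge"
        elif gap < -tolerance:
            yield ln["item"], ln["supplier"], ln["unit_price"], "beats contract"
```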

Thanks Eric! I could not have said it better myself.

A Conversation with Ketera

When I was down in the valley, I made a point of visiting Santa Clara to catch up with Ketera (acquired by Deem), who recently announced their next-generation spend analysis solution, which I’m not going to spend too much time talking about (even though they are getting a good reception and a lot of customers because of it).

I did get a chance to see it recently, and it’s quite good. It’s not the absolute best in any regard (automatic classification, reporting, flexibility, etc.), but the overall solution puts it in the short stack of solutions you should definitely be including in your review process. They have a two-tier classifier, which starts off with what they call a high-confidence classifier (which, you guessed it, is based on the old GL-code, vendor, and GL-code-plus-vendor mappings) and a trio of lower-level, lower-confidence classifiers (that use various rules and external sources) to try and collectively get a high-confidence match. Failing that, they have manual classification and a rules engine to automate the match in the future. They also have a tiered knowledge base (global, industry, and organization) to allow for multi-level rules and classification. Their reporting solution is now based on MicroStrategy, which, besides giving you standard best-of-breed reports, allows a user to fairly easily build basic BI reports and drill down within current reports. And it’s on-demand. (So, in recap: they potentially have one of the best classifiers, one of the better reporting front-ends from a usability perspective, and it’s on-demand.)
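
Conceptually, the cascade works something like the following sketch; the names, the voting rule, and the thresholds are my illustration, not Ketera’s implementation:

```python
# Conceptual sketch of a two-tier classification cascade: exact,
# high-confidence lookups first (GL code + vendor, GL code, vendor),
# then several lower-confidence classifiers that must agree. All names
# and the voting rule are illustrative, not Ketera's implementation.
def classify(txn, exact_map, weak_classifiers, min_agree=2):
    # Tier 1: high-confidence exact lookups
    for key in [(txn["gl_code"], txn["vendor"]),
                (txn["gl_code"],), (txn["vendor"],)]:
        if key in exact_map:
            return exact_map[key], "high"
    # Tier 2: weak classifiers (keyword rules, external sources) vote
    votes = {}
    for clf in weak_classifiers:
        guess = clf(txn)
        if guess:
            votes[guess] = votes.get(guess, 0) + 1
    best = max(votes, key=votes.get, default=None)
    if best is not None and votes[best] >= min_agree:
        return best, "medium"
    return None, "manual"  # route to review; the fix becomes a future rule
```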

My interest, and the focus of my California conversation, was what they are doing with their e-Procurement and Invoice Management offerings to enable the Procure-to-Pay (P2P) cycle; their recent partnership agreement with Hyperion (which was just acquired by Oracle) and their progress towards delivering operational Business Intelligence capabilities; and their supplier enablement (and catalog hosting) services and solutions.

At the moment, their procurement solution is one of the better enterprise solutions out there – it’s usable, and it interfaces with their Invoice Management Solution and Supplier Management Portal. They have a two-level checkout process, basic and advanced, since most organizational users will use the system only once a month and need a simple solution, and they have integrated document management. Furthermore, a future release will include Hyperion reporting.

My rationale for this interest is threefold. First, there appear to be significantly fewer general-purpose e-Procurement solutions than general-purpose strategic sourcing solutions (even though there are a lot of niche offerings, especially in the Software-as-a-Service arena). Second, even though there are a lot of Business Intelligence solutions and a lot of dashboard solutions, not many attempt to tie together operational metrics with their financial impacts. Third, even though there are a growing number of supplier management and supplier information management solutions, not many support the quick enablement of a large number of suppliers. I wanted to know how close Ketera had come to achieving each of these goals. That’s why I was glad to hear that they were working on tying their invoice management and procurement solutions in with their contract management solutions, that they were working on Hyperion integration, and that they have a few other surprises for the year ahead. After all, once you have this, you just add better integration with spend analysis, and you have a great start at an integrated procurement and sourcing application that would be a great solution for many underserviced mid-market firms.

(Spend) Analytics vs. (Decision) Optimization

I had a very interesting conversation with Eric Strovink, my co-author of the “Spend Analysis and Opportunity” e-Sourcing Wiki [WayBackMachine], about the power of analytics and how, when applied after an RFP or Auction, they can reduce the complexity of award optimization to the point where, in his view, optimization might not even be needed at all.

Most of you probably think advanced analytics, like those provided by BIQ (acquired by Opera Solutions, rebranded ElectrifAI), are only for Spend Analysis. In this regard you’re wrong. Dead wrong. But I understand why. Most spend analysis products are built on the idea of one cube, built on historical transactional data merged from the ERP, AP, and any other spend database you have in your possession, and augmented with external sources – a cube which is then essentially frozen (even if it is updated daily with data). The good ones come with dozens of built-in best-of-breed reports, and allow you to build hundreds more – but on that one cube. Therefore, you can’t use it on your RFP or auction data, because it’s pre-award data – and not the transactional data that’s allowed in the cube.

But what if instead your organization had a spend analysis product that allowed you to build a spend cube any time you wanted – on any data you wanted – on any dimensions you wanted – and then throw it away when you were done? Then there would be nothing to stop you from building a cube on your RFP or Auction data, building reports by supplier, by cost, or by property (minority supplier, quality, historical on-time delivery), building cross tabs and tree maps, and then changing the cube to look at the data a different way.

You wouldn’t need optimization or a plethora of deterministic reports to find out who the lowest-cost supplier was, who the highest-quality supplier was, who the lowest-cost supplier was relative to your quality metric, or the answer to any other question that rank and cross-tab queries can easily provide.
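
To illustrate with toy bid data (the fields are hypothetical RFP responses, not any particular tool’s schema), these rank and cross-tab queries are one-liners:

```python
# Toy bid data: hypothetical RFP responses, not any tool's schema.
bids = [
    {"supplier": "A", "item": "widget", "price": 9.50, "quality": 4},
    {"supplier": "B", "item": "widget", "price": 8.75, "quality": 3},
    {"supplier": "C", "item": "widget", "price": 9.10, "quality": 5},
]

# Rank: lowest-cost supplier
print(min(bids, key=lambda b: b["price"])["supplier"])                 # B

# Rank: lowest cost relative to a quality metric (price per quality point)
print(min(bids, key=lambda b: b["price"] / b["quality"])["supplier"])  # C

# Cross-tab: supplier x item price matrix
xtab = {(b["supplier"], b["item"]): b["price"] for b in bids}
```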

You’d still need optimization, because analytics alone couldn’t tell you the best way to make the 50-30-20 split between three top suppliers subject to your qualitative and on-time delivery requirements when your freight costs vary by ship-to location, but it would greatly simplify the optimization process. First of all, you could easily see which suppliers do not make the cut on quality or on-time delivery metrics and eliminate them with a couple of rankings. Then you could quickly analyze total cost rankings based on presumed 100% awards to each supplier and quickly determine that you could only do the split between three of the top five bids, since the rest of the bids are just too high to consider. Furthermore, you could eliminate the need for the quality or on-time delivery constraints, since you have already eliminated all suppliers that do not meet the requirements. Now you have reduced model size and model complexity, and significantly decreased solve time.
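
A minimal sketch of that pre-filtering step, with illustrative thresholds and field names:

```python
# Minimal sketch of pre-filtering before optimization: drop suppliers that
# miss the quality or on-time-delivery bars, then rank the rest by the total
# cost of a presumed 100% award. Thresholds and field names are illustrative.
def shortlist(suppliers, min_quality=4.0, min_otd=0.95, keep=3):
    ok = [s for s in suppliers
          if s["quality"] >= min_quality and s["otd"] >= min_otd]
    ok.sort(key=lambda s: s["cost_100pct_award"])  # incl. freight to each site
    return ok[:keep]  # quality/OTD constraints can now drop out of the model
```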

In addition, with all the insight you are able to gain with true analytics on the RFP or Auction data, you are much more likely to get the model right the first time. No more running a model, getting a solution, deciding the solution doesn’t quite work, adding a constraint, and running again. Furthermore, you no longer need to run a model with just the quality constraint, or just the on-time delivery constraint, or just the 50-30-20 constraint, to determine how each constraint impacts the “best award”. For a sophisticated model where you might have run a few dozen what-if scenarios in the past to understand the interaction of the various costs, factors, and constraints in your quest to build the “right” model, you might now only need a few what-if scenarios to get it right.

And, more importantly, optimization becomes a lot friendlier. You know you have the right data, you know you have the right costs, you know you have the right factors, and you know you have the right constraints. In other words, you know you have the right model – which means you know you have the right answer, even if you are a novice! No longer do you need to rely entirely on the math to get it right – the math is only needed in the final step!

That’s Spend Management 2.0! (Sorry Tim*.)

* Minahan of Supply Excellence [WayBackMachine]

Procuri Spend Analysis

During my brief Chicago tour, I had a chance to sit down with Rod True, Senior Vice President of Procuri (acquired by Ariba, itself acquired by SAP) and former President and Founder of TrueSource, which Procuri acquired last year, to talk not only about Procuri’s TotalSpend solution, but also about what Spend Analysis, Visibility, and Intelligence mean to Procuri and where their solution is going.

Although I do believe that their tool is not yet a perfect “total” spend solution (but to be fair, I do not think any tool is – which is probably obvious from my recent posts on the spend visibility space), I also believe that, with the acquisition of TrueSource, Procuri is just as close, if not closer, than any of the other big players in the space. The reason for this lies largely with Rod True and the team he built, and with the fact that they get that “spend intelligence” requires three major components to be successful: accurate visibility across all of the relevant data, analysis capabilities, and the ability to use the data for compliance initiatives.

To this end, TrueSource spent a great deal of time on ETL tools that could not only load data from a large number of data sources, but also map such data into a plethora of out-of-the-box and custom categorization schemes, and do so in such a way that duplicates are detected and dropped. (After all, if your data is no good, neither is your analysis.) Moreover, knowing that most companies still use old ERP or database systems where the best they can muster is a full database dump (for the last month / quarter / year), they have built their ETL tools in such a way that their spend warehouse can be incrementally updated from a full database dump at any time.
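
Conceptually, an incremental refresh from a full dump can be as simple as fingerprinting every row and loading only the fingerprints you haven’t seen; the sketch below is my illustration, not TrueSource’s implementation:

```python
# Sketch of an incremental refresh from a *full* database dump: fingerprint
# every row and load only the ones the warehouse hasn't seen, so duplicates
# from overlapping dumps are detected and dropped. My illustration, not
# TrueSource's implementation.
import hashlib

def refresh(dump_rows, seen, warehouse):
    for row in dump_rows:  # row: dict of transaction fields
        fp = hashlib.sha1(
            "|".join(f"{k}={row[k]}" for k in sorted(row)).encode()
        ).hexdigest()
        if fp not in seen:
            seen.add(fp)
            warehouse.append(row)  # or an INSERT into the spend warehouse
```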

They have also built in a large number of reports (over 70) and standard reporting capabilities (through a custom report builder) to allow for role-based reporting and analysis, compliance & audit management, category management, and diversity management and built their warehousing capabilities to support just about any categorization you can desire. They also have role-based dashboards, category project management, and category intelligence built into the solution.

Furthermore, knowing that they could not possibly think of all of the things you might want to do with the data, they also support the fine-grained export of any set or subset of data or report that you might want to analyze in further detail.

And this is where I believe their one weakness lies. They have visibility down pat (and pride themselves on their ability to quickly develop an automated cleansing, classification, and refresh process for just about any data source you can imagine), and they understand that the entire point of any spend effort is compliance – with diversity requirements, with reporting regulations, with business decisions, and with sourcing decisions (otherwise your “savings” might never be realized). But their analytics is limited to what you can do with their pre-defined reports and report builder. And although I have to admit that what they have is most likely more than enough for most of the users in an organization – executives, managers, and even average users – I am not convinced it will ultimately satisfy the emerging spend power users.

It is true that a power user can easily tap into their cleansed data feed and extract just the data they want (and they told me that they are surprised at how fast their power users can get just the data they want for a custom report and build it in another tool), but I believe that the next generation of spend power users are going to want the ability to create their own custom views, reports, and analyses in the tool itself, versus on their desktop with Microsoft Office or a similar end-user tool.

However, you still need a centralized spend repository with complete, clean, categorized data for your analysis, reporting, and compliance management – and this solution is definitely a valid starting point from that perspective.