Category Archives: Spend Analysis

Spend Analysis II: Why Data Analysis Is Avoided

Today’s post is from Eric Strovink of BIQ.

If I have learned one thing during my career as a software developer and software company executive, it’s this: contrary to what I believed when I was a know-it-all 20-something, there are a lot of clever people in the world. And clever people make smart decisions (for example, reading this blog, which thousands do every day).

One of those decisions is the decision NOT to perform low-probability ad hoc data analysis. It’s a sensible decision. Sometimes it’s based on empirical study and hard-won experience, and sometimes it’s a gut feel; but either way, the decision has a strong rational basis. It’s just not worthwhile.

A picture is helpful:



The above shows the expected value of an ad hoc analysis of a $100K savings opportunity. On the X axis is the number of days required to prepare and analyze the data; on the Y axis, the probability that the analysis will be fruitful. I chose a $700 opportunity cost per analyst-day; choose your own number, it doesn’t really matter.

Note that the graph is mostly “underwater”; that is, the expected value of the analysis is largely negative. Unless the probability of success is quite high, or the time taken to perform the analysis is quite low, it’s simply not a good plan to undertake it.
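The break-even arithmetic behind the graph is simple enough to sketch. The figures below are the ones from the text ($100K payoff, $700 per analyst-day); the helper names are my own:

```python
# Expected value of an ad hoc analysis: EV = p * payoff - days * daily_cost.
# Figures from the text: $100K savings opportunity, $700 per analyst-day.
PAYOFF = 100_000
COST_PER_DAY = 700

def expected_value(p, days):
    """Expected value of a speculative analysis with success probability p."""
    return p * PAYOFF - days * COST_PER_DAY

def breakeven_probability(days):
    """Minimum success probability needed for the analysis to pay off."""
    return days * COST_PER_DAY / PAYOFF

# A 30-day analysis must succeed more than 21% of the time just to break
# even; a one-hour analysis (0.125 days) needs well under 0.1%.
print(breakeven_probability(30))    # 0.21
print(expected_value(0.10, 30))     # -11000.0
print(expected_value(0.10, 0.125))  # 9912.5
```

The last two lines make the "underwater" point directly: the same 10% hunch is a money-loser at 30 days and clearly worth chasing at an hour.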

We are all faced, from time to time, with the decision of whether to explore a hunch. But an analyst has only about 220 working days per year. Sending a key analyst off on a wild goose chase could be a serious setback, so it's a risky decision; we don't take it, and our hunches remain hunches forever.

But what if it weren't risky at all?

Nothing can be done about the "probability that the analysis will be fruitful"; that's fixed. But plenty can be done about the "number of days required to prepare and analyze data." Suppose a dataset could be built in five minutes and analyzed in under an hour. That would turn the expected value of speculative analysis sharply positive. Suddenly it is a very good idea indeed to perform ad hoc analyses of all kinds.

And that’s good news. Because there is a ton of useful data floating around the average company that nobody ever looks at. Top of the list? Invoice-level detail, from which all kinds of interesting conclusions can be drawn. Try this experiment: acquire invoice-level (PxQ) data from a supplier with whom you have a contract. Dump it into an analysis dataset, and chart price point by SKU over time. Chances are, like most companies, you’ll find something very wrong, such as prices all over the map for the same SKU (ironically, sometimes this happens even if you have an e-procurement system that’s supposed to prevent it). If you have a contract, only one of those prices is correct; the rest are not, and represent money on the table that you can recover trivially.
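The SKU price check described above can be sketched in a few lines of plain Python; the invoice records and contract prices here are invented for illustration:

```python
from collections import defaultdict

# Illustrative invoice-level (price x quantity) records: (sku, unit_price, qty).
invoices = [
    ("A-100", 4.50, 200), ("A-100", 4.50, 150), ("A-100", 5.25, 100),
    ("B-200", 12.00, 50), ("B-200", 12.00, 75),
]
contract_price = {"A-100": 4.50, "B-200": 12.00}  # assumed contracted rates

# Group observed prices by SKU and flag any SKU invoiced at multiple prices.
prices_by_sku = defaultdict(set)
for sku, price, qty in invoices:
    prices_by_sku[sku].add(price)

flagged = {sku for sku, prices in prices_by_sku.items() if len(prices) > 1}

# Money on the table: anything billed above the contracted price.
recoverable = sum(
    (price - contract_price[sku]) * qty
    for sku, price, qty in invoices
    if price > contract_price[sku]
)

print(flagged)      # {'A-100'}
print(recoverable)  # 75.0
```

That is the whole experiment: one pass to find SKUs with scattered price points, one pass to total the overcharge against the contract rate.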

Of course, please don’t spend weeks or months on the exercise, because then it won’t pay off. Instead, find yourself a data analysis tool with which you can move quickly and efficiently — or a services provider who can use the tool efficiently for you (and thus make a contingency-based analysis worthwhile for both of you). Bottom line: if you can’t build a dataset by yourself, in minutes, you’ll end up underwater, just like the graph.

Next installment: Crosstabs Aren’t “Analysis”

Previous installment: It's the Analysis, Stupid

Share This on Linked In

Spend Analysis I: “It’s The Analysis, Stupid.”

Today’s post is from Eric Strovink of BIQ.

James Carville is not my favorite person, but he's a funny man. And the above bastardization of his (in)famous Clinton campaign quote seems quite apropos, given the current frenetic level of marketing activity around "spend analysis". (I'm always amused by vendors using this term because, excuse me for asking, where's the "analysis"?)

So why is there so much spend analysis marketing activity all of a sudden? I suspect it's "Oracle Terror". For the last nine years, I've watched spend analysis vendors promote their "product" (typically a service masquerading as a product) using the same tired strategy: "We classify data better than [those other guys]." The problem is, when you spend so much time and effort dumbing spend analysis down to a simple-minded premise, you open the door for almost anyone, even a sleepy ERP vendor, to steal your lunch. And that's exactly what has happened. Oracle has neatly synthesized all of the "classification" messages and packaged them up with some Silicon Valley marketing magic, and now the legacy spend analysis vendors are in a panic. You're absolutely right, folks: Oracle's messaging is better than yours. Smarter, more sophisticated, priced innovatively; it's both ironic and funny. The only surprise is that this didn't happen years ago.

But here's the point: real spend analysis is so much more than classification that the whole classification discussion is absurd. It has always been absurd. Classification-centrism is the Titanic of spend analysis, aiming squarely at a snowball on top of the iceberg while completely ignoring the massive value beneath. Nevertheless, relentless classification-oriented marketing over many years has warped end-user perceptions, and carried analysts right along with it. Current analyst firm surveys devote over 90% of their questions to classification, Pandit's hopelessly off-target book (previously dissected and dismissed by Sourcing Innovation) is garnering new attention, and so on.

My iconoclastic point of view has been outlined in these (and other) pages before, but put very simply, it's this: classification is easy. Armed with appropriate tools, any intelligent person (your admin, for example) can be trained to do it effectively in about an hour; and the rules they generate can be applied automatically to new transactions forever after. When you stop to consider that sourcing consultants have been performing effective spend analysis for years using nothing more than pencil and paper, it's obvious that the classification Emperor really doesn't have any clothes.[1]

In fact, true value lies in the analysis that you perform. Value is about results, and results come from analysis, not from a data classification process that is just a baby step toward value realization, and one that may not even be relevant. For example, consider that spend classification is really only useful for A/P data. There are many higher-value sources of data lying around, and many datasets can be built from them. In most of those datasets, classification has no place at all. By the way, how many spend datasets do you plan on building? One? Just on A/P data? Then you are missing out on value, by a wide margin.

In this series, I’ll discuss the requirements for ad hoc data analysis, and the very real value that results from it. Spend analysis, at the end of the day, is just data analysis; so it’s critical that your data analysis tools provide the necessary power and flexibility to make you successful.

Next installment: Why Data Analysis Is Avoided

[1] Ironically, based on the datasets we've seen from customers who have walked away from their classification-centric vendors, talking a great game on classification doesn't necessarily mean delivering great classification.


What Impact Will the BI Megatrends from 2009 Have on Next Generation Spend Analysis?

An article in Intelligent Enterprise last year outlined the "Nine BI Megatrends for 2009" that the author expected to reshape business intelligence and information management in the year(s) ahead. Since spend analysis is a major component of business intelligence in the supply chain, one has to wonder what impact these megatrends will have. But first, let's review the megatrends presented in the article.

  • Open Source
    Low TCO, mature development stacks such as LAMP (Linux, Apache, MySQL and PHP, Perl, or Python) [or MAMP if you prefer the Mac which, being built on Unix, is fully compatible thanks to Xcode], and new open source offerings from players such as Pentaho are making open-source platforms and foundations attractive, and putting pressure on commercial vendors to bring down the TCO.
  • BI is becoming less isolated
    Many users are now employing reporting, access, and analysis tools that come with functional applications, forcing suites to break down silos to offer value.
  • Users are demanding a richer experience
    The days of simple, canned reporting are finally slipping into the past. BI portals are starting to become richer, more flexible, and more powerful. They’re using Rich Internet Application (RIA) technology to improve the user experience and incorporating mash-ups to allow users to better visualize the data.
  • BI is starting to focus on relationships
    BI used to focus on reports that did not provide any flexibility when it came to investigating data relationships, but new tools are giving users the ability to define their own relationships, cubes, and reports and dive into the data in new and innovative ways and find relationships that, classically, would take weeks of specialist data mining or statistical analysis to uncover.
  • Business Modeling meets MDM
    Master Data Management and emerging semantic models could serve business modeling in the same way that data models, schemas, and metadata served extract, transform, and load (ETL) tools. They are enabling some vendors to create tools that improve business modeling and its relationship to data modeling, with graphical interfaces that allow analysts to create their own data models without having to learn specialized languages or methodologies.
  • MapReduce meets Large Scale Data Analysis
    Although the most famous implementation belongs to Google, MapReduce is also available in the open source Apache Hadoop framework. It allows organizations to build parallel, virtualized architectures on server farms of commodity hardware that can analyze more data simultaneously than ever before, enabling the discovery of new relationships that can prove very insightful to BPM.
  • Column-oriented databases are attacking performance woes
    Some of the leading column-oriented database technologies are employing advanced compression and large-memory algorithms that are changing the game for BI and data warehouse architectures, allowing complex queries to be answered in realistic amounts of time.
  • Event Processing is opening up new analytical possibilities
    Emergent applications in healthcare, telecommunications, intelligence, IT management, gaming, and web analytics are capturing events and correlating them with analytics from BI tools to give organizations actionable insight.
  • Too Big to Fail
    As more and more queries are run against multi-billion row tables in data warehouses managing hundreds of terabytes of data (and growing daily), we’ll see more and more BI implemented to improve BI.
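
To make the MapReduce bullet concrete, here is a toy, single-machine sketch of the map, shuffle, and reduce phases applied to spend aggregation. The transaction data is invented, and a real Hadoop job would distribute these phases across a cluster:

```python
from functools import reduce
from itertools import groupby
from operator import itemgetter

# Toy transactions: (category, amount). On a server farm, the map and
# reduce phases below would each run in parallel across many machines.
transactions = [
    ("office", 120.0), ("travel", 900.0), ("office", 80.0), ("travel", 450.0),
]

# Map phase: emit (key, value) pairs; here the records are already pairs.
mapped = [(category, amount) for category, amount in transactions]

# Shuffle phase: group the intermediate pairs by key.
mapped.sort(key=itemgetter(0))
grouped = {k: [v for _, v in g] for k, g in groupby(mapped, key=itemgetter(0))}

# Reduce phase: fold each key's values down to a single total.
totals = {k: reduce(lambda a, b: a + b, vs) for k, vs in grouped.items()}
print(totals)  # {'office': 200.0, 'travel': 1350.0}
```

The point of the pattern is that each phase is embarrassingly parallel, which is why commodity server farms can throw it at billion-row datasets.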

So what does this mean for spend analysis? With the exceptions of MapReduce and column-oriented databases, not much. The reality is that It's the Analysis, Stupid, and anything that doesn't simplify analysis while increasing the analytical power available to the user won't stay on the radar very long. That's why I'm pleased to inform you that Eric Strovink's new series on Spend Analysis starts within a week. As I'm sure it will be as informative and forward-looking as his last two (linked in Spend Rappin'), I'm certainly looking forward to it!


Will the Tigers Truly Latch On To Analysis and Optimization?

This is the Year of the Tiger (in more ways than one) and, according to a recent article in the SCMR on "Supply Chain 2010", which cited an AMR Research survey of 184 companies that found performance management to be the most strategic supply chain technology investment, software applications in 2010 will focus on analysis and optimization.

I hope so, because it would be great if companies

  • actually knew how much they were spending,
  • who was getting the money,
  • what they were getting for it,
  • how much they should have paid versus how much they were invoiced, and
  • how much could have been saved with better information and more leverage.

And it would be wonderful if companies could clearly see that

  • lowest bid is not lowest TCO,
  • lowest landed cost is not lowest TCO,
  • lowest acquisition cost is not lowest TCO, and
  • even the lowest Total Cost of Ownership is not necessarily the best value because
  • Total Value Management means that you need the ability to simultaneously analyze cost, risk, and non-price factors to make the best buy decision.

So will it happen? Or will those few of you smart enough to understand the incredible value these technologies have to offer continue to outpace your competition by leaps and bounds for another year?


Business Intelligence is More than Data Mapping and Cleansing!

BI, more BI, and even more BI. Every time I check a supply management or technology publication, I see yet another article on BI, like this recent article from Inside Supply Management on "getting smart at business intelligence". Now, you'd think I'd be pleased by this, as I'm always promoting advanced sourcing applications like decision optimization and spend analysis, because good technology helps you do good analysis, which helps you make decisions that make you efficient and cost effective, but I'm not. Because every frickin' BI article, just like every spend analysis article, always starts with mapping and cleansing, and then dwells on it like it's the be-all and end-all.

Now, I probably shouldn’t complain because what is your average journalist supposed to think is important when even the high-and-mighty analysts — who are supposed to know that “It’s the Analysis, Stupid” — write long-winded thirty-five (35) question spend analysis surveys where twenty-nine (29) questions are about mapping, cleansing and categorization and only one (1) question is about analysis, but I am going to complain, because it’s not helping any of us. It’s not helping those of us trying to teach you what real high-end technology should, and can, do for you and it’s not helping you find the best tools for the job.

You see, real Business Intelligence, when you get right down to it, is not mapping and cleansing, not business unit involvement (because all you really need is the data), not rapid prototyping (because any solution you use should already be built, as there are already lots of tools out there), not integration (because modern middleware platforms do that for you with point-and-click interfaces), and not canned reporting (which only tells you what you're doing, not what you should be doing). Real business intelligence is making smart decisions based on insights gleaned from real data analysis ... and real data analysis requires a tool that can cube, slice, and dice data any way you can think of looking at it. Face it, just like there's no such thing as (a) spend intelligence solution, there's no such thing as a business intelligence solution, because half of the "solution" is the brains in your head. Brains that won't realize their full potential without a real data analysis tool to provide answers to their inquiries. So what is the definition of a real data analysis tool? I think I'll let Eric answer that in his forthcoming series. (See the recent Spend Rappin' repost for quick links to his previous ground-breaking and forward-thinking series on spend analysis.)
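
As a rough illustration of what "cube, slice, and dice" means in practice, here is a toy roll-up over three invented dimensions. The records and the `rollup` helper are assumptions for illustration, not any particular tool's API:

```python
from collections import defaultdict

# Toy spend records over three dimensions; all values are invented:
# (supplier, category, month, amount).
records = [
    ("Acme", "office", "Jan", 100.0),
    ("Acme", "travel", "Jan", 300.0),
    ("Acme", "office", "Feb", 150.0),
    ("Bolt", "office", "Jan", 200.0),
]

def rollup(records, *dims):
    """Aggregate spend along any subset of the (supplier, category, month)
    dimensions -- the essence of slicing and dicing a cube."""
    axis = {"supplier": 0, "category": 1, "month": 2}
    out = defaultdict(float)
    for rec in records:
        key = tuple(rec[axis[d]] for d in dims)
        out[key] += rec[3]
    return dict(out)

# The same data, viewed along whichever dimensions the question demands.
print(rollup(records, "category"))           # {('office',): 450.0, ('travel',): 300.0}
print(rollup(records, "supplier", "month"))
```

The brains-in-your-head part is deciding which roll-up to ask for next; the tool's job is merely to answer any of them in seconds rather than weeks.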
