Category Archives: Spend Analysis

You Cannot Overlook SSDO And Optimize Your Supply Chain

I was taken aback by this recent article in SupplyChainBrain on Supply Chain Optimization in the New Analytics Economy, which outlined five analytics-enabled objectives but did not include strategic sourcing decision optimization (SSDO), the next logical step in the sequence. Consider the objectives:

  • Supply Chain Visibility
    Step one is to understand how much the supply chain is costing you.
  • Demand Forecasting and Inventory Optimization
    Step two is to segment the supply chain, forecast demand, and then optimize inventory for each segment.
  • Network Optimization
    Step three is to periodically perform total cost of ownership (TCO) assessments on the different segments of the existing supply chain network to identify the optimal performance configuration.
  • Predictive Asset Maintenance
    Step four is to perform predictive and preventative maintenance to minimize unplanned downtime and maximize asset availability.
  • Spend Analytics
    Step five is to understand how much is being spent on each procurement category and identify those with the most savings opportunities.

The next natural step is:

  • Strategic Sourcing Decision Optimization
    Once the categories with the biggest savings opportunities are identified, it’s time to optimally source them so the overall TCO is minimized and the utilization of the current networks, optimized in step three, is maximized.

How could you possibly stop at step five?
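To make that last step concrete, here is a minimal sketch of a sourcing award expressed as a cost-minimization problem, in the spirit of strategic sourcing decision optimization. The supplier names, unit costs, capacities, and demand are hypothetical, and a real model would also fold freight, switching, and qualification costs into the TCO objective.

    # A minimal sourcing-award sketch: choose how much volume to give each
    # supplier so total cost is minimized, subject to demand and capacity.
    # Supplier names, unit costs, capacities, and demand are hypothetical.
    from scipy.optimize import linprog

    unit_cost = [12.0, 11.5, 13.2]   # $/unit for suppliers A, B, C
    capacity = [4000, 2500, 6000]    # maximum units each supplier can deliver
    demand = 8000                    # total units required

    # Minimize unit_cost . award, subject to sum(award) == demand
    # and 0 <= award[i] <= capacity[i]
    result = linprog(
        c=unit_cost,
        A_eq=[[1, 1, 1]], b_eq=[demand],
        bounds=list(zip([0, 0, 0], capacity)),
        method="highs",
    )
    print(dict(zip("ABC", result.x)), "total cost:", result.fun)

Even a toy model like this makes the point: once spend analytics (step five) has ranked the categories, the award itself is an optimization problem, not a spreadsheet exercise.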


Analytics VI: Conclusion

Today’s post is by Eric Strovink of BIQ.

I’ve suggested previously in this series that analysis doesn’t have to be done by an applied mathematician; the key is to get insights about data. Sometimes those insights do require rigorous statistical analysis or modeling, to be sure. Much more often, though, one simply needs to examine the laundry, and the dirty socks stand out without any mathematical legerdemain.

Examining the laundry requires data manipulation. This usually takes the form of data warehousing, i.e., classic database management technology, extended in the case of transactional data to OLAP (“Online Analytical Processing”), SQL and/or MDX, and reporting languages and tools. The problem is that business data analysts typically have insufficient IT skills to wield these tools effectively; and when they do have the skill, they seldom have the time. Thus, ad hoc analysis of data remains largely aspirational.
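For a sense of how low the bar actually is when the tooling cooperates, here is a toy example of the kind of ad hoc cut a business analyst wants to make without IT help. The file and column names are assumptions, not any particular system's schema.

    # Roll transactional spend up by category and supplier and surface the
    # biggest buckets -- the "dirty socks" usually stand out at this level.
    # The CSV file and its column names are hypothetical.
    import pandas as pd

    spend = pd.read_csv("ap_transactions.csv")   # one row per invoice line
    by_supplier = (
        spend.groupby(["category", "supplier"], as_index=False)["amount"]
             .sum()
             .sort_values("amount", ascending=False)
    )
    print(by_supplier.head(20))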

Custom data warehouses have value for organizations. ERP systems are a good example. But the data warehouse is a dangerous partner. It is not the source of all wisdom. It cannot possibly contain all the useful data in the enterprise. Warehouse vendors have trouble admitting this. For example, for years ERP sales types claimed that all spending was already tracked and controlled by the ERP system, so there was no need for a specialized third-party “spend analysis” system. These days all the major ERP vendors offer bolt-on spend analysis.

Spend analysis has the same issue. It introduces another static data warehouse, an OLAP data warehouse, along with data mapping tools that are typically not provided to the end user. As above, the data warehouse is a dangerous partner. It is not the source of all wisdom. It cannot possibly contain all the useful spend data in the enterprise. Spend analysis is not just A/P analysis; it can’t be done with just one dataset; and it’s not a set of static reports.

Once an opportunity is identified, more analysis is required to decide how to award business optimally. The Holy Grail of sourcing optimization has been a tool that is approachable for business users; but this goal has proved to be elusive. The good news is that “guided optimization” is now available from multiple vendors at reasonable price points. Although optimists (mostly experts at optimization) have argued for several years now that optimization is easy enough for end users without guidance, I take the practical view that it doesn’t really matter whether that’s true or not. As long as optimization is available at a reasonable price, whether it has a services component or not, the savings it delivers are worthwhile.

By no means is this series an exhaustive review of data analysis. For example, interesting technical advances such as Predictive Model Markup Language (PMML) are enabling predictive analytics to be bundled into everyday business processes. Scenario analysis is also a powerful tool for painting a picture of potential futures based on changes in behavior. But the vendors of these technologies either must make them accessible to end users, or offer affordable services around them. Otherwise they will remain exotic and inaccessible.

The bottom line is that analysis tools must be accessible to end users. It must be easy and fast to build datasets and gain insight from them. Optimization software should automatically perform sensitivity analysis for you, as the doctor has advocated. Ad hoc analysis should be the rule, not the exception. Analysis should not require vendor or IT support; if it does, it likely won’t happen.
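As a rough illustration of what “automatic sensitivity analysis” could mean in practice, here is a sketch that re-solves a toy sourcing award with each supplier's capacity cut by 10% and reports how much the total cost moves. The figures are hypothetical and the approach is a naive re-solve, not any particular vendor's method.

    # Naive sensitivity check: perturb each supplier's capacity and re-solve
    # the award to see how exposed the total cost is to that supplier.
    # All numbers are hypothetical.
    from scipy.optimize import linprog

    def total_cost(capacity, unit_cost=(12.0, 11.5, 13.2), demand=8000):
        result = linprog(
            c=list(unit_cost),
            A_eq=[[1] * len(unit_cost)], b_eq=[demand],
            bounds=list(zip([0] * len(unit_cost), capacity)),
            method="highs",
        )
        return result.fun

    base = [4000, 2500, 6000]
    baseline = total_cost(base)
    for i, name in enumerate("ABC"):
        shocked = list(base)
        shocked[i] *= 0.9    # 10% capacity loss at supplier i
        print(f"Supplier {name} -10% capacity: cost +{total_cost(shocked) - baseline:.0f}")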

The more you look, the more savings you will find; and when you walk into the CFO’s office waving a check, you will get attention as well as the resources to find even more.

Previous: Analytics V: Spend “Analysis”


Analytics V: Spend “Analysis”

Today’s post is by Eric Strovink of BIQ.

As an engineer who originally entered the supply management space in 2001 to build a new spend analysis system, I’ve watched marketing departments over the last nine years consistently “dumb down” the original broad and exciting definition of spend analysis that I remember from those days into something really quite ordinary. For example, here are the steps required for classic data warehousing:

  1. Define a database schema and a set of standard reports (once, or rarely)
  2. Gather and transform data such that it matches the schema
  3. Load the transformed data into the database
  4. Publish to the user base
  5. Repeat steps 2-4 for the life of the warehouse

And here are the steps required for what has come to be termed “spend analysis”:

  1. Define a database schema and a set of standard reports (once, or rarely)
  2. Gather and transform data such that it matches the schema
  3. Load the transformed data into the database
  4. Group and map the data via a rules engine
  5. Publish to the user base
  6. Repeat steps 2-5 for the life of the warehouse

Not much difference.
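That extra step 4 is where the real technical content lives. A minimal sketch of what a rules-engine mapping pass can look like follows; the rules, field names, and vendor strings are hypothetical, and commercial engines are far more elaborate, but the shape is the same: ordered rules, first match wins, and whatever falls through drives the next round of rule writing.

    # Group and map transactions to categories with an ordered rule list.
    # Rules, field names, and vendor strings below are hypothetical.
    import re
    import pandas as pd

    RULES = [  # (field, pattern, category) -- evaluated top to bottom
        ("supplier",    r"FEDEX|UPS|DHL",          "Logistics > Parcel"),
        ("gl_account",  r"^60(1|2)\d",             "Facilities > Utilities"),
        ("description", r"LAPTOP|MONITOR|DOCKING", "IT > Hardware"),
    ]

    def map_category(row):
        for field, pattern, category in RULES:
            if re.search(pattern, str(row[field]), re.IGNORECASE):
                return category
        return "Unmapped"   # the leftovers tell you which rules to write next

    spend = pd.read_csv("ap_transactions.csv")
    spend["category"] = spend.apply(map_category, axis=1)
    print(spend["category"].value_counts())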

You might ask, how can spend analysis vendors compete with each other, when the steps are so simple, and when commodity technologies such as commercial OLAP databases, commercial OLAP viewers, and commercial OLAP reporting engines can be brought to bear on any data warehouse? Well, it’s been tough, and it’s especially tough now that ERP vendors are joining the fun, but they compete in several ways:

  • Our step 4 is better [than those other guys’ step 4].
  • [briefly, until it failed the laugh test] Our static reports are so insightful that you don’t even need anyone on staff any more.
  • [suite vendors’ (tired) mantra] “Integration” with other modules
  • “Enrichment” of the spend dataset with MWBE data, supplier scoring on various criteria, and whatever other checklist features might broaden analysts’ interest in the spend analysis dataset beyond simple visibility.

It’s all very discouraging, but the doctor and I will continue to point out that spend analysis is not just A/P analysis; it can’t be done with just one dataset; and it’s not a set of static reports or a dopey dashboard, even though some vendors and IT departments would like to think it is. Spend analysis is a data analysis problem just like any other data analysis problem, and it requires extensible and user-friendly tools that empower people to explore their data for opportunities without third-party assistance. Those data come from multiple sources, not just the A/P system; many datasets will need to be built and analyzed; and from them, hugely important lessons will be learned.

The above notwithstanding, building a single A/P spend cube is a useful exercise. If you’ve never done it before, you will find things that will save you money. But that’s just the tip of the iceberg.
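As one hypothetical illustration of what lies below the waterline, consider joining the A/P cube against a second dataset, contracted prices, to surface off-contract buying and price leakage. The file and column names are assumptions, not any particular vendor's schema.

    # Join invoice lines against contract prices to find maverick buying and
    # overpayment against contract. File and column names are hypothetical.
    import pandas as pd

    invoices = pd.read_csv("ap_transactions.csv")    # supplier, item, unit_price, qty
    contracts = pd.read_csv("contract_prices.csv")   # supplier, item, contract_price

    merged = invoices.merge(contracts, on=["supplier", "item"], how="left")
    merged["leakage"] = (merged["unit_price"] - merged["contract_price"]) * merged["qty"]

    off_contract = merged[merged["contract_price"].isna()]   # bought with no contract
    overpaid = merged[merged["leakage"] > 0]                 # paid above contract price
    print("Off-contract spend:", (off_contract["unit_price"] * off_contract["qty"]).sum())
    print("Price leakage vs. contract:", overpaid["leakage"].sum())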

Previous: Analytics IV: OLAP: The Imperfect Answer

Next: Analytics VI: Conclusion


The Time for One Vision is … Not Any Time Soon

Today’s guest post is from Eric Strovink of BIQ.

the doctor has asked “Has the time for one vision arrived?” The point of his post is contained in the last paragraph1, and boils down to whether “Best of Breed” solutions in the supply chain space are “good” or “bad” from an “integrated data” perspective.

It is certainly the case that there are a lot of software solutions that boil down to nothing more than a custom database fronted by a UI and a report writer. Such systems are rather easy to build; in theory, they can be built (as the doctor has pointed out in jest previously) by a VBA programmer using a Microsoft Access database. Jesting aside, home-brew solutions can exceed both the functionality and usability of so-called “enterprise” solutions. For example, Access-based 1990s-era spend analysis leave-behinds from consulting organizations such as the old Mitchell Madison Group are still running in some large companies today, and, I daresay, are still superior to many current solutions.

Since there’s a pretty low technical bar to producing YAS (Yet Another Solution), there are grounds for hand-wringing when trying to keep track of them all, and of all the disparate data they are managing. But it really doesn’t matter whether a software product is built by in-house resources using Access and VBA, or by an international team of professional programmers using J2EE/Flex/Silverlight/Ajax/etc. and delivered via the browser, because the answer to the doctor’s question is simple:

Until there is a major, earthshaking change in the technology of database systems, the notion of an “integrated” enterprise-wide data store is pure fantasy.

Why? Because, as the post points out, “each data source [is using] a different coding and indexing scheme, [and] there is no common framework that connects the applications.” And that’s all she wrote, folks. You can’t store eggnog in a fruit basket. It’s just not going to work.

Now, kudos to Coupa and others for “opening their API” (meaningful for programmers, not so much for ordinary humans) and so forth, but there is at present no way to integrate disparate, unrelated data into some centralized data store, without losing all the detail in the process. I don’t care if it’s all “spend” data, either. Slapping a label on something doesn’t make it homogeneous. I’m a bit of an expert on spend data, and I can assure you that spend data comes in all shapes and sizes and is certainly not homogeneous, whether you run an e-procurement system or you do not.2

And, of course, both old and new database vendors have been claiming for years to be able to integrate disparate data sources across the enterprise. Sure, if you want to join a few records across disparate databases that share common keys, there’s demo-ware that they can show you. It works great. But try a multi-way join over millions of records spanning disparate databases, and I’ll join you for a beer in the year 2025 when the query finishes.
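A toy example of the coding-scheme problem: the same two suppliers keyed differently in two systems, so a naive exact-match join silently drops everything. All of the data below is made up.

    # Two systems, two keying schemes, zero rows in common on an exact join.
    # All data is fabricated for illustration.
    import pandas as pd

    erp = pd.DataFrame({"vendor_id": ["V-00123", "V-00456"],
                        "vendor":    ["ACME CORP", "GLOBEX INC"],
                        "spend":     [125000, 87000]})
    eproc = pd.DataFrame({"supplier_code": ["ACME-US", "GLX-01"],
                          "supplier_name": ["Acme Corporation", "Globex, Inc."],
                          "po_spend":      [98000, 54000]})

    joined = erp.merge(eproc, left_on="vendor", right_on="supplier_name", how="inner")
    print(len(joined), "of", len(erp), "ERP vendors matched")   # prints: 0 of 2

Fuzzy matching and crosswalk tables can paper over a case like this, but they are exactly the detail-losing compromises the “integrated” store promises to make unnecessary.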

So, I’ll steal the thunder from the “future post” mentioned by the doctor and jump right to the conclusion:

  1. Don’t worry about “integrating” data, because it’s not going to work the way you hope it will. At best, you will end up with inadequate compromises and uselessly generic data, like a design-by-committee spend cube that is shelf-ware after six months.
  2. Do worry about being able to move data easily in and out of the systems that you have. Don’t allow vendors to “lock up” your data; you should be able to change platforms easily, whenever you want to.
  3. Do worry about flexibility and adaptability in your analysis system. You should be able to operate it yourself, for example. If your data is locked up behind some SQL database that only IT drones can access, it isn’t doing you any good at all.
  4. Do worry about being able to move data from [anywhere] to your analysis system, quickly and easily.3

Let’s see what the doctor thinks, when he gets around to it.

1Apparently the doctor has never taken Journalism 101. But we can forgive him, since he doesn’t pretend to be a journalist.

2There are new ideas like “semantic database systems”; but a quick glance at recent history will show how well that works out in practice (Jason Busch over at Spend Matters, for example, made the mistake of drinking the semantic search Kool Aid with the now-defunct Spend Matters Navigator).

3Also, it’s important to dispel the notion that “real time” access to data is required for procurement decisions. No procurement decision needs to be made in real time. Is this a Hollywood science fiction movie where we need to dodge laser blasts from TIE fighters zooming in from all angles? No, it’s the real world, and decisions can and should be made thoughtfully and carefully. When the doctor says “real time,” I would hope that he means that there should be access to the data and answers to questions without waiting a week or a month for some analyst to write software.
