Category Archives: Spend Analysis

20 Analytics Predictions from the “Experts” for 2020 Part I

Guess how many will be 100% accurate?

(We’ll give you a hint. You only need one hand. You won’t need your thumb. And you’ll probably have fingers to spare.)

the doctor has been scouring the internet for the usual prediction articles to see what 2020 won’t have in store. Because if there is just one thing overly optimistic futurist authors are good at, it’s pointing out what won’t be happening anytime soon, even though it should be.

This is not to say they’re all a bust — some will materialize eventually and others indicate where a turning point may be needed — but they’re definitely not this year’s reality (and maybe not even this decade’s).

So, to pump some reality into the picture, the doctor is going to discuss the 19 anti-predictions that are taking over mainstream Net media … and then discuss the 1 prediction he found that is entirely 100% accurate.

In no particular order, we’ll take the predictions one by one.

Performance benchmarks will be replaced by efficiency benchmarks

This absolutely needs to happen. Performance benchmarks only tell you how well you’ve done, not how well you’re going to do in the future. The only indication of that is how well you are doing now, and this is best measured by efficiency. But since pretty much all analytics vendors are just getting good at performance benchmarks and dashboards, you can bet efficiency is still a long way off.

IoT becomes queryable and analyzable

… but not in real-time. Right now, the best that will happen is that the signals will get pushed into a database on a near-real-time schedule (which will be at least daily), indexed on a near-real-time basis (again, at least daily), and support meaningful queries that can provide real, usable, actionable information that helps users make decisions faster than ever before (but not yet in real time).

Rise of data micro-services

Data micro-services will continue to proliferate, but does this mean that they will truly rise, especially in a business — or Procurement — context? The best that will happen is that more analytics vendors will integrate more useful data streams for their clients to make use of — market data, risk data, supplier data, product data, etc. — but real-time micro-service subscriptions are likely still a few years off.

More in-memory processing

In-memory processing will continue to increase at the same rate it’s been increasing at for the last decade. No more, no less. We’re not at the point where more vendors will spend big on memory and move to all in-memory processing, or abandon it.

More natural-language processing

Natural language processing will continue to increase at the same rate it’s been increasing for the last decade. No more, no less. We’re not at the point where more vendors will dive in any faster, or abandon it. It’s the same-old, same-old.

Graph analytics

Graph analytics will continue to worm its way into analytics platforms, but this won’t be the year it breaks out and takes over. Most vendors are still using traditional relational databases … even object databases are still a stretch, never mind graph databases.

Augmented analytics

The definition of augmented is a system that can learn from human feedback and provide better insights and/or recommendations over time. While we do have good machine learning technology that can learn from human interaction and optimize (work)flows, when it comes to analytics, good insights come from identifying the right data to present to the user and, in particular, data that extends beyond organizational data, such as current market rates, supplier risk data, product performance data, etc.

Until we have analytics platforms that are tightly integrated with the right market and external data, and machine learning that learns not just from user workflows on internal data, but external data and human decisions based on that external data, we’re not going to have much in the way of useful augmented analytics in spend analysis platforms. The few exceptions in the next few years will be those analytics vendors that live inside consultancies that do category management, GPO sourcing, and similar services that collect meaningful market data on categories and savings percentages to help customers do relevant opportunity analysis.

Predictive analytics

As with natural language processing, predictive analytics will continue to be the same-old same-old predictions based on traditional trend analysis. There won’t be much ground-breaking here, as only the vendors that are working on neural networks, deep learning, and other AI technologies will make any advancements — but the majority of these vendors are not (spend) analytics vendors.

Data automation

RPA is picking up, but like in-memory processing and semantic technology, it’s not going to all-of-a-sudden become mainstream, especially in analytics. That’s especially true because what’s useful is not just automating data input and out-of-the-box reports, but automating processes that provide insight. And, as per our discussion of augmented analytics, insight requires external data integrated with internal data in meaningful trends.

No-code analytics

Cue the Woody Woodpecker laugh track, please! Because true analytics is anything but no-code (or even low-code). It’s lots and lots and lots of code. Hundreds, and thousands, and hundreds of thousands of lines of code. Maybe the UI makes it easy to build reports and extract insights with point-and-click and drag-and-drop, and allows an average user to do it without scripting, but the analytics provider will be writing even more code than you know to make that happen.

Come back tomorrow as we tackle the next ten.

A Single Version of Truth!

Today’s guest post is from the spend master himself, Eric Strovink of Spendata.

An oft-repeated benefit of data warehouses in general, and spend analysis systems specifically, is the promise of “a single version of truth.” The argument goes like this: in order to take action on any savings initiative, company stakeholders must first agree on the structure and organization of the data. Then and only then can real progress be made.

The problem, of course, is that truth is slippery when it comes to spend data. What, for example, is “tail spend”? Even pundits can’t agree. Should IT labor be mapped to Professional Services, HR, or Technology? For that matter, what should a Commodity structure look like in the first place? Can anyone agree on a Cost Center hierarchy, when there are different versions of the org chart due to acquisitions, dotted-line responsibilities, and other (necessary) inconsistencies?

What tends to happen is that the “single version of truth” ends up being driven by a set of committee decisions, resulting in generic spending data that is much less useful than it could be. Spend analysts uncover opportunities by creating new data relationships to drive insights, not by running displays or reports against static data. So, when the time comes to propose savings initiatives, the very system that’s supposed to support decision-making is less useful than it should be; or worst-case, not useful at all.

Questions and Answers: Metadata

Do we have preferred vendors? Do buyers and stakeholders agree on which vendors are preferred? Which vendors are “untouchable” because of long-term contracts or other entanglements? For that matter, with which vendors do we actually have contracts, and what do we mean by “contract”? Are there policies (or policy gaps) that work against a particular savings initiative, such as a lack of centralized control over laser printer procurement, or the absence of a policy on buying service contracts? Can we identify and annotate opportunities and non-opportunities, by vendor or by Commodity?

The answers to these (and many other) questions produce “metadata” that needs to be combined with spend data in order to inform the next steps in a savings program. The nature of this metadata is that it’s almost certainly inaccurate when first entered. We’ll need to modify it, pretty much continually, as we learn more; for example, finding out that although John may have dealt with Vendor X and has correctly indicated that he’s dealt with them, it’s actually Carol who owns the relationship. We may also determine that the Commodity mapping isn’t helpful; network wiring, for example, might need to belong with IT, not Facilities.
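
To make this concrete, here’s a minimal sketch (in Python, with entirely hypothetical fields) of what one vendor’s metadata record might look like, and how it gets revised as the team learns more:

```python
# An illustrative vendor metadata record, kept alongside the spend data.
# Every field is provisional; the structure must make revision cheap.
vendor_meta = {
    "vendor": "Vendor X",
    "preferred": True,
    "untouchable": False,          # e.g., locked in by a long-term contract
    "relationship_owner": "John",  # initial entry, per John's annotation
    "notes": [],
}

# Later we learn the relationship actually belongs to Carol, so we revise:
vendor_meta["relationship_owner"] = "Carol"
vendor_meta["notes"].append("ownership corrected from John to Carol")
```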

Alternative Truths

As we add more and more metadata to the system — information that is critical to driving a savings program — we encounter the need to refine and reorganize data to reflect new insights and new information. Data organization is often quite purpose-specific, so multiple different versions of the data must be able to be spawned quickly and coexist without issues. This requires an agile system with completely different characteristics than a centralized system with an inflexible structure and a large audience. In essence, one must learn to become comfortable with alternative truths, because they are essential to the analysis process.

So what happens to the centralized spend analysis system, proudly trotted out to multiple users, with its “single version of truth?” Well, it chugs along in the background, making its committee members happy. Meanwhile, the real work of spend analysis must be (and is) done elsewhere.

Thanks, Eric!

The Key Reason Spend Analyses Fail (that Often Goes Overlooked)


Today we welcome another guest post from Brian Seipel, a Procurement Consultant at Source One Management Services focused on helping corporations understand their spend profile and develop actionable strategies for cost reduction and supplier relationship management. Brian has a lot of real-world project experience in sourcing, and brings some unique insight on the topic.

Organizations that develop an understanding of their spend have an edge when it comes to strategic sourcing: they better understand where money is being spent, with whom, and on what than others who enter the process either blindly or as a knee-jerk reaction to an incumbent price hike. This is particularly important for tail spend and for those indirect spend categories that too often fly under the radar.

That edge isn’t a given, however. Building a spend analysis can serve as the foundation for strong opportunity assessments, but doing so won’t automatically lead to better sourcing projects. Organizations that spend time on spend analyses can and do still fail at strategic sourcing for a very big reason: we put too much faith in the front-end process of building the analysis, and forsake the back-end, leaving a critical gap in our understanding of our spend profile.

The Front-End Spend Analysis

The first steps of a spend analysis are akin to cleaning out your basement. What’s the first thing you do? Before sorting into keep-or-toss piles can begin, even before moving and opening boxes, we need to turn on the light and survey the room. “Turning on the light” is really what the front-end of a spend analysis is: our goal is to shine a light on the spend we have so sourcing project identification can begin. How does a spend analysis accomplish this?

  • Cleansing & Consolidation. Take all of the disparate data sources that make up our profile and create a single view of them, cleaning up supplier names and other critical fields along the way. For example, referring to the supplier “Dun and Bradstreet” by that single name, even when spend from a second data set refers to “D&B.”
  • Classification. With all spend in one consolidated set, we can now attach meaningful classifications. The best way to do this is worthy of a discussion of its own, so let’s simply say care should be taken here: choose a system that speaks to your organization’s process, products, and objectives. (A minimal sketch of both steps follows this list.)
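
Here is that sketch, in Python; the alias table and keyword rules are purely hypothetical, and real cleansing and classification logic is far more involved:

```python
import re

# Hypothetical alias table: variant supplier names -> canonical name.
SUPPLIER_ALIASES = {
    "d&b": "Dun and Bradstreet",
    "dun & bradstreet": "Dun and Bradstreet",
}

# Hypothetical keyword rules: pattern in an item description -> category.
CATEGORY_RULES = [
    (re.compile(r"managed it|helpdesk", re.I), "IT Services"),
    (re.compile(r"credit report|risk data", re.I), "Information Services"),
]

def cleanse_supplier(name: str) -> str:
    """Normalize a supplier name to its canonical form."""
    return SUPPLIER_ALIASES.get(name.strip().lower(), name.strip())

def classify(description: str) -> str:
    """Attach a category from the first matching keyword rule."""
    for pattern, category in CATEGORY_RULES:
        if pattern.search(description):
            return category
    return "Unclassified"  # flagged for manual review

# Records from two source systems collapse to a single supplier view:
for supplier, desc in [("D&B", "credit report"), ("Dun & Bradstreet", "risk data")]:
    print(cleanse_supplier(supplier), "|", classify(desc))
```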

Let’s cook up an example. Say we want to look into our IT spend to see where we can cut costs. We conduct a spend analysis covering the points above and learn the following: we have four locations using four different managed IT service providers, offering similar services at four different price points.

This is the type of intel that suggests a strategic sourcing initiative may be called for. Pitting these suppliers against each other in a market event will drive down costs, and potentially streamline operations if we can establish a single supplier for all four locations. We can estimate these savings by building a baseline spend profile and comparing it to the average savings we have historically achieved by following this strategy within this category. Simple enough. So why do sourcing initiatives often fail to deliver?
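
The estimate itself is back-of-the-envelope arithmetic. A sketch with hypothetical numbers, assuming a 12% average historical savings rate for the category:

```python
# Hypothetical baseline spend per location for managed IT services.
baseline_spend = {"Loc1": 240_000, "Loc2": 180_000, "Loc3": 310_000, "Loc4": 270_000}
avg_savings_rate = 0.12  # assumed average savings for this category and strategy

total_baseline = sum(baseline_spend.values())          # 1,000,000
estimated_savings = total_baseline * avg_savings_rate  # 120,000

print(f"Baseline: ${total_baseline:,}  Estimated savings: ${estimated_savings:,.0f}")
```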

Moving Into Opportunity Assessment

Because we just committed a big mistake: we took our initial view of the spend and jumped right to goal setting without taking the time to properly scope. We went from turning on the basement light to selling the boxes, en masse and unopened, directly on eBay without knowing what was inside.

As we go to market, our sourcing event fails each of our four locations for different reasons:

  • The first location is locked into a multi-year contract with a painful termination clause. Without scoping, we didn’t know what our contractual obligations looked like.
  • The second location isn’t locked into a contract, but is locked in by a lack of competition in the market. Without scoping, we never looked beyond our own buying history into the market landscape.
  • The third location is free of both of these problems, but this isn’t their first rodeo. They had previously used the providers that locations one and two use, but abandoned them due to severe performance issues. Without scoping, we couldn’t get a good enough view into the decision making process that led to the incumbent relationships.
  • Finally, our fourth location. No issues with suppliers, contracts, or market competition. The problem here? When we dig into the spend, we realize the bulk was capex: the purchase of equipment for a new server room buildout. Now that the equipment is purchased, we won’t see this spend come back around for years. Without scoping, we assumed the spend was annually recurring, and now we have next to nothing.

Better Spend Analysis through Better Scoping

Once our spend analysis is complete, we’ll need to bring additional stakeholders into the fold. Bring in the employees who actually interact with these suppliers and their products and work with them to develop a sourcing history:

  • Did we accurately describe how you use this supplier with our chosen classification system?
  • What are we specifically buying from this supplier, and are these purchases made regularly or only once every few years?
  • How was this supplier selected, and who chose them? Were any competitors engaged at the same time? How did this incumbent beat them out?
  • What does this supplier do well? Where are their biggest points of failure?
  • Has this category been sourced recently? How was the event conducted, and what was the result?

Beyond this interview, ask these stakeholders to provide copies of any active MSAs, SOWs, SLAs, or any other document that can help define the relationship. Of particular note will be termination clauses. What date does the agreement end, and what are the renewal terms? What steps do we follow to terminate on that date, and by when do they need to be taken? If terminating before that date, are there any penalties?

From Insight to Action

Building a detailed spend analysis takes time, and the commitment of resources that could be doing other things. As such, you need to ensure you get a good ROI out of the exercise.

The best way to do that is to see beyond the front-end of what a spend analysis is (the unification, cleansing, and classification of spend data) and consider what a spend analysis helps Procurement do (identify strategic sourcing initiatives and estimate potential impact). Scoping is a critical part of this process, and properly scoping opportunities that a spend analysis shines a light on is a great way to get that ROI.

Thanks, Brian!

Why Are CFOs and CPOs Still Delusional When it Comes to Analytics?

the doctor was recently asked whether an organization needs a dedicated Sourcing Spend Analytics solution when it already has a generic BI tool that sits on top of its ERP or data warehouse. Well, while the answer is No in theory, it’s rarely No in practice. This is because even if the generic platform you have can support (sourcing) spend analysis, chances are it hasn’t been set up for that. And it will need to be (heavily) customized.

So you either need to engage a consultancy and do a lot of customization, or buy a dedicated solution that is ready out of the box — and, preferably, if possible, buy one that is built on top of your BI platform if you bought one (like Tableau or Qlik) that is best in class.

As we noted in our piece last year that asked why we still have first generation ERP/Data Warehouse BI, most arguments for generic BI have more holes than Swiss cheese. As the Spend Master noted himself ten years ago in his classic, but still under-read, piece on screwing up the screw-ups in BI:

  • central databases, like the kind favoured by most BI tools, don’t solve the analysis problem
  • business analysts should be able to construct BI datasets on their own
  • a governance and stewardship program, which is likely the reason for the generic BI platform acquisition, doesn’t actually put any meat on the table
  • cleansing is often the problem, not basic analysis & reporting
  • BI systems are difficult to use and set up, it is difficult to create ad hoc reports, and it is impossible to change the dataset organization … in other words, they fail at all the stuff that makes spend analysis, you know, valuable

Plus,

  • BI reports are pretty generic, and not fine-tuned to Sourcing, Procurement, or Finance
  • BI engines work on one schema — the ERP schema … which is rarely suited to spend analysis
  • BI engines expect all of the data to come from the ERP. Spend analysis systems don’t.
  • The ability of first (and even second) generation BI engines to create arbitrary reports is considerably overstated.

Hopefully someday soon CPOs and CFOs alike will get the point that if you want to do proper Sourcing and Procurement Spend Analysis, you need a proper Sourcing and Procurement Spend Analysis Solution.

Don’t Throw Away That Old Spend Cube, Spendata Will Recover It For You!

And if you act fast, to prove they can do it, they’ll recover it for free. All you have to do is provide them with 12 months of data from your old cube. More on this at the end of the post, but first …

As per our article yesterday, many organizations, often through no fault of their own, end up with a spend cube (filled with their IP) that they spent a lot of money to acquire, but which they can’t maintain — either because it was built by experts using a third party system, built by experts who did manual re-mappings with no explanations (or repeatable rules), built by a vendor that used AI “pattern matching”, or built by a vendor that ceased supporting the cube (and simply provided it to the company without any of the rules that were used to accomplish the categorization).

Such a cube is unusable, and unless maintainable rules can be recovered, it’s money down the drain. But, as per yesterday’s post, it doesn’t have to be.

  1. It’s possible to build the vast majority of spend cubes on the largest data sets in a matter of days using the classic secret sauce described in our last post.
  2. All mappings leave evidence, and that evidence can be used to reconstruct a new and maintainable rules set.

Spendata has figured out that it’s possible to reverse-engineer old spend cubes by deriving new rules, by inference, from the existing mappings. This is possible because the majority of such (lost) cubes are indirect spending cubes (where most organizations find the most bang for their buck). These can often be mapped to 95% or better accuracy using just Vendor and General Ledger code, with outliers mapped (if necessary) by Item Description.

And it doesn’t matter how your original cube was mapped — keyword matching algorithms, the deep neural net du jour, or Elves from Rivendell — because supplier, GL-code, and combined supplier-plus-GL-code patterns can be deduced from the original mappings, and then poked at with intelligent (AI) algorithms to find and address the exceptions.
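
To illustrate the general idea (this is a sketch of rule inference by dominant category, not Spendata’s actual algorithm), assume the old cube can be exported as (supplier, GL code, category) rows:

```python
from collections import Counter, defaultdict

def infer_rules(rows, min_share=0.95):
    """Derive (supplier, gl_code) -> category rules from an old cube's
    mappings: keep a rule when one category dominates that key's rows."""
    by_key = defaultdict(Counter)
    for supplier, gl_code, category in rows:
        by_key[(supplier, gl_code)][category] += 1

    rules, exceptions = {}, []
    for key, counts in by_key.items():
        category, hits = counts.most_common(1)[0]
        if hits / sum(counts.values()) >= min_share:
            rules[key] = category   # dominant, maintainable rule
        else:
            exceptions.append(key)  # outliers: map by item description instead
    return rules, exceptions

# Usage on a toy cube extract:
old_cube = [
    ("Acme Wiring", "6100", "Facilities"),
    ("Acme Wiring", "6100", "Facilities"),
    ("Acme Wiring", "7200", "IT"),
]
rules, exceptions = infer_rules(old_cube)
print(rules)  # {('Acme Wiring', '6100'): 'Facilities', ('Acme Wiring', '7200'): 'IT'}
```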

In fact, Spendata is so confident of its reverse-engineering that — for at least the first 10 volunteers who contact them (at the number here) — they’ll take your old spend cube and use Spendata (at no charge) to reverse-engineer its rules, returning a cube to you so you can see the results (as well as the reverse-engineering algorithms that were applied) and the sequenced plain-English rules that can be used (and modified) to maintain it going forward.

Note that there’s a big advantage to rules-based mapping that is not found in black-box AI solutions — you can easily see any new items that are unmapped at refresh time, and define rules to handle them (a sketch of this refresh check follows the list below). This has two advantages.

  1. You can see if you are spending where you are supposed to be spending against your contracts and policies.
  2. You can see how fast new suppliers, products, and human errors are entering your system. [And you can speak with the offending personnel in the latter case to prevent these errors in the future.]
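
As promised above, here’s a sketch of the refresh-time check, reusing the illustrative (supplier, GL code) rule keys from the inference sketch:

```python
def refresh(transactions, rules):
    """Map new transactions; collect anything no rule covers."""
    mapped, unmapped = [], []
    for txn in transactions:
        key = (txn["supplier"], txn["gl_code"])
        if key in rules:
            mapped.append({**txn, "category": rules[key]})
        else:
            unmapped.append(txn)  # new supplier, new product, or keying error
    return mapped, unmapped

# One new supplier shows up this month and is flagged for a new rule:
new_month = [
    {"supplier": "Acme Wiring", "gl_code": "6100", "amount": 1200},
    {"supplier": "NewCo Fibre", "gl_code": "6100", "amount": 800},
]
mapped, unmapped = refresh(new_month, {("Acme Wiring", "6100"): "Facilities"})
print(len(mapped), "mapped;", len(unmapped), "to review")  # 1 mapped; 1 to review
```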

And mapping this new data is not a significant effort. If you think about it, how many new suppliers with meaningful spending does your company add in one month? Five? Ten? Twenty? It’s not many, and you should know who they are. The same goes for products. Chances are you’ll be able to keep up with the necessary rule additions and changes in an hour a month. That’s not much effort for a spend cube you can fully understand and manage, and that helps you identify what’s new or changed month over month.

If you’re interested in doing this, the doctor is interested in the results, so let SI know what happens and we’ll publish a follow-up article.

And if you take Spendata up on the offer:

  1. Take a view of the old cube with 13 consecutive months of data.
  2. Give Spendata the first 12 consecutive months, and get the new cube back.
  3. Then add the 13th month of data to the new cube to see what the reverse-engineered rules miss.

You will likely find that the new rules catch almost all of the month 13 spending, showing that the maintenance effort is minimal, and that you can update the spend cube yourself without dependence on a third party.
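
A minimal sketch of that month-13 test, using the same illustrative (supplier, GL code) rule representation as the earlier sketches:

```python
def rule_coverage(month13_txns, rules):
    """Fraction of month-13 transactions the reverse-engineered rules map."""
    hits = sum(1 for t in month13_txns if (t["supplier"], t["gl_code"]) in rules)
    return hits / len(month13_txns) if month13_txns else 1.0

rules = {("Acme Wiring", "6100"): "Facilities"}
month13 = [
    {"supplier": "Acme Wiring", "gl_code": "6100", "amount": 950},
    {"supplier": "NewCo Fibre", "gl_code": "6100", "amount": 400},
]
print(f"Month-13 coverage: {rule_coverage(month13, rules):.0%}")
```

The closer that coverage number is to 100%, the less month-over-month maintenance the cube will need.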