Category Archives: Spend Analysis

PRGX – The Biggest Analytics Provider You Don’t Know!


For those that do not know it, PRGX would appear to be one of a select number of dominant services providers in the niche market for recovery audit services, a market that, unlike other procurement services markets, faces tremendous price pressure for its core recovery, statement, and related auditing and profit recovery services.

the doctor and the prophet, PRGX Intro on Spend Matters Pro (membership required)

In particular, PRGX would appear to be a recovery audit specialist for the global retail sector. And that is what they are, but that is not all they are.


PRGX has started to remake itself quietly from within — out of necessity, given these broader market trends — building and acquiring technology capabilities in the spend analytics and supplier management areas, both to expand its relevance and to start driving automation and scale in its core business.

PRGX has built the most complete, and in many ways the most advanced, analytics and recovery solution for the retail sector and, in doing so, has built one of the most complete and advanced analytics and recovery solutions for just about any sector that buys and relies on goods. Pharma, Manufacturing, and Aerospace and Defense, just to name a few, could all benefit intensely from the out-of-the-box PRGX solution.

This is because it has evolved its application from a simple recovery analytics application into a full-featured analytics solution with modules for:

  • Payment Analytics
  • Spend Analytics
  • Product Analytics
  • Recovery Avoidance Analytics
  • Supplier Information Management

The latter two came through its recent acquisition of Lavante.

It can analyze what you paid (payment analytics), what you should have paid (recovery analytics), what you are spending (spend analytics), how much that is costing you and profiting you on a product level (product analytics), and what suppliers are supplying that product and how they are performing (SIM with a hefty dose of SPM).

And it can do this analysis end to end around a product or category, and allow you to simultaneously see what you ordered, spent, overspent, took in on sales, lost on returns, and profited when all was said and done. It’s one of the most powerful analytics solutions you don’t know about. Stay tuned — there is more to come!

Do You Need a Spend Cube to Identify Top Underperforming Products?

No, you don’t. But that doesn’t mean you shouldn’t have one!

Let’s face it, if you have an n-step process that can theoretically be done in a spreadsheet, then it can be done in a spreadsheet — but how long will it take to do it in a spreadsheet vs. doing it in a modern spend analysis solution?

If sub-steps of the process consist of:

  1. researching and accumulating industry specific data
  2. comparing your data to industry averages
  3. repeating the comparisons with selected competitors, putting your place in their shoes
  4. looking for anomalies
  5. selecting top categories outside industry average
  6. selecting top underperforming products within those categories

How long is it going to take without a proper spend analysis product?

Industry data is going to come in many different forms, in many different tables, and will initially need to be stored in many different sheets. It will take a lot of manual effort and data formatting to get the data into a consistent format that allows it all to be compared apples to apples. In contrast, a good spend analysis platform with ETL that automatically maps the data to the right format will easily save hours or days of manual effort, as most good data tables will be detailed and large.
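The shape of that mapping step can be sketched in a few lines. In this hypothetical example (all source names, column names, and figures are invented for illustration), each source declares how its columns map onto a common schema, so records from differently formatted tables can be compared apples to apples:

```python
# Minimal sketch of ETL mapping to a common schema. All field and
# source names here are hypothetical examples, not a real platform's API.

COMMON_FIELDS = ("supplier", "category", "amount_usd")

# Per-source mapping: common field -> function extracting it from a raw row.
SOURCE_MAPPINGS = {
    "erp_export": {
        "supplier":   lambda r: r["Vendor Name"].strip().title(),
        "category":   lambda r: r["GL Category"],
        "amount_usd": lambda r: float(r["Net Amount"]),
    },
    "industry_table": {
        "supplier":   lambda r: r["company"],
        "category":   lambda r: r["segment"],
        "amount_usd": lambda r: float(r["avg_spend_k"]) * 1000,  # reported in $K
    },
}

def normalize(source, rows):
    """Map raw rows from a named source into the common schema."""
    mapping = SOURCE_MAPPINGS[source]
    return [{f: mapping[f](row) for f in COMMON_FIELDS} for row in rows]

erp = normalize("erp_export", [
    {"Vendor Name": " acme corp ", "GL Category": "Packaging", "Net Amount": "1250.50"},
])
bench = normalize("industry_table", [
    {"company": "Acme Corp", "segment": "Packaging", "avg_spend_k": "2"},
])
print(erp[0]["amount_usd"], bench[0]["amount_usd"])  # 1250.5 2000.0
```

A real platform automates the mapping definitions themselves; the point is that without that automation, every lambda above is a human reformatting cells by hand.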

Comparisons in spreadsheets require lots of formulas and calculations, which can be tedious and error-prone to implement. Modern spend analysis packages come with lots of standard reports and templates that will allow for comparisons with industry averages and available competitor data out of the box. Again you will be saving hours, if not days, with a good package.

Anomalies are really easy to see in appropriate scatter plots, but very, very hard to spot in rows of data. If you have thousands of rows, how do you detect outliers with a manual scan? Sure, you can pivot on volume or distance from average and so on, but is it really an outlier? Careful inspection and analysis are required for each potential row, but a visual glance at an appropriate scatter plot gives you results in seconds.
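What the eye does on a scatter plot can also be approximated statistically. A minimal sketch, using a robust median-absolute-deviation test on invented unit prices (the data and cutoff are illustrative only):

```python
# Flag rows whose deviation from the median is far beyond the typical
# deviation: a robust stand-in for "spot the stray dot on the plot".
from statistics import median

def outliers(values, cut=10.0):
    """Indices whose absolute deviation from the median exceeds
    `cut` times the median absolute deviation (MAD)."""
    med = median(values)
    mad = median(abs(v - med) for v in values)
    return [i for i, v in enumerate(values) if mad and abs(v - med) / mad > cut]

prices = [10.1, 9.8, 10.3, 10.0, 9.9, 10.2, 47.5, 10.1]  # one suspicious row
print(outliers(prices))  # [6] -> the 47.5 row
```

The median-based test is used here rather than a simple z-score because a large outlier inflates the mean and standard deviation, which can hide the very row you are looking for.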

So no, you don’t need a modern spend analysis product or a spend cube to identify good potential opportunities, if you don’t mind dedicating a back room full of people for days or weeks to do an analysis that can be done by a good spend analysis product in a few hours.

And that’s why modern spend analysis solutions can deliver an ROI of more than 10% year over year as the expected value of analysis is typically above water, vs. under it, as it is when you do it manually. (See this classic post on .)

So while you don’t need a spend analysis solution, it’s akin to saying you don’t need modern technology for sourcing either. There’s nothing you can’t do with pen, paper, and telephone, but do you really want to remain in the dark ages?

As far as SI is concerned, there are only three reasons someone trying to sell you a sourcing suite would tell you that you don’t need a modern spend analysis solution (based on proper spend cubes):

  • they don’t have it and they don’t want you to use another vendor in case that vendor also has, or implements, a comparable sourcing solution (and they fear competition);
  • they want the services revenue (there’s a lot of billable hours to doing it manually); or
  • they truly don’t understand what modern spend analysis can do

And none of these reasons are good reasons. In fact, they are all reasons to be wary of the provider! There are only two cornerstone technologies that set leading sourcing organizations apart, and analytics is one. (The other is optimization.)

Precision is Contextual, But How Can You Get Precise?

Late last year, over on the public defender's blog, Pierre Lapree penned a post that asked how many decimals of π Procurement really needs. In short, the answer was: it depends on the context. Some situations don't need very precise calculations, and others need precision down to the second decimal place. In his post, Pierre notes that in some situations, like savings estimates, rounding can be to the closest power of 10. In others, like RFPs, rounding to the dollar is more than enough, and sometimes the closest hundred or thousand will do. But in spend analysis, sometimes you need to match those financial statements down to the penny to get it right.

But how do you do that?

Your data is a mess, across multiple systems, in multiple formats, with varying levels of detail.

The financial reports are typically created from spreadsheets, which, even though they were output from the organization’s accounting systems, are typically riddled with errors.

And any hopes of matching, despite the fact that each system should be the checksum for the other, are as fantastical as J.K. Rowling’s beasts.

So how do you get precise?

You get out of the data and into the real world. When you don't know where you are in the real world, you geolocate. How do you do that? In today's world, you triangulate your position by taking measurements with respect to cell phone towers and/or satellites and using mathematics to estimate your position as closely as possible. The more readings, the more accuracy.

In other words, you take measurements. Lots of measurements. And correlate them. The financial statements are just one set of checks.

Another set of checks is inventory levels. You're paying for physical goods, so you should have payments and invoices for the majority of physical goods, and vice versa.

A third set of checks is the accounts receivable system — every part or good that was bought for (re)sale should not only have a corresponding inventory entry but an invoice, and vice versa.

In other words, every enterprise system that tracks goods and services is a data point for correlation, and should be used as such. Don’t just focus on the dollars and cents, as trying to balance erroneous totals can lead you down the wrong path — use all the data at your disposal to get it right — and precise.
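The three sets of checks above can be sketched as a toy reconciliation. Everything here (the PO identifiers, the system names, the idea of keying on PO number) is a hypothetical simplification; the point is correlating records across systems by item rather than trying to balance erroneous totals:

```python
# Toy cross-system correlation: payments, inventory receipts, and AR
# sales are three independent measurements of the same physical goods.
# All records are invented for illustration.

payments  = {"PO-101": 500.00, "PO-102": 750.00, "PO-103": 310.00}
inventory = {"PO-101", "PO-102"}   # goods actually received
ar_sales  = {"PO-101", "PO-103"}   # goods (re)sold downstream

def reconcile(payments, inventory, ar_sales):
    """Flag payments that lack a corresponding record in another system."""
    issues = []
    for po in payments:
        if po not in inventory:
            issues.append((po, "paid but never received"))
        if po not in ar_sales:
            issues.append((po, "received/paid but never sold"))
    return issues

for po, problem in reconcile(payments, inventory, ar_sales):
    print(po, "->", problem)
# PO-102 -> received/paid but never sold
# PO-103 -> paid but never received
```

Each additional system joined in shrinks the space of errors that can hide, exactly like adding another tower to the triangulation.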

There’s Still No Spend Analysis Without the Slice ‘N’ Dice


SI originally ran this post 10, yes ten, years ago today, and nothing has changed. Regardless of how fancy that drill-down dashboard is, how many pre-canned reports come with the system, or how many sub-views you can create, if it's still built off of 1, that's one, cube, the value you will get is still limited. Moreover, this post is especially relevant because it reminds us of how BIQ changed the spend analysis game and that the individuals who spearheaded the company are game changers. This is particularly relevant because Eric Strovink, the founder, who has now changed the spend analysis game twice (first at Zeborg, now part of [IBM] Emptoris, and then at BIQ), is going to be launching a new analytics company this year, and chances are it won't just be the same-old same-old rebranded Tableau or QlikView solution.

When I was in Boston, I was lucky enough to spend the better part of a day with Eric Strovink of BIQ, and to have a few extended conversations with individuals at some of the local consulting firms that specialize in sourcing, and I am now more than convinced that any tool that mandates a single cube, or makes it difficult to change the cube, is not a spend analysis tool, but merely a spend data warehouse with built-in canned reporting (and, if you're really lucky, limited ad-hoc capabilities).

Not that there’s anything wrong with a centralized spend warehouse with a consistent view of your total spend, especially one that integrates multiple internal and external data sources and allows you to drill down and understand your spend at a detailed level. Of all the e-Sourcing software tools, it is the one most likely to make your CFO do backflips, especially if it has good reporting (and this is a big if – not all spend analysis tools on the market do), since it makes it really easy for the CFO to tell the CEO where the money is going and comply with all those pesky reporting requirements.

However, the value of such a tool is quite limited to you as a purchasing agent. Now, it's true that the first time you use it you'll save big-time, especially if it's the first time you have visibility into the majority of your spend, but the reality is that this is the only time you'll see such significant savings. After you've identified all of the low-hanging fruit surfaced by the single view provided by the system, analyzed each instance of over-spending, and taken corrective actions, you'll find that you are unable to identify additional savings, and the system will simply function as a glorified data warehouse that you only use once a quarter to create those reports for your CFO and check that your teammates are buying off the negotiated contracts, something you could do almost as well with your existing ERP system, a significantly cheaper Business Intelligence / OLAP tool like (SAP) Business Objects or (IBM) COGNOS, and some grunt work.

Remember, I’m not saying that traditional spend analysis systems like those provided by e-Sourcing providers like (SAP Ariba) Procuri and (IBM) Emptoris are not without value – if you do not have a good, integrated, data warehouse that integrates your various accounting, purchasing, and inventory systems to provide you a single view of your spend or a good reporting system to produce all of the reports your CFO needs, then you’ll find these systems very valuable. However, it’s important that you understand that the primary value of these systems is in the total spend visibility they provide from a financial viewpoint, not the spend analysis capability you really require to identify potential overspending and cut-costs, because you’ll only be able to do this once – thanks to the single organizational view they are built on. (In other words, you’ll save big when you fist implement the system but future savings will be limited to your capability to quickly catch and stop maverick spend.) So, if you need a system to consolidate your spend data, produce the tedious reports required by all of the new financial reporting requirements, and give you some basic across-the-board spend visibility, or, more importantly, you need a spend data warehouse that integrates with the rest of your e-Sourcing suite, be sure to check these systems out – but understand what they are really worth to you before you sign the check.

To help you understand where these systems fall short of true spend analysis, and why you need to be able to dynamically create multiple cubes on the fly, with support for dynamic dimensions, meta-aggregation, cross-dimensional roll-ups, and even federated data sets, I'm happy to inform you that Eric Strovink has agreed to co-author a series of posts outlining what real spend analysis is, how it differs from basic spend visibility, what it does for you, and why you need to get there.
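To make the single-cube vs. multi-cube distinction concrete, here is a minimal sketch of "multiple cubes on the fly": the same flat transactions rolled up along whatever dimensions the analyst picks, rather than one fixed organizational view. The data and dimension names are invented, and a real engine adds hierarchies and derived dimensions on top of this:

```python
# The essence of a cube: group a flat fact table by a chosen tuple of
# dimensions and aggregate a measure. A "single cube" tool fixes the
# dimensions once; a real analysis tool lets you re-cut at will.
from collections import defaultdict

transactions = [
    {"supplier": "Acme", "category": "Packaging", "region": "NA", "amount": 100},
    {"supplier": "Acme", "category": "Packaging", "region": "EU", "amount": 200},
    {"supplier": "Bolt", "category": "Fasteners", "region": "NA", "amount": 50},
]

def build_cube(rows, dimensions, measure="amount"):
    """Roll the rows up along an arbitrary tuple of dimensions."""
    cube = defaultdict(float)
    for row in rows:
        cube[tuple(row[d] for d in dimensions)] += row[measure]
    return dict(cube)

print(build_cube(transactions, ("category",)))
# {('Packaging',): 300.0, ('Fasteners',): 50.0}
print(build_cube(transactions, ("supplier", "region")))
# {('Acme', 'NA'): 100.0, ('Acme', 'EU'): 200.0, ('Bolt', 'NA'): 50.0}
```

When re-cutting is this cheap, every new question gets its own cube; when the cube is fixed up front, every new question becomes an IT ticket.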


Eric Strovink was actually kind enough to contribute two insightful series to Sourcing Innovation. Here are the links for your reference.

I: The Value Curve
II: The Psychology of Analysis
III: Common Sense Cleansing
IV: Defining “Analysis”
V: New Horizons I
VI: New Horizons II

I: It’s the Analysis, Stupid
II: Why Data Analysis is Avoided
III: Crosstabs Aren’t Analysis
IV: User-Defined Measures, Part I
V: User-Defined Measures, Part II

Spend360 – Applying Deep Machine Learning to Spend Analysis

Regular readers will know that, generally speaking, the doctor has not been impressed with the auto-classification and mapping offerings of any spend analysis vendor he's ever blogged about, as all have failed pitifully on tail spend, performed poorly on any supplier or category the provider hasn't processed extensively, and worked poorly in new geographies, and even more poorly in foreign languages.

However, this year, he's been impressed by two vendors with auto-classification: TAMR, which is trying to tame the data deluge, and now Spend360. While a new name on this side of the pond, it is not a new name across the pond, having opened its doors for business in 2011 after two-plus years of intense development. Plus, it has been gaining a reputation pretty quickly since its foray to this side of the pond a couple of years ago and now has over 100 North American clients, which brings its total client base to over 400 global customers, impressive for any company in this space. (Even more impressive is the fact that, to date, it has processed over 1 Trillion of spend.)

While it’s still not perfect, and still can’t outmatch the best human expert with a multi-level priority mapping engine, it is decades ahead of its competition and has the ability to learn and evolve and, over time, approach 98%+ mapping accuracy, leaving little that has to be mapped, or corrected, by a human user (which is quite valuable when the user is not an expert in spend analysis but still wants to reap the benefits).

Not only can its deep machine learning identify tail spend suppliers, company-specific categories, and even individual items coded in obscure ways, but it can learn over time and adapt to different data models, especially since it can use evolving knowledge bases. Whereas the majority of first generation classifiers used naive statistical classification that could not learn and had to map to a fixed (UNSPSC) model, Spend360 uses deep machine learning (based on LSTM and encoder/decoder technology) that maps to custom data models using extensible knowledge bases (which can be created and maintained by the organization) that can encode organization- and industry-specific knowledge (and negate the need for custom mappings or override rules).

The fact that the knowledge base can be extended any time a misclassification occurs, negating the need for the manual mappings or override rules common in so many first generation spend analysis systems, is a very powerful concept. It means that every erroneous mapping need only happen once and will never need to be manually corrected again. Plus, the fact that the data model can be extended as analytic needs evolve means that the platform can continue to deliver value year over year over year, unlike most first generation platforms that only delivered top N reports and failed to deliver value after the first twelve to eighteen months.
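The "correct once, never again" idea can be illustrated with a stripped-down sketch: a knowledge base of learned mappings consulted before any fallback classifier, and extended whenever a human corrects a result. All names here are illustrative, and this is emphatically not Spend360's actual design, just the concept:

```python
# A knowledge base of confirmed mappings sits in front of the statistical
# classifier: once a human corrects a description, the correction sticks.

knowledge_base = {}   # normalized description -> category

def fallback_classify(description):
    # stand-in for the ML/statistical classifier
    return "UNCLASSIFIED"

def classify(description):
    key = description.lower().strip()
    return knowledge_base.get(key, fallback_classify(key))

def correct(description, category):
    """Record a human correction so the same error never recurs."""
    knowledge_base[description.lower().strip()] = category

assert classify("MRO Gloves sz L") == "UNCLASSIFIED"     # first pass: miss
correct("MRO Gloves sz L", "Safety Supplies")            # one-time correction
assert classify("mro gloves sz l") == "Safety Supplies"  # learned for good
```

In a first generation system the same fix lives in a brittle override rule that someone must maintain; here it is just another fact the classifier consults.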

But this isn’t all Spend360 has to offer. In addition to a powerful classification ability, which can be trained to actually work, it also has a very powerful front end that allows the user to drill through the cube using custom filters in real time, compared to first generation systems that had fixed OLAP with limited filter capability. Reports can be cross-linked and all linked reports auto-update as one is drilled into. And data can be uploaded and incorporated into the cube in real-time if additional data is required.

And, to top it off, based on the 1 Trillion in spend it has classified over the years, Spend360 also has deep spend benchmarks across all of the major verticals and categories, often mapped down to UNSPSC level 4. This allows an organization to quickly understand how its spend on a category compares to the average in its vertical. Simply augmenting this data with pricing trend data can give an organization quick insight into where some significant cost normalization opportunities may lie.
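The benchmark comparison just described reduces to a simple calculation. In this hypothetical sketch (all benchmark shares, spend figures, and the tolerance are made up), categories whose spend share exceeds the vertical average are flagged as normalization candidates:

```python
# Compare an organization's category spend share against a vertical
# benchmark and flag categories meaningfully above the average.
# All figures are invented for illustration.

vertical_benchmark = {"Packaging": 0.06, "Logistics": 0.11}  # share of revenue
org_spend = {"Packaging": 9_000_000, "Logistics": 10_500_000}
org_revenue = 100_000_000

def over_benchmark(spend, revenue, benchmark, tolerance=0.01):
    """Return {category: excess share} for categories above benchmark + tolerance."""
    flagged = {}
    for category, amount in spend.items():
        share = amount / revenue
        if share > benchmark.get(category, 0) + tolerance:
            flagged[category] = round(share - benchmark[category], 4)
    return flagged

print(over_benchmark(org_spend, org_revenue, vertical_benchmark))
# {'Packaging': 0.03}
```

In this example Packaging sits three points of revenue above the vertical average while Logistics is within tolerance, so Packaging is where the pricing trend data gets pulled in next.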

In short, Spend360 is a provider the doctor expects you to be seeing a lot more of in the years to come, and recommends that you check out the upcoming deep dive, co-written with the prophet, over on Spend Matters Pro [membership required] if you are able. This is one best-of-breed provider you want to know.