Trade Extensions is Redefining Sourcing, Part V

As we noted at the start of this series, Trade Extensions unveiled the upcoming version of their optimization-backed sourcing platform at their recent user conference in Stockholm, an event recently covered by the public defender over on Spend Matters UK, and we told you that, with it, Trade Extensions are redefining the sourcing platform. We then reviewed a brief history of sourcing, identified the gaps in the majority of platforms, and described how Trade Extensions, despite already having a leading third generation solution, are finding ways to make their platform, and the user experience, even better.

We described a number of their improvements in Part III, namely in platform usability, workflow, user management, and repeat event support, but held back on describing the improved analytics support as we first needed to review a brief history of spend analysis, which we started in Part IV. Today we continue that review so we can clearly describe the leap forward that TESS 6 is bringing to the market.

Yesterday we noted that, depending on who you asked, spend analysis began either as the set of canned OLAP-based spending reports that came with your sourcing, procurement, or analytics suite, or as the process of mapping accounts payable spend and “drilling for dollars”. But the definition didn’t matter, as both had a lasting value problem: it didn’t take long to identify the few value-generating sourcing opportunities the organization didn’t already know about, and after that the value was limited at best. But that wasn’t the only problem.

There was (and is) also an accuracy problem. Namely, spend reports are only as accurate as the data that populates them, and if this data is not properly mapped, the spend reports are all out of whack. This happened a lot when the mappings weren’t done by a true spend master highly familiar with both the organizational data and the tool. (In the beginning, there was no automated mapping technology.) If the mapping rules were poor, or full of conflicts (and applied in random order), the resulting mappings would be poor, or almost random. And this leads us to the big problem.

Accurate manual mapping, in just about every spend analysis system ever created (with the exception of BIQ, which was acquired, absorbed, and, for reasons unknown, pretty much retired by Opera Solutions), was difficult, if not impossible, on data sets with millions, if not hundreds of millions, of transactions. In most systems, you selected a transaction, or a set of them, created a mapping rule based on one or more fields, possibly using a regular expression on text data, and added it to the rule set. You continued until you believed the rules would map most of the data, ran the rules, and totalled the mapped spend. If the mapped spend was deemed enough for an initial analysis (90%+), the mapping exercise stopped; otherwise it continued.
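To make that workflow concrete, here is a minimal sketch of the rule-building loop, assuming a generic transaction record with supplier, description, and amount fields; the data, rules, and categories are invented for illustration and are not any particular vendor’s implementation:

```python
import re

# Hypothetical transaction records; fields and figures are made up.
transactions = [
    {"supplier": "Xerox",   "description": "xerox paper",          "amount": 1200.00},
    {"supplier": "Xerox",   "description": "xerox paper copier",   "amount": 8500.00},
    {"supplier": "Staples", "description": "office chair",         "amount": 300.00},
    {"supplier": "Hilltop", "description": "leather backed chair", "amount": 450.00},
]

# Each hand-written rule: (field to test, regex pattern, target category).
rules = [
    ("description", r"xerox paper",  "Office Supplies / Paper"),
    ("description", r"office chair", "Office Furniture"),
]

def map_transaction(txn, rules):
    """Return the category of the first matching rule, or None if unmapped."""
    for field, pattern, category in rules:
        if re.search(pattern, txn[field], re.IGNORECASE):
            return category
    return None

def mapped_share(transactions, rules):
    """Fraction of total spend covered by the current rule set."""
    total = sum(t["amount"] for t in transactions)
    mapped = sum(t["amount"] for t in transactions
                 if map_transaction(t, rules) is not None)
    return mapped / total if total else 0.0

# The analyst's stopping condition from the text: 90%+ of spend mapped.
share = mapped_share(transactions, rules)
print(f"{share:.0%} of spend mapped")
print("Good enough for an initial analysis." if share >= 0.90
      else "Keep writing rules...")
```

The analyst repeats the “add a rule, re-run, re-total” cycle until the coverage threshold is hit, which is exactly where the weeks of effort (and the errors) come from.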

Since meaningful sorting and grouping was difficult, if not impossible, due to the lack of meaningful mappings, it often took weeks to create an initial mapping file (a good mapping tool in the hands of a pro could map 95% of a Fortune 500 company’s spend in two days, but that only ever happened with BIQ in the hands of a true expert), and, to top it off, the file was often riddled with errors. Most (untrained) analysts would create mapping rules that were too general and would inadvertently map extra transactions to each category with each initial starting rule. (For example, “xerox paper” would map “xerox paper copier” to the paper category, where “xerox paper copier” clearly doesn’t belong.) And the error wouldn’t be detected until a “real-time” report was presented in an executive meeting (and spotted on drill-down). On top of that, other rules would miss transactions. For example, the analyst would map “office chair” to the office furniture category without realizing that some buyers labelled office chairs “leather backed chair”, and the tool would then map “leather backed chair” to retail furniture using the “leather backed” mapping rule, which the organization had in place because it buys “leather backed couches” to sell to the market.
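Both failure modes are easy to reproduce with a compact, self-contained example in the same spirit as the sketch above; the rules and categories here are hypothetical:

```python
import re

# Hypothetical rule set: the first rule is too general, the second pre-exists
# for resale couches, and first-match-wins order decides who gets the chair.
rules = [
    (r"xerox paper",    "Office Supplies / Paper"),
    (r"leather backed", "Retail Furniture"),
    (r"office chair",   "Office Furniture"),
]

descriptions = ["xerox paper", "xerox paper copier",
                "office chair", "leather backed chair"]

for desc in descriptions:
    category = next((cat for pattern, cat in rules
                     if re.search(pattern, desc, re.IGNORECASE)), "UNMAPPED")
    print(f"{desc!r:25} -> {category}")

# 'xerox paper copier' lands in the paper category (it is office equipment),
# and 'leather backed chair' lands in retail furniture (it is an office chair).
```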

And the purported solution of automated mapping only made matters worse.

First of all, most first (and even second) generation spend analysis engines with automated mapping capability relied on naive statistical approaches that used “dumb” clustering to group what the algorithm thought were related transactions. So, since “xerox paper copier” was similar to “xerox paper shredder”, if the thresholds were low enough, they’d both be mapped to the same subcategory of general office equipment, when they should be mapped to separate sub (sub) categories since they are quite different items (office electronics vs cheap mechanical shredders).
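For illustration, here is a minimal sketch of that kind of naive clustering, using simple token-overlap (Jaccard) similarity and a single low threshold; both are assumptions for the example, not a reconstruction of any specific engine:

```python
def jaccard(a, b):
    """Token-overlap (Jaccard) similarity between two descriptions."""
    ta, tb = set(a.lower().split()), set(b.lower().split())
    return len(ta & tb) / len(ta | tb)

descriptions = ["xerox paper copier", "xerox paper shredder", "office chair"]
THRESHOLD = 0.4  # low threshold: anything sharing a couple of tokens clusters

clusters = []
for desc in descriptions:
    for cluster in clusters:
        if any(jaccard(desc, member) >= THRESHOLD for member in cluster):
            cluster.append(desc)
            break
    else:
        clusters.append([desc])

print(clusters)
# [['xerox paper copier', 'xerox paper shredder'], ['office chair']]
# The copier and shredder share 2 of 4 tokens (similarity 0.5), so the naive
# algorithm drops both into the same "office equipment" bucket.
```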

Secondly, these automated mapping systems would allow users to create override rules to complement the rules that were automatically created, but they wouldn’t necessarily ensure the rules got applied in the right order, so each execution could see the same transaction mapped differently.
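A toy illustration of why ordering matters, assuming a first-match-wins engine that makes no guarantee about the order in which overlapping rules are applied (the shuffle simply simulates that non-determinism):

```python
import random
import re

# Two overlapping override rules; the engine gives no ordering guarantee.
rules = [
    (r"leather backed", "Retail Furniture"),
    (r"chair",          "Office Furniture"),
]

txn = "leather backed chair"

for run in range(3):
    random.shuffle(rules)  # simulate rules being applied in arbitrary order
    category = next(cat for pattern, cat in rules if re.search(pattern, txn))
    print(f"run {run}: {txn!r} -> {category}")

# Depending on which rule happens to fire first, the same transaction is
# sometimes Retail Furniture and sometimes Office Furniture.
```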

Third, these systems would pretty much require the organization to adapt its spend taxonomy to the classification ability of the tool, as the tool would rarely adapt to the taxonomy of the organization, and this is just not the proper way to do spend analytics.

And while a few of the newer automated spend mapping solutions are improving on this (deep machine learning algorithms, user-defined knowledge models, etc.), they still have their faults (but that’s a discussion for another series of posts).

In short, sourcing analytics has historically not worked well for advanced sourcing, and certainly hasn’t been the other, equally important, side of the advanced sourcing coin.

But TESS 6 is about to change all that!