In Part I, we not only told you that Trade Extensions unveiled the upcoming version of its optimization-backed sourcing platform at its recent user conference in Stockholm, as covered by the public defender over on Spend Matters UK, but we also told you that, with it, Trade Extensions is redefining the sourcing platform.
Then, after discussing a brief history of sourcing platforms, and the common limitations of most platforms on the market, we dived into many of the advancements Trade Extensions, despite already having one of the few third-generation sourcing platforms on the market, is making to take sourcing to the next level: numerous usability enhancements, composable workflows, centralized fact sheets with even more powerful processing, easy repeat events, and better user and collaborator management. We also noted that TESS 6 is coming with a whole new approach to in-platform analytics, but held back because we first needed to provide a history of spend analytics (in Parts IV and V) and its shortcomings.
In our last post, we noted that the first thing Trade Extensions has done is create a whole new analytics rule language which makes the definition of cleansing rules, enrichment rules, and the application of existing mappings and mapping rules a breeze. A single natural language rule with a single cleansing, enrichment, or mapping sheet can process an entire file of millions, tens of millions, or even hundreds of millions of transactions (as long as you don’t expect to view more than a million at a time). A few rules can link purchase orders, invoices, goods receipts, and payments and make it easy to see where the missing records are. Even a junior analyst can do it with ease.
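The post doesn’t reproduce the TESS 6 rule syntax, so here is a rough, purely illustrative sketch in Python/pandas (with made-up column names) of the kind of record linking a handful of rules can express: join purchase orders, invoices, goods receipts, and payments on a shared order number and let the gaps surface on their own.

```python
# Illustrative only -- not the TESS 6 rule language. Hypothetical column
# names; the point is that a couple of join "rules" expose missing records.
import pandas as pd

pos      = pd.DataFrame({"po_number": ["P1", "P2", "P3"], "po_amount": [100, 250, 75]})
invoices = pd.DataFrame({"po_number": ["P1", "P2"],       "inv_amount": [100, 250]})
receipts = pd.DataFrame({"po_number": ["P1", "P3"],       "qty_received": [10, 5]})
payments = pd.DataFrame({"po_number": ["P1"],             "paid_amount": [100]})

# Left-join everything onto the purchase orders; NaNs mark the missing records.
linked = (pos.merge(invoices, on="po_number", how="left")
             .merge(receipts, on="po_number", how="left")
             .merge(payments, on="po_number", how="left"))

# e.g. P2 has no goods receipt or payment; P3 has no invoice or payment.
print(linked[linked.isna().any(axis=1)])
```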
But this isn’t the only significant advance that TE has made. Realizing that the hard part is the creation of an initial set of mapping rules for a new or not-yet-analyzed data source, they’ve created a new approach to mapping. Getting the message that, as per yesterday’s post, the traditional secret sauce isn’t always enough and a more generic recipe is needed, they’ve adopted the generic recipe outlined by the doctor and taken it to the next level. (At Trade Extensions, all amps go to 11.)
Often, when p-card data or invoice stores are the only data available, all the user has is a vendor (short) name and an associated receipt with abbreviated line-item info, or a vendor name and line-item detail, and all the mappings have to be made on the product description field. In this case, rules need to be built up as mappings on single words or phrases, with double-word or phrase overrides for more complex mappings, and further triple-word or phrase overrides for sub (sub) categories, and so on. The user will define a key phrase (or a regular expression on a key phrase), look at the mappings, create a modification for special cases, and then move on in an attempt to identify another key phrase (that will trap a large number of transactions).
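To make that layering concrete, here is a minimal, hypothetical sketch (not TESS 6 syntax, and with made-up categories) of phrase-based mapping with multi-word overrides, where the longer, more specific phrases win over the single-word rules.

```python
# Hypothetical phrase-based mapping with multi-word overrides: the
# triple-word and double-word phrases are checked before the generic
# single-word mappings, mirroring the layering described above.
RULES = [
    ("xerox paper copier", "Equipment > Copiers"),
    ("copy paper",         "Office Supplies > Paper"),
    ("file boxes",         "Office Supplies > Storage"),
    ("paper",              "Office Supplies > Paper"),
    ("file",               "Office Supplies > Filing"),
    ("boxes",              "Packaging"),
]

def map_description(description: str) -> str:
    desc = description.lower()
    # Most specific (longest) phrases first, so they override the generic ones.
    for phrase, category in sorted(RULES, key=lambda r: -len(r[0].split())):
        if phrase in desc:
            return category
    return "Unmapped"

print(map_description("Xerox paper copier, model 123"))  # Equipment > Copiers
print(map_description("A4 copy paper, 500 sheets"))      # Office Supplies > Paper
```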
This works, but it’s a slow and cumbersome process. The small sample a user selects to try to manually identify keywords through a quick scan might not be representative, and the user might pick relatively low-frequency words or phrases for the first few dozen, or hundred, rules and get nowhere fast*. So what’s the solution? AI? Definitely not. If there’s not enough data for you to always make a good decision, why would you blindly trust an algorithm (that may have been tuned on completely different data sets)?
The solution is AR (Automated Reasoning) and guided rule construction. In TESS 6, during the creation of an initial mapping file, the buyer can select a text column and the system will identify the most common words and phrases, in descending order of frequency, and guide the user through the selection and creation of appropriate mapping rules. The user will see, in a transaction file with a lot of office supplies, that “file”, “paper”, “pens”, and “boxes” appear frequently; “copy paper”, “ballpoint pens”, and “file boxes” appear less frequently; and “xerox paper copier”, “bull pens”, and “junction boxes” appear even less frequently. Each time a general rule is created, the user can drill into the (potentially) affected transactions, see the next most common set of (sub) words and phrases, and, if necessary, easily define an override rule of higher priority, and then either drill in further or back up to the unmapped set. The user can, very quickly, map the transactions until 90% to 99% are mapped … to an acceptable accuracy.
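The frequency guidance itself is easy to picture. A minimal sketch, assuming simple regex tokenization (and in no way TE’s actual implementation), of surfacing the most common words and two-word phrases in the still-unmapped descriptions so the user always writes the rule that traps the most transactions next:

```python
# Surface the most common words and two-word phrases in the unmapped set,
# in descending order, to guide the next mapping rule.
import re
from collections import Counter

def top_terms(descriptions, n=5):
    words, bigrams = Counter(), Counter()
    for desc in descriptions:
        tokens = re.findall(r"[a-z0-9]+", desc.lower())
        words.update(tokens)
        bigrams.update(zip(tokens, tokens[1:]))
    return words.most_common(n), bigrams.most_common(n)

unmapped = [
    "ballpoint pens, black, box of 12",
    "copy paper, letter size",
    "copy paper, legal size",
    "file boxes, letter",
]
print(top_terms(unmapped))  # "paper" and ("copy", "paper") rise to the top
```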
Moreover, the rules are always run on a file in order of the least number of affected transactions. Since each override rule is designed to apply to a smaller set of transactions than the rule it overrides, running the defined rules in order of the least number of affected transactions means all the (super) special cases are applied first. It also means that if a rule that was defined first ends up firing first, either the initial rule was not as generic as the user believed or the nature of the data has evolved over time (and the mapping rules should evolve with it). In addition, these mappings can be created in conjunction with one or more classification columns, such as vendor (if two different vendors use the same language to describe what are, to the buyer, two different products) or some other categorization code (from the accounting system or the ERP).
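Again, a hedged sketch rather than the actual evaluation engine: count how many transactions each rule would touch, then apply the rules from fewest to most matches, so the special-case overrides always fire before (and are never clobbered by) the generic rules.

```python
def apply_rules(transactions, rules):
    """Apply (phrase, category) rules in order of fewest affected transactions.

    Purely illustrative -- not how TESS 6 actually evaluates rules.
    """
    counts = {phrase: sum(phrase in t.lower() for t in transactions)
              for phrase, _ in rules}
    mapped = {}
    # Fewest matches first, so the (super) special cases win.
    for phrase, category in sorted(rules, key=lambda r: counts[r[0]]):
        for t in transactions:
            if t not in mapped and phrase in t.lower():
                mapped[t] = category
    return mapped

rules = [("paper", "Office Supplies > Paper"),
         ("xerox paper copier", "Equipment > Copiers")]
txns = ["A4 copy paper", "Xerox paper copier toner", "Letter paper, 500 sheets"]
print(apply_rules(txns, rules))
# The "xerox paper copier" override (1 match) runs before the generic
# "paper" rule (3 matches), so the copier line isn't mis-mapped as paper.
```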
Since many words and phrases are common, the reality is that even a million records can often be mapped 95%+ rather accurately (on a first pass) with only a few thousand rules. This can be done, by hand, in a few days, and be considerably more accurate than even the tenth pass over the data by a current-generation automated mapping solution, which has to be trained and corrected for weeks by the offshore data centre before enough accuracy is obtained that you could even consider looking at a spend report.
The rule language is easy. The interface is even easier. And the guided manual mapping capability puts analytics into the hands of every buyer. And since it’s all fact sheet based, the data can come from anywhere, anytime, be analyzed on the spot, and pushed anywhere it’s needed. It can come from the procurement system, be analyzed, and a subset used in an optimization scenario; or a set of award scenarios can be combined, analyzed, and reported on. Data can flow back and forth, and be classified and manipulated, with ease.
It’s what analytics in a sourcing platform should look like. And it’s only in TESS 6.
* which is not a desirable state of affairs for spend analysis