In Part I, we told you that Trade Extensions unveiled the upcoming version of their optimization-backed sourcing platform at their recent user conference in Stockholm (recently covered by the public defender over on Spend Matters UK), and that, with it, Trade Extensions are redefining the sourcing platform. But we did not tell you how, as we first had to review the brief history of sourcing platforms.
Then, in Part II, we built the suspense even more by taking a step back and describing the key features that are missing in the majority of current platforms: usability, appropriate workflow, integrated analytics support, repeat event creation, rich visualization, and support for different types of users and collaborators. While not everything a user would want, these are certainly among the features a user will need to realize the full power of advanced sourcing.
Then, after making you wait two days, in Part III we finally discussed how Trade Extensions are not only redefining the sourcing platform, they are redefining the advanced sourcing process itself! They are delivering customized advanced sourcing workflows, the next level of enterprise software usability, better user and collaborator management, easy workflow and event duplication, improved fact sheet management, and a new integrated analytics capability that makes analytics the other side of the advanced sourcing coin.
But instead of detailing the new integrated analytics, in Parts IV and V we took another step back and walked through a brief history of analytics, taking care to detail all of the issues with manual analysis (limited reports, even more limited value, accuracy issues, and mapping nightmares) and with automated analysis (poor clustering, poorer rules, and forced taxonomy), as there was no way to understand just what TESS 6 will be giving you without describing what is sorely missing. And now we are here. Here we will discuss how TESS 6 puts analytics side by side with optimization.
First of all, Trade Extensions have added a whole new analytics rule language. The current platform supports the construction of advanced formulas and rules for optimization using an advanced formula language, which can also be used for analytics, but that language was not designed for cleaning, mapping, and enrichment. The new rule language makes it easy to clean data by replacing erroneous values with valid values using validity tables, which can be stored and manipulated in standard fact sheets. All the user has to do is select the columns to be cleaned in the current sheet, select the sheet with the valid data values, and specify the column mappings. Enrichment is just as easy. Pick the values (columns) that are required, pick the relationship columns (that allow the values to be related), and create another rule. A single rule. Each rule, expressed in natural language, runs over an entire sheet (or set of sheets), making rule creation simple and elegant.
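Conceptually, a cleaning rule replaces each erroneous value with its valid counterpart from a validity table, and an enrichment rule pulls required columns in through a relationship column. Here is a minimal Python sketch of that idea; the sheets, column names, and values are invented for illustration, and the real TESS 6 rules are expressed in natural language, not code:

```python
# Illustrative sketch only: the fact sheets, columns, and values below are
# hypothetical, not TESS 6 specifics.

# A "fact sheet" of transactions with messy vendor names.
transactions = [
    {"vendor": "I.B.M.", "amount": 1200},
    {"vendor": "IBM Corp", "amount": 800},
    {"vendor": "Acme", "amount": 50},
]

# A validity table, itself storable as a standard fact sheet:
# erroneous value -> valid value.
vendor_validity = {"I.B.M.": "IBM", "IBM Corp": "IBM"}

def clean(rows, column, validity):
    """One cleaning rule: replace erroneous values in `column` across the whole sheet."""
    for row in rows:
        row[column] = validity.get(row[column], row[column])

# An enrichment sheet keyed on the relationship column (vendor).
vendor_info = {"IBM": {"region": "NA"}, "Acme": {"region": "EU"}}

def enrich(rows, relation_column, lookup, columns):
    """One enrichment rule: pull the required columns in via the relationship column."""
    for row in rows:
        related = lookup.get(row[relation_column], {})
        for col in columns:
            row[col] = related.get(col)

clean(transactions, "vendor", vendor_validity)     # one rule, entire sheet
enrich(transactions, "vendor", vendor_info, ["region"])
```

Note that each operation is a single rule applied to the entire sheet, which is the point: the user specifies the columns and the related sheet, not row-by-row fixes.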
But most importantly, the new language makes not only mapping (which can be done in the same manner as cleaning and enrichment) easy, but initial mapping as well.
As per our last post, initial mapping can be a nightmare, unless you have the secret sauce, which, as has been discussed many times here on SI, is:
- map the vendors
as many vendors only supply a single category
- map the GL codes
since they usually correspond to a category or subcategory, even though they are not that useful for spend analysis
- map the vendors plus GL codes
since this gets you a subcategory or a sub-subcategory
- map the exceptions
where something is mapped according to GAAP and not according to a meaningful spend category
- map the exceptions to the exceptions
where something gets tacked on to a category because that’s where it seemed to fit
And if you have the vendor and GL code data, and a tool that makes it easy to map the vendors, GL codes, and (override) combinations, and then drill into each mapping to find, and map, exceptions, this works rather well. But the tool doesn't always support this (or the concept of rule priority, where the rules have to be applied in reverse of their creation order or mis-mappings result), and you don't always have the vendor or GL code, especially if the file is from a P-Card system, procurement system, or other non-accounting system. So the secret sauce is a bust.
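To make the cascade and the priority requirement concrete, here is a minimal Python sketch (the vendors, GL codes, and categories are invented for illustration): the rules are created broadest-first, and letting the last matching rule win when scanning forward is exactly the same as applying them in reverse of their creation order.

```python
# Illustrative sketch only: vendors, GL codes, and categories are invented.
# Rules are listed in creation order, broadest first; letting the LAST
# matching rule win is equivalent to applying them in reverse creation order.

rules = [
    (lambda t: t["vendor"] == "Staples", "Office Supplies"),           # vendor rule
    (lambda t: t["gl_code"] == "6100", "Office Supplies"),             # GL code rule
    (lambda t: t["vendor"] == "Staples" and t["gl_code"] == "6200",
     "Furniture"),                                                     # vendor + GL override
    (lambda t: t["vendor"] == "Staples" and t["memo"] == "break room",
     "Food & Beverage"),                                               # exception rule
]

def map_transaction(txn):
    category = "Other"              # catch-all: nothing goes unmapped
    for predicate, cat in rules:    # later, more specific rules override earlier ones
        if predicate(txn):
            category = cat
    return category
```

With this ordering, a Staples transaction on GL 6200 lands in Furniture rather than Office Supplies, because the more specific vendor-plus-GL rule outranks the vendor-only rule; without the priority discipline, it would be mis-mapped.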
And when the secret sauce is a bust, most systems fall flat on their UI faces. That means you need a secret sauce that is not file or system dependent, and the means to implement it.
Fortunately, there is a secret sauce that is system independent, and it’s not that hard to make. (Although you’d think otherwise as the doctor has been trying to teach the recipe for years, with no success … until now.)
If you look at the abstract baseline of the above algorithm, it’s:
- map a primary catch-all categorization field
so that all transactions can be mapped to a category, even if it is to "Other", which identifies transactions that need (better) mapping
- map a secondary catch-all categorization field
so that if the primary field is empty, the secondary field maps an otherwise unidentified transaction
- map the primary and secondary field pairings
where doing so is more accurate than either field on its own
- map exceptions to the field mappings
based on a third field
- map exceptions to the exceptions
based on more detailed rules and/or other fields
which, if need be, abstracts even more to:
- create a (set of) baseline catch-all rule(s)
- create a (set of) baseline catch-all backup rule(s)
- create a (set of) baseline rule(s) that work on field pairs and/or dual descriptor pairings
- create a (set of) detail rule(s) that catch exceptions
- create a (set of) detail rule(s) that map any special cases (that materialize as exceptions to the exceptions)
And when the rules are run consistently in reverse order of definition, you get consistent, accurate mappings (which can be corrected if new exceptions arise, and which fall into an "Other" category if new, otherwise unidentifiable, transactions appear). And this is key.
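The abstracted recipe can be sketched the same way (again in illustrative Python, with invented fields, codes, and categories): define the rule tiers general-first, exactly as the recipe creates them, then evaluate them in reverse order of definition so the most specific matching rule wins, with "Other" catching whatever nothing matches.

```python
# Illustrative sketch only: fields, codes, and categories are invented.
categories_by_vendor = {"Staples": "Office Supplies"}
categories_by_gl = {"6100": "Office Supplies"}
categories_by_pair = {("Staples", "6200"): "Furniture"}

# Tiers defined general-first, exactly as the recipe creates them.
tiers = [
    lambda t: categories_by_vendor.get(t.get("vendor")),                   # baseline catch-all
    lambda t: categories_by_gl.get(t.get("gl_code")),                      # baseline backup
    lambda t: categories_by_pair.get((t.get("vendor"), t.get("gl_code"))), # field pairing
    lambda t: "Food & Beverage" if t.get("memo") == "break room" else None,  # exception
]

def categorize(txn):
    for rule in reversed(tiers):   # run in reverse order of definition
        category = rule(txn)
        if category is not None:   # most specific matching rule wins
            return category
    return "Other"                 # catch-all flags unmapped transactions
```

Because nothing in the engine depends on the fields being vendor and GL code specifically, any catch-all field, backup field, and pairing will do, which is what makes this version of the sauce file and system independent.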
Why? Come back tomorrow for Part VII!