Monthly Archives: November 2016

The US Federal Election is in 3 Days …

So, American LOLCat, who do you predict will win, given that just a few days ago the polls were so close, as per this Telegraph article that put the candidates neck-and-neck as a result of the resurfacing e-mail scandal? (Now, FiveThirtyEight.com still gave Hillary a 66% chance two days later, and, historically, Nate Silver is pretty accurate in his predictions, but will this election buck the trend?)


American? American? Iz Canadian Cat!

Well, I guess this explains the recent increase in the Canadian Cat population! 😉

It’s 2016! Welcome Back to the Industrial Age of SRM!

State of Flux just released its 8th annual supplier relationship management research report, entitled Digital SRM: Supplier Relationships in the New Technology Landscape, and while it reveals that the handful of leading supply chain organizations are digitizing, or moving towards digitization, it also reveals that the majority of organizations are not only stuck in the past, but moving back towards the industrial age in their supplier (relationship) management processes. Scary!

So scary, in fact, that I hope the purchasing wizard Pete Loughlin of Purchasing Insight does a follow-up to his piece on how “we are now arriving in the digital economy – turn your watch back 40 years” entitled we are moving forward in the digital economy, turn your watch back another 40 years, because some of the practices many global organizations still follow with respect to supplier relationship management could literally be straight out of Marshall Monroe Kirkman’s classic The Handling of Railway Supplies: Their Purchase and Disposition.

And I’m not joking.

Many organizations are still doing nothing more than inviting bids by public advertisement for a year’s supply and taking the advice that the pulse of the market should be continually felt, and, clearly, are not thinking about the importance of managing relationships after the purchase order is cut.

And while it looked like we were making progress last year, the simple facts that:

  • the number of businesses failing to invest in any SRM-related training rose from 26% in 2015 to 39% in 2016
  • 80% of companies are not achieving on-going benefits from external spending (compared to what they could be)
  • 87% of companies are still using Excel (which is essentially just an electronic version of a general ledger at most companies) as their primary SRM tool

demonstrate that, for the majority of organizations, the digital age (which for the consumer has been here for almost two decades) is still decades away.

After all, why are Purchasing Managers still panicking when they receive the 2:00 am phone call from the CFO informing them that their primary supplier in China just filed for bankruptcy and the company needs to know ASAP what the impact will be? If they had modern supplier relationship management systems, it wouldn’t take them 48 sleepless hours poring through accounting systems, ERP systems, and spreadsheets to figure out what products come from the supplier. With modern supply management best practices, it wouldn’t take them weeks to identify a new supplier and months to switch. And with good supplier relations, they definitely wouldn’t have to absorb the price doubling mandated by the receivership for continued supply of the critical product lines.

With proper supplier relationship management, you know as much about the (financial) health status of your strategic supplier as you know about your own organization. With proper supplier relationship management, you know all the products that are being provided, in what volume, in what consumer product lines they are being used, and what the impact of a stockout or termination of the line will be. With proper supplier relationship management, a company knows which other suppliers it is using that could also produce the product, how long it would take to switch, and how much it would cost. And with good relations, the last thing the supplier personnel would be comfortable with is charging their best customers an unexpected, possibly contract violating, unmitigated price increase, and would fight any suggestions by the receivership management to increase prices to any degree.

And the sad thing is that there is no shortage of basic SRM systems these days. Not all are industry leading like (and not all will deliver anywhere near the value of) State of Flux’s Statess solution, but there are so many ways for an organization to enter the digital age that it’s shocking just how hard so many fight to stay in the industrial age.

Hopefully, now that the results have been demonstrated for eight years in a row, they’ll finally accept that SRM is not a passing fad, it’s the foundation for a new reality, buy in, and go for it. At the very least, hopefully they’ll check out “Digital SRM: Supplier Relationships in the New Technology Landscape” and realize what could be.

Trade Extensions is Redefining Sourcing, Part VII

In Part I, we not only told you that Trade Extensions unveiled the upcoming version of their optimization-backed sourcing platform at their recent user conference in Stockholm, recently covered by the public defender over on Spend Matters UK, but we also told you that, with it, Trade Extensions are redefining the sourcing platform.

Then, after discussing a brief history of sourcing platforms, and common limitations with most of the platforms on the market, we dived into many of the advancements Trade Extensions, despite already having one of the few third-generation sourcing platforms on the market, is making to take sourcing to the next level: numerous usability enhancements, composable workflows, centralized fact sheets with even more powerful processing, easy repeat events, and better user and collaborator management. We also noted that TESS 6 is coming with a whole new approach to in-platform analytics, but held back because we first needed to provide a history of spend analytics (in Parts IV and V) and its shortcomings.

In our last post, we noted that the first thing Trade Extensions has done is to create a whole new analytics rule language which makes the definition of cleansing rules, enrichment rules, and the application of existing mappings and mapping rules a breeze. A single natural language rule with a single cleaning, enrichment, or mapping sheet can process an entire file of millions, tens of millions, or even hundreds of millions of transactions (as long as you don’t expect to view more than a million at a time). A few rules can link purchase orders, invoices, goods receipts, and payments and make it easy to see where the missing records are. Even a junior analyst can do it with ease.
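To make the record-linking idea concrete, here is a minimal Python sketch of the concept (this is my own illustration, not the TESS 6 rule language): given purchase orders, invoices, goods receipts, and payments keyed on a shared PO id, a few lines can surface exactly which downstream records are missing.

```python
# Hypothetical sketch: link POs, invoices, goods receipts, and payments
# on a shared order id and report, per PO, which records are missing.
def find_missing_links(purchase_orders, invoices, receipts, payments):
    """Return, per PO id, which downstream record types are missing."""
    po_ids = {po["id"] for po in purchase_orders}
    linked = {
        "invoice": {r["po_id"] for r in invoices},
        "receipt": {r["po_id"] for r in receipts},
        "payment": {r["po_id"] for r in payments},
    }
    return {
        po_id: [kind for kind, ids in linked.items() if po_id not in ids]
        for po_id in sorted(po_ids)
        if any(po_id not in ids for ids in linked.values())
    }

pos = [{"id": "PO-1"}, {"id": "PO-2"}, {"id": "PO-3"}]
invs = [{"po_id": "PO-1"}, {"po_id": "PO-2"}]
rcts = [{"po_id": "PO-1"}]
pays = [{"po_id": "PO-1"}, {"po_id": "PO-3"}]

print(find_missing_links(pos, invs, rcts, pays))
# → {'PO-2': ['receipt', 'payment'], 'PO-3': ['invoice', 'receipt']}
```

The point is not the code but the shape of the problem: once the records live in linked fact sheets, "where are the missing records" becomes a one-rule question.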

But this isn’t the only significant advance that TE has made. Realizing that the hard part is the creation of an initial set of mapping rules for a new or not yet analyzed data source, they’ve created a new approach to mapping. Getting the message that, as per yesterday’s post, the traditional secret sauce isn’t always enough and a more generic recipe is needed, they’ve adopted the generic recipe outlined by the doctor and taken it to the next level. (At Trade Extensions, all amps go to 11.)

Often, when p-card data or invoice stores are the only data that is available, all the user has is a vendor (short) name and an associated receipt with abbreviated line item info or a vendor name and line item detail. And all the mappings have to be on the product description field. In this case, rules need to be built up as mappings on single words or phrases, with double word or phrase overrides for more complex mappings with further triple word or phrase overrides for sub (sub) categories, and so on. In this case the user will define a key phrase (or regular expression on a key phrase), look at the mappings, create a modification for special cases, and then move on in an attempt to identify another key phrase (that will trap a large number of transactions).
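A toy sketch of that build-up (rule names and categories are mine, purely illustrative): single-word mappings, with longer phrases acting as overrides, where the most specific (longest) matching phrase wins.

```python
# Illustrative phrase-mapping table: single words as the base rules,
# two- and three-word phrases as overrides for more complex mappings.
RULES = {
    "paper": "Office Supplies > Paper",
    "pens": "Office Supplies > Writing",
    "boxes": "Office Supplies > Filing",
    "copy paper": "Office Supplies > Paper > Copy Paper",
    "junction boxes": "MRO > Electrical",               # override: not filing boxes
    "xerox paper copier": "Office Equipment > Copiers", # override the override
}

def map_description(description):
    """Apply the longest matching phrase; fall back to 'Other'."""
    desc = description.lower()
    best = max(
        (phrase for phrase in RULES if phrase in desc),
        key=len,
        default=None,
    )
    return RULES[best] if best else "Other"

print(map_description("A4 copy paper, 500 sheets"))  # the more specific phrase wins
print(map_description("galvanized junction boxes"))
print(map_description("stapler"))                    # nothing matches → Other
```

With a real tool the override resolution is handled by rule priority rather than string length, but the layering of word, phrase, and sub-phrase rules is the same.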

This works, but it’s a slow and cumbersome process. The small sample a user selects to try and manually identify keywords through a quick scan might not be representative, and the user might pick relatively low frequency words or phrases for the first few dozen, or hundred, rules and get nowhere fast*. So what’s the solution? AI? Definitely not. If there’s not enough data for you to always make a good decision, why would you blindly trust an algorithm (that may have been tuned on completely different data sets)?

The solution is AR (Automated Reasoning) and guided rule construction. In TESS 6, during the creation of an initial mapping file, the buyer can select a text column and the system will identify the most common words and phrases, in descending order, and guide the user through the selection and creation of appropriate mapping rules. The user will see, in a transaction file with a lot of office supplies, that “file”, “paper”, “pens”, and “boxes”, appear frequently; “copy paper”, “ballpoint pens”, and “file boxes” appear less frequently; and “xerox paper copier”, “bull pens”, and “junction boxes” appear even less frequently. Each time a general rule is created, the user can drill into the (potentially) affected transactions, see the next most common set of (sub) words and phrases, and, if necessary, easily define an override rule of higher priority, and then either drill in further, or back up to the unmapped set. The user can, very quickly, map the transactions until 90% to 99% are mapped … to an acceptable accuracy.
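The frequency-guided idea is easy to sketch in a few lines of Python (the data and function are mine, not TESS 6's actual interface): surface the most common words and two-word phrases in a description column, in descending order, so the user writes the highest-impact rules first.

```python
# Count word and bigram frequencies across transaction descriptions so a
# user can be guided to the phrases that will trap the most transactions.
from collections import Counter

def top_terms(descriptions, n=3):
    words, bigrams = Counter(), Counter()
    for desc in descriptions:
        tokens = desc.lower().split()
        words.update(tokens)
        bigrams.update(" ".join(pair) for pair in zip(tokens, tokens[1:]))
    return words.most_common(n), bigrams.most_common(n)

transactions = [
    "copy paper a4", "copy paper letter", "ballpoint pens blue",
    "file boxes letter", "copy paper a4", "ballpoint pens black",
]
words, phrases = top_terms(transactions)
print(words)    # single words, most frequent first
print(phrases)  # "copy paper" dominates this toy file
```

Each time a rule is written against a top term, the affected transactions drop out, the counts are recomputed on the remainder, and the next most common term surfaces, which is exactly the drill-in/back-up loop described above.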

And, moreover, the rules are always run on a file in order of the least number of affected transactions. Since each override rule is designed to apply to a smaller set of transactions, running the defined rules in order of the least number of affected transactions means all the (super) special cases are applied first. It also means that if an earlier-defined rule fires before a later override, either the initial rule was not as generic as the user believed, or the nature of the data has evolved over time (and the appropriate mapping rules should evolve as well). In addition, these mappings can be created in conjunction with one or more classification columns, such as vendor (if two different vendors use the same language to describe what are two different products to the buyer) or some other categorization code (from the accounting system or the ERP).
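The ordering principle can be illustrated in a few lines (my own toy encoding, not any vendor's format): count how many transactions each rule matches, then try rules from the fewest matches to the most, so the most specific overrides always get first crack.

```python
# Apply mapping rules in order of the least number of affected
# transactions: specific overrides (few hits) fire before catch-alls.
def apply_by_specificity(transactions, rules):
    """rules: list of (phrase, category); the first matching rule wins
    after sorting rules by number of affected transactions, ascending."""
    def hits(phrase):
        return sum(phrase in t.lower() for t in transactions)
    ordered = sorted(rules, key=lambda r: hits(r[0]))
    mapped = {}
    for t in transactions:
        for phrase, category in ordered:
            if phrase in t.lower():
                mapped[t] = category
                break
        else:
            mapped[t] = "Other"
    return mapped

txns = ["xerox paper copier", "a4 copy paper", "recycled paper", "desk lamp"]
rules = [("paper", "Paper"), ("xerox paper copier", "Copiers")]
print(apply_by_specificity(txns, rules))
# the override traps the copier before the broad "paper" rule can claim it
```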

Since many words and phrases are common, the reality is that even a million records can often be mapped 95%+ rather accurately (on a first pass) with only a few thousand rules. This can be done, by hand, in a few days, and be considerably more accurate than even the tenth pass over the data by a current generation automated mapping solution which has to be trained and corrected for weeks by the offshore data centre until enough accuracy is obtained that you could even consider looking at a spend report.

The rule language is easy. The interface is even easier. And the guided manual mapping capability puts analytics into the hands of every buyer. And since it’s all fact sheet based, the data can come from anywhere, anytime, be analyzed on the spot, and pushed anywhere it’s needed. It can come from the procurement system, be analyzed, and a subset used in an optimization scenario, or a set of award scenarios can be combined, analyzed and reported on. Data can flow back and forth with ease, and be classified and manipulated with ease.

It’s what analytics in a sourcing platform should look like. And it’s only in TESS 6.

* which is not a desirable state of affairs for spend analysis

Trade Extensions is Redefining Sourcing, Part VI

In Part I, we not only told you that Trade Extensions unveiled the upcoming version of their optimization-backed sourcing platform at their recent user conference in Stockholm, recently covered by the public defender over on Spend Matters UK, but we also told you that, with it, Trade Extensions are redefining the sourcing platform. But we did not tell you how, as we first had to review the brief history of sourcing platforms.

Then, in Part II, we built the suspense even more by taking a step back and describing the key features that are missing in the majority of current platforms — namely usability, appropriate workflow, integrated analytics support, repeat event creation, limited visualization, and limited support for different types of users and collaborators. While not everything a user would want, these are certainly among the features a user will need to realize the full power of advanced sourcing.

Then, after making you wait two days, in Part III we finally discussed how Trade Extensions are not only redefining the sourcing platform, they are redefining the advanced sourcing process itself! Customized advanced sourcing workflows, the next level of enterprise software usability, better user and collaborator management, easy workflow and event duplication, improved fact sheet management, and a new integrated analytics capability that makes analytics the other side of the advanced sourcing coin.

But instead of detailing the new integrated analytics, in Parts IV and V, we took another step back and walked through a brief history of analytics, taking care to detail all of the issues with manual analysis — limited reports, even more limited value, accuracy issues, and mapping nightmares — and automated analysis — with poor clustering, poorer rules, and forced taxonomy, as there was no way to understand just what TESS 6 will be giving you without describing what is sorely missing. And now we are here. And here we will discuss how TESS 6 puts analytics side by side with optimization.

First of all, Trade Extensions has added a whole new analytics rule language. The current platform supports the construction of advanced formulas and rules for optimization using an advanced formula language, which can also be used for analytics, but the language was not designed for cleaning, mapping, and enrichment. The new rule language makes it easy to clean data by replacing erroneous values with valid values using validity tables, which can be stored and manipulated in standard fact sheets. All the user has to do is select the columns to be cleaned in the current sheet, the sheet with the valid data values, and specify the column mappings. Enrichment is just as easy. Pick the values (columns) that are required, pick the relationship columns (that allow the values to be related), and create another rule. A single rule. Each rule, expressed in natural language, will run over an entire (set of) sheet(s) and makes rule creation simple and elegant.
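To make the validity-table and enrichment concepts concrete, here is a hedged Python sketch (this is emphatically NOT the TESS 6 rule language, just the idea in plain code): one fact sheet of valid values drives cleaning, and a relationship column drives enrichment.

```python
# Validity "fact sheet": erroneous value -> valid value.
VALID_CURRENCY = {"EURO": "EUR", "E": "EUR", "US$": "USD", "$": "USD"}

# Enrichment "fact sheet": relationship column (vendor id) -> extra columns.
VENDOR_INFO = {"V001": {"country": "SE"}, "V002": {"country": "US"}}

def clean_and_enrich(rows):
    for row in rows:
        # Cleaning rule: replace invalid currency codes via the validity table,
        # leaving already-valid values untouched.
        row["currency"] = VALID_CURRENCY.get(row["currency"], row["currency"])
        # Enrichment rule: join extra columns on the vendor id relationship column.
        row.update(VENDOR_INFO.get(row["vendor"], {}))
    return rows

rows = [
    {"vendor": "V001", "currency": "EURO", "amount": 100},
    {"vendor": "V002", "currency": "USD", "amount": 250},
]
print(clean_and_enrich(rows))
```

In the platform the equivalent is a single natural-language rule per table, which is precisely why the approach scales to files with millions of transactions.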

But most importantly, it not only makes mapping (which can be done in the same manner as cleaning and enrichment) easy, but also initial mapping easy.

As per our last post, initial mapping can be a nightmare, unless you have the secret sauce, which, as has been discussed many times here on SI, is:

  1. map the vendors
    as many vendors only supply a single category
  2. map the GL codes
since they usually correspond to a category or subcategory, even though they are not that useful for spend analysis
  3. map the vendors plus GL codes
    since this gets you a subcategory or a sub-subcategory
  4. map the exceptions
    where something is mapped according to GAAP and not according to a meaningful spend category
  5. map the exceptions to the exceptions
    where something gets tacked on to a category because that’s where it seemed to fit

And if you have the vendor and GL code data, and a tool that makes it easy to map the vendors, GL codes, and (override) combinations, and then drill into each mapping and find, and map, exceptions, this works rather well. But the tool doesn’t always support this (or the concept of rule priority, as the rules have to be applied in reverse of their creation order, or there are mis-mappings), and you don’t always have the vendor or GL code, especially if the file is from a P-Card system, procurement system, or other non-accounting system. So the secret sauce is a bust.

And when the secret sauce is a bust, most systems fall flat on their UI faces. That says you need a secret sauce that is not file or system dependent. And the means to implement it.

Fortunately, there is a secret sauce that is system independent, and it’s not that hard to make. (Although you’d think otherwise as the doctor has been trying to teach the recipe for years, with no success … until now.)

If you look at the abstract baseline of the above algorithm, it’s:

  • map a primary catch-all categorization field
    so that all transactions can be mapped to a category, even if it is to “Other”, which identifies transactions that need (better) mapping
  • map a secondary catch-all categorization field
    so that if the primary field is empty, the second field maps an otherwise unidentified transaction
  • map the primary and secondary field pairings
    where doing so is more accurate than either field on its own
  • map exceptions to the field mappings
    based on a third field
  • map exceptions to the exceptions
    based on more detailed rules and/or other fields

which, if need be, abstracts even more to:

  • create a (set of) baseline catch-all rule(s)
  • create a (set of) baseline catch-all backup rule(s)
  • create a (set of) baseline rule(s) that work on field pairs and/or dual descriptor pairings
  • create a (set of) detail rule(s) that catch exceptions
  • create a (set of) detail rule(s) that map any special cases (that materialize as exceptions to the exceptions)

And when the rules are run in consistent reverse order of definition, you get consistent, accurate mappings (that can be corrected if new exceptions arise, and that can fall into an “other” category if new, otherwise unidentifiable, transactions appear). And this is key.
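A minimal sketch of "run the rules in reverse order of definition" (the rule encoding is mine, not any vendor's format): rules are appended broadest-first per the abstract recipe, so iterating the list in reverse tries the most recently defined, most specific exceptions before the catch-alls.

```python
# Rules are defined broadest-first; applying them in reverse of creation
# order means exceptions-to-exceptions fire before baseline catch-alls.
rules = []
rules.append(("chair", "Office Furniture"))                 # baseline catch-all
rules.append(("leather backed", "Retail Furniture"))        # pairing rule
rules.append(("leather backed chair", "Office Furniture"))  # exception

def classify(description):
    for phrase, category in reversed(rules):  # reverse of definition order
        if phrase in description.lower():
            return category
    return "Other"

print(classify("leather backed chair, black"))  # exception wins
print(classify("leather backed couch"))         # pairing rule
print(classify("mesh chair"))                   # baseline catch-all
```

Note the example reuses the "leather backed chair" trap from the earlier discussion of manual mapping errors: with reverse-order application, the exception rule neutralizes it automatically.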

Why? Come back tomorrow for Part VII!

Trade Extensions is Redefining Sourcing, Part V

As per our last post, we began this series by informing you that Trade Extensions unveiled the upcoming version of their optimization-backed sourcing platform at their recent user conference in Stockholm, recently covered by the public defender over on Spend Matters UK, but we also told you that, with it, Trade Extensions are redefining the sourcing platform. We then reviewed a brief history of sourcing, identified the gaps in the majority of platforms, and then went on to describe how Trade Extensions, despite having a leading third generation solution, are finding ways to make their platform, and the user experience, even better.

We described a number of their improvements in Part III, namely in platform usability, workflow, user management, and repeat event support, but held back on describing the improved analytics support as we first needed to review a brief history of spend analysis, which we started in Part IV. Today we continue that review so we can clearly describe the leap forward that TESS 6 is bringing to the market.

Yesterday we noted that, depending on who you asked, spend analysis began as the set of canned OLAP-based spending reports that came with your sourcing, procurement, or analytics suite, or as the process of mapping Accounts Payable spend and “drilling for dollars”. But the definition didn’t matter, as both had a lasting value problem. It didn’t take long to identify the few value-generating sourcing opportunities the organization didn’t know about, and then the value was limited at best. But that wasn’t the only problem.

There was (and is) also an accuracy problem. Namely, spend reports are only as accurate as the data that populates them, and if this data is not properly mapped, the spend reports are all out of whack. This happened a lot when the mappings weren’t done by a true spend master highly familiar with the organizational data and the tool. (In the beginning, there was no automated mapping technology.) If the mapping rules were poor, or full of conflicts (and applied in random order), mappings would be poorer, or almost random. And this leads us to the big problem.

Accurate manual mapping, in just about every spend analysis system ever created (with the exception of BIQ, which was acquired, absorbed, and for reasons unknown, pretty much retired, by Opera Solutions), was difficult, if not impossible on data sets with millions, if not hundreds of millions, of transactions. In most systems, you selected a transaction, or a set, created a mapping rule based on one or more fields, possibly using a regular expression on text data, and added it to the rule set. You continued until you believed that the rules would map most of the data, ran the rules, and totalled the mapped spend. If the mapped spend was deemed enough to do an initial analysis (90%+), the mapping exercise stopped, otherwise it continued.

Since meaningful sorting and grouping was difficult, if not impossible, due to the lack of meaningful mappings, it often took weeks to create an initial mapping file (even though a good mapping tool in the hands of a pro could allow 95% of a Fortune 500 company’s spend to be mapped in two days, but that only ever happened with BIQ in the hands of a true expert), and, to top it off, it was often riddled with errors. Most (untrained) analysts would create mapping rules that were too general and would inadvertently map extra transactions to each category with each initial starting rule. (For example, “xerox paper” would map “xerox paper copier” to the paper category, where “xerox paper copier” clearly doesn’t belong.) And this wouldn’t be detected until a “real-time” report was presented in an executive meeting (and it would be located on drill-down). On top of that, other rules would miss transactions. For example, the analyst would map “office chair” to the office furniture category and not realize that some buyers labelled an office chair a “leather backed chair”; the tool would then map “leather backed chair” to retail furniture using the “leather backed” mapping rule, which the organization had in place because it buys “leather backed couches” to sell to the market.

And the purported solution of automated mapping only made matters worse.

First of all, most first (and even second) generation spend analysis engines with automated mapping capability used naive statistical approaches which used “dumb” clustering to group what the algorithm thought were related transactions. So, since “xerox paper copier” was similar to “xerox paper shredder”, if the thresholds were low enough, they’d both be mapped to the same subcategory of general office equipment, when they should be mapped to separate sub (sub) categories since they are quite different (electronics vs cheap mechanical shredders).
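This failure mode is easy to demonstrate with a naive string-similarity measure (a stand-in for the "dumb" clustering described above, not the actual algorithm any vendor used): the two descriptions share most of their characters, so a low threshold lumps a copier and a shredder into one group even though they are very different products.

```python
# Naive similarity between two product descriptions: high character
# overlap ("xerox paper ...") masks the fact that the products differ.
from difflib import SequenceMatcher

a, b = "xerox paper copier", "xerox paper shredder"
similarity = SequenceMatcher(None, a, b).ratio()
print(round(similarity, 2))

THRESHOLD = 0.6  # a threshold this low merges the two into one cluster
print("same cluster" if similarity > THRESHOLD else "different clusters")
```

Any clustering built on surface similarity alone inherits this problem, which is why human-guided rules on the distinguishing phrases beat blind thresholds.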

Secondly, these automated mapping systems would allow users to create override rules to complement the rules that were automatically created, but they wouldn’t necessarily ensure the rules were applied in the right order, so each execution could see the same transaction mapped differently.

Third, these systems would pretty much require the organization to adapt its spend taxonomy to the classification ability of the tool, as the tool would rarely adapt to the taxonomy of the organization, and this is just not the proper way to do spend analytics.

And while a few of the newer automated spend mapping solutions are improving on this (deep machine learning algorithms, user defined knowledge models, etc.), they still have their faults (but that’s a discussion for another series of posts).

In short, sourcing analytics has historically not worked well for advanced sourcing, and certainly hasn’t been the other, equally important, side of the advanced sourcing coin.

But TESS 6 is about to change all that!