Category Archives: Vendor Review

Trade Extensions is Redefining Sourcing, Part VI

In Part I, we not only told you that Trade Extensions unveiled the upcoming version of their optimization-backed sourcing platform at their recent user conference in Stockholm, covered by the public defender over on Spend Matters UK, but we also told you that, with it, Trade Extensions are redefining the sourcing platform. But we did not tell you how, as we first had to review the brief history of sourcing platforms.

Then, in Part II, we built the suspense even more by taking a step back and describing the key features that are missing in the majority of current platforms — namely usability, appropriate workflow, integrated analytics support, repeat event creation, visualization, and support for different types of users and collaborators. While not everything a user would want, these are certainly among the features a user will need to realize the full power of advanced sourcing.

Then, after making you wait two days, in Part III we finally discussed how Trade Extensions are not only redefining the sourcing platform, they are redefining the advanced sourcing process itself! Customized advanced sourcing workflows, the next level of enterprise software usability, better user and collaborator management, easy workflow and event duplication, improved fact sheet management, and a new integrated analytics capability that makes analytics the other side of the advanced sourcing coin.

But instead of detailing the new integrated analytics, in Parts IV and V, we took another step back and walked through a brief history of analytics, taking care to detail all of the issues with manual analysis — limited reports, even more limited value, accuracy issues, and mapping nightmares — and automated analysis — with poor clustering, poorer rules, and forced taxonomy, as there was no way to understand just what TESS 6 will be giving you without describing what is sorely missing. And now we are here. And here we will discuss how TESS 6 puts analytics side by side with optimization.

First of all, Trade Extensions have added a whole new analytics rule language. The current platform supports the construction of advanced formulas and rules for optimization using an advanced formula language, which can also be used for analytics, but that language was not designed for cleaning, mapping, and enrichment. The new rule language makes it easy to clean data by replacing erroneous values with valid values using validity tables, which can be stored and manipulated in standard fact sheets. All the user has to do is select the columns to be cleaned in the current sheet, select the sheet with the valid data values, and specify the column mappings. Enrichment is just as easy: pick the values (columns) that are required, pick the relationship columns (that allow the values to be related), and create another rule. A single rule. Each rule, expressed in natural language, runs over an entire (set of) sheet(s), which makes rule creation simple and elegant.
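Since the new rule language itself has not been published, its mechanics can only be sketched. The following Python is a rough illustration of the validity-table idea described above — all function, column, and supplier names are ours, not Trade Extensions':

```python
# Hypothetical sketch of validity-table cleaning and enrichment; TESS 6's
# actual rule language is not public, so all names here are illustrative.

def clean_column(rows, column, validity_table):
    """Replace erroneous values in `column` using a validity table that
    maps known-bad values to their corrections (itself just a fact sheet)."""
    for row in rows:
        if row.get(column) in validity_table:
            row[column] = validity_table[row[column]]
    return rows

def enrich(rows, key_column, lookup, new_column):
    """Pull a required value into each row via a relationship column."""
    for row in rows:
        row[new_column] = lookup.get(row.get(key_column))
    return rows

# A fact sheet of transactions with an inconsistent supplier name.
transactions = [
    {"supplier": "Acme Inc", "amount": 100},
    {"supplier": "ACME Incorporated", "amount": 250},
]

# One cleaning rule, one enrichment rule -- each runs over the whole sheet.
clean_column(transactions, "supplier", {"ACME Incorporated": "Acme Inc"})
enrich(transactions, "supplier", {"Acme Inc": "US"}, "region")
```

The point is the shape of the interaction: one declarative rule sweeps an entire sheet, rather than the user touching transactions row by row.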

But most importantly, it makes not only ongoing mapping (which is done in the same manner as cleaning and enrichment) easy, but initial mapping as well.

As per our last post, initial mapping can be a nightmare, unless you have the secret sauce, which, as has been discussed many times here on SI, is:

  1. map the vendors
    as many vendors only supply a single category
  2. map the GL codes
    since they usually correspond to a category or subcategory, even though they are not that useful for spend analysis
  3. map the vendors plus GL codes
    since this gets you a subcategory or a sub-subcategory
  4. map the exceptions
    where something is mapped according to GAAP and not according to a meaningful spend category
  5. map the exceptions to the exceptions
    where something gets tacked on to a category because that’s where it seemed to fit

And if you have the vendor and GL code data, and a tool that makes it easy to map the vendors, GL codes, and (override) combinations, and then drill into each mapping to find, and map, exceptions, this works rather well. But the tool doesn’t always support this (or the concept of rule priority, as the rules have to be applied in reverse of their creation order or there will be mis-mappings), and you don’t always have the vendor or GL code, especially if the file is from a P-Card system, procurement system, or other non-accounting system. In those cases, the secret sauce is a bust.
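To make the rule-priority point concrete, here is a minimal Python sketch of the layering — the vendor names, GL codes, and categories are invented for illustration, and no real tool's API is implied:

```python
# Illustrative sketch of the "secret sauce" layering: rules are created
# general-to-specific (vendor, GL, vendor+GL, exceptions) and applied in
# reverse creation order so exceptions win over the catch-alls.
# All vendors, GL codes, and categories below are made up.

rules = []  # (predicate, category) pairs, in creation order

def add_rule(predicate, category):
    rules.append((predicate, category))

# 1. Vendor-level rule: most vendors supply a single category.
add_rule(lambda t: t["vendor"] == "Staples", "Office Supplies")
# 2. GL-code rule.
add_rule(lambda t: t["gl"] == "6100", "Office Supplies")
# 3. Vendor + GL override: this particular pairing is actually IT hardware.
add_rule(lambda t: t["vendor"] == "Staples" and t["gl"] == "6200", "IT Hardware")

def map_category(txn):
    # Reverse creation order: the most recently defined (most specific)
    # matching rule wins, so exceptions override the general rules.
    for predicate, category in reversed(rules):
        if predicate(txn):
            return category
    return "Other"

map_category({"vendor": "Staples", "gl": "6200"})  # "IT Hardware"
map_category({"vendor": "Staples", "gl": "6100"})  # "Office Supplies"
```

Run the same rules in creation order instead and the vendor catch-all fires first, which is exactly the mis-mapping the paragraph above describes.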

And when the secret sauce is a bust, most systems fall flat on their UI faces. That means you need a secret sauce that is not file or system dependent. And the means to implement it.

Fortunately, there is a secret sauce that is system independent, and it’s not that hard to make. (Although you’d think otherwise as the doctor has been trying to teach the recipe for years, with no success … until now.)

If you look at the abstract baseline of the above algorithm, it’s:

  • map a primary catch-all categorization field
    so that all transactions can be mapped to a category, even if it is to “Other”, which identifies transactions that need (better) mapping
  • map a secondary catch-all categorization field
    so that if the primary field is empty, the second field maps an otherwise unidentified transaction
  • map the primary and secondary field pairings
    where doing so is more accurate than either field on its own
  • map exceptions to the field mappings
    based on a third field
  • map exceptions to the exceptions
    based on more detailed rules and/or other fields

which, if need be, abstracts even more to:

  • create a (set of) baseline catch-all rule(s)
  • create a (set of) baseline catch-all backup rule(s)
  • create a (set of) baseline rule(s) that work on field pairs and/or dual descriptor pairings
  • create a (set of) detail rule(s) that catch exceptions
  • create a (set of) detail rule(s) that map any special cases (that materialize as exceptions to the exceptions)

And when the rules are run consistently in reverse order of definition, you get consistent, accurate mappings (that can be corrected if new exceptions arise, and that fall into an “other” category if new, otherwise unidentifiable, transactions appear). And this is key.
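The same reverse-order engine is field-agnostic, which is what makes the abstracted recipe system independent. A sketch with a hypothetical P-Card file — merchant plus MCC code standing in for vendor plus GL code; the merchants, codes, and categories are all invented:

```python
# Field-agnostic sketch of the abstracted recipe: the same layered rules
# work on any primary/secondary field pair -- here a P-Card file's
# merchant and MCC code instead of AP's vendor and GL code.

def build_rules(primary, secondary):
    """Rules listed in creation order, general to specific."""
    return [
        # Layer 1: primary catch-all (merchant-level).
        (lambda t: t[primary] == "Delta Air Lines", "Travel"),
        # Layer 2: secondary catch-all, used when the primary is unknown.
        (lambda t: t[secondary] == "5812", "Meals"),
        # Layer 3: primary+secondary pairing, more accurate than either alone.
        (lambda t: t[primary] == "Delta Air Lines" and t[secondary] == "5812",
         "In-Flight Meals"),
    ]

def classify(txn, rules):
    # Reverse order of definition: specific layers override general ones,
    # and anything no rule catches lands in "Other" for later mapping.
    for predicate, category in reversed(rules):
        if predicate(txn):
            return category
    return "Other"

rules = build_rules("merchant", "mcc")
classify({"merchant": "Delta Air Lines", "mcc": "5812"}, rules)  # "In-Flight Meals"
classify({"merchant": "Unknown Corp", "mcc": "9999"}, rules)     # "Other"
```

Nothing in the engine cares whether the fields came from AP, a P-Card feed, or a procurement system — only the rule layering and the application order matter.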

Why? Come back tomorrow for Part VII!

Trade Extensions is Redefining Sourcing, Part V

As per our last post, we began this series by informing you that Trade Extensions unveiled the upcoming version of their optimization-backed sourcing platform at their recent user conference in Stockholm, covered by the public defender over on Spend Matters UK, and that, with it, Trade Extensions are redefining the sourcing platform. We then reviewed a brief history of sourcing, identified the gaps in the majority of platforms, and went on to describe how Trade Extensions, despite having a leading third generation solution, are finding ways to make their platform, and the user experience, even better.

We described a number of their improvements in Part III, namely in platform usability, workflow, user management, and repeat event support, but held back on describing the improved analytics support as we first needed to review a brief history of spend analysis, which we started in Part IV. Today we continue that review so we can clearly describe the leap forward that TESS 6 is bringing to the market.

Yesterday we noted that, depending on who you asked, spend analysis began as either the set of canned OLAP-based spending reports that came with your sourcing, procurement, or analytics suite or the process of mapping Accounts Payable spend and “drilling for dollars”. But the definition didn’t matter, as both had a lasting value problem. It didn’t take long to identify the few value-generating sourcing opportunities the organization didn’t know about, and even then the value was limited at best. But that wasn’t the only problem.

There was (and is) also an accuracy problem. Namely, spend reports are only as accurate as the data that populates them, and if this data is not properly mapped, the spend reports are all out of whack. This happened a lot when the mappings weren’t done by a true spend master highly familiar with the organizational data and the tool. (In the beginning, there was no automated mapping technology.) If the mapping rules were poor, or full of conflicts (and applied in random order), mappings would be poorer, or almost random. And this leads us to the big problem.

Accurate manual mapping, in just about every spend analysis system ever created (with the exception of BIQ, which was acquired, absorbed, and, for reasons unknown, pretty much retired by Opera Solutions), was difficult, if not impossible, on data sets with millions, if not hundreds of millions, of transactions. In most systems, you selected a transaction, or a set, created a mapping rule based on one or more fields, possibly using a regular expression on text data, and added it to the rule set. You continued until you believed that the rules would map most of the data, ran the rules, and totalled the mapped spend. If the mapped spend was deemed enough to do an initial analysis (90%+), the mapping exercise stopped; otherwise it continued.

Since meaningful sorting and grouping was difficult, if not impossible, due to the lack of meaningful mappings, it often took weeks to create an initial mapping file (even though a good mapping tool in the hands of a pro could allow 95% of a Fortune 500 company’s spend to be mapped in two days, that only ever happened with BIQ in the hands of a true expert), and, to top it off, the file was often riddled with errors. Most (untrained) analysts would create mapping rules that were too general, inadvertently mapping extra transactions to each category with each initial rule. (For example, a “xerox paper” rule would map “xerox paper copier” to the paper category, where it clearly doesn’t belong.) The error often wouldn’t be detected until a “real-time” report was presented in an executive meeting, where it would surface on drill-down. And on top of that, other rules would miss transactions entirely. For example, an analyst would map “office chair” to the office furniture category without realizing that some buyers labelled an office chair a “leather backed chair”, which would then be mapped to retail furniture by the “leather backed” rule the organization had in place because it buys “leather backed couches” to sell to the market.
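The over-general-rule trap is easy to see in a couple of lines. A sketch of the kind of naive substring matching many early tools used (the rule and descriptions are the example from above):

```python
# Sketch of the over-general-rule trap: a naive substring rule for
# "xerox paper" silently captures "xerox paper copier" as well.

def matches(rule_text, description):
    """Naive early-tool matching: case-insensitive substring test."""
    return rule_text in description.lower()

matches("xerox paper", "Xerox Paper 500 Sheets")   # True -- the intended match
matches("xerox paper", "Xerox Paper Copier 3000")  # True -- the mis-mapping
```

Without rule priority or specificity checks, the only defense was an analyst noticing the copier sitting in the paper category on drill-down.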

And the purported solution of automated mapping only made matters worse.

First of all, most first (and even second) generation spend analysis engines with automated mapping capability used naive statistical approaches which used “dumb” clustering to group what the algorithm thought were related transactions. So, since “xerox paper copier” was similar to “xerox paper shredder”, if the thresholds were low enough, they’d both be mapped to the same subcategory of general office equipment, when they should be mapped to separate sub (sub) categories since they are quite different (electronics vs cheap mechanical shredders).
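Why "dumb" clustering fails is equally easy to demonstrate: lexically close descriptions can be entirely different products. A minimal sketch using Python's standard-library string similarity (the threshold is an arbitrary stand-in for whatever a given tool used):

```python
# Sketch of naive similarity clustering: lexically close descriptions
# clear a typical threshold even when the products are quite different.
from difflib import SequenceMatcher

def similar(a, b, threshold=0.6):
    """Naive clusterer's test: character-level similarity ratio."""
    return SequenceMatcher(None, a, b).ratio() >= threshold

# A copier and a shredder share "xerox paper", so a dumb clusterer
# drops them both into the same general-office-equipment bucket.
similar("xerox paper copier", "xerox paper shredder")  # True
```

The shared prefix dominates the score, so the algorithm has no way to know that the last word is the one that actually determines the category.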

Secondly, these automated mapping systems would allow users to create override rules to complement the rules that were automatically created, but they wouldn’t necessarily ensure the rules got applied in the right order, so each execution could see the same transaction mapped differently.

Third, these systems would pretty much require the organization to adapt its spend taxonomy to the classification ability of the tool, as the tool would rarely adapt to the taxonomy of the organization, and this is just not the proper way to do spend analytics.

And while a few of the newer automated spend mapping solutions are improving on this (deep machine learning algorithms, user defined knowledge models, etc.), they still have their faults (but that’s a discussion for another series of posts).

In short, sourcing analytics has historically not worked well for advanced sourcing, and certainly hasn’t been the other, equally important, side of the advanced sourcing coin.

But TESS 6 is about to change all that!

Trade Extensions is Redefining Sourcing, Part IV

In Part I, we not only told you that Trade Extensions unveiled the upcoming version of their optimization-backed sourcing platform at their recent user conference in Stockholm, covered by the public defender over on Spend Matters UK, but we also told you that, with it, Trade Extensions are redefining the sourcing platform. But we did not tell you how — instead reviewing the brief history of sourcing platforms, of which we’ve seen only three generations (with the third generation being optimization-backed sourcing platforms, which can be counted on one hand — and this should not be a surprise, as there are only six true providers of strategic sourcing decision optimization as it is).

Then, in Part II, we built the suspense even more by taking a step back and describing the key features that are weak or missing in current platforms — namely usability, appropriate workflow, integrated analytics support, repeat event creation, visualization, and support for different types of users and collaborators. While these are not all of the features a platform might need, they are among the most significant and are certainly necessary for the full power of advanced sourcing to be realized.

Then, in Part III, we finally discussed how Trade Extensions, realizing the need to not only offer these capabilities, but be best of breed in their offering, decided to tackle the creation of these capabilities head on (even though, unlike many of their competitors, they already had a current generation sourcing platform) in an effort to redefine not only their sourcing platform, but the advanced sourcing process itself.

And with TESS 6’s ability to support as many customized advanced sourcing workflows as the organization requires, where the workflow is not bound to the concept of a traditional sourcing workflow and can instead be defined using any combination of workflow elements in any order the buyer wants, TESS 6 is truly redefining the advanced sourcing process itself. Plus, it is in an elite class of the most usable enterprise software ever (despite supporting extreme complexity in the cost, constraint, and optimization models under the hood), with user management taken to a whole new level. But, as we noted in Part III, it is also coming with a new analytics capability that finally places analytics on the other side of the two-sided advanced sourcing coin, a piece that has, until now, been missing. How? We’ll get to that, but first, as promised, a brief history of spend analysis.

In the beginning, spend analysis was, depending on who you asked, the set of canned OLAP-based spending reports that came with your sourcing, procurement, or analytics suite or the process of mapping Accounts Payable spend and “drilling for dollars” (because, if you drill deep enough, there is always oil, or value, to be found).

This worked great, until it didn’t. In fact, depending on the skill of the user operating the “drill”, the organization would identify savings for somewhere between six and eighteen months. After that, savings would dwindle off. Why?

Most of the platforms limited the user to variations of Top N reports, which could only be drilled on a pre-defined set of dimensions; scatter plots, that allowed the user to see pricing trends and variances; and year-over-year trend reports. Top N reports are only so useful as most buyers know 7 to 8 of their top 10 suppliers, categories, geographies, departments, etc. Scatter plots are only good for as long as the supplier is still under contract, as you never really recover overspend after the fact. And year-over-year can typically be produced by the AP or ERP system, possibly sans graphics, so how much do they really add in the spend analysis package?

In other words, there was a lasting value problem.

Unfortunately, this wasn’t the only problem. If it was, it might have been partially overcome by switching to a service model where every 18 to 36 months the organization worked with a service organization to identify top categories with top waste. But there was a bigger problem. And we’ll get to that tomorrow in Part V.

Trade Extensions is Redefining Sourcing, Part III

In Part I, we not only told you that Trade Extensions unveiled the upcoming version of their optimization-backed sourcing platform at their recent user conference in Stockholm, covered by the public defender over on Spend Matters UK, but we also told you that, with it, Trade Extensions are redefining the sourcing platform. But we did not tell you how.

Instead, we briefly reviewed the history of the sourcing platform, discussing the hallmarks of first generation, second generation, and (third generation) optimization-backed sourcing platforms.

Then, in Part II, we dived into some of the key features that are missing in many current platforms: namely usability, appropriate workflow, integrated analytics support, repeat event creation, visualization, and support for different types of users and collaborators. These weren’t all of the missing features, but some of the most significant ones. Moreover, for sourcing requirements to be adequately addressed, and the full power of advanced sourcing to be realized, a platform has to address these issues in a way that enables buyers of all levels of experience and capability to take equal advantage of the platform and realize equal savings for the organization with equal effort.

Trade Extensions realizes this, and is redefining their sourcing platform, and in some ways sourcing itself, so that, at least to a reasonable degree, it can not only overcome the limitations it has, but advance its capabilities to the next level. And it’s doing so in a way that makes advanced sourcing a natural exercise for every category — not just the high dollar, strategic, or complex.

So how is Trade Extensions achieving this?

First of all, their interface, re-designed in conjunction with a leading usability firm, is more user friendly to average and junior buyers. It’s so user friendly that few sourcing platforms can stand beside it from a usability perspective. With functionality grouped into key areas (and not modules), and lesser used advanced functionality buried under appropriate categories and subcategories, it’s easier to find what you need when you need it.

Moreover, since the platform is centered on fact-sheets, it’s quick and easy to access, create, import, and edit these sheets — as well as export them to Excel for editing (and re-import) — from anywhere in the sourcing process.

Secondly, instead of being centered around a relatively standard (but adjustable) workflow (with optional steps), the platform has been redesigned to allow the user to not only select the appropriate workflow, but define the workflow that is needed. That’s right, a senior buyer can define the appropriate workflow for any (and all) sourcing projects and then a junior buyer can follow it through, or even modify it slightly if needed.

And when we say the user can select any workflow that is required, we mean that. Whereas most platforms allow a buyer to select or ignore a pre-defined workflow action, every action in the new TESS Platform has been separated out as a workflow element that can be assembled in any order. If you want to start with an RFI, import historical and market data from spend analysis, run a what-if optimization, select a group of suppliers, go back to an RFP, push the data into an auction, push a subset of winning bids into a fact sheet for optimization, augment it with transportation data from third party carriers, run multiple optimization scenarios, copy one and manually modify it to create an award scenario, push the award into contract management, and create a contract, this custom configured workflow can be created and run as needed.

Plus, it can be duplicated, and modified, as many times as needed for similar categories. Moreover, not only can the workflow be customized as needed, but each step can be annotated and documented as needed. This makes setting up repeat events truly easy, since the workflow can be copied, and modified, with or without its data elements. So even though the workflow capability in TE is on par with the best, they are cranking it to 11.

And there is ample support for different types of users and collaborators. Whereas most platforms have a limited number of roles which are platform wide, the roles in the next generation of the Trade Extensions’ platform can be defined at the platform, project, or even workflow element level. This makes sure that anyone who needs access can get it, and get only the access they need.
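To illustrate what scope-levelled roles buy you — the real TESS permission model is not public, so the scopes, users, and roles below are all hypothetical — a minimal sketch where a grant can sit at platform, project, or workflow-element level and checks fall back from specific to general:

```python
# Hypothetical sketch of scope-levelled roles: a grant can be attached at
# platform, project, or workflow-element scope, and access checks resolve
# from the most specific scope outward. All names are illustrative.

grants = {
    ("platform", None): {"admin_user": "admin"},
    ("project", "emea_packaging"): {"consultant_a": "reviewer"},
    ("element", "rfx_scoring"): {"stakeholder_b": "scorer"},
}

def role_for(user, project=None, element=None):
    # Most specific scope wins; no grant at any scope means no access.
    for scope in [("element", element), ("project", project), ("platform", None)]:
        role = grants.get(scope, {}).get(user)
        if role:
            return role
    return None

role_for("stakeholder_b", project="emea_packaging", element="rfx_scoring")  # "scorer"
role_for("stakeholder_b")  # None -- access only where it is actually needed
```

A stakeholder scoped to one scoring element simply does not exist anywhere else in the platform, which is the "only the access they need" property described above.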

And while the usability and workflow elements are great, one of the best additions is the new integrated analytics capability — which, finally, gives us a solution where analytics and optimization go hand in hand, covering both sides of the advanced sourcing coin. How?

We’ll get to that, but first we’ll need to provide a bit of history on spend analytics … in Part IV.

Trade Extensions is Redefining Sourcing, Part II

In Part I, we noted that Trade Extensions are redefining sourcing with their TESS 6 sourcing platform, unveiled at their recent user conference (which has already been discussed by the public defender over on Spend Matters UK), and indicated that what they were soon to release was significantly beyond first generation, second generation, and even modern (third generation) optimization-backed sourcing platforms, even though we didn’t precisely say what that was yet.

We outlined first generation platforms, and what the glaring issues and omissions were; second generation platforms, and noted how the rigid workflows limited not only flexibility but capability; and optimization-backed sourcing platforms, which, while much more flexible and capable and powerful, are also much harder to use for the average user. And we noted that in today’s post, we’d give a little more background on what’s missing.

First of all, especially in first and second generation platforms, usability is missing. The more the user wants to do, the less they are able to accomplish. Workflows are rigid, features are limited, and the optimization model is often fixed. And in third generation platforms, even though the flexibility is there and the optimization model is fluid, the reality is that the more the user wants to do, the more math they need to know, the more formulas they need to write, and the more difficult it is to build the cost models and scenarios.

Secondly, the more involved the process, the harder it is to determine the appropriate workflow. Since the most powerful platforms are data centric with few controls, it’s just as easy to create a workflow that results in the core data being overwritten and destroyed, or one that results in an incomplete model, as it is to design the right workflow that produces the right model and the right scenario. Sometimes only the experienced can figure it out and get it right.

Third, most platforms centred around optimization have little or disassociated analytics support, when analytics and optimization need to go hand-in-hand in advanced sourcing. Even though Trade Extensions, like many leading platforms, has some analytics and reporting support, they are not currently best of breed in analytics. Optimization success requires appropriate cost models on accurate data in categories where there exists additional value to be captured. Without good analytics, the best categories may not be identified (and inappropriate categories may be rigorously pursued, wasting valuable time and resources to identify little or no additional value), the cost models may not capture all the relevant components at the appropriate level of detail to maximize the opportunity, and bad data can slip in and invalidate the entire result. Plus, forget about cleansing, mapping, and enriching without strenuous effort. (Considering this is the case in even most standard-fare analytics platforms, which claim to be best of breed, what can one really expect in a sourcing suite where analytics is just a component?)

Fourth, repeating events or creating similar events is typically quite cumbersome. Many platforms allow “copy”, but then a buyer has to go in and delete all the inappropriate scenarios, old suppliers, invalid starting bid data, etc. Some allow templates to be created, but these typically only capture the basic RFX questionnaires and cost models, and significant work is required to set up the timelines (which change little in phase duration), supplier pool, scenarios of interest, current constraints, etc. Trade Extensions allows templating, copying, initiation from fact sheets, and any combination thereof and is better than most, but there is always room for improvement and streamlining.

Fifth, visualization is limited, especially in first and second generation platforms. Most platforms limit visualization to standardized reports with graphics options limited to line charts, bar charts, and pie charts – which are not at all appropriate to visualizing global sourcing requirements and award scenarios. This is one area where Trade Extensions really shines, with Google Maps integration and the ability to plot global supply chains and proposed awards, but their dashboard reports are only average, and new 3-D charts are hitting the market.

Sixth, most platforms have limited support for different types of users and collaborators. While the lead buyer should be the one that builds and runs the event and identifies the award that is most likely the best, stakeholders need to weigh in on initial supplier identification, RFX response scoring, model design, constraints, scenario analysis, and contract proposals. Some collaborators can score, other collaborators can comment, and some users will need to create their own views, reports, and even scenarios. Depending on the affiliation of the user (organization or consultancy helping on the sourcing project) and their role, their access needs to be defined appropriately. Most platforms only have the concept of an admin or a buyer or a reviewer, and these are applied on a platform, and not an event, or task, basis. Again, Trade Extensions does quite well here, but anything to simplify user management for the buyer is a plus.

And so on. But these missing or incomplete requirements are key to advanced sourcing success.

For sourcing to advance, and the full power of advanced sourcing to be realized, a platform has to address these issues (and more) and do so in a way that enables buyers of all levels of experience and capability to take advantage of the platform and realize savings for their organization at an equal level. But this will only be possible in a platform that changes the way sourcing is supported, and that is why TESS 6 is redefining sourcing — so that, to at least a reasonable degree, it can overcome all of these limitations (and more). And do so in a way that makes advanced sourcing a natural exercise for every category — not just the high dollar, strategic or complex.

So just how is TESS 6 going to accomplish all of this?

We’ll finally get to that in Part III.