Category Archives: Vendor Review

ScoutRFP – Spreading their Silicon Sunlight from the Western Shore

When we last covered ScoutRFP back in 2014, they were hoping to help laggard Procurement organizations leave the dark ages (Part I and Part II) and enter the modern age. Launching with nothing more than an easy RFP solution (built on technology that was already 15 years old at the time), ScoutRFP has taken off like a rocket in those organizations that needed an easy, lightweight solution for everyday events with a price tag they could afford.

The RFP solution was, and still is, 100% SaaS and designed to work with minimal inputs. It guided the user through a minimal workflow to create the RFX, select the suppliers, evaluate the responses, and make a decision. It was very flexible, allowing the user to create the RFX to the level of detail they wanted, or keep it high level (and cut and paste the instructions and questions from Word). And it gave the organization visibility into, and some control of, spend. The CPO could define a hierarchy and see what everyone was doing, the directors could see what their teams were doing, and the buyers could see their events — and all the reports could roll up as well. It was simple, but it hit the sweet spot of low complexity and low price for organizations trying to crawl out of the unlit Procurement dungeons.

It was such a hit that, based on this capability and reception alone, ScoutRFP was able to secure $2.75M of funding in 2015 (from NEA, Zapis Capital, and Google Ventures) to extend the platform, and to raise an additional $9M this summer in a Series A funding round. And move west (to San Francisco).

Since then, ScoutRFP has added basic e-Auction capability, project management and savings tracking, Supplier Information Management, and an improved Supplier Portal.

The platform now has the ability to track all requested, current, and upcoming sourcing events and their associated status; categorize the events using any desired organizational categorization scheme; quickly initiate new events (RFX or Auction) from the pipeline; and even auto-include re-sourcing events when contracts are set to expire. Requested events can come from any organizational stakeholder with budget or spending authority, and all spend can be placed under (minimal) management.

In addition to this new project management capability, the savings tracking capability can sum up all savings for a period of interest, in real-time, based upon (negotiated) price differentials and (expected/purchased) volumes, or savings numbers (to date) provided by appropriate Procurement or AP reps. The data is tracked in a drillable fashion and a manager can quickly see how the totals compare across categories, departments, and employees. This allows the manager to ensure that high-value categories get sourced first and that buyers who aren’t delivering value get trained (or replaced).
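
The underlying arithmetic is simple enough to sketch. Below is a minimal illustration of the rollup logic described above (price differential times volume, summed along each dimension); the field names and figures are illustrative, not ScoutRFP’s actual schema.

```python
from collections import defaultdict

# Illustrative event records -- field names are hypothetical, not ScoutRFP's schema.
events = [
    {"category": "Packaging", "dept": "Ops", "buyer": "A. Lee",
     "baseline_price": 4.25, "negotiated_price": 4.00, "expected_volume": 100_000},
    {"category": "MRO", "dept": "Ops", "buyer": "B. Cruz",
     "baseline_price": 12.00, "negotiated_price": 11.50, "expected_volume": 8_000},
]

def savings(event):
    # Savings = (baseline - negotiated) price differential x expected volume.
    return (event["baseline_price"] - event["negotiated_price"]) * event["expected_volume"]

# Drillable totals: roll the same savings up by category, department, and buyer.
totals = {"category": defaultdict(float), "dept": defaultdict(float), "buyer": defaultdict(float)}
for e in events:
    for dim in totals:
        totals[dim][e[dim]] += savings(e)

print(dict(totals["category"]))  # {'Packaging': 25000.0, 'MRO': 4000.0}
```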

The SIM functionality is basic. It allows the organization to track all supplier information of interest, tag the suppliers with key-phrases of interest (for quick selection by category, capability, geography, performance, etc.), and build lists for quick selection in sourcing events. There’s no scorecarding or performance monitoring, but it can be used as a supplier master, and it’s easy to get data in and out: supplier data can be loaded from existing platforms, and updated data can be pushed back out to those platforms, using the API. The platform also makes it easy to track supplier activity — events they participated in, questions they asked, bids they made, and so on.
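
As a rough illustration of what such an API round-trip could look like, here is a hedged sketch in Python; the base URL, paths, and fields are hypothetical stand-ins, since ScoutRFP’s actual API is not documented here.

```python
import requests

# Hypothetical endpoint and token -- illustrative only, not ScoutRFP's real API.
BASE = "https://api.example-scout.com/v1"
HEADERS = {"Authorization": "Bearer <token>"}

# Pull supplier master records out of the platform...
suppliers = requests.get(f"{BASE}/suppliers", headers=HEADERS).json()

# ...enrich or tag them locally (e.g., from another system of record)...
for s in suppliers:
    s.setdefault("tags", []).append("apac-packaging")

# ...and push the updated records back out.
for s in suppliers:
    requests.put(f"{BASE}/suppliers/{s['id']}", json=s, headers=HEADERS)
```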

In the current version of the platform, suppliers can have their own portal where all of the bids they have been invited to by all of their customers are accessible through a single login, or, if the supplier prefers [or customer(s) demand(s)], they can have a separate portal for each customer. The suppliers also have the same collaboration features available to the buyers and can invite their peers to collaborate on bids and survey responses.

The system is shaping up nicely, and for an in-depth dive on ScoutRFP and the platform, including its strengths and weaknesses, see the recent Pro series (Part I, Part II, and Part III) [membership required] over on Spend Matters by the doctor and the prophet.

Freightos: Still Flippin’ Freight Quotes Faster than a Fleet-Footed Feline on Guarana

When we last checked in on Freightos a year ago, they were serving up real-time freight quotes for global shipping and were just launching the marketplace where buyers could search by “lane”, see public freight quotes from forwarders serving those “lanes”, compare them, and book. (And a buyer can define a lane by zip code or city, and the software will automatically identify all relevant [air]ports.) Since then, the Freightos marketplace has been growing, and a few noticeable improvements have been made:

More, and bigger, carriers.

Now that big global companies have publicly announced their adoption of the platform — including Sysco, Marks & Spencer, and Panasonic US — bigger forwarders and carriers are signing up, and there are a plethora of good, competitive, economical options for all major lanes between Asia and North America. This includes complete multi-modal options from just about any zip code to any zip code in the regions of interest (and almost all major ports and major distribution centers are covered).

More refined cost tracking and rate comparison. 

Upon launch, Freightos provided buying organizations with the ability to upload all of their contracts and associated rates. The UI has since been improved, and it’s easy to compare the contract rate against the current market rate of a carrier, as well as the market rates of other carriers, side-by-side, and to see the relative delivery times that correspond to the rates. (The models break down the cost and delivery time components of each leg of the journey: truck to port, ocean or air cargo from port to port, truck to distribution center, etc.)

The detail provided on the quote breakdown is incredible compared to most platforms, which simply collect an all-in-one delivery fee for each segment and the government tariff rate(s). If relevant, the platform will break out the delivery fee (per unit), fuel surcharges, messenger charges, e-document charges (at origin and destination), lift gate charges, manifest system charges, customs charges, each export and import tariff, SOLAS administration fees, docking fees, temporary storage fees, freight station fees, pier pass fees, cleaning fees, chassis fees, handling fees, and local charges.
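
To give a sense of the data model implied by that level of detail, here is a minimal sketch of a quote broken into legs and itemized charges; the types and field names are illustrative, not Freightos’s schema.

```python
from dataclasses import dataclass, field

@dataclass
class Charge:
    name: str      # e.g. "fuel surcharge", "pier pass fee", "SOLAS administration fee"
    amount: float  # in the quote currency

@dataclass
class Leg:
    mode: str      # "truck", "ocean", or "air"
    origin: str
    destination: str
    transit_days: int
    charges: list[Charge] = field(default_factory=list)

@dataclass
class Quote:
    carrier: str
    legs: list[Leg] = field(default_factory=list)

    def total(self) -> float:
        # The all-in price is just the sum of the itemized charges on every leg.
        return sum(c.amount for leg in self.legs for c in leg.charges)

quote = Quote("ExampleLine", [
    Leg("truck", "Shenzhen factory", "Yantian port", 1,
        [Charge("pickup", 180.0), Charge("fuel surcharge", 22.0)]),
    Leg("ocean", "Yantian", "Long Beach", 14,
        [Charge("ocean freight", 1450.0), Charge("pier pass fee", 65.0)]),
])
print(quote.total())  # 1717.0
```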

Immediate Online Payment with Booking

Freightos can now collect payment immediately upon booking through the marketplace, which provides two major advantages over the initial version of the platform, where a buyer requested a quote, a supplier replied, and then a booking was made at a later time. First, the buyer gets the booking they need when they need it, with no fear of the lowest cost or preferred carrier maxing their quota (and the option disappearing because someone else selects and pays first). Secondly, since all marketplace payments flow through the platform, Freightos is able to offer the service free to buyers and at a low cost to service providers, who pay a small transaction fee (which should cost them much less than hiring multiple sales people to respond to offline RFQs all day with the same quotes cut-and-pasted into multiple Excel sheets of various formats).

More Powerful and More Responsive Drill Down Filters

Not only can you select/deselect ports, modes, forwarders/carriers, intermediate routings, and intermediate ports/distribution centers, you can also include or exclude additional requirements such as lift gate, cross-docking, etc. in your search and comparison. The platform is effectively doing hundreds of searches across (potentially) thousands of carriers with dozens of options in real-time.

Streamlined Document Management

The platform can store, index, and cross-reference all contracts and documents (such as insurance certificates, compliance certificates, etc.) related to all carriers used by an organization, and these can be easily retrieved when a quote is accessed, or managed through a carrier management interface.

A Full Featured API

You can include the power of their marketplace in your own sourcing application. You don’t have to use their web interface; you can embed the search functionality in any platform you are currently using to get worldwide shipping estimates and available carriers in real-time.
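
A hedged sketch of what embedding that search might look like; the endpoint, parameters, and response fields below are hypothetical stand-ins, not Freightos’s documented API.

```python
import requests

# Hypothetical endpoint and parameters -- a sketch of embedding marketplace
# search in your own application, not Freightos's documented API.
resp = requests.get(
    "https://api.example-freightos.com/v1/quotes",
    params={
        "origin_zip": "94107",       # lane defined by zip code
        "destination_zip": "10001",
        "load": "2xFCL40",
        "exclude_ports": "USLGB",    # drill-down filters from the section above
        "requirements": "liftgate",
    },
    headers={"Authorization": "Bearer <token>"},
)
for quote in resp.json()["quotes"]:
    print(quote["carrier"], quote["total"], quote["transit_days"])
```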

Freightos is getting very close to becoming the powerful freight management solution that will not only be Supply Management’s best friend but the default platform for all logistics tenders and spot buys performed by the organization. Stay tuned. We’re sure we will be hearing more from Freightos in 2017.

Trade Extensions is Redefining Sourcing, Part VII

In Part I, we not only told you that Trade Extensions unveiled the upcoming version of their optimization-backed sourcing platform at their recent user conference in Stockholm, recently covered by the public defender over on Spend Matters UK, but we also told you that, with it, Trade Extensions are redefining the sourcing platform.

Then, after discussing a brief history of sourcing platforms, and common limitations with most of the platforms on the market, we dived into many of the advancements Trade Extensions, despite already having one of the few third-generation sourcing platforms on the market, is making to take sourcing to the next level: numerous usability enhancements, composable workflows, centralized fact sheets with even more powerful processing, easy repeat events, and better user and collaborator management. We also noted TESS 6 is coming with a whole new approach to in-platform analytics, but held back because we first needed to provide a history of spend analytics (in Parts IV and V) and its shortcomings.

In our last post, we noted that the first thing Trade Extensions has done is to create a whole new analytics rule language that makes the definition of cleansing rules, enrichment rules, and the application of existing mappings and mapping rules a breeze. A single natural language rule with a single cleaning, enrichment, or mapping sheet can process an entire file of millions, tens of millions, or even hundreds of millions of transactions (as long as you don’t expect to view more than a million at a time). A few rules can link purchase orders, invoices, goods receipts, and payments and make it easy to see where the missing records are. Even a junior analyst can do it with ease.
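
The rule language itself is proprietary, but the record-linking logic is easy to sketch. Here is a minimal pandas equivalent, with toy data, of linking purchase orders to invoices and goods receipts and surfacing the missing records; it illustrates the idea, not TE’s syntax.

```python
import pandas as pd

# Toy data -- a sketch of the record-linking logic, not TE's rule language.
pos      = pd.DataFrame({"po_id": [1, 2, 3], "amount": [100, 250, 80]})
invoices = pd.DataFrame({"po_id": [1, 3],    "invoiced": [100, 80]})
receipts = pd.DataFrame({"po_id": [1],       "received": [100]})

linked = (pos.merge(invoices, on="po_id", how="left", indicator="inv")
             .merge(receipts, on="po_id", how="left", indicator="rcpt"))

# POs with no matching invoice or goods receipt -- the "missing records".
print(linked[linked["inv"] == "left_only"])   # PO 2: never invoiced
print(linked[linked["rcpt"] == "left_only"])  # POs 2 and 3: never received
```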

But this isn’t the only significant advance that TE has made. Realizing that the hard part is the creation of an initial set of mapping rules for a new or not yet analyzed data source, they’ve created a new approach to mapping. Getting the message that, as per yesterday’s post, the traditional secret sauce isn’t always enough and a more generic recipe is needed, they’ve adopted the generic recipe outlined by the doctor and taken it to the next level. (At Trade Extensions, all amps go to 11.)

Often, when p-card data or invoice stores are the only data that is available, all the user has is a vendor (short) name and an associated receipt with abbreviated line item info, or a vendor name and line item detail, and all the mappings have to be made on the product description field. In this case, rules need to be built up as mappings on single words or phrases, with double word or phrase overrides for more complex mappings, and further triple word or phrase overrides for sub (sub) categories, and so on. In practice, the user will define a key phrase (or a regular expression on a key phrase), look at the mappings, create a modification for special cases, and then move on in an attempt to identify another key phrase (that will trap a large number of transactions).
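
A minimal sketch of this build-up, assuming a longest-phrase-wins override scheme (the phrases and categories are illustrative):

```python
# Phrase mapping with more-specific overrides; longer phrases beat shorter ones.
rules = {
    "paper": "Office Supplies / Paper",                  # single-word baseline
    "xerox paper copier": "Office Equipment / Copiers",  # triple-word override
    "boxes": "Office Supplies / Filing",
    "junction boxes": "MRO / Electrical",                # double-word override
}

def map_description(desc: str) -> str | None:
    desc = desc.lower()
    # The longest matching phrase wins, so overrides beat baseline rules.
    hits = [p for p in rules if p in desc]
    return rules[max(hits, key=len)] if hits else None

print(map_description("XEROX PAPER COPIER model 1234"))  # Office Equipment / Copiers
print(map_description("A4 copy paper, 10 reams"))        # Office Supplies / Paper
```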

This works, but it’s a slow and cumbersome process. The small sample a user selects to try and manually identify keywords through a quick scan might not be representative, and the user might pick relatively low frequency words or phrases for the first few dozen, or hundred, rules and get nowhere fast*. So what’s the solution? AI? Definitely not. If there’s not enough data for you to always make a good decision, why would you blindly trust an algorithm (that may have been tuned on completely different data sets)?

The solution is AR (Automated Reasoning) and guided rule construction. In TESS 6, during the creation of an initial mapping file, the buyer can select a text column and the system will identify the most common words and phrases, in descending order, and guide the user through the selection and creation of appropriate mapping rules. The user will see, in a transaction file with a lot of office supplies, that “file”, “paper”, “pens”, and “boxes” appear frequently; “copy paper”, “ballpoint pens”, and “file boxes” appear less frequently; and “xerox paper copier”, “bull pens”, and “junction boxes” appear even less frequently. Each time a general rule is created, the user can drill into the (potentially) affected transactions, see the next most common set of (sub) words and phrases, and, if necessary, easily define an override rule of higher priority, and then either drill in further, or back up to the unmapped set. The user can, very quickly, map the transactions until 90% to 99% are mapped … to an acceptable accuracy.
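
The guided step is essentially frequency analysis over the still-unmapped descriptions. A minimal sketch, with toy data, of surfacing the most common words and two-word phrases in descending order:

```python
from collections import Counter

# Surface the most common words and bigrams in still-unmapped descriptions,
# so the user writes the highest-impact mapping rules first.
unmapped = ["ballpoint pens blue", "copy paper a4", "file boxes letter",
            "copy paper a3", "ballpoint pens black"]

words, bigrams = Counter(), Counter()
for desc in unmapped:
    tokens = desc.lower().split()
    words.update(tokens)
    bigrams.update(" ".join(pair) for pair in zip(tokens, tokens[1:]))

print(words.most_common(3))    # e.g. [('ballpoint', 2), ('pens', 2), ('copy', 2)]
print(bigrams.most_common(2))  # e.g. [('ballpoint pens', 2), ('copy paper', 2)]
```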

Moreover, the rules are always run on a file in order of the least number of affected transactions. Since each override rule is designed to apply to a smaller set of transactions, running the defined rules in order of the least number of affected transactions means all the (super) special cases fire first. It also means that if a rule defined early ends up firing before a later override, either the initial rule was not defined as generically as the user believed, or the nature of the data has evolved over time (and the mapping rules should be evolved as well). In addition, these mappings can be created in conjunction with one or more classification columns, such as vendor (if two different vendors use the same language to describe what are two different products to the buyer) or some other categorization code (from the accounting system or the ERP).
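
A minimal sketch of that execution-order idea, assuming rules are prioritized by how few transactions they match (most specific first, first match wins):

```python
# Count how many transactions each rule matches, run the most specific
# (fewest matches) first, and let the first matching rule win.
transactions = ["xerox paper copier", "copy paper a4", "junction boxes 40a", "file boxes"]
rules = {"paper": "Paper", "xerox paper copier": "Copiers",
         "boxes": "Filing", "junction boxes": "Electrical"}

match_counts = {p: sum(p in t for t in transactions) for p in rules}
ordered = sorted(rules, key=lambda p: match_counts[p])  # fewest matches first

def classify(t: str) -> str | None:
    return next((rules[p] for p in ordered if p in t), None)

for t in transactions:
    print(t, "->", classify(t))
# "xerox paper copier -> Copiers": the special case fires before the generic "paper" rule.
```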

Since many words and phrases are common, the reality is that even a million records can often be mapped 95%+ rather accurately (on a first pass) with only a few thousand rules. This can be done, by hand, in a few days, and be considerably more accurate than even the tenth pass over the data by a current generation automated mapping solution which has to be trained and corrected for weeks by the offshore data centre until enough accuracy is obtained that you could even consider looking at a spend report.

The rule language is easy. The interface is even easier. And the guided manual mapping capability puts analytics into the hands of every buyer. And since it’s all fact sheet based, the data can come from anywhere, anytime, be analyzed on the spot, and pushed anywhere it’s needed. It can come from the procurement system, be analyzed, and a subset used in an optimization scenario, or a set of award scenarios can be combined, analyzed and reported on. Data can flow back and forth with ease, and be classified and manipulated with ease.

It’s what analytics in a sourcing platform should look like. And it’s only in TESS 6.

* which is not a desirable state of affairs for spend analysis

Trade Extensions is Redefining Sourcing, Part VI

In Part I, we not only told you that Trade Extensions unveiled the upcoming version of their optimization-backed sourcing platform at their recent user conference in Stockholm, recently covered by the public defender over on Spend Matters UK, but we also told you that, with it, Trade Extensions are redefining the sourcing platform. But we did not tell you how, as we first had to review a brief history of sourcing platforms.

Then, in Part II, we built the suspense even more by taking a step back and describing the key features that are missing in the majority of current platforms — namely usability, appropriate workflow, integrated analytics support, repeat event creation, rich visualization, and support for different types of users and collaborators. While not everything a user would want, these are certainly among the features a user will need to realize the full power of advanced sourcing.

Then, after making you wait two days, in Part III we finally discussed how Trade Extensions are not only redefining the sourcing platform, they are redefining the advanced sourcing process itself! Customized advanced sourcing workflows, the next level of enterprise software usability, better user and collaborator management, easy workflow and event duplication, improved fact sheet management, and a new integrated analytics capability that makes analytics the other side of the advanced sourcing coin.

But instead of detailing the new integrated analytics, in Parts IV and V we took another step back and walked through a brief history of analytics, taking care to detail all of the issues with manual analysis — limited reports, even more limited value, accuracy issues, and mapping nightmares — and with automated analysis — poor clustering, poorer rules, and forced taxonomies — as there was no way to understand just what TESS 6 will be giving you without describing what is sorely missing. And now we are here. And here we will discuss how TESS 6 puts analytics side by side with optimization.

First of all, Trade Extensions has added a whole new analytics rule language. The current platform supports the construction of advanced formulas and rules for optimization using an advanced formula language, which can also be used for analytics, but that language was not designed for cleaning, mapping, and enrichment. The new rule language makes it easy to clean data by replacing erroneous values with valid values using validity tables, which can be stored and manipulated in standard fact sheets. All the user has to do is select the columns to be cleaned in the current sheet, select the sheet with the valid data values, and specify the column mappings. Enrichment is just as easy: pick the values (columns) that are required, pick the relationship columns (that allow the values to be related), and create another rule. A single rule. Each rule, expressed in natural language, runs over an entire (set of) sheet(s), making rule creation simple and elegant.
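
In pandas terms, the cleaning and enrichment operations look roughly like the sketch below; TESS 6 expresses each as a single natural-language rule over fact sheets, so this is an illustration of the logic, not the rule language.

```python
import pandas as pd

# Toy fact sheet with an erroneous supplier value.
spend = pd.DataFrame({"supplier": ["Acme Inc", "ACME", "Initech"],
                      "amount": [100, 50, 75]})

# "Validity table": erroneous value -> valid value.
validity = {"ACME": "Acme Inc"}
spend["supplier"] = spend["supplier"].replace(validity)

# Enrichment: pull required columns in from another sheet via a relationship column.
supplier_master = pd.DataFrame({"supplier": ["Acme Inc", "Initech"],
                                "region": ["EMEA", "NA"]})
spend = spend.merge(supplier_master, on="supplier", how="left")
print(spend)
```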

But most importantly, it not only makes mapping (which can be done in the same manner as cleaning and enrichment) easy, it makes initial mapping easy.

As per our last post, initial mapping can be a nightmare, unless you have the secret sauce, which, as has been discussed many times here on SI, is:

  1. map the vendors
    as many vendors only supply a single category
  2. map the GL codes
    since they usually map to a category or subcategory, even though they are not that useful for spend analysis
  3. map the vendors plus GL codes
    since this gets you a subcategory or a sub-subcategory
  4. map the exceptions
    where something is mapped according to GAAP and not according to a meaningful spend category
  5. map the exceptions to the exceptions
    where something gets tacked on to a category because that’s where it seemed to fit

And if you have the vendor and GL code data, and a tool that makes it easy to map the vendors, GL codes, and (override) combinations, and then drill into each mapping and find, and map, exceptions, this works rather well. But the tool doesn’t always support this (or the concept of rule priority, where the rules have to be applied in reverse of their creation order to avoid mis-mappings), and you don’t always have the vendor or GL code, especially if the file is from a P-Card system, procurement system, or other non-accounting system. So the secret sauce is a bust.

And when the secret sauce is a bust, most systems fall flat on their UI faces. That means you need a secret sauce that is not file or system dependent. And the means to implement it.

Fortunately, there is a secret sauce that is system independent, and it’s not that hard to make. (Although you’d think otherwise as the doctor has been trying to teach the recipe for years, with no success … until now.)

If you look at the abstract baseline of the above algorithm, it’s:

  • map a primary catch-all categorization field
    so that all transactions can be mapped to a category, even if it is to “Other”, which identifies transactions that need (better) mapping
  • map a secondary catch-all categorization field
    so that if the primary field is empty, the secondary field maps an otherwise unidentified transaction
  • map the primary and secondary field pairings
    where doing so is more accurate than either field on its own
  • map exceptions to the field mappings
    based on a third field
  • map exceptions to the exceptions
    based on more detailed rules and/or other fields

which, if need be, abstracts even more to:

  • create a (set of) baseline catch-all rule(s)
  • create a (set of) baseline catch-all backup rule(s)
  • create a (set of) baseline rule(s) that work on field pairs and/or dual descriptor pairings
  • create a (set of) detail rule(s) that catch exceptions
  • create a (set of) detail rule(s) that map any special cases (that materialize as exceptions to the exceptions)

And when the rules are run in consistent reverse order of definition, you get consistent, accurate mappings (that can be corrected if new exceptions arise, and that can fall into an “other” category if new, otherwise unidentifiable, transactions appear). And this is key.
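
To make the recipe concrete, here is a minimal sketch of such a rule cascade, with rules stored in definition order (most generic first) and evaluated in reverse so the most specific rule wins; the vendors, GL codes, and categories are all illustrative.

```python
# Rules stored in definition order (generic first), evaluated in reverse,
# so exceptions-to-exceptions fire before exceptions, pairs, and catch-alls.
rules = [  # (tier, predicate, category) -- all names illustrative
    ("catch-all", lambda t: True,                         "Other"),
    ("baseline",  lambda t: t.get("vendor") == "Staples", "Office Supplies"),
    ("backup",    lambda t: t.get("gl_code") == "6100",   "Office Supplies"),
    ("pair",      lambda t: t.get("vendor") == "Staples"
                            and t.get("gl_code") == "6200", "Office Furniture"),
    ("exception", lambda t: "couch" in t.get("desc", ""), "Retail Furniture"),
]

def classify(txn: dict) -> str:
    # Reverse order of definition: the most specific matching rule wins.
    return next(cat for _, pred, cat in reversed(rules) if pred(txn))

print(classify({"vendor": "Staples", "gl_code": "6200", "desc": "desk"}))  # Office Furniture
print(classify({"vendor": "Staples", "gl_code": "6100", "desc": "pens"}))  # Office Supplies
print(classify({"vendor": "Unknown", "desc": "misc"}))                     # Other
```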

Why? Come back tomorrow for Part VII!

Trade Extensions is Redefining Sourcing, Part V

As per our last post, we began this series by informing you that Trade Extensions unveiled the upcoming version of their optimization-backed sourcing platform at their recent user conference in Stockholm, recently covered by the public defender over on Spend Matters UK, and that, with it, Trade Extensions are redefining the sourcing platform. We then reviewed a brief history of sourcing, identified the gaps in the majority of platforms, and went on to describe how Trade Extensions, despite having a leading third generation solution, are finding ways to make their platform, and the user experience, even better.

We described a number of their improvements in Part III, namely in platform usability, workflow, user management, and repeat event support, but held back on describing the improved analytics support as we first needed to review a brief history of spend analysis, which we started in Part IV. Today we continue that review so we can clearly describe the leap forward that TESS 6 is bringing to the market.

Yesterday we noted that, depending on who you asked, spend analysis began either as the set of canned OLAP-based spending reports that came with your sourcing, procurement, or analytics suite, or as the process of mapping Accounts Payable spend and “drilling for dollars”. But the definition didn’t matter, as both had a lasting value problem. It didn’t take long to identify the few value-generating sourcing opportunities the organization didn’t know about, and after that the value was limited at best. But that wasn’t the only problem.

There was (and is) also an accuracy problem. Namely, spend reports are only as accurate as the data that populates them, and if this data is not properly mapped, the spend reports are all out of whack. This happened a lot when the mappings weren’t done by a true spend master highly familiar with the organizational data and the tool. (In the beginning, there was no automated mapping technology.) If the mapping rules were poor, or full of conflicts (and applied in random order), mappings would be poorer, or almost random. And this leads us to the big problem.

Accurate manual mapping, in just about every spend analysis system ever created (with the exception of BIQ, which was acquired, absorbed, and, for reasons unknown, pretty much retired by Opera Solutions), was difficult, if not impossible, on data sets with millions, if not hundreds of millions, of transactions. In most systems, you selected a transaction, or a set, created a mapping rule based on one or more fields, possibly using a regular expression on text data, and added it to the rule set. You continued until you believed that the rules would map most of the data, ran the rules, and totalled the mapped spend. If the mapped spend was deemed enough to do an initial analysis (90%+), the mapping exercise stopped; otherwise it continued.

Since meaningful sorting and grouping was difficult, if not impossible, due to the lack of meaningful mappings, it often took weeks to create an initial mapping file (even though a good mapping tool in the hands of a pro could allow 95% of a Fortune 500 company’s spend to be mapped in two days, but that only ever happened with BIQ in the hands of a true expert), and, to top it off, it was often riddled with errors. Most (untrained) analysts would create mapping rules that were too general and would inadvertently map extra transactions to each category with each initial starting rule. (For example, “xerox paper” would map “xerox paper copier” to the paper category, where “xerox paper copier” clearly doesn’t belong.) And this wouldn’t be detected until a “real-time” report was presented in an executive meeting (and located on drill-down). Other rules would miss transactions. For example, the analyst would map “office chair” to the office furniture category, not realizing that some buyers labelled an office chair a “leather backed chair”; those transactions would instead be mapped to retail furniture by the “leather backed” mapping rule, which the organization had in place because it buys “leather backed couches” to sell to the market.

And the purported solution of automated mapping only made matters worse.

First of all, most first (and even second) generation spend analysis engines with automated mapping capability used naive statistical approaches which used “dumb” clustering to group what the algorithm thought were related transactions. So, since “xerox paper copier” was similar to “xerox paper shredder”, if the thresholds were low enough, they’d both be mapped to the same subcategory of general office equipment, when they should be mapped to separate sub (sub) categories since they are quite different (electronics vs cheap mechanical shredders).

Secondly, these automated mapping systems would allow users to create override rules to complement the rules that were automatically created, but they wouldn’t necessarily ensure the rules got applied in the right order, so each execution could see the same transaction mapped differently.

Third, these systems would pretty much require the organization to adapt its spend taxonomy to the classification ability of the tool, as the tool would rarely adapt to the taxonomy of the organization, and this is just not the proper way to do spend analytics.

And while a few of the newer automated spend mapping solutions are improving on this (deep machine learning algorithms, user defined knowledge models, etc.), they still have their faults (but that’s a discussion for another series of posts).

In short, sourcing analytics has historically not worked well for advanced sourcing, and certainly hasn’t been the other, equally important, side of the advanced sourcing coin.

But TESS 6 is about to change all that!