Why the doctor Loves SolutionMaps, Besides the Obvious …

the doctor loves SolutionMaps. He loves them for many reasons.

But most important of all, it’s because of the scientist in him.

SolutionMaps are expert. And constant. And data-driven.

Unlike some analyst reports whose methodologies and “qualifications” often insult the name of the authors and parent firm (e.g., baseline revenue or geographic footprint requirements tailored to weed out smaller providers or the fiercest competitors), and whose participation and validation requirements change from report to report, SolutionMaps are based on rigidly defined capabilities (where every capability has a pre-defined scale to at least 3, if not to 5) that must be demoed for a 2 and rigorously demoed for a 3.

And these requirements will not change from iteration to iteration. More specifically, as long as the doctor is involved, the scale for any requirement will be static for at least one year, and while requirements may be added, they will persist for at least one year before being dropped. Furthermore, SolutionMaps are designed so that the average score for any vendor should be 2.5 to 3 (or less, depending on the “Map”), the number of 4s should be 10% or less, and the number of 5s should be 1% (or much less). And since functionality does not typically improve that much quarter over quarter, when we have to do a renormalization after a year or more (when we hit enough 5s and 4s), we can make a fair, equal adjustment against historical scores (which can’t be done when everyone starts at a 5).
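To make the renormalization idea concrete, here is a minimal sketch of what a fair, equal adjustment might look like. This is purely illustrative (the function, vendor names, and scores are hypothetical, not the actual SolutionMap mechanics): the key property is that the same shift is applied to every vendor, so relative positions on the historical scale are preserved.

```python
# Hypothetical sketch: when a capability's scale is tightened after a year
# (too many vendors hitting 4s and 5s), apply the SAME adjustment to every
# vendor's historical score so comparisons stay fair. Names are illustrative.

def renormalize(scores, adjustment):
    """Shift every vendor's score for a capability by the same adjustment,
    clamped to the 1-5 scale, so relative rankings are preserved."""
    return {
        vendor: max(1, min(5, score + adjustment))
        for vendor, score in scores.items()
    }

# The bar for this capability is raised: everyone drops one point, equally.
historical = {"VendorA": 5, "VendorB": 4, "VendorC": 3, "VendorD": 2}
renormalized = renormalize(historical, -1)
print(renormalized)  # {'VendorA': 4, 'VendorB': 3, 'VendorC': 2, 'VendorD': 1}
```

Note that this only works because the scale starts with headroom; if every vendor already sat at 5, no uniform shift could separate them, which is exactly the point made above.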

And, as we noted yesterday, these rigid objective scales not only allow an expert analyst to score providers consistently against a common goal, but multiple analysts to do so, because the baselines are all set in stone! Whether it’s the doctor, the prophet, the maverick, or the revolutionary doing the scoring, it’s all consistent on one scale.

Moreover, we know that technology is only half the battle, so half of the scoring is based on average customer reference scores, which are scored over a dozen or more factors, not just one or two! This also means that, just as in real life, there’s more than one right answer depending upon whether you value technology innovation or customer service more, or just want an equal mix. Plus, the ability to differently weight the analyst vs. the customer dimension allows vendor profiles to be constructed against different personas, because the right vendor changes based upon your actual needs.

But that’s not the only reason the doctor loves SolutionMaps. The other reason is that — they are surprising!

Given all the angst out there from some current and many former SAP Ariba customers, would you have expected them to score so well on the customer references? Not by a long shot! Obviously this is a love-it-or-hate-it vendor with nothing in between, and a lot of customers lovin’ it! (Kinda like McDonald’s.)

Also, I bet not a single person, including the entire analyst team, would pick EC Sourcing to be the value leader for Nimble Sourcing. But one thing is for sure: its days of being overlooked as a small best-of-breed vendor are now over! (Especially since you can set up decent-sized RFX projects in 15 minutes. If you don’t believe the doctor, ask for a live demo and see it for yourself.) Also, while we always knew it was the little-engine-that-could, no one predicted that Keelvar had made it so far up the hill. Sure, it still has a lot of work to do on rounding out its RFX and Auction features (for example, it’s easy to lock lots in optimization, but you can’t force vendors to fill all-or-nothing in setup), but it has hit every major optimization requirement, developed a novel approach to parametric bidding, and is currently weaving in AI in a novel way (which, when released in a quarter, could bump it even higher in the rankings).

But sourcing and optimization weren’t the only surprises. In analytics, large pieces of the pie went to, wait for it, Anydata and Spendency! While the analyst team expected SpendHQ to do well (even though the market may not have), these two were a bit of a surprise. Both are new players with small customer bases relative to the big boys, but they both made the grade, and impressively so.

In SRM, we have State of Flux and PRGX Lavante killing it on the capabilities front (even if not so much on the customer references, but customer response counts were low for State of Flux, who participated at the last minute, and PRGX forgot about the customer references entirely), and both are likely to fare better overall next iteration as more customer references come in. And from a reference perspective, Ivalua and SAP Ariba (yes, SAP Ariba again) are just killing it. Ivalua is really proving that a single-platform, home-grown S2P suite can do it all (well, almost all; they need work in optimization and direct sourcing, but then again, name any S2P platform that has both of these capabilities) and do most of it well. You can learn a lot of lessons in sixteen years, and Ivalua is showing it.

In other words, when you dive past the marketing, and the vision, and the pretty, pretty deck, and just get to the heart of the problem, or in this case, the solution, sometimes the results surprise you. But that’s a good thing. This knowledge benefits everyone and we don’t expect these (relative) rankings to be static. Not in the least.

the doctor looks forward to the next iteration of SolutionMaps, with another 8 to 12 vendors being added, and seeing how things shake out. It will be illuminating to say the least.

And to those vendors who didn’t fare as well as they expected, shake it off and get back to work. You know what you need to do to get a better score. Nothing is hidden here. You have the RFI. You have the scoring scale. You have our feedback. It’s all up to you!

Let’s Get Ready To Rumble!

While these words may have suited Michael Buffer well in his role as the exclusive ring announcer for WCW main events in the WCW heyday, these are not the words you should be uttering as you begin your quest for a new Supply Management Technology Solution. However, when one considers the way that many organizations go about their new technology acquisition process, it’s the words that ring loud and true in our ears.

Consider what typically happens:

  1. Teams scour analyst and consultant reports, sometimes constructed almost entirely on impressions based on PowerPoint presentations and customer reference calls, for potentially relevant vendors and build a starting list.
  2. They do a few Google searches, followed by one or two (Bada-)Bing(!) searches just for completeness, and add a couple more names.
  3. They do a few website reviews to narrow down to half a dozen vendors that look good.
  4. They send out RFIs (requests for information).
  5. The first three respondents are invited to an all out winner-takes-all Battle Royale.

And that’s what it is. The vendors fight it out until there is only one left standing, and then that vendor gets the deal.

Do you know what’s wrong with this picture?

Besides the fact that it shouldn’t be a battle but an effort to illustrate who solves your problem best?

First of all, Step 1: relying on analyst reports that are often based in large part on “expert” interpretation of providers, their products, and their users, rather than objective and transparent expert analysis of features and data. (A Nobel Laureate has some curious things to say on this very topic: ask an expert his opinion on a topic, and it’s often no more accurate than a non-expert’s, but ask the same expert to rate a capability against a defined scale and you get a far better result.)

Last year one of the big analyst firms literally said “we’re not doing demos anymore, just overviews and customer references”. That’s scary! Get a few customers where the blush has not yet faded from the rose to say “this is the greatest thing since sliced bread” and you can literally shoot decades-old tech and vapourware to the top of the rankings!

Second of all, assuming all providers of a certain tech that score equally on an arbitrary ranking scale are equal for your organization’s needs. Every organization is different and requires different levels of technology, process, service, and focus.

So what’s the answer? Better insight. What’s the form? Spend Matters SolutionMaps.
These are different, and that’s why the doctor has been collaborating on the development, scoring, and delivery of the Strategic Procurement Technology SolutionMaps Suite (designing and co-leading Sourcing, Analytics, and SRM and supporting CLM) for the past six months.

These maps rank vendors not on subjective impressions of an analyst’s predilection for the colour scheme used in the PowerPoint, but on hundreds (and in the case of Sourcing, thousands) of technical requirements, each of which has a hard scoring scale (from 1 to 5) that is rigorously defined (by the doctor in the majority of cases) to at least 3 (if not 5), with hard and fast “must have” requirements that leave no wiggle room.

And then, instead of combining these scores into one-size-fits-all scores for all vendors, they are combined in different ways (and weightings) into five different scores (and in the case of Sourcing, six) that map to the different personas that are representative of many of the Procurement organizations out there, namely:

Nimble: The need for speed

Dynamic, results-focused, limited IT department involvement, risk-tolerant of new approaches and providers; Often decentralized, rapidly growing, and/or middle market

Deep: A best practice team that demands the broadest and best tools

Highly sophisticated, rigorous, somewhat complex, risk-tolerant, happy to push limits of tech to create more value

Configurator: We are unique

Moderately to highly sophisticated; Unique process requirements from unique, often complex supply/value chains

Turn-Key: We care about results … not software

Outcome-focused; TCO approach to implementations; Often risk-averse and skeptical based on previous experiences

CIO-Friendly: We need to get IT on board

Strong IT backbone, high IT influence and investment for buying decisions; Big focus on security, standardization, control, and risk/compliance

Optimizer: We eat complexity for lunch.

Large, complex, and/or sophisticated organization with truly strategic SCM and Procurement functions that has already achieved all easily attainable improvements.

Plus, instead of assuming we know all, we also get deep feedback scores from customers across a variety of dimensions, which are not only weighted to the persona, but constitute half of the ranking as the final quadrants for each persona are analyst score vs. customer score (which are not co-mingled, and this is a key point).
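The two-axis, persona-weighted idea above can be sketched in a few lines of code. This is a hypothetical illustration, not the actual SolutionMap model: the dimension names and weights are invented, but it shows the key property that analyst and customer scores are weighted per persona yet never co-mingled, each persona yielding a separate (analyst, customer) coordinate pair.

```python
# Hypothetical sketch of persona scoring: analyst capability scores and
# customer reference scores are weighted per persona but kept on SEPARATE
# axes. Dimension names and weights below are illustrative assumptions.

def persona_score(capability_scores, customer_scores, weights):
    """Return separate weighted analyst and customer axes for one persona."""
    analyst = sum(capability_scores[d] * w for d, w in weights["analyst"].items())
    customer = sum(customer_scores[d] * w for d, w in weights["customer"].items())
    return analyst, customer

# A "Nimble" persona might weight ease of setup and quick deployment heavily.
nimble_weights = {
    "analyst": {"ease_of_setup": 0.6, "depth": 0.1, "optimization": 0.3},
    "customer": {"quick_deployment": 0.7, "service": 0.3},
}
capabilities = {"ease_of_setup": 4.5, "depth": 2.0, "optimization": 3.0}
references = {"quick_deployment": 4.2, "service": 3.8}

x, y = persona_score(capabilities, references, nimble_weights)
```

Because the axes stay separate, a vendor with strong tech but weak references lands in a different quadrant than one with the reverse profile, even if a blended average would be identical.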

And the end result you get is entirely different from a typical analyst report where the A(riba), B(ravoSolution), and C(oupa) suites always take all. For example, in Sourcing, EC Sourcing (yes, EC Sourcing) comes out on top as the most Nimble sourcing platform and Keelvar (yes, Keelvar) has almost caught up to the market leader (Coupa Trade Extensions) in the Optimizer persona. And you (more-or-less) get the expected A, B, C results when you go Deep. And rankings aren’t static across the Configurator, Turn-Key, or CIO-Friendly personas either. (And similar ranking shifts exist across Analytics and SRM too.)

In other words, in SolutionMaps, we’ve done the rumbling* for you and identified not only which vendors have which solutions, but which personas they best suit to help you identify the right vendors to invite to your RFI so you can focus on figuring out which one can provide you with the most value, not which one can survive the longest in a Battle Royale.

* And when the doctor says rumbling, trust him. Some vendors who participated in multiple SolutionMaps are as weary as he is, after multiple rounds in the ring.

It’s Hard to Find Fraud in Big Spend Stacks …

Let’s start with T&E spend. While most organizations might believe that this spend, which is primarily for low-value amounts on fairly well understood products and services, does not hide much in the way of fraud, that’s not always the case. Nor is the fraud limited to employees upgrading to business class, upgrading from rooms to suites, and spending a bit too much on drinks at the client dinner. (Even this can be very expensive: if this off-policy spend results in negotiated volume-based rebates failing to materialize, it can be very costly.) T&E spend can also contain:

  • the same receipt for a $500 business entertainment expense submitted two (three, or even five) times, one month apart, on different claims, and never noticed
  • a pet hosteling bill that looks just like a hotel bill
  • an invoice from Benny’s buddy Bob, who drove him to the airport, at 20% above market rates (instead of a licensed service at market rates)
  • double billing by your no-longer-favourite hotel, where a room is charged to your guest and then charged again on your tab (really hard to spot, especially when some rooms were picked up at your recent event and some weren’t)
  • collusion between an employee and a spouse who owns a travel “services” company can account for a lot of extra travel “services” billings that weren’t delivered
  • suppliers who know you have holes in your T&E monitoring can submit fake invoices for services never delivered
  • etc.

It’s really hard to find these low-impact fraud needles in a T&E haystack, but these needles can add up quickly — especially for products and services never even delivered! Only automated processing that can compare multiple entries across multiple dimensions and learn typical patterns can identify the majority of errant fraud that passes through your T&E system.
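A toy example of the kind of cross-entry comparison described above: flag expense lines with the same vendor and amount submitted on different claims within a rolling window, the classic duplicate-receipt pattern from the first bullet. The field names and the 60-day window are illustrative assumptions; a real system would learn patterns rather than use one fixed rule.

```python
# Hypothetical sketch: compare expense entries pairwise and flag likely
# duplicate receipts (same vendor, same amount, different claims, submitted
# within a 60-day window). Field names and window are assumptions.
from datetime import date
from itertools import combinations

def flag_duplicates(expenses, window_days=60):
    """Return pairs of expense entries that look like the same receipt
    submitted twice on different claims."""
    suspects = []
    for a, b in combinations(expenses, 2):
        if (a["vendor"] == b["vendor"]
                and a["amount"] == b["amount"]
                and a["claim"] != b["claim"]
                and abs((a["date"] - b["date"]).days) <= window_days):
            suspects.append((a, b))
    return suspects

expenses = [
    {"claim": "C1", "vendor": "Le Bistro", "amount": 500.00, "date": date(2018, 3, 2)},
    {"claim": "C2", "vendor": "Le Bistro", "amount": 500.00, "date": date(2018, 4, 1)},
    {"claim": "C3", "vendor": "Airport Taxi", "amount": 42.00, "date": date(2018, 3, 5)},
]
print(len(flag_duplicates(expenses)))  # 1: the $500 dinner claimed twice
```

Even this naive pairwise check is beyond what a human reviewer can do across thousands of claims a month, which is why automated comparison is the starting point before any learning is layered on.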

Moreover, as an organization learns to detect certain types of fraud, the fraudsters get smarter. No static system can keep up! AI-based systems are key to an organization’s success.

In particular, AI-based systems that can work on multiple types of spend. T&E is just one category. There’s also invoice data for sourced and procured products and services that can be six to eight times the T&E volume in an average organization. And when we go broad, there are even more options for creative fraud from less-than-honourable parties. For example, you could see things like:

  • $4.95K shipping fees for $5 items because the tolerances in the system don’t kick anything up for review with shipping less than $5K
  • invoices from fake suppliers with the same name as your tendered suppliers with faked registry numbers and different bank information for payment
  • invoices from corporates owned by spouses of employees for services not delivered submitted by the employees and approved by colluding associates doing the same thing
  • etc.

For some of these instances, humans have almost zero chance of surfacing the infraction when it’s 1 invoice in 1,000. A new solution is needed. A number of players are tackling the problem with modern AI solutions, but do the approaches have what it takes to find the gold in them thar hills? Only time will tell.
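The first bullet above, the $4.95K shipping fee on a $5 item, illustrates why absolute tolerances fail: the charge slips under a flat “review anything over $5K” rule. A relative check catches it. The sketch below is a hypothetical illustration; the 0.5 ratio threshold is an invented assumption, not a recommended policy.

```python
# Hypothetical sketch: an absolute "< $5K" tolerance misses $4.95K shipping
# on a $5 item, but a RELATIVE check (shipping vs. item value) flags it.
# The 0.5 ratio threshold is an illustrative assumption.

def flag_shipping(invoices, max_ratio=0.5):
    """Flag invoices whose shipping charge is disproportionate to item value."""
    return [inv for inv in invoices
            if inv["shipping"] > inv["item_total"] * max_ratio]

invoices = [
    {"id": "INV-1", "item_total": 5.00, "shipping": 4950.00},    # the fraud pattern
    {"id": "INV-2", "item_total": 12000.00, "shipping": 350.00}, # normal freight
]
print([inv["id"] for inv in flag_shipping(invoices)])  # ['INV-1']
```

Of course, once the fraudsters learn the ratio rule, they adapt, which is exactly why static rules ultimately lose to systems that learn typical patterns per category and supplier.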

Sourcing the Day After Tomorrow … Part XVI

In this series we have been reviewing sourcing today, the primary phases and sub-steps, and how they look strategic on the surface but often hide a lot of tactical work underneath. Moreover, sometimes “strategic” is simply a decision that is entirely based on the results of a sophisticated analysis that can be encoded in a very complex rule.

What does all this mean? It means that systems can do more of the work, and with next-generation sourcing systems, the strategic decisions will be made by expert buyers who know the market in ways designers of systems can’t. Expert buyers who can identify external stimuli that occur once every five to ten years and impact the market (stimuli a new system wouldn’t know about). Expert buyers who can better judge the impact of a new supplier that the system doesn’t have history on. Expert buyers who know the best way to handle unexpected demands or change requests in a negotiation process.

Strategic work will shift from data gathering to data analysis, where the analyst first learns to analyze the data gathered to better train and correct the system, and then to knowledge evaluation, where the analyst learns to identify the gaps in the analysis or the weightings that need to change. It’s going to become primarily an intelligence exercise, not an analysis exercise. Computers can do considerably more analysis and number crunching than we can in an exponentially smaller amount of time. As a result, more and more analysis will be given to the computers, and more and more intelligence will be expected of the user.

And the entire sourcing process will be affected. How much? In the beginning, more and more of each step, and then of each phase, will be automated. But then, in the longer term, the sourcing process will change and adapt to one that is more suitable for the knowledge-based endeavour that it is. What will this look like? Time will tell, but we have our ideas. And we will address them at a future time.

Sourcing the Day After Tomorrow Part XV

In this series we are doing a deep dive into the sourcing process today and, in particular, discussing what is involved, what is typically done (manually), and whether or not it should be that way. We have already completed our initial discussion of the initial project request review phase, the follow-up needs assessment, the strategy selection phase, the communication phase, the analysis phase, and the negotiations phase. Now we are in the final contracting phase. At first glance, it looks like this is the second most strategic and human-driven phase there is, second only to negotiation, as it is humans (and lawyers in particular) who typically define standard terms and conditions, humans who identify risk and mitigation strategies, humans who define obligations, and humans who analyze the contract for compliance to goals. But is this the case?

So in this final phase, the contracting phase, we have these final sub-steps:

  • Standard Terms and Conditions
  • Modification & Risk Mitigation to Supplier & Country
  • Key Metadata definition and obligation specification
  • Contract Analytics

If all of the standard terms and conditions are in existing contracts and the contract clause / template repository, there’s no reason that a system cannot automatically scan the contracts and repositories, identify the standard organizational terms in every contract, identify the standard terms for the category, and identify any terms, often not included, that would be relevant to the category. Probabilities can be applied and contract terms organized by weight. The buyer can then just bulk select or bulk reject the relevant clauses.

In the modification and risk mitigation step, a contract analytics engine can be applied to determine how well a particular clause addresses a certain risk of relevance to the organization based on context models and differentials. It can then compare that clause to the clauses that best address the risk and identify the necessary modifications, and do so specifically from a supplier or geographic context.

In the key metadata definition and obligation specification step, the goal is to identify the right metadata that needs to be tracked against the contract. This will be dependent on the terms and conditions, the goals, the obligations, and other key information that will be specific to the contract. However, contract analytics can identify, or at least suggest, much of this as well automatically based upon similar contracts, similar terms, similar goals, and similar obligations. This can greatly reduce the effort required by a buyer.

In the final step, the contract analytics step, the identification of risks, variances from a norm, and non-standard clauses can often be better identified by a contracts analytics engine that can cross-compare potentially risky clauses and variant clauses across hundreds, if not thousands, of contracts and identify deviations from the norm. A user just has to decide whether the variance is enough to be of interest to them, and properly setting a threshold can eliminate the majority of those variances that are not.
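A minimal sketch of this cross-comparison: measure how far each contract’s clause wording deviates from the repository norm with a simple token-overlap (Jaccard) similarity, and surface only clauses below a user-set threshold. The similarity measure and the 0.6 threshold are illustrative assumptions; a real contract analytics engine would use far richer semantic models.

```python
# Hypothetical sketch: flag clauses that deviate from the standard wording
# using token-overlap (Jaccard) similarity against a user-set threshold.
# The measure and threshold are illustrative, not a real engine's approach.

def jaccard(a, b):
    """Token-overlap similarity between two clause texts, in [0, 1]."""
    wa, wb = set(a.lower().split()), set(b.lower().split())
    return len(wa & wb) / len(wa | wb)

def flag_variants(standard_clause, contract_clauses, threshold=0.6):
    """Return clauses whose similarity to the standard falls below threshold."""
    return [c for c in contract_clauses
            if jaccard(standard_clause, c) < threshold]

standard = "supplier shall indemnify buyer against all third party claims"
clauses = [
    "supplier shall indemnify buyer against all third party claims",
    "supplier shall indemnify buyer against claims up to the contract value",
    "buyer waives all claims",
]
variants = flag_variants(standard, clauses)
print(len(variants))  # 2: the capped and the waived versions are flagged
```

Raising or lowering the threshold is exactly the user decision described above: it controls how much deviation is variant enough to warrant special consideration.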

In other words, at the end of the day, contract analytics identifies the majority of standard terms and conditions that are of interest, the majority of standard clauses that will need modifications to address supplier and country risk, the relevant metadata and obligations associated with the contract, and any clauses that can be considered variant enough to warrant special consideration.

The majority of the work can be automated with a good contract analytics engine — the role of the buyer is to apply their intelligence to determine how accurate and effective it is. As the buyer trains the engine, it will become more and more accurate over time and the strategic work will be reduced to hours, sometimes minutes for simple contracts, compared to days or weeks.

In other words, the more we explore the sourcing process, the more we find out how truly tactical, or at least automatable, the majority of it is.