Category Archives: Technology

Don’t Get Misled By Overly Simplistic Comparisons!

A recent LinkedIn post comparing Coupa and Ivalua, which implies that the choice is always Coupa vs. Ivalua, or that Coupa is better, misses the point entirely. So much so that the doctor had to call it out (see the initial LinkedIn response here), because it ends up being very deceptive (even if that wasn’t the intent).

The post made a very simple comparison between Coupa and Ivalua, in big graphical format, that basically said the following:

Coupa                                     Ivalua
1 Billion in Annual Sales, inc. 2006      200 Million in Annual Sales, inc. 2000
Considered Innovation Leader              Can Be Customized to Specific Needs
Generally Good Customer References        Customers Have Mixed Success

So much wrong with this!

1) Revenue size is in no way indicative of a company’s ability to serve YOU in particular. As long as the company is financially stable and has enough support staff for an organization of your size, that’s all you should care about. (And it’s obvious they both do, since once a company surpasses 100 Million in annual sales, it can serve the vast majority of enterprise clients.)

1b) Neither is time in business relevant once the company has been in business long enough to have a mature solution.

2) “Considered the Innovation Leader” is either opinion, not fact, or bland marketing BS. By whom? The market at large? Well, guess what: in that scenario, neither Coupa nor Ivalua qualifies — Zip is the current darling of ProcureTech. (But don’t go there … please … don’t go there! [Or we’ll have to rip into that assumption too. For now, we’ll be content with reminding you that, despite what Zip claims, there are NO FREE RFPs.] To keep it short and sweet, Zip’s S2P capabilities are still relatively non-existent, as it was built as an orchestration platform to connect existing systems and make them work better, and what it offers to plug the gaps is nowhere close to Best in Class.)

3) As Joël was also quick to point out in the comments, good customer references depend upon who you ask, and many of us who have been in the space a long time know that both vendors have very happy customers, some unhappy (former) customers, and customers who are generally satisfied (but wouldn’t go out of their way to give a recommendation). At Spend Matters, where I developed the Source-to-Contract Solution Maps, Ivalua was top dog on the customer ratings in the first release and Coupa was average. As more references poured in, Ivalua dropped down to average and Coupa climbed slowly. In other words, both have great customer references, both have average customer references, and both providers have a customer base with mixed success. (And you can’t always blame the vendor for the success or failure; both sell very advanced solutions, and sometimes customers insist on a module they aren’t ready for.)

Furthermore, this comparison misses multiple key points that need to be taken into consideration in any comparison, including, but definitely not limited to, the following:

4) Simplification is key — and both platforms can simplify extensively! However, the approaches differ: Coupa, in simple terms, gives you default configurations that are easy to use and widely adopted; Ivalua built an infinitely customizable platform, and YOU have to work through the configuration process to make it simple. In technical terms, Ivalua was built for power users and Coupa for tech novices, but both can be configured to a middle ground.

5) There are more than 2 suites! While Coupa is a finalist in most deals (due to market size), depending on the industry and geography, the final “2” could also include SAP Ariba (yes, still), Jaggaer, GEP, Zycus, Oracle, Corcentric (Determine) or Synertrade, especially in enterprise deals, with another half dozen or so smaller suites emerging in the mid-market. And, for a subset of those deals, Coupa is definitely NOT the best. Sometimes it’s not even close!

5b) While Coupa is undisputedly one of the indirect (sourcing) market leaders, it is still very weak in direct sourcing compared to some of its peers (especially when compared to emerging players built for direct from the ground up). Classically, it had no direct support. The Trade Extensions acquisition gave it support in advanced sourcing and the Llamasoft acquisition gave it direct support in supply chain demand planning, but direct was never at Coupa’s core. For direct industries, that makes a difference. (To be fair, most of Coupa’s peers weren’t built for direct either, but Jaggaer acquired Pool4Tool, Ivalua acquired DirectWorks and rebuilt it in its platform from the ground up, GEP built NEXXE for supply chain to supplement the weak direct capabilities in SMART, and Synertrade was built from the ground up for direct – one of the few suites that was.)

I could go on, but, with over 666 companies to choose from, it’s never just Coupa vs. someone else, or Ivalua vs. something else. Sometimes neither of them should be in the room. Evaluate the alternatives. And do so after you know your core requirements, as those are what you need to narrow down to a relevant pool of providers.

You also need to consider your sources when you see very simplistic one-sided comparisons like these. While there may not be intentional bias, the relative knowledge the author has of different solutions will weight the comparison unless the author is an analyst who has rigorously, and objectively, evaluated each platform side by side on its technical merits alone! (Which the doctor did for six years in this case, along with many of the other big names listed above.) (The Spend Matters solution map was a deep technical solution map with over 600 areas of feature/function/process evaluation on the tech axis [and dozens of questions on the customer axis] for a reason. Comparisons are NEVER this easy between suites, and sometimes the usual market leader is, for your organization, the default market loser.)

In this situation, the post author’s company does a LOT of Coupa-related platform advisory, the post author has experience with Coupa that predates that from professional CPO (or equivalent) roles, and he is one of the few consultants out there with a good understanding of the Coupa platform. (And, by the way, there aren’t many of these consultants, especially when you consider that Coupa doesn’t really know Coupa anymore! The only two employees who knew the entire platform end-to-end, which comprises over 20 acquisitions over the years, left last year. The last few years also saw the departure of key personnel from the acquisitions that gave Coupa its advanced analytics, optimization, and risk capabilities. As for the doctor, he’s been following Coupa since Procurement Independence Day and has consulted for, advised on, or done diligence on half of its acquisitions over the years. He’s one of the few who probably now knows the core of Coupa better than Coupa does, and he knows when someone, like the post author, knows a platform well.)

So if you need help identifying the right vendors to consider, and guidance on how you should be comparing them, seek out the niche analyst firms and independent analysts who have been covering the space for over two decades — they’ll give you the right list of vendors to look at, the right factors to consider, and can even help you craft the right RFP. (Unlike the big firms, who just publish the same maps with the same vendors, whose rankings often just happen to be highly correlated to how much they pay the firm. [Remember, vendors have lured big analyst firms astray.]) And when you need help with a shortlist, seek out the consultants who have actually implemented multiple players on that list for their advice.

There Are Best-in-Class Solutions for End-to-End Indirect Sourcing Processes …

… you just have to do your research!

A while ago I posted that your standard sourcing solution doesn’t work for direct (because it doesn’t, and, relatively speaking, very few sourcing solutions do work for direct), and one of the comments I received implied that it doesn’t even work for indirect. And while some of the solutions out there are so minimal / antiquated / poorly designed that it can be considered a fair question (as there are certainly a number of solutions that would never make a recommendation list by the doctor under any circumstance), the reality is that there are lots of solutions that work well for indirect sourcing.

Now, if you are thinking about a best-in-class 7-step sourcing process, then it’s true that, on the supplier side alone, you might just need:

  • one module for supplier discovery
  • one for extensive supplier qualification from a 360-degree risk, compliance, sustainability, quality, service, etc. perspective
  • one for supplier onboarding and communications
  • one for supplier performance management and development

As well as a sourcing platform that supports:

  • multiple RFX Formats
  • fluid multi-round events
  • strategic sourcing decision optimization so you understand baselines, the cost of business rules, etc.

And possibly a separate best-in-class analytics solution that:

  • lets you dig deep into costs, trends, and outliers

And then an “orchestration” platform that:

  • helps you integrate them all so that all data is available in all platforms all of the time

So while you could need as many modules as steps, they exist, and you can build a fantastic solution for your organization and process and get great results. Just don’t expect it from an average suite (that won’t be BiC across the board, will only be tailored for large enterprise, and may only be super appropriate for certain industries). The sheer number of companies in this space (see the Mega Map) means that the odds of you not being able to put together a good solution are small (although it also means that the workload of finding those solutions is quite large, as it takes work to weed through 666 solutions, which is a number that is ruining Procurement, as Joël Collin-Demers indicates).
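To make the “all data available in all platforms all of the time” idea concrete, here is a toy Python sketch of the orchestration pattern described above. The `Orchestrator` class and the module names are purely hypothetical, invented for illustration; a real orchestration platform would synchronize over APIs and webhooks with conflict handling, but the principle is the same: one write, visible everywhere.

```python
# A toy sketch of the orchestration idea above: when any module writes
# a record, the orchestrator replicates it so that all data is
# available in all modules. Module names are hypothetical.

class Orchestrator:
    def __init__(self):
        self.modules = {}  # module name -> local data store

    def register(self, name):
        self.modules[name] = {}

    def publish(self, record_id, record):
        """Replicate a record to every registered module's store."""
        for store in self.modules.values():
            store[record_id] = record

orch = Orchestrator()
for name in ("discovery", "qualification", "onboarding",
             "performance", "sourcing", "analytics"):
    orch.register(name)

orch.publish("supplier-001", {"name": "Acme Corp", "risk": "low"})

# Every module now sees the same supplier record.
visible_everywhere = all(
    store.get("supplier-001", {}).get("name") == "Acme Corp"
    for store in orch.modules.values()
)
```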

The lack of solutions for indirect is not the problem; the lack of solutions that not only allow, but can be configured to enforce, a good process is!

More specifically, we are talking about mandatory dual-sourcing! Which, sadly, is still not being done in direct, even though JIT supply chains have been out the window at least since Eyjafjallajokull (remember that? it should have been the first push to start properly dual sourcing), with the situation getting progressively worse (on a sometimes daily basis) since March 2020. (Five years of natural and man-made disasters should be more than enough of a wake-up call, right?)

This is not something indirect has normally done, because the view has been “it’s a standard finished off-the-shelf product, I’ll just get it from someone else if I need to”, not recognizing that, for some products, 90% still ultimately come from a single country of origin (which is often China), and any disruption to that country (pandemics [as China’s often impossible zero-tolerance policy will close entire cities for months without any regard for the consequences to the rest of the world], border closings on key land routes, port strikes, and now extremely high [never seen before] tariffs) will jeopardize almost all supply. And for other products, buyers have chosen a smaller supplier with limited scalability and no nearby options (and resourcing will take time, as it will also involve rerouting and ripple effects through the supply chain — and this could also add to cost).
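The cost of a mandatory dual-sourcing rule can actually be quantified in a few lines, which is exactly why a platform should surface it. Here is a minimal Python sketch with invented prices, capacities, and a hypothetical 70% award cap; a real strategic sourcing decision optimization model would optimize over many suppliers, lanes, and constraints:

```python
# Illustrative sketch: quantifying the "premium" a mandatory
# dual-sourcing rule costs versus an unconstrained award.
# All prices and capacities are made up for illustration.

demand = 1000  # units required

suppliers = {
    "Incumbent": {"price": 9.50,  "capacity": 1000},
    "Alternate": {"price": 10.25, "capacity": 600},
}

def award_cost(split):
    """Total cost of an award, where split maps supplier -> units."""
    return sum(suppliers[s]["price"] * units for s, units in split.items())

# Unconstrained optimum: give everything to the cheapest supplier.
single_source = {"Incumbent": 1000, "Alternate": 0}

# Dual-sourcing rule: no supplier may carry more than 70% of demand.
dual_source = {"Incumbent": 700, "Alternate": 300}

# The premium is the price of resilience -- here, a known, bounded
# number you can weigh against the cost of a total supply disruption.
premium = award_cost(dual_source) - award_cost(single_source)
```

With these invented numbers the rule costs 225 on a 9,500 award, i.e. under 2.5% for a guaranteed second source.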

At the end of the day, the platform has to allow you to understand, track, and address your biggest risks, or, as we wrote sixteen (16) years ago (and stand by it to this day), your platform will be your biggest risk because it’s the unexpected that you don’t plan for that kills you, not the expected, no matter how severe.

And while this is not a risk-centric post (as we have written series on that), the largest cause of risk is not natural disasters (even though we now see dozens of major disasters every year, the reality is that most are still localized) or pandemics (while epidemics are increasing, true pandemics still work out to only be a twice-a-century event [although if we don’t step up our global management thereof, the rate will double]), but human-generated risks. Stupid humans create more risk and chaos than the planet does!

A Shiny New SaaS or AI Wrapper Doesn’t Make Tech Any Better

Just like painting a hammer bright shiny pink doesn’t change its fundamental function, putting a shiny new SaaS wrapper on a traditional desktop application, or adding a Gen-AI interface to allow for “conversational” interaction, doesn’t fundamentally change what the application can do.

What an application can do depends upon the data model it can support, the core algorithms that process that data, and the workflows that connect them together to take raw inputs and produce necessary outputs. If the data model is not sufficient, the algorithms not appropriate, or the workflow lacking, a shiny new wrapper won’t change anything … the software will be no more effective than the software it is replacing.

Pick any significant application, and the best results usually depend on intense or complex calculations, using a proper algorithm that works on a proper model populated by the right inputs; if any piece is missing, the solution doesn’t work.

In our area, it’s Source-to-Pay, and that starts with sourcing. In sourcing, the right decision is the one that results not in the lowest bid, but in the lowest lifecycle cost of the purchase, which takes into account not just unit costs, and not just shipping and tariffs and interim warehousing costs for landed costs, but also utilization/waste costs, local warehousing and inventory costs, (amortized) service costs, disposal costs, and even carbon costs if they vary by option. It considers all of the available product/SKU options, plants, shipping routes, and localized plant/warehouse/store needs, and uses optimization and analytics to identify the optimal award that minimizes the overall cost while maintaining service levels and minimizing risk.

If the solution doesn’t allow you to build the right models, collect all the options, identify the plants and routes, and determine optimal mixes that meet your criteria, then it’s not a modern sourcing solution, no matter how SaaSy it is, how new it is, or how much BS Gen-AI gets shoved into it. A good application solves your core problem. If it doesn’t do that, it’s not good. And at the end of the day, it doesn’t matter how slick and SaaSy it is, because if the only application that gets it right is a green-screen desktop application, then that is the best solution to your problem.
(We hope it’s not — but given how little there is behind many of these SaaS apps, which are built to look good by developers with little to no knowledge of the domain they think they can satisfy with simple algorithms, and which are sometimes just fancy interfaces to a classic desktop application wrapped in a web container that slaps a web-friendly API on the classic app and classic algorithm — we can’t say you won’t have to keep using that decades-old green-screen application.)
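To illustrate why the lowest bid is not the lowest lifecycle cost, here is a minimal Python sketch. All figures and supplier names are invented for illustration; the cost components mirror the ones listed above (unit, shipping, tariff, warehousing, service, disposal, waste):

```python
# Hypothetical example: the lowest unit bid is not the lowest
# lifecycle cost. All figures are illustrative, not from any
# real sourcing event.

def lifecycle_cost(bid):
    """Total cost of ownership per unit, summing the components
    discussed above (unit, landed, holding, service, disposal)."""
    landed = bid["unit"] + bid["shipping"] + bid["unit"] * bid["tariff_rate"]
    return (landed
            + bid["warehousing"]                 # local warehousing / inventory
            + bid["service_amortized"]           # amortized service cost
            + bid["disposal"]                    # end-of-life disposal
            + bid["waste_rate"] * bid["unit"])   # utilization / waste

bids = {
    "Supplier A": {"unit": 10.00, "shipping": 1.50, "tariff_rate": 0.25,
                   "warehousing": 0.80, "service_amortized": 1.20,
                   "disposal": 0.50, "waste_rate": 0.05},
    "Supplier B": {"unit": 11.00, "shipping": 0.40, "tariff_rate": 0.00,
                   "warehousing": 0.30, "service_amortized": 0.60,
                   "disposal": 0.20, "waste_rate": 0.01},
}

lowest_bid = min(bids, key=lambda s: bids[s]["unit"])          # "Supplier A"
lowest_tco = min(bids, key=lambda s: lifecycle_cost(bids[s]))  # "Supplier B"
```

With these invented numbers, Supplier A wins on unit price (10.00 vs. 11.00) but loses on lifecycle cost (17.00 vs. 12.61 per unit) once tariffs, holding, service, disposal, and waste are counted.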

At the end of the day, it’s algorithms that do the work, and the reality is that these are often algorithms that were developed decades ago by leading minds, stress-tested and sharpened by brilliant minds, proven to work, and just waiting for the computing power to catch up to where they need it in order to shine. (The best data structures and algorithms textbook ever written is over 35 years old. Most of the revolutionary developments came between the 70s and the 90s.) MILP is decades old, but we really didn’t have the computing power to solve large, complex, real-world models until about two decades ago (and then only if you didn’t mind waiting a few hours to a few days for a scenario to solve). But now we can solve them in minutes, if not seconds, and that allows for next-generation strategic analysis and planning, as long as you have a modern platform that uses a modern algorithm that can take advantage of multi-core cloud processing capabilities, the right data model, and the data inputs you need.

And therein lies the hitch — it all comes down to the data model, algorithm, and application design — not the UX, the intake and orchestration, or the “conversational” Gen-AI interface.

Remember this the next time someone tries to sell you a shiny new interface or an upgrade to what you have. Remember that most upgrades happen because software stacks change, functionality that should have been in the last release is finally added (since many SaaS companies now release untested alphas), or major security or performance issues are resolved. Now, you need the fixes for sure, but you shouldn’t be paying any more than the maintenance fee for those. If the vendor rolls them into “functionality updates”, you should insist on getting them for free. And if you got by without the missing functionality (either because you had complementary systems or added it yourself), then do you really need more untested functionality now?

And at the end of the day, the primary reason software stacks change is that if they didn’t, you’d have to buy a lot less tech, and then the investors wouldn’t make money. Not all tech stacks offer significant improvements in functionality or even security. They just allow developers to work on the new hotness and software vendors to force you into spending more money, without any guarantee of more value in what you’re delivered.

So don’t get fooled by new tech. Do your homework. Sometimes the best tech is the old busted hotness.

P.S. Yes, Joël, the number 666 is ruining Procurement*, but not necessarily, or just, in the way you appear to believe it is.

* see the Mega Map

Despite Attempts to Simplify It, There Are MANY Categories of ProcureTech Solutions

When selecting a ProcureTech Solution, you have all the following buckets:

Function × Classic Type × SaaS Category × Integration

Function:
  • Sourcing
  • SXM
  • CLM
  • Analytics
  • e-Procurement
  • Invoice-to-Pay
  • ESG/Sustainability
  • GRC
  • Category/Cost Intel
  • Niche (Legal, Marketing, Hospitality, SaaS/Tech, etc.)
  • I2O

Classic Type:
  • Best-of-Breed
  • Mini-Suite
  • Suite

SaaS Category:
  • Standalone App (full function)
  • Lightweight App (task specific)
  • Bolt-On (extends a module)

Integration:
  • Suite EcoSystem
  • I2O Ecosystem(s)
  • Open API

And if you do the multiplication (11 functions × 3 classic types × 3 SaaS categories × 3 integration options), that’s 297 combinations … and that’s just the tip of the iceberg when there are 10 core areas of SXM, multiple niche areas being addressed (some classic solutions were just for print/telco), multiple buckets of risk management solutions, generic and Scope 3-specific sustainability solutions, and different approaches to intake-to-orchestrate, and that’s just addressing the functional areas of Source-to-Pay+.
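For the skeptical, the multiplication can be reproduced in a couple of lines of Python by enumerating the cross-product of the four buckets above:

```python
# Enumerate every Function x Classic Type x SaaS Category x Integration
# combination from the buckets above and count them.
from itertools import product

functions = ["Sourcing", "SXM", "CLM", "Analytics", "e-Procurement",
             "Invoice-to-Pay", "ESG/Sustainability", "GRC",
             "Category/Cost Intel", "Niche", "I2O"]
classic_types = ["Best-of-Breed", "Mini-Suite", "Suite"]
saas_categories = ["Standalone App", "Lightweight App", "Bolt-On"]
integrations = ["Suite EcoSystem", "I2O Ecosystem(s)", "Open API"]

combos = list(product(functions, classic_types, saas_categories, integrations))
# 11 * 3 * 3 * 3 = 297 distinct solution "shapes"
```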

Then you have the situation where some vendors only offer a single best of breed (BoB) module, others offer a mini-suite, and others still offer a mega-suite with all of the core modules and often a half dozen more on top of that.

While most are SaaS apps these days, they vary from heavy standalone apps that implement full functions to lightweight apps designed for specific tasks (that are usually missing from larger standalone apps that purport to completely cover a function but don’t) to bolt-ons that offer advanced functionality, but require a core module to work on top of.

One also has to consider how you integrate them into a comprehensive workflow that supports Source-to-Pay+. Sometimes modules integrate into one or more suite ecosystems out of the box (like the SAP Store or the Coupa Store), other times they just come with a (semi-)open API, and now some, not built for integration, are integrating into one or more of the new orchestration ecosystems.

And while functionality should come first, you have to consider all of these other factors as well. If you select a suite for a module, you’re probably locking yourself into the suite’s other modules for anything else you need, due to cost and integration considerations. If you select lightweight or bolt-on apps, then you had better have something to integrate them into. And you had better be sure the ecosystem has all of the modules you will need to implement over the next five years or so before locking yourself into it.

So even though THE REVELATOR believes that everything is going to be a bolt-on or an app, and that’s all you’re going to have to worry about, unfortunately the ProcureTech world is NOT going to make it that simple. Overlooking the traditional categories and integration can completely destroy the value you require if you can’t easily integrate with complementary modules/apps (especially if you are in a [primarily] direct industry and need to integrate with supply chain applications for the data you need to make good supply-chain-aware decisions).

However, it will be interesting to see the primary solution category, breadth, and integration of ProcureTech Solutions (by, and independent of, function) in the future.

When Someone Says “Real AI”, Ask For Details!

We shouldn’t have to remind you, but since too many people are falling for, and buying into, the hype and selecting tech that does not, and cannot ever, work, we are going to remind you yet again.

Computers do NOT think!

To think is to direct one’s mind … where one is an intelligent being, not a dumb box. Computers thunk … they compute using algorithms (which are hopefully advanced and encapsulate expert guidance and knowledge, but that is far from guaranteed).

Computers do NOT learn.

Appropriately selected and implemented probabilistic / statistical / machine learning algorithms will improve their performance over time as more data becomes available, but they do not learn. To learn is to acquire knowledge (or skill), and, by definition, knowledge can only be acquired by an intelligent being.

Computer Programs Can Adapt …

but there’s no guarantee the adaptation is going to improve their performance under your definition, or even maintain it. Their performance could actually decrease over time.

What is critically important is that there are two primary types of algorithms that can be used to create an AI application:

Deterministic and Probabilistic

A deterministic algorithm is one that, by definition, given a particular input will, no matter what, always produce the same output, with the underlying machine always passing through the same sequence of states. As long as you don’t screw up the input, or the retrieval of the output, (and, of course, the hardware doesn’t fail), it is 100% reliable.

A probabilistic algorithm, in comparison, is an algorithm that incorporates randomness or unpredictability into its execution, and may or may not produce the same output given successive iterations of the same input. Nor is there even any guarantee that the algorithm will produce a correct, or even an acceptable, output a given percentage of the time. Well designed, these algorithms may allow for consistently faster computation, better identification of edge cases, or even a lower chance of error, on average, for a certain class of inputs (but with the caveat that other classes of inputs may suffer a higher error rate).

Deterministic algorithms can be relied on to execute certain tasks and functions autonomously with no oversight and no worry. Probabilistic cannot. In other words, you cannot assign a probabilistic algorithm a task for autonomous computation unless you can live with the worst possible outcome of the algorithm getting it wrong. And this is what Gen-AI, and most of today’s “AI” tech, is based on.
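A toy Python example makes the distinction tangible. The “pick the cheapest supplier” task, the bid data, and the weighting scheme below are all invented for illustration; the deterministic version answers identically every time, while the probabilistic version is usually right but can disagree with itself on the exact same input:

```python
# Deterministic vs. probabilistic: same task, very different guarantees.
# Supplier names and bids are hypothetical.
import random

bids = {"A": 102, "B": 99, "C": 101}

def best_supplier_deterministic(bids):
    """Always returns the same answer for the same input."""
    return min(bids, key=bids.get)

def best_supplier_probabilistic(bids, rng):
    """Samples an answer weighted toward cheaper bids; usually right,
    but successive calls on the same input can disagree."""
    weights = [1.0 / bids[s] for s in bids]
    return rng.choices(list(bids), weights=weights, k=1)[0]

rng = random.Random()  # unseeded: output can vary from run to run
deterministic_answers = {best_supplier_deterministic(bids) for _ in range(100)}
probabilistic_answers = {best_supplier_probabilistic(bids, rng) for _ in range(100)}

# deterministic_answers is always {"B"}; probabilistic_answers will
# almost certainly contain wrong suppliers too, because the bids are
# close and the sampling is nearly uniform.
```

This is the autonomy problem in miniature: you can hand the deterministic function a task unsupervised; the probabilistic one only if you can live with it occasionally awarding the wrong supplier.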

This is the critical problem with today’s AI tech and AI hype, especially when a probabilistic system can, by definition, use any method it likes to determine a probability (which may or may not be at all appropriate, since a model is only valid if it accurately captures the “population” dynamics) and may, or may not, be accurate. For some of these situations, neither the company nor the provider of the system will have enough historical data (market situation and outcome) to even attempt a reasonable prediction, and there definitely won’t be enough data to know the accuracy, because standard measures of model accuracy (like the Brier score) tend to require a lot of data, especially if you need to accurately identify rare events, as this could require 1,000 or more “data points” (which, in a typical market scenario, would require enough data to identify both the market condition and the unexpected change).
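For reference, the Brier score mentioned above is just the mean squared error between forecast probabilities and binary outcomes. A minimal Python sketch (with invented forecasts) shows why it needs many observations before it can say anything useful about rare events:

```python
# Minimal Brier score calculation, to make concrete why accuracy
# measures need lots of data for rare events. Forecast values are
# invented for illustration.

def brier_score(forecasts, outcomes):
    """Mean squared error between predicted probabilities and binary
    outcomes (0 = didn't happen, 1 = happened). Lower is better;
    0 is a perfect forecaster."""
    pairs = zip(forecasts, outcomes)
    return sum((f - o) ** 2 for f, o in pairs) / len(forecasts)

# A forecaster predicting a 1% chance of disruption every period, over
# 10 periods with no disruption, scores ~0.0001 -- near perfect.
always_one_percent = brier_score([0.01] * 10, [0] * 10)

# But one predicting 10% scores ~0.01 -- also near perfect. With rare
# events and only a handful of samples, the score cannot separate good
# calibration from bad; that takes hundreds or thousands of outcomes.
always_ten_percent = brier_score([0.10] * 10, [0] * 10)
```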

(And this is exacerbated by the reality that, for many of these situations, one could likely employ more traditional “statistical techniques” like trend analysis, clustering, classical machine learning, etc. to solve much of the problem at hand.)

It’s important to remember that the Gen-AI LLMs, which power most of the new (fake) agentic tech, are all probabilistic (and designed in such a way that hallucinations are a core function that CANNOT be eliminated), and much of it is complete and utter garbage for what it was designed for, and even worse for tasks it wasn’t designed for (like math and complex analyses). (Every day we see a new example of complete and utter failure, often due to hallucinations, of this tech. For example, you can’t even get a list of real books out of it — as per a recent contribution to the Chicago Sun-Times, which published a Summer Reading List of 15 books, only 5 of which actually exist. And then there are numerous examples of lazy lawyers getting raked over the coals by judges for using ChatGPT to do their homework and quoting fake cases!)

While we do need to augment purely deterministic tech with more adaptive tech that uses the best “statistical techniques” to more quickly adapt to situations, we need to spell out the techniques and restrict ourselves to what is now “classic machine learning” where the algorithms have been well researched and stress tested over decades (not modern Gen-AI powered agentic tech that has worse odds than your local casino). At least then we’ll have confidence and can enforce bounds on what the solution can actually do (to limit any potential damage).

Especially now that we finally have the computing power we need to effectively use tried-and-true “classic” ML/AI techniques that require large data stores and huge processing power for highly accurate predictions. The reality is that even though this tech has existed for at least 25 years, the computing power required made it totally impractical for all but the most critical situations. Twenty-five years ago, a large Strategic Sourcing Decision Optimization (SSDO) model would run all weekend. Today you can solve it in a few seconds on a large rack server (with 64 cores, GB of cache, and high-speed access to TB of storage). The fact that we finally have (near) real time capability means that this tech is not only finally usable in all situations, but finally effective.

[And if vendors actually hired real computer scientists, applied mathematicians, and engineers and built more of this tech, instead of script kiddies cobbling together LLMs they don’t understand, we would be a decade ahead of where we are today.]