
Want Procurement Technology Success? This is Your Anthem!

Show Don’t Tell by Rush

From now on, when you are contacted by a vendor, the first things you do are:

  • refuse to listen to (or read) any marketing material, especially if it uses buzzwords (in fact, keep a prepared list of buzzwords beside your phone and point out that you will hang up the minute they say one, because it’s all Hogwash)
  • refuse to review any “case studies” until you see the solution in action, and even then refuse them unless they contain hard numbers based on standard metrics that you can compute yourself pre- and post-implementation (a client stating they saw “significant” savings is NOT a case study, it’s hearsay)
  • refuse their pre-recorded / scripted 30-minute intro demo until they tell you, in plain English, what problems their platform solves and why they think it will help you (and maybe even ensure the contract they will ask you to sign is also in plain English)

Once the vendor tells you:

  • what procurement problems and pain points their platform solves
  • how it does it (no techno-babble bullcr@p, plain English; no mention of AI, and definitely no mention of Gen-AI [which is bad for Procurement] because if it uses real ML then they can tell you what algorithms they use, and what confidence you can have based on what level of data is available)
  • and why the vendor is the right vendor to deliver the solution to you

Then you agree to a demo that covers the specific pain points you want addressed, provided that, if you like what you see, you can take it for a test drive. Nothing stops a true multi-tenant SaaS solution provider from spinning up a sandbox instance for you to play with, at zero cost, as it should literally be the flick of a switch (or letting you run one real sourcing event at a trial cost). The only significant charge should be if you want to see it in action on your data, in which case you should be prepared to pay a standard daily consulting rate to load your data (but, to be honest, you shouldn’t do this until you are sure the vendor is going to be the finalist, or the runner up, in your selection process as a final step, since a test drive on standard dummy data will illuminate most of the functionality supported).

You need to see a platform in action to understand if you can, and will, use it to solve your problems.

So, if it isn’t already, your new mantra for procurement software solution providers is Show Don’t Tell!

M&A Mania is Coming Again … but will it be the same as last time?

the doctor agrees with THE PROPHET that M&A in Procurement, Supply Chain and Finance Tech is Back On For Q4 and 2025, because M&A Mania is part and parcel of The Marketplace Madness that the doctor told you back in May is coming back. The only question is, will this M&A cycle look like the last few: during Covid (when every investment firm had to have an online collaboration platform, since they couldn’t do business in person, and an online e-Payment FinTech solution, since they still needed to make, and most importantly receive, payments) and in the late 2010s, when companies were getting scooped up left, right, and centre? It was kind of like that first year in Chemistry where you were told to look to your left, look to your right, and look in the mirror and realize that only one of you would survive to the end of the course (except the odds had worsened and there was only a 1/6 chance that any of you would be left standing at the end of the M&A cycle and less than a 1/9 chance that more than one of you would be left standing).

But first, let’s review THE PROPHET’s reasons why:

Reduced interest rate climate coming
Not necessarily in your country, but in the US and a few other major investment markets, and for global funds, that’s enough.
Valuations back up (including a recent one)
the doctor is seeing a bit of this beyond just over-hyped fake-take and (now failing daily) Gen-AI, which indicates that a return to value for real solution capability that solves real problems, and not just glam UX or tech buzzwords, could soon be coming.
Dry powder is the size of an ammo depot
And this is a rather conservative estimate. Broaden your definition of our Source-to-Pay space, and it could go well beyond the 666 providers in the mega-map.
Constrained target/asset pool to pursue
Too many providers not focussed on Gen-AI bullcr@p were not (well) funded and are in need of funding to grow, and too many providers who raised too much on Gen-AI bullcr@p blew too much on failed dev and marketing and need someone to infuse them with fresh funding while taking the reins and refocussing them on core problems.
No clear leader in many markets
Even if you constrain by target enterprise size, vertical groupings, and module, you’re usually looking at over a dozen vendors. Too many. By core module alone, you’re usually looking at over eighty (80) potential providers.
Counter-cyclical sector defensibility as a hedge
Most definitely. the doctor has always said the best time to develop/expand is on the verge of a coming financial or supply chain crisis, and it’s even better if it corresponds with the end of a hype-cycle (when everyone realizes that grandiose claims are just that, claims, usually not realized, and that it’s time to return to the next generation of tried-and-true technology).
Times of increasing global uncertainty favour supply chain, supply, and supplier risk management
Yes, and this will be constant for years. The outsourcing crisis the doctor and a handful of others have been predicting for over a decade (which is why he was telling you to near-source and home-source in the late 2000s) materialized during COVID. Anti-globalization is at a high not seen in the remembered lifetime of most of the global population (and increasing by the day). We likely haven’t been this close to World War III since the Cuban Missile Crisis of 1962 (since the Soviet radar malfunction of 1983 was caught by an alert Soviet air defence forces officer), putting global political tensions at a near all-time high since World War II. Ever-increasing natural disasters and supply shortages are escalating costs at levels of inflation not seen since the 1970s, and in some markets, since the late 1920s (and the Depression era). It’s just doom and gloom all around, and only our space has the tech to combat this.
Corporate spend flowing into tech, not new jobs
This is unfortunately true since

  • most executives don’t realize that tech only increases productivity and success in the hands of a human, it doesn’t replace them (since Artificial Idiocy can’t even replace real idiocy, how can you expect it to replace Human Intelligence [HI!])
  • big companies don’t like high fixed costs, and they see people as the highest fixed cost
  • the dream of the new robber baron billionaires is to replace people with machines, which they think will help them realize their vision of constantly increasing profits from constantly increasing revenue (from a workforce that never needs to take a break) at a constantly declining cost to serve (not possible, but that’s their dream)
Nearly all big tech firms (ERP, business applications and stack) aside from SAP have not made any material moves yet — and will need to at some point
You can’t wait for a lumbering giant … by the time they buy someone, it’s ready for sunset. Remember IBM and Emptoris? A sad end to the APE circus! That means that the time to strike as an investor is before they awake!

And add the following:

  • money has been idling in these funds from lack of investment over the last couple of years (as they got antsy last year with the predicted recession and the SVB failure and the fallout of both), and their investors aren’t happy
  • many of the more progressive funds have realized that fintech is useless if there’s no money moving through it, which means you have to look for broader business solutions that can assure the flow of money as well as information
  • companies are starting to realize that ridiculous 10X, 15X, 20X valuations are a thing of the past (or at least until we get a whole new generation of freshly minted investors who didn’t bother to study their history, like the new generation of founders who didn’t study theirs), and that if you can get a solid 5X to 7X valuation for tech (which is the most a company can expect to realize at an aggressive 40% annual growth rate, itself the most a company can realistically hope to sustain), that’s great; this makes acquisitions a lot more attractive than during the last cycle, when you’d have to bid 10X on something that might not scale just to get invited to the table as an investor (the quick back-of-the-envelope sketch below shows the math)
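To put quick numbers behind that last point (a back-of-the-envelope sketch; the five-year horizon is an illustrative assumption): 40% annual growth compounds to roughly 5.4X over five years, while a 10X outcome would require sustaining roughly 58% annual growth, and a 20X outcome roughly 82%.

```python
# Back-of-the-envelope check on the valuation math above: what revenue
# multiple does a sustained annual growth rate actually compound to over
# a typical 5-year investment horizon? (Illustrative figures only.)
for growth in (0.40, 0.58, 0.82):
    multiple = (1 + growth) ** 5
    print(f"{growth:.0%} annual growth for 5 years -> {multiple:.1f}X")

# 40% annual growth for 5 years -> 5.4X
# 58% annual growth for 5 years -> 9.8X
# 82% annual growth for 5 years -> 20.0X
```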

The M&A market is returning. But there will be some differences this time. The last two times, it was a valuation run-up until the money ran dry or there were no companies left that were worth it. This time will be more reminiscent of the first M&A Mania to hit our space in the late 2000s, and it will come with a little kiss, like this:

1. Valuations will be more realistic.

As simply stated, 10X, 15X, 20X growth doesn’t happen in five years for anything but a Unicorn, and even then it’s rare, and investors aren’t going to pay for it any more. That being said, they will invest for value, and firms who focussed on building real solutions, not slick UX with no substance, will be valued quite well (at first).

2. The cycle will have 3 parts.

2A. Existing Growth Opportunities

Look for PE firms to buy suites or modules that can be sold and grown stand-alone or as complementary solutions to offerings in their stable. The market for these solutions could mature quickly as the Gen-AI and intake hype cycles crash and the global situation destabilizes and risk-focussed Sourcing and Procurement become paramount. This will be done at fair to very good valuations, depending on the offering and the financial situation of the firm being acquired … those that can wait and play the field will get better valuations.

2B. Fill the Gaps

As new competitors enter the scene, existing providers with aging tech are going to want to counter them and will start buying up point-plays to fill the gaps. This will take two forms.

  1. stable, stand-alone players who can survive without investment will wait for the right offer, get a very good to great valuation, and survive relatively unscathed in personnel and offering (and will continue to be available standalone for some time)
  2. cash-crunched desperate players who won’t survive long without a cash infusion will be bought in a fire sale, folded in quickly, and only key personnel will remain

2C. Liquidation Opportunities

Everyone loves a steal, err, deal. Investors included. As companies start to run out of money left, right and centre because they were underfunded (and struggled to compete with the overfunded overhyped companies) or overfunded and burned money like it grew on Central American fruit trees that produce two healthy crops a year, investors and buyers will be looking for companies with pieces of tech they can use to enhance their offering for pennies on the dollar. These companies will be broken up across talent and technology, with the acquirer keeping only what they want.

Dear SaaS Provider, Where’s Your Substance? Being SaaSy is No Longer Enough.

As per our January article, Half a Trillion Dollars will be Wasted on SaaS Spend This Year, and, as per a recent article over on The CFO, CFOs are wising up to the hidden bill attached to SaaS and cloud, which might just be growing faster than the US National Debt (on a per capita basis).

As the CFO article notes, per-employee SaaS subscriptions alone are now costing businesses $2,000 (or more) annually on average, and that’s including ALL employees from the Janitor (who shouldn’t be using any SaaS) to the CEO (who likely doesn’t use any SaaS either and just needs a locally installed PowerPoint license).

To put this in perspective, this says a small company of only 1,000 people is spending $2 MILLION on SaaS (and a mid-size company of 10,000 people is spending $20 MILLION), most of it consumer-grade, and likely a good portion of it through B2B Software Marketplaces because it’s easier for AP. If the average salary is $100K with $30K base overhead, that’s costing the organization 15 (or 150) people, or a 1.5% increase in workforce, which is substantial if it’s an organization that needs people to grow.
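The arithmetic, spelled out in a minimal sketch (the $2,000 per-employee figure and the $100K salary plus $30K overhead are the numbers from the text above):

```python
# The arithmetic behind the paragraph above, spelled out.
per_employee_saas = 2_000                 # average annual SaaS spend per employee (USD)
fully_loaded_cost = 100_000 + 30_000      # average salary plus base overhead (USD)

for headcount in (1_000, 10_000):
    total_spend = headcount * per_employee_saas
    people_equivalent = total_spend / fully_loaded_cost
    print(f"{headcount:,} employees: ${total_spend:,}/yr on SaaS "
          f"~= {people_equivalent:.0f} fully loaded hires "
          f"({people_equivalent / headcount:.1%} of the workforce)")

# 1,000 employees: $2,000,000/yr on SaaS ~= 15 fully loaded hires (1.5% of the workforce)
# 10,000 employees: $20,000,000/yr on SaaS ~= 154 fully loaded hires (1.5% of the workforce)
```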

And the worst part is that a very significant portion of this spend is overspend or unnecessary spend, with many SaaS auditors and SaaS management specialists finding 33% (or more) overspend as a result of duplicate tools, unused licenses, and sometimes outright zombie subscriptions that just need to be cancelled. Plus, poor management and provisioning leads to unnecessary surcharges that are almost as bad as unused licenses.

There’s no excuse for it, and CFOs are not going to put up with it anymore. SaaS Audit and Management tools are going to become a lot more common, and once the zombie subscriptions, unused licenses, and cloud subscriptions are rightsized, when these companies realize they are still spending at least $1,500 per employee on SaaS and cloud, they are going to start grouping tools by function and analyzing value. If there are two tools that do lead management, workforce management, or catalog management, one is going to go; more specifically, the one providing the least value to the organization.
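For illustration, that rationalization step can be as simple as the following sketch (tool names, costs, and value scores are hypothetical placeholders):

```python
# A minimal sketch of the rationalization step described above: group
# subscriptions by function, keep the tool providing the most value,
# and flag the rest for cancellation.
from collections import defaultdict

subscriptions = [
    # (tool, function, annual_cost_usd, value_score)
    ("LeadToolA",    "lead management",    60_000, 8.1),
    ("LeadToolB",    "lead management",    45_000, 5.2),
    ("CatalogToolA", "catalog management", 80_000, 7.4),
    ("CatalogToolB", "catalog management", 30_000, 7.9),
]

by_function = defaultdict(list)
for sub in subscriptions:
    by_function[sub[1]].append(sub)

for function, tools in by_function.items():
    tools.sort(key=lambda t: t[3], reverse=True)   # most valuable first
    keep, *cut = tools
    savings = sum(t[2] for t in cut)
    print(f"{function}: keep {keep[0]}, "
          f"cancel {[t[0] for t in cut]}, save ${savings:,}/yr")
```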

So, dear SaaS Provider, it’s important to ask:

  • what’s your substance
  • how do you provide more hard dollar value for that substance than your peers
  • how do you measure it and prove it to the customer
  • … and make sure you’re not the vendor that is cancelled during the audit

And, dear organization who hasn’t done a SaaS audit recently, why haven’t you? You’re sitting on 30% overspend in a category which is likely, with most of the spend split between departments and hidden on P-Cards and expense reports, $2,000 per employee and growing daily. You need to do the audit, rightsize your SaaS, and then centralize SaaS management and SaaS acquisition policy. It’s not a minor expense, it’s a major, business-altering outlay.

GEN-AI IS NOT EMERGENT … AND CLAIMS THAT IT WILL “EVOLVE” TO SOLVE YOUR PROBLEMS ARE ALL FALSE!

A recent article in the CACM (Communications of the ACM) referenced a paper by Dan Carter last year that demonstrated that the claims of Wei et al. in their 2022 “Emergent Abilities of Large Language Models” were unsubstantiated and merely misinterpretations of visual artifacts produced by plotting the results on an inappropriate semi-log scale.

Now, I realize the vast majority of you without advanced degrees in mathematics and theoretical computer science won’t understand the majority of technical details, but that’s okay because the doctor, who has advanced degrees in both, does, can verify the mathematical accuracy of Dan’s paper, and the conclusion:

LLMs — Large Language Models — the “backbone” of Gen-AI DO NOT have any emergent properties. As a result, they are no better than traditional deep learning neural networks, and are, at the present time, ACTUALLY WORSE, since the lack of deep research means we don’t have the same level of understanding of these models, and, thus, the ability to properly “train” them for repeatable behaviour or to accurately “measure” the outputs with confidence.

And while our understanding of this new technology, like any new technology, will likely improve over time, the realities are thus:

  • no amount of computing power has ever hastened the development of AI technology since research began in the late 60s / early 70s (depending on what you accept as the first paper / first program); it’s always taken improvements in algorithms and the underlying science to make slow, steady progress (with most technologies taking one to two DECADES to mature to the point they are ready for widespread industrial use)
  • the technology currently takes 10 times the computing power (or more) to compute “results” that can be readily computed by existing, more narrow, techniques (often with more confidence in the results)
  • the technology is NOT well suited to the majority of problems that the majority of enterprise software companies (blindly jumping on the bandwagon with no steering wheel and no brakes for fear of missing out on the hype cycle that could cause a tech market crash unequalled by any except the dot-com bust of the early 2000s) are trying to use it for (and yes, the doctor did use the word “majority” and not “all” because, while he despises it, it does have valid uses … in creative (writing, audio, and video) applications [not business or science applications] where it has almost unequalled potential compared to traditional ML designed for math- and science-based applications)

And the market realities that no one wants to tell you about are thus:

  • former AI evangelists and some of the original INVENTORS of AI are turning against the technology (out of a realization that it will never do what they hoped it would, that its energy requirements could destroy the planet if we keep trying, and/or that maybe there are some things we should just not be meddling with at our current stage of societal and technological evolution), including Weizenbaum and Hinton
  • Brands are now turning against AI … and even Rolling Stone is writing about it
  • big tech and companies that depend on big tech (like Pharma) are starting to turn against AI … and CIOs are starting to drop OpenAI and Microsoft Copilot because, even when the cost is as low as $30 a user, the value isn’t there (see this recent article in Business Insider)

Now, the doctor knows there are still hundreds of marketers and sales people in our space who will consistently claim that the doctor is just a naysayer and against progress and innovation and AI and modern tech and blah blah blah because they, like their companies, have gone all in on the hype cycle and don’t want their bubble burst, but the reality is that

the doctor is NOT against “AI” or modern tech. the doctor, whose complete archives are available on Sourcing Innovation back to June 2006 when he started writing about Procurement Tech, has been a major proponent of optimization, analytics, machine learning, and “AI” since the beginning — his PhD is in advanced theoretical computer science, which followed a math degree — and, after actually studying machine learning, expert systems, and AI, he used to build optimization, analytics, and “AI” systems (including the first commercial semantic social search application on the internet)

what the doctor IS against is Gen-AI and all the false claims being made by the providers about its applicability in the enterprise back office (where it has very limited uses)

because the vast majority of the population does not have the math and computer science background to understand

  1. what is real and what is not
  2. what technologies (algorithms) will work for a certain type of problem and will not
  3. whether the provider’s implementation will work for their problem (variation)
  4. whether they have enough data to make it work

and, furthermore, this includes the vast majority of the consultants at the Big X and mid-sized consultancies, who graduate from Business Schools with very basic statistics and data analytics training and a crash course in “prompt engineering”, who can barely use the tech, couldn’t build the tech, and definitely couldn’t evaluate the efficacy and accuracy of the underlying algorithms.

The reality is that it takes years and years of study to truly understand this tech, and years more of day-in and day-out research to make true advancement.

For those of you who keep saying “but look at how well it works” and produce 20 examples to prove it, the reality is that it’s only random chance that it works.

With just a bit of simplification, we can describe these LLMs as essentially just super-sophisticated deep neural networks with layers and layers of nodes that are linked together in new and novel configurations, with more feedback learning, and structured in a manner that gives them the ability to “produce” responses as a collection of “sub-responses” from elements in their data archive vs just returning a fixed response. As a result, they can GENerate a reply vs just selecting from a fixed one. (And that’s why their natural language abilities seem far superior to traditional neural network approaches, which need a huge archive of responses to have a natural-sounding conversation: LLMs can use “context” to compute, with high probability, the right parts of speech to string together to create a response that will sound human.)
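To make the point concrete, here is a deliberately toy sketch of that generation mechanism. The probability table is hypothetical, and a real LLM conditions on thousands of tokens through many network layers, but the principle is the same: each token of the reply is sampled from a context-conditioned distribution.

```python
import random

# Toy illustration of generative replies: each next token is SAMPLED
# from a probability distribution conditioned on the context, rather
# than pulled from a fixed pool of canned responses.
next_token_probs = {
    "the":      {"supplier": 0.5, "contract": 0.3, "invoice": 0.2},
    "supplier": {"is": 0.6, "was": 0.4},
    "is":       {"approved": 0.7, "blocked": 0.3},
}

def generate(token, max_steps=3):
    reply = [token]
    for _ in range(max_steps):
        dist = next_token_probs.get(reply[-1])
        if dist is None:          # no known continuation; stop
            break
        tokens, weights = zip(*dist.items())
        reply.append(random.choices(tokens, weights=weights)[0])  # roll the bones
    return " ".join(reply)

print(generate("the"))  # e.g. "the supplier is approved" ... or "is blocked"
```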

Moreover, since these models, which are more distributed in nature, can use an order of magnitude more (computational) cores, they can process an order of magnitude more data. Thus, if there is ten to one hundred times the amount of data (and it’s good data), of course they are going to work reasonably well for expected queries at least 95% of the time (whereas a last-generation NN without significant training and tweaking might only be 90% out of the box). If you then incorporate dynamic feedback on user validation, that may even get to 99% for a class of problems, which means that it will appear to be working, and learning, 99 times out of 100 instead of 19 out of 20. But it’s NOT! It’s all probabilities. It’s all random. You’re essentially rolling the bones on every request, and doing it with less certainty on what a good, or bad, result should look like. And even if the dice come “loaded” so that they should always win on the come-out roll, there are so many variables that there is never any guarantee you won’t roll craps.

And for those of you saying “those odds sound good“, let me make it clear. They’re NOT.

  • those odds are only for typical, expected queries, on which the LLM has been repeatedly (and repeatedly) trained
  • the odds for unexpected, atypical queries could be as low as 9 in 10 … which is very, very bad when you consider how often these systems are supposed to be used (the quick sketch below puts numbers on this)
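To put numbers on this (a quick sketch; the daily query volume is a hypothetical assumption), even a 99% per-query success rate implies roughly ten failures a day at 1,000 queries a day, and at least one daily failure is a near certainty:

```python
# Putting numbers on "good odds" at enterprise scale: expected failures
# per day, and the chance of at least one, at a hypothetical query volume.
queries_per_day = 1_000   # hypothetical volume for a mid-size deployment

for success_rate in (0.90, 0.95, 0.99):
    expected_failures = queries_per_day * (1 - success_rate)
    p_at_least_one = 1 - success_rate ** queries_per_day
    print(f"{success_rate:.0%} success: ~{expected_failures:.0f} failures/day, "
          f"P(>=1 failure/day) = {p_at_least_one:.4f}")

# 90% success: ~100 failures/day, P(>=1 failure/day) = 1.0000
# 95% success: ~50 failures/day,  P(>=1 failure/day) = 1.0000
# 99% success: ~10 failures/day,  P(>=1 failure/day) = 1.0000
```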

But the odds aren’t the problem. The problem is what happens when the LLM fails. Because you don’t know!

With traditional AI, you either got no response, an invalid response with low confidence, or a rare (compared to Gen-AI) invalid response with high confidence, where the responses were always from a fixed pool (if non-numeric) or fixed range (if numeric). You knew what the worst case scenario would be if something went wrong, how bad that would be, how likely that was to happen, and could even use this information to set bounds and tweak the confidence calculation on a result to minimize the chance of this ever happening in a real world scenario.
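As a minimal illustration of that bounded behaviour (the labels and threshold here are hypothetical), a traditional classifier returns a label from a fixed pool plus a confidence score, so a low-confidence result can simply be refused and routed to a human:

```python
# Bounded outputs, as described above: the output space is known in
# advance, and low-confidence results are refused rather than acted on.
FIXED_LABELS = ("approve", "review", "reject")
CONFIDENCE_THRESHOLD = 0.85   # hypothetical, tuned to the risk tolerance

def act_on(prediction: str, confidence: float) -> str:
    assert prediction in FIXED_LABELS   # worst case is always a known label
    if confidence < CONFIDENCE_THRESHOLD:
        return "no action: low confidence, route to a human"
    return f"act on '{prediction}' (confidence {confidence:.2f})"

print(act_on("approve", 0.93))  # acted on
print(act_on("review", 0.42))   # safely refused
```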

But with LLMs, you have no idea what it will return, how far off the mark the result will be, or how devastating it will be for your business when that (eventually) happens (which, as per Murphy’s law, will be after the vendor convinces you to have confidence in it and you stop watching it closely, and then, out of the blue, it decides you need 1,000 custom configurations of a high-end MacBook Pro in inventory [because 10 new sales support professionals need to produce better graphics], a potentially recoverable case, or it decides to change your currency hedge on a new contract to that of a troubled economy [like Greece, Brazil, etc.] because of a one-day run on the trading markets in a market heading for hyperinflation and a crash [and then you will need a wheelbarrow full of money to buy a loaf of bread — and for those who think it can’t happen, STUDY YOUR HISTORY: Germany after WWI, Zimbabwe in 2007, Venezuela in 2018, etc.]). You just don’t know! Because that’s what happens when you employ technology that randomly makes stuff up based on random inputs from you-don’t-know-who or what (and the situation gets worse when developers [who likely don’t know the first thing about AI] decide the best way to train a new AI is to use the unreliable output of the old AI).

So, if you want to progress, like the monks, leave that Genizah Artificial Idiocy where it belongs — in the genizah (the repository for discarded, damaged, or defective books and papers), and go find real technology built on real optimization, analytics, machine learning, and AI that has been properly researched, developed, tested, and verified for industrial use.

Procurement Organizations Need Automation, But that DOES NOT Necessarily Mean AI!

A number of leaders in our space, including Sarah Scudder in the comments to this post, have been noting to me that they are seeing AI resonate with companies of all sizes.

Sarah notes that:

1. She’s seeing AI agent automations resonate with smaller companies.

Smaller companies need automation desperately, but it’s important we educate smaller companies that this doesn’t mean they need AI. We’ve had adaptive rules-based automation and tailored machine learning in this space for almost 20 years, and they can get fantastic results, without having to risk being pre-alpha testers for unproven AI, while getting the solution they really need for a fraction of the cost of this new, relatively unproven, AI tech! (Remember, firms that dumped millions into this bandwagon need to recoup those millions fast, before their investors abandon them, which means high prices for unproven tech!)

2. She’s seeing copilot intelligence resonate with bigger companies who understand risk.

Which makes sense for a small segment of the market who are ready for it because augmented intelligence and automated suggestions with yes/no approvals are great for organizations who

  1. understand risk and
  2. understand the categories/markets/domains they are applying the technology in, because a true expert will identify the 95% of the time it’s working just fine; the 3% of the time it’s probably okay (and not worth the effort to double-check manually given the risk threshold); and the 2% of the time they need to slam the brakes and take over (a rough triage sketch follows this list)
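As a rough sketch of that triage (mapping the expert’s 95% / 3% / 2% judgment onto hypothetical confidence bands purely for illustration):

```python
# Copilot triage, as described above: suggestions are auto-accepted,
# spot-checked, or escalated to the category expert. Band cut-offs are
# hypothetical stand-ins for the expert's 95% / 3% / 2% judgment.
def triage(suggestion: str, confidence: float) -> str:
    if confidence >= 0.95:
        return f"auto-accept: {suggestion}"                     # the ~95% that's fine
    if confidence >= 0.80:
        return f"accept and log for spot-check: {suggestion}"   # the ~3% probably okay
    return f"ESCALATE to category expert: {suggestion}"         # the ~2% to take over

print(triage("renew contract at current rate", 0.97))
print(triage("switch to alternate supplier", 0.85))
print(triage("change currency hedge", 0.40))
```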

However, that’s not a very large segment of the market. What most companies still need is better analytics, category intelligence, and guidance from category experts on how to use it and then where and when to integrate automation and co-pilot capabilities.

Furthermore, I’m also being told that:

3. Mid-Markets are looking for technology they can roll out to the organization at large to get tail-spend under control, manage intake, and/or relieve pressure on Procurement to focus on more strategic efforts.

Which resonates, but, again, this is an area where AI is typically not needed. Catalogs, be they hosted, punch-out, hybrid, etc., with the ability to also request/book standard, pre-negotiated services; easy search; and easy RFQ where there is no standard item but the buyer has budget authority, the vendors are preferred, and the amount doesn’t hit a threshold, is often enough. Maybe a natural language search to find the right policy documents or bring up the right products or forms, but that doesn’t require modern AI either — we’ve had that for quite some time as well.
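As an illustration of how far plain rules go here, the RFQ condition just described is a three-line check, no AI required (the field names and threshold are hypothetical):

```python
# The "no AI required" RFQ rule described above, as executable logic:
# proceed without Procurement review when the buyer has budget authority,
# every invited vendor is preferred, and the amount is under the threshold.
RFQ_THRESHOLD = 25_000   # hypothetical auto-approval limit (USD)

def rfq_auto_approved(has_budget_authority: bool,
                      vendors: list[str],
                      preferred_vendors: set[str],
                      amount: float) -> bool:
    return (has_budget_authority
            and all(v in preferred_vendors for v in vendors)
            and amount < RFQ_THRESHOLD)

preferred = {"Acme Supply", "Globex"}
print(rfq_auto_approved(True, ["Acme Supply"], preferred, 12_000))  # True
print(rfq_auto_approved(True, ["Initech"], preferred, 12_000))      # False: not preferred
print(rfq_auto_approved(True, ["Acme Supply"], preferred, 60_000))  # False: over threshold
```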

And, as Sarah implies, while organizations of all sizes need help to overcome their excessive workload and limited market insight so that they can prioritize risk management and mitigation in their procurement activities, this doesn’t mean they need AI. Automation yes, advanced technology a definite yes, but AI, rarely! Remember that when building and recommending ACTUAL solutions and not just buzzwords.