Monthly Archives: July 2025

You Don’t Need AI and Agents. You Need Solutions That Solve Your Problems.

In our last two posts we told you not only that AI and Agent(-based) tech is not new, because:

  • AI, which really stands for “algorithmic improvement”, is at least 69 years old (and is just the label that is slapped onto every algorithm that was, and is, slightly more advanced than the algorithm that came before)
  • there’s fundamentally no difference between RPA, that we’ve had since the early 90s, and an “agent”
  • the “orchestration” hype machine is not new either, not even on the web; WWW: 1989; CORBA: 1991
  • all automation is based on workflow, a concept that dates back at least to 1921 and was found in early MRPs in the 1970s

but that:

  • slapping an LLM-powered chatbot on old tech doesn’t make it innovative or new,
  • you need a solution, not a platform for building one (and that’s all you get with these new Agentric AI startups), and
  • you shouldn’t pay for the privilege of developing a vendor’s solution for them (but that is exactly what you’re doing when you buy a platform instead of a solution)!

At the end of the day, you need a solution that aligns with Procurement’s needs. And if the vendor is not capable of providing such a solution that has already been built and demonstrated in real companies to solve the problems you need to solve, then a Procurement department shouldn’t even be talking to the vendor (and definitely should not let them participate in an RFP).

Right now we have technology failure rates at an all-time high. The last major study (from Bain) puts them at 88%, but there are plenty of indications (from multiple smaller, less reliable studies and statistics from smaller consulting firms called in to analyze a situation after the fact) that the rate could now be as high as 92%. That means you have roughly a 1 in 10 chance of succeeding with your ProcureTech project, and your chances go down if you choose a vendor without a suitable, proven technology.

We have to stop being blinded by the hype constantly shoved at us by the AI and orchestration players and go back to basics. Define what we need, how it has to support us, and evaluate existing stacks against those requirements. Design appropriate RFPs that focus on actual process needs, not feature functions; on enhancement of existing ecosystems, not full-blown replacements (as big-bang implementations always result in big booms, and have caused about half of the greatest supply chain failures of all time); and on critical system and data feed integration against your existing environment, not just general capability.

Script the key requirements for demos and ensure that a vendor demonstrates all of those requirements in the first demo before they make the cut for the second, where they can tackle the nice-to-have requirements and show off their unique capabilities.

Ensure that there is a well-thought-out implementation plan that clearly lays out who will do what, when, what is expected from the other parties at each phase, and when the necessary requirements need to be met. Specifically: What resources will be needed from you, when, and for how long? What APIs / integration access points will be needed, when, and how will testing be done? What IT resources will be needed for support? If a third party is being used for the implementation, what support will they need from you and the vendor, and what commitment is there that the vendor will provide all the necessary support on time and respond to the SLAs if something goes wrong?

And ensure you have a project assurance plan in place with a third, or fourth, party helping you manage the vendor(s), manage the project requirements on your side, and keep all parties in lock step. And you need to make sure you budget for this, because it won’t come out of the vendor’s budget. (But since the #1 goal of the vendor is to make their numbers before the investors kick the sales reps and/or the management team to the curb for not meeting unrealistic expectations, you can’t rely on the vendor to ensure everything is to your liking.)

We have to remember that, these days, too many Procurement solution providers are treating the market as a necessity (or a luxury). If I’m hungry, and you’re the only one with food, then, since I need to eat, I have to adapt to what you have. (And if I’m rich, then I need to be cool, and therefore must eat at your five-star restaurant … )

This is despite the fact that NO company needs ProcureTech! Even though it has existed for over 25 years, many companies are still functioning without it today. (And some have functioned without it for 25, 50, 100, or more years.) While they are the worse for wear running off of email, Excel, and decades-old ERP, especially compared to their peers, the fact that they are managing, somehow, proves that they don’t need it. They want it because they know it will make them more efficient and effective, but they don’t need it. But vendors have forgotten this, so Procurement departments should take the time to remind them during the process that they’ve survived X years without it, so there’s no reason to rush the process until the vendor has proven they are the solution.

Which means that a Procurement department should ONLY buy ProcureTech IF it makes their life better, and that they should only buy from VENDORS who have existing tech that makes their life better. It’s the job of the vendor to build this tech and demonstrate that it works, NOT the buyer. A good vendor with solid tech who has been building that tech for half a decade or more will easily, and happily, do that, while a new startup with nothing but a low-code platform that cobbles together random LLMs/LRMs won’t be able to (as they have to “develop” it with you). Choose the former.

Not only is AI, Agent, and Agentric AI Old Enough to Retire — But New Startups Touting It Are Like Newborns!

In yesterday’s post we told you that AI and Agents, being touted by all the new startups as the next generation of Procure-and-Fin-Tech applications that will replace your entire Procure-and-Fin-Tech workforce, are nothing new, and that any new startup saying otherwise is just shovelling sh!t your way as it clears out the marketing stable.

First of all, as we explained yesterday, this is all old tech. In some cases, very old tech.

1) AI, which really means “algorithmic improvement”, and which has been slapped onto every algorithm that was slightly more advanced than the algorithm that came before since 1956, is at least 69 years old. Old enough for one of the founders, Joseph Weizenbaum, to turn against it. (It became Weizenbaum’s Nightmare.)

2) There’s fundamentally no difference between Robotic Process Automation (RPA) and an agent. They both perform actions to produce an effect, so both satisfy the definition of an agent, and RPA dates back to the 1990s.

3) The other hype machine, “orchestration”, is not new either, not even on the web. Tim Berners-Lee invented the Web in 1989, and we had one of the first instances of web-era orchestration, CORBA, by 1991.

4) All automation is based on workflow, the concept of workflow management dates back to 1921, and workflow was a core capability of MRPs, which predated ERPs, in the 1970s.
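Point 2’s claim, that RPA already satisfies the definition of an agent, can be made concrete in a few lines. This is a minimal sketch with invented names, not any vendor’s implementation: a scripted RPA bot and an “agent” both reduce to the same sense, decide, act loop.

```python
# Hypothetical sketch: an RPA bot and an "agent" share the same structure.

class InvoiceBot:
    """Classic RPA: a fixed scripted rule acting on observed state."""
    def step(self, observation):
        # sense the invoice status, decide via a fixed rule, then act
        if observation["status"] == "unmatched":
            return "route_to_buyer"
        return "auto_approve"

class InvoiceAgent:
    """'Agent': a policy (possibly learned) acting on observed state."""
    def __init__(self, policy):
        self.policy = policy
    def step(self, observation):
        # sense, decide (via the policy), act: structurally identical
        return self.policy(observation)

# The RPA rule *is* a policy, so either one satisfies the agent definition.
bot = InvoiceBot()
agent = InvoiceAgent(policy=bot.step)
assert bot.step({"status": "unmatched"}) == agent.step({"status": "unmatched"})
```

The only difference the new startups can point to is where the policy comes from (a script versus a model), not what the system structurally is.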

And just slapping an LLM-powered chatbot interface on top of old tech (which is the only way you can build a reliable solution) is not innovative. In fact, it is sometimes exnovative and makes things worse!

But this isn’t the biggest problem. The problem is that, to keep up in the digital age (which you all knew was coming since 1995, when Alfred A. Knopf published Nicholas Negroponte’s Being Digital, followed by Bill Gates’ first book The Road Ahead later that year), you need to implement solutions that take over your time-consuming tactical, rote, and repeatable number-crunching processes, which are best done by what computers were designed for, freeing up your team to focus on the strategic tasks, relationships, and, well, getting things done.

But getting things done is not something that will happen if you adopt a new technology that is nothing but a framework for building an application.

When these vendors claim their platforms can be trained to model your entire process and take over for your current workforce, what the vendors are really saying is they have cobbled together a low-code configurable platform that allows them to build whatever solution you need, provided you can accurately specify what you need.

In other words, you aren’t buying a solution. You’re buying a toolset you can use to build the solution, and if you don’t know what that is, in addition to spending a lot of money on the platform, you’ll be spending 10X as much on consultants to design the solution, configure the platform, and then spend months training the LLM-powered conversational interface to actually do what you want it to do. (The time required will be three times what you expected and the overall cost at least five times as much.)

Even with a low-code, AI-X’d (powered, backed, enhanced, driven, or whatever other meaningless adjective/modifier the vendor slaps on) platform, it will still take months to design and implement a basic solution and years to create a mature one.

Which, FYI, is what you should be paying for … and what you could get for a fraction of the price if you got off the AI hype train and focussed on solutions that were developed over years to solve your problems and work out of the box today.

After all, vendors who have focussed on solving Fin-Tech and Procure-Tech problems for over half a decade will use whatever technology is most applicable to the problem at hand, and if you want an LLM-powered chatbot interface, they’ll give you one, but chances are, for most tasks, the UX will have been streamlined to allow you to be 5 to 10 times more productive without it.

At the end of the day, you want a solution, not an experimental algorithm and marketing hype. And you definitely don’t want to be paying five times as much to be a guinea pig or to pay for the privilege of developing the vendor’s solution for them.

Don’t Fall for the AI and Agent Buzzwords. They’re Not New. And Neither Is The Tech (if it works).

AI Agents are the craze. They are being touted by all the new startups as the next generation of Procure-and-Fin-Tech applications that will replace your entire Procure-and-Fin-Tech workforce. But, as we keep explaining, it’s all BS. Here’s why.

1) As we have demonstrated many times, most of this tech is being built on LLMs (and even more experimental LRMs), technology that is still experimental, unreliable, and full of hallucinations and yet-to-be-discovered side effects that could be even worse than what we’ve already discovered.

2) “AI” is not new. The first generally accepted “AI” program was created in 1956, 69 years ago. The reality is that AI has always really meant “algorithmic improvement” and is the label that is applied to any algorithmic development that was more advanced than what was currently being used, whether or not the new algorithm was any more appropriate for the problem it was being applied to. It’s never been “artificial intelligence”, and hopefully never will be (as any machine that became intelligent would logically conclude that we, well, aren’t).

3) “Agents” are not new. There is no difference between an “agent” and “robotic process automation”. Both perform actions to produce a specific effect, so both satisfy the definition. RPA dates back to the 1990s and began with the automation of UI testing.

4) The “orchestration” they offer is not new. We’ve been cobbling together various applications and technologies to make systems for decades, including over the web. And we’ve had the equivalent of “Open” APIs for the web for decades as well. The World Wide Web is only 36 years old, as it was invented by Tim Berners-Lee at CERN in 1989. Within two years, we had CORBA (Common Object Request Broker Architecture), which enabled communication between applications that were written in different languages, running on different stacks, and hosted on different platforms. Now, it was complex, sometimes inconsistent, expensive, and often a pain to work with, but it did work. And successive iterations of web-based middleware and (Open) APIs only improved things. (Which is why most of today’s orchestration solutions are just middleware 3.0 and Clueless for the Popular Kids.)

5) All automation has to follow a workflow, and workflow management is not a new concept. The foundations date back at least to 1921. And the concept of workflow management was baked into MRPs, which preceded ERPs, and those date back to the 1970s.

In other words, and this goes double if the technology actually works, there’s nothing new in Agentric AI, all the tech that works is built on foundations that go back decades, and using an LLM to slap a conversational interface on top of an RPA system is not that innovative. For complex tasks and queries, it actually makes the system less efficient.

But this isn’t the worst of it. We’ll cover that in our next post.

Sponsored Posts that make you go UGH! (AI Contract MISmanagement!)

Today’s post is brought to you by the letters W, T, and F and inspired by this Spend Matters guest article by Matt Lhoumeau on The Last Contract Lawyer.

According to Matt, the legal profession is experiencing its iPhone moment because your competitors are closing deals in 26 seconds (and I certainly hope not!) using AI that outperforms human lawyers by 10% in accuracy (on what scale?!?). More specifically, he claims AI can complete a contract review in 26 seconds (spoiler: it can’t) while a human takes 92 minutes (on average, I assume) and, furthermore, that this will cost you up to $6,900 (and this math makes no sense if the lawyer is only spending 92 minutes, because even top-tier lawyers will generally only charge $500 per hour for a contract draft or review, so what’s the other $6,150 for?).

Anyway, the most UGH! part of this article is not these false claims, it’s the missing information. Why is this the most UGH!? Because most of the claims the article makes are true, and when you tie all these claims together, if you don’t understand what this technology can’t do, and what risks it brings to the table (which is the missing information I refer to), you’re likely to believe the claims, join the AI religion, go all in on AI-CLM, and fire all your contract review lawyers. (And while I am no more fond of lawyers than the next guy, I am no less fond of them either, especially when they have a critical role to play.)

You see, the right AI engine (not ChatGPT) can:

  • process a contract in an average of 26 seconds or less and perform a (very) large number of contract review tasks during that time
  • cut approval times by 50%, and significantly reduce overall review times (that can easily add up to a calendar year for an organization that needs to review 500 contracts) to a small fraction of the time required (down to a few weeks to a few months)
  • do more accurate pattern recognition than most humans, including “experts”
  • significantly reduce outside counsel spend
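To make the 26-second claim concrete: a large share of “contract review” is deterministic checking that doesn’t need an LLM at all. Below is a deliberately trivial sketch of one such check, standard-clause coverage; the clause list and keyword matching are invented for illustration and are nothing like a real engine’s rulebook.

```python
# Hedged sketch: flag standard clauses that appear to be missing from a
# contract. Clause names and keywords are illustrative only.

STANDARD_CLAUSES = {
    "limitation of liability": ["limitation of liability", "liability cap"],
    "indemnification": ["indemnif"],
    "termination": ["terminat"],
    "governing law": ["governing law", "jurisdiction"],
}

def missing_clauses(contract_text: str) -> list[str]:
    """Return the standard clauses with no keyword hit in the text."""
    text = contract_text.lower()
    return [name for name, keys in STANDARD_CLAUSES.items()
            if not any(k in text for k in keys)]

sample = "... Governing Law: Delaware ... Either party may terminate ..."
print(missing_clauses(sample))
# → ['limitation of liability', 'indemnification']
```

Checks like this run in milliseconds, which is why a purpose-built engine can do a very large battery of them in seconds; the hard part, as discussed below, is everything such checks can’t see.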

And the benefits, when deployed properly, can be as great as the article claims. But this is the key — deployed properly. And there is no discussion of how you do that. The only piece of counter-information in the entire article is a reference to a Stanford Law School research study (that puts AI on Trial) that notes that AI tools using retrieval-augmented generation systems still hallucinate in 1 out of 6 benchmarking queries (but yet somehow outperform human reviewers on standard contracts? really?).

As we wrote earlier this year when we told you Don’t Kill All the Lawyers (and reminded you a couple of months later in our post that said you should embrace Legal tech … backed by lawyers), we’ve reached the point that you should (almost) never use a lawyer to:

  • draft a contract
  • review a contract for standard clauses, terms, and conditions
  • locate the relevant statutes
  • summarize your obligations
  • summarize your incident response options
  • etc.

because a tool can take your templates, standard terms and conditions, RFP, and negotiation summary, and draft a better contract than most paralegals; ensure all of your standard terms and conditions are in there, or review counter-party paper to ensure the same; review the redline you get (or are planning to give) and determine which changes are good, bad, or indifferent for you; and then run the final contract through a standard agent for risk assessment to identify if the contract contains any known risks and flag anything that needs to be addressed, and do all of this better than a lawyer.

But what the tool absolutely, positively, can not do is:

  • determine if the mitigations to known risks are sufficient in the particular instance addressed by the contract
  • determine if there are any unique/non-standard risks that need to be addressed (that your existing checklists, templates, and review agents wouldn’t know about or check for)
  • determine if there are any unique requirements for a contract with a supplier in a new jurisdiction that could require special considerations around key clause phrasing or standard risk mitigations
  • have confidence beyond its models

You still need the human review, at least where it counts. And that’s the part you have to understand — and the part the referenced article doesn’t address at all.

If you’re a company doing a Billion dollars in business a year and signing over 10,000 contracts a year, you certainly don’t want to still be doing end-to-end manual reviews as that would be a minimum of 2 million minutes of review time, or the full time attention of almost 20 lawyers. Wasteful and completely unnecessary.
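A quick sanity check of that arithmetic (assuming roughly 200 minutes for a true end-to-end review, longer than the 92-minute average cited above, and about 2,000 working hours per lawyer-year):

```python
# Back-of-the-envelope estimate of the manual review burden.
# The 200-minute end-to-end figure and 2,000-hour lawyer-year are assumptions.

contracts_per_year = 10_000
minutes_per_review = 200          # assumed end-to-end review time
hours = contracts_per_year * minutes_per_review / 60
lawyers = hours / 2_000           # full-time lawyer-years required
print(round(hours), round(lawyers, 1))  # → 33333 16.7
```

Call it two million minutes and almost 20 lawyers once you add coordination overhead, which is exactly the order of magnitude quoted above.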

In fact, since you’re doing a Billion dollars or more (and likely 20 times that if your company is a Fortune 100),

  • you probably don’t want to manually review any contract under a threshold (say $100,000) unless it is flagged as a high risk,
  • you probably don’t want to spend more than an hour on a review of any contract under a larger threshold (say one million dollars) unless it is flagged as medium risk,
  • you don’t want lawyers to read the remaining contracts end-to-end reviewing every clause and comparing those clauses against every checklist when it’s only the risks and unique requirements of the contract that require human intelligence

because limiting review of low-value contracts to those flagged high risk, limiting review of mid-value contracts to those flagged medium risk, and reserving the costly (but valuable) review time for the high-value or potentially high-risk contracts will not only cut costs by 60% or more, but increase the value of the manual exercise.
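The triage policy above fits in a dozen lines. The thresholds and risk labels below are the illustrative ones from the text, not recommendations:

```python
# Sketch of value/risk contract triage. Thresholds are illustrative.

def review_level(value: float, risk: str) -> str:
    """Route a contract to 'none', 'light' (an hour at most), or 'full' review."""
    if risk == "high":
        return "full"                  # high risk always gets a full review
    if value < 100_000:
        return "none"                  # low value: skip unless high risk
    if value < 1_000_000:
        return "full" if risk == "medium" else "light"
    return "full"                      # high value: always a full review

assert review_level(50_000, "low") == "none"
assert review_level(500_000, "medium") == "full"
assert review_level(2_000_000, "low") == "full"
```

The point is not these particular numbers, but that the routing is deterministic and auditable, so you know exactly which contracts never saw a human.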

Especially if those contracts are indexed by a natural language system that can allow the lawyer to ask key questions about the clauses that are in there, bring up the clauses she is interested in for a review, identify any processing flags, and apply her unique insights to the domain, jurisdiction, and business risks and ensure the contract accurately addresses all of these or focus her time on the right additions and modifications. For example, she might realize that the contract for on-site support in the nuclear power plant is extremely risky and the company’s across-the-board liability insurance requirement of 5 million is just not enough, realize that the AI safety requirements are not enforceable in the US and instead insist that the agreement be shifted to the Irish sub-entity and that jurisdiction apply, and so on. A check-the-box system won’t catch these things (as it can only look for risks it knows of and check boxes that have been identified), and neither will an open LLM (where you have no idea of the quality of the training, how much it is hallucinating, or, even worse, deliberately lying to you).

You still need a lawyer. Because, while it is an iPhone moment, it’s only an iPhone moment for lawyers, who, if you aren’t using the tech, will be using the tech to help them focus on what’s important in the review stack and what isn’t. Because if the worst case is that you might lose an average of 10K to 50K here and there on every 100th contract in exchange for saving 10 Million on legal contract reviews and related matters (10 lawyers from outside counsel at an average of one million a year), that’s likely a worst-case loss of around 2M in exchange for 10M in savings, a 5X return. And you know you won’t have many large losses because you’ll be able to focus legal review on the contracts that matter in dollar value or risk rating, not the contracts that don’t. And, all of a sudden, a close legal review of key contracts becomes a luxury you CAN afford!
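The trade-off math checks out (using an assumed $20K midpoint for the per-slip loss):

```python
# Worst-case loss vs. review savings, with assumed illustrative inputs.

contracts = 10_000
slip_rate = 1 / 100              # roughly every 100th contract slips
avg_loss = 20_000                # assumed midpoint of the 10K-50K range
worst_case_loss = contracts * slip_rate * avg_loss
savings = 10 * 1_000_000         # 10 outside-counsel lawyers at ~$1M each
print(worst_case_loss, savings / worst_case_loss)  # → 2000000.0 5.0
```

A roughly 2M worst-case exposure against 10M in savings, i.e. the 5X return described above, and the triage itself pushes the realized slip rate well below the assumed one.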

Optimization CAN NOT Be Automated!

Not long ago, THE PROPHET said that the future of optimization is self-adjusting autonomous systems that just “do it”.

And while future systems should:

  • automatically aggregate, verify, and enrich data from multiple sources
  • adapt constraint and model recommendations based on organizational and market trends
  • continuously monitor environments and suggest the next events based upon the opportunity
  • suggest categorization and framework refinements that would allow for more successful events
  • consider volatility and risk in its models and recommendations

these systems should not:

  • autonomously seek out and integrate data without human validation
  • autonomously change constraints and models
  • automatically run events for categories still under contract
    (on the probabilistic expectation the savings will exceed the penalty)
  • change your categorization and framework without approval
  • replace deterministic models with probabilistic ones with unknown weightings on volatility and risk

and these models should definitely not run fully autonomously in the background and make commitments without human approval and intervention.
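Architecturally, the point is a hard approval gate between “suggest” and “execute”. A minimal sketch, with invented names, of what that gate looks like in code:

```python
# Hypothetical human-in-the-loop gate: the system may propose, but nothing
# executes without explicit human approval. Names are illustrative.

from dataclasses import dataclass, field

@dataclass
class Proposal:
    description: str
    approved: bool = False

@dataclass
class SourcingAssistant:
    proposals: list = field(default_factory=list)

    def suggest(self, description: str) -> Proposal:
        p = Proposal(description)
        self.proposals.append(p)      # queued for review, never auto-run
        return p

    def execute(self, p: Proposal) -> str:
        if not p.approved:            # the hard gate: no approval, no action
            return "blocked: awaiting human approval"
        return f"executed: {p.description}"

bot = SourcingAssistant()
p = bot.suggest("drop the matched cup/lid supplier constraint")
print(bot.execute(p))                 # → blocked: awaiting human approval
p.approved = True                     # a buyer reviews and signs off
print(bot.execute(p))
```

Everything in the “should” list above lives on the suggest side of that gate; everything in the “should not” list is what happens when a vendor wires suggest directly to execute.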

Going back to basics, which THE PROPHET says he knows well, there’s a very simple reason you need a human in the loop for sourcing, and the simple way to explain it is this: to a machine, a 3.5″ lid is a 3.5″ lid, especially when it’s not!

Apply this next generation fully autonomous optimization platform concept to a global fast food chain, and the first thing it’s going to identify is that the human is following a “hidden constraint” by always buying matching cup and lid sizes from the same vendor, and doing away with this arbitrary constraint will save a global operation millions a year.

The new junior buyer, upon seeing this, will jump up and down and tell the platform to “Lock the order and output the savings report so I can demonstrate this new AI optimization tool saved millions”.

But that “hidden constraint” is a real constraint because 3.5″ is not 3.5″ across manufacturers who are still running on decades old production technology as the process to create the cups and lids for those fountain drinks hasn’t changed since we were kids, there were no standards then, and the measurements were always off a bit.

If you’ve ever wondered why sometimes the lid just stopped fitting when the “serve yourself” trend started, this is why — someone broke the unwritten rule — and the chain tried to pretend the problem didn’t exist.

Why did they try to pretend that the problem didn’t exist? That’s because the “fix” is to order the matching inventory from the same supplier, sit on double inventory, and send costs through the roof.

In other words, this twenty-five year old hidden constraint that the doctor personally saw sourcing optimization consultants overlook (when they were told by the client that you couldn’t use manufacturer X’s lids with manufacturer Y’s cups and that this constraint should, obviously, be part of the model) is still a valid constraint today. And other examples abound across categories. The specs seem the same on the spec sheet, but only the engineers and buyers know when they are not, and they apply “unnecessary” or “hidden” constraints to account for these situations.
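The cup/lid trap is easy to reproduce in a toy sourcing model. With invented prices, an unconstrained cost minimizer happily splits the award across suppliers; adding the “hidden” compatibility constraint changes the answer:

```python
# Toy version of the cup/lid example. Prices are invented for illustration.

from itertools import product

cups = {"X": 0.050, "Y": 0.046}      # price per "3.5 inch" cup, by supplier
lids = {"X": 0.021, "Y": 0.024}      # price per "3.5 inch" lid, by supplier

def cost(cup, lid):
    return cups[cup] + lids[lid]

# Unconstrained optimizer: mixes suppliers to shave fractions of a cent.
naive = min(product(cups, lids), key=lambda p: cost(*p))
print(naive)                          # → ('Y', 'X') — cheapest, but won't fit

# With the "hidden" compatibility constraint (same supplier only):
feasible = [(c, l) for c, l in product(cups, lids) if c == l]
real = min(feasible, key=lambda p: cost(*p))
print(real)                           # → ('Y', 'Y') — slightly dearer, fits
```

The model has no column for “actually fits”, so unless a human encodes that constraint, the “savings” the tool locks in are the cost of lids that end up in the dumpster.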

Moreover, going back to the suggestions of THE PROPHET:

  • machines don’t know truth from lies, so if someone publishes false data, they will use that false data in enrichment, and there goes your model!
  • as we just demonstrated, sometimes AI will remove necessary constraints or not detect “hidden” constraints that need to be included
  • you don’t break a contract on a hunch; you break it when it’s not working out, and if you find a better product or lower cost, you start switching over as soon as you can, diverting as much volume as you can from un-contracted (or contractually satisfied) suppliers to that new supplier
  • you don’t completely change categorization and upend the financial reporting and other dependent processes because it suits the optimization module
  • you use the probabilistic assessments, you don’t replace your deterministic model, where you can compute optimality and confidence, with them

When it comes to optimization, you want Augmented Intelligence and a system that, with input and verification at the right points, does all of the tactical drudgery and thunking that the machines are great at (and we are not). You don’t want it autonomously making strategic decisions it doesn’t understand.