Category Archives: AI

Yes, Gen AI will Have to be Consumed By …

Orchestration along with Intake if any of these loud, overfunded, mostly useless (but, unfortunately, not mostly harmless) startups are going to survive!

Yes, the doctor said it and yes, it’s totally true.

So why this diversion? the doctor was recently asked a variation of this question by a very knowledgeable, observant, and forward-thinking executive with a track record of getting it right (and growing companies), who wanted to know if he was grasping the situation accurately and was likely correct about how this whole mess is going to shake out once the mass extinction begins later this year or early next year. (the doctor is predicting at least twice the typical percentage of failures, rivalling or exceeding that of the first mass extinction after the funding frenzy and market crash of 2008, as well as a large number of mergers that will happen just so companies can partially survive; THE REVELATOR is predicting that less than one fourth of these companies will make it through unscathed, because the space cannot support 666+ companies.)

As the doctor has previously penned in Marketplace Madness is Coming Because History WILL Repeat Itself:

Stand-alone Intake(-to)/Orchestrate solutions, the current poster children of the space, will soon have a fall from grace (and only the smart will survive)! Call me Scrooge if you like, but there’s a logic behind why I’m developing a bah-humbug attitude towards most of these. And it goes something like this.

Intake

  • Pay For View: if modern procurement solutions are completely SaaS, then they should be accessible by anyone with a web browser, so why should you have to buy a third party solution to see the data in those applications? Wouldn’t it make more sense to just switch to modern source to pay solutions that allow you to give variable levels of access to everyone who needs access instead of paying for two solutions AND an integrator?

Orchestrate

  • Solution Sprawl: while orchestration is supposed to help with solution sprawl, it’s yet another solution and only adds to it. Wouldn’t it make more sense to invest in and switch to a core sourcing and/or procurement platform with a fully open API where all of the other modules you need can pull the necessary data from and push the necessary data to that platform?

I2O (Intake-to-Orchestrate)

  • Where’s the Beef?: Talk to an old Pro who was doing Procurement back before the first modern tools were introduced in the late 90s and they’ll tell you that they don’t get this modern focus on “orchestration” and managing “expenses” and low-value buys because, when they were doing Procurement, it was about identifying and strategically managing multi-million dollar (10, 50, 100+ million) categories where even a 2% saving made a significant improvement to the bottom line, way more than a 10% saving on a < 100K category ever could.
  • Where’s the Market?: This is only a problem in large enterprises — right now, many of these I2O solutions are going after the mid-market, who are eating it up because of ease of use, but as soon as they realize the emperor has no clothes, that there’s no support for real strategic procurement (let alone strategic sourcing), and that you have to go out and buy more platforms, what’s going to happen? The reality is that the mid-market is better served by a federated catalog management / punch-out platform or a next-gen marketplace (they’re coming; tech is cyclical like fashion, and it’s due), and will likely be better served still by a new breed of B2B e-commerce solutions for end-user Procurement.
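
To put numbers on the old Pro’s point, the category math is easy to check; a minimal sketch (all spend figures are hypothetical illustrations):

```python
def savings(category_spend: float, pct: float) -> float:
    """Absolute savings from a percentage reduction on category spend."""
    return category_spend * pct

# 2% on a $100M strategic category vs 10% on a $100K tail category
strategic = savings(100_000_000, 0.02)
tail = savings(100_000, 0.10)
print(f"strategic: ${strategic:,.0f}  tail: ${tail:,.0f}")
print(f"ratio: {strategic / tail:.0f}x")
```

Even at one fifth the percentage, the strategic category delivers two hundred times the bottom-line impact.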

Moreover, as the doctor has penned in many posts, Gen-AI is only useful for tasks that ultimately reduce to

  • large document/corpus summarization
  • large document/corpus query
  • language translation (including natural to system and system to natural)
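
For illustration, the third use (natural-to-system translation) reduces to a prompt-and-parse wrapper; a minimal Python sketch, where `call_llm` is a hypothetical stand-in for whatever model endpoint is actually used, and the JSON schema is an invented example:

```python
import json

PROMPT = (
    "Translate the user's request into JSON with keys "
    "'intent', 'item', 'quantity'. Respond with JSON only.\n"
    "Request: {request}"
)

def translate_request(request: str, call_llm) -> dict:
    """Natural language -> standardized system format via an LLM.

    call_llm is a hypothetical stand-in for a model endpoint; the
    JSON parse is the guardrail against free-form model output.
    """
    raw = call_llm(PROMPT.format(request=request))
    try:
        return json.loads(raw)
    except json.JSONDecodeError:
        # Model wandered off format: fail safe, don't guess
        return {"intent": "unknown", "item": None, "quantity": None}

# Canned model response for illustration (no real endpoint is called)
fake_llm = lambda prompt: '{"intent": "purchase", "item": "laptop", "quantity": 5}'
print(translate_request("I need 5 laptops for the new hires", fake_llm))
```

The point of the wrapper is that everything downstream receives a validated system format, never raw model text.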

That’s why the doctor listed so few valid uses in More Valid Uses for Gen-AI … this time IN Procurement!, and why most of those were utterly useless such as:

  • Create meaningless RFPs from random “spec sheets”.
  • Auto-fill your RFPs with vendor-ish data.
  • Generate Kindergarten level summaries of standard reports for the C-Suite.

In other words, on its own, each technology is mostly useless. (But not mostly harmless. On its own, consistently misused, Gen-AI is very harmful. See our other articles for a discussion of that.)

  • Intake is useless on its own because capturing an input is worthless if you can’t do anything with it
  • Orchestration is useless on its own because it’s yet another piece of SaaS you need to maintain that provides no value beyond linking two or more pieces of software together that could be linked directly through their APIs (after all, orchestration couldn’t link the software in the first place if the software didn’t have APIs)
  • Gen-AI is mostly useless on its own as most of its valid uses are in CLM or RFP query (not creation!), which is only a small part of the S2P cycle

However, if you put it all together, and do it right, the whole may be more than the sum of its parts.

If it’s all expertly glued together:

  • Gen-AI creates a natural language interface where a user can make any type of request, not just a purchase request, that is translated to a standardized system format
  • Intake can process those formats, ensure completeness (relative to the needs of the different enterprise applications and modules that are integrated), send complete requests to the orchestration module, get back the responses, and feed them through the Gen-AI interface to translate them to natural language before being fed back to the user
  • Orchestration links all the applications in a way that directs the request to the right application, or application chain, ensures it gets properly processed and executed and ensures the right results get returned to the right applications in the chain and, ultimately, the user … providing, of course, it’s enterprise wide back-office orchestration, NOT just Procurement!
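
A minimal sketch of how these three pieces might fit together (the class, field, and application names are hypothetical illustrations, not any vendor’s actual design):

```python
from dataclasses import dataclass, field

@dataclass
class Request:
    kind: str        # e.g. "purchase", "access" -- any type of request, not just purchases
    payload: dict    # the standardized system format produced by the Gen-AI front end
    missing: list = field(default_factory=list)

# Hypothetical completeness rules: what the integrated enterprise apps need
REQUIRED = {"purchase": ["item", "quantity", "cost_center"]}

def intake(req: Request) -> Request:
    """Intake: ensure completeness relative to downstream application needs."""
    req.missing = [f for f in REQUIRED.get(req.kind, []) if f not in req.payload]
    return req

# Hypothetical application registry: enterprise-wide, not just Procurement
ROUTES = {"purchase": "eprocurement", "access": "itsm"}

def orchestrate(req: Request) -> str:
    """Orchestration: direct a complete request to the right application (chain)."""
    if req.missing:
        return f"rejected: missing {req.missing}"
    return f"routed to {ROUTES.get(req.kind, 'helpdesk')}"

req = intake(Request("purchase", {"item": "laptop", "quantity": 5, "cost_center": "R&D"}))
print(orchestrate(req))
```

The response would then flow back through the Gen-AI layer to be translated into natural language for the user.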

Which means that the only way any of these players are going to survive is if orchestration gobbles it all up AND does it right.

Gen-AI Won’t Work For Procurement … And Neither Will Agentric AI if the foundation is Gen-AI!

Right now every vendor is pushing “AI”, and the vast majority of that “AI” they are pushing is a Gen-AI LLM, and often that is just a wrapper of a third party Gen-AI LLM, like Chat-GPT (which only the French know how to pronounce properly).

And they are pushing this as a cure-all for all your procurement ills. It’s the new magic elixir. The new panacea. But, in reality, it’s the ultimate silicon snake oil, because it almost works. And it makes you feel really good when you use it. In medical terms, it’s not a treatment, it’s a psychedelic that takes all your pain away (until it wears off that is). But, just like the spoonfuls of LSD that allowed Bender to become the Iron Chef, it will only last long enough for the vendor to win the contract from you, and then it will start to fade. Until it fades completely when you need it most and fails you utterly when you need to figure out how to deal with a border closing that just happened, a critical raw material shortage due to an unexpected natural disaster, or a trade war no one saw (but should have seen) coming.

This is because, as we keep telling you, Gen-AI was built as a predictor technology: it predicts what block of text, in natural language, should follow an existing block of text (using chain-of-compute), based on training across a very large corpus of existing documents. It’s no more, no less. That’s why it’s only good for tasks that can be reduced to large document search and summarization. (And natural language translation tasks, because it understands basic semantics and can easily be trained to translate to and from any machine language you train it on.)

However, this doesn’t help you with any task that requires actual computation! It’s not analytical data processing, it’s not optimization, and it’s definitely not advanced machine learning for advanced mathematical pattern detection. These are the majority of your tasks and the tasks you need to do to analyze a situation. Buys should be based on the lowest total cost of ownership at the maximum acceptable risk level. Sales predictions, and thus demand, should be based on tried and true mathematical trends, not hunches or market hype. Basic invoice processing should be against business rules for validation, approval, and payment, and that should be primarily based on rules-based automation.
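
For example, rules-based invoice validation is plain business logic that needs no Gen-AI at all; a minimal sketch with hypothetical three-way-match rules and an illustrative tolerance:

```python
def validate_invoice(invoice: dict, po: dict, tolerance: float = 0.02) -> list:
    """Return the list of rule violations (an empty list means OK to approve)."""
    errors = []
    if invoice["po_number"] != po["po_number"]:
        errors.append("PO number mismatch")
    if invoice["quantity"] > po["quantity"]:
        errors.append("billed quantity exceeds ordered quantity")
    if invoice["amount"] > po["amount"] * (1 + tolerance):
        errors.append("amount exceeds PO amount plus tolerance")
    return errors

po = {"po_number": "PO-123", "quantity": 10, "amount": 1000.00}
print(validate_invoice({"po_number": "PO-123", "quantity": 10, "amount": 1010.00}, po))  # within tolerance
print(validate_invoice({"po_number": "PO-123", "quantity": 12, "amount": 1200.00}, po))  # two violations
```

Deterministic rules like these give the same answer every time, which is exactly what you want for validation, approval, and payment.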

Note that none of these core technologies you need to solve the majority of your problems are AI, as we pointed out in our recent article that said you don’t need Gen-AI to revolutionize procurement and supply chain management. Not to say that these technologies can’t be enhanced by the right application of AI — for example, AI could predict the optimization paths most likely to arrive at the optimal answer, the right curve fitting algorithms to match the trend lines, and the right outlier analysis to identify missing, off, or fraudulent information.
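
For instance, the outlier analysis just mentioned can start as tried-and-true statistics; a minimal z-score sketch in pure Python (the threshold and price data are illustrative):

```python
from statistics import mean, stdev

def zscore_outliers(values, threshold=2.0):
    """Flag values more than `threshold` standard deviations from the mean."""
    if len(set(values)) < 2:
        return []  # no spread, nothing to flag
    mu, sigma = mean(values), stdev(values)
    return [v for v in values if abs(v - mu) / sigma > threshold]

prices = [101, 99, 100, 102, 98, 100, 101, 500]  # 500 looks off, or fraudulent
print(zscore_outliers(prices))
```

In practice you would pick the detection method (z-score, IQR, model-based) to fit the data, which is precisely where a well-applied ML layer can help.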

Real solutions come from real tried-and-true AI technology developed over years, or decades, that was designed to solve a specific type of problem, not generic text processing technology that was not designed for the problem, has no understanding of the problem, and will make stuff up in an attempt to solve the problem (which is referred to as a hallucination, but is not a bug, but a core feature of Gen-AI / LLM technology).

This is also why Agentric AI built on Gen-AI won’t work — you can’t automatically build an RPA sequence from a chain of compute that could be completely hallucinatory, and you certainly can’t rely on it to solve your problem.

This doesn’t mean there isn’t a use for Gen-AI: it can be trained to be a natural language interface to these other tools that will work reliably the vast majority of the time (say 95%+ if trained over time). But the use is definitely NOT what you are being promised.

Scientists Take Us One Step Closer to AI-Led Destruction

Is it just me or does it take a special kind of idiot to watch Terminator 2 and say to himself “unstoppable shape-shifting robots would be so cool!”?

Scientists Just Created Shape-Shifting Robots That Flow Like Liquid and Harden Like Steel

Especially when Hugo Drax is executing his world domination plan to ensure his AI-backed SkyNet project gains global dominance!

I guess they figured a certain President wasn’t getting the job done fast enough …

Why Does Everyone Believe the AI Hype?

the doctor used to love AI. He spent a decade and a half actively promoting it (and wrote two extensive series on The Complete AI in Procurement, Sourcing, and Supplier Management), until Gen-AI and all the false promises bundled with it came along. (Neither it, nor its successor, will be your saviour. It’s not intelligent, not general purpose, and unless your problem ultimately reduces to large document summarization and query, will not solve your problem. Any claims to the contrary are, and, for the foreseeable future will continue to be, false.)

Recently, THE REVELATOR, who is also becoming a little jaded, decided to ask Why does everyone believe the AI hype? (Source)

Of course, the doctor needed to answer.

Why did the American public believe the administration would be any different this time?
(For that matter, why does any first world nation believe their newly elected administration will be any different this time?)

Why does the public at large still believe in the lies that have been fed to them since they were born?
(Primarily American, but Canadians are doing their best to learn from their neighbours!)

Because there is no better producer, packager, and purveyor of Bullsh!t than American Media!
(Although we try, we Canadians can only dream of producing BS that good!)

That’s what the Big AI players use to their advantage
(with their hundreds of millions to billions of dollars and their huge marketing budgets)!

The Procurement Dynamo put it best in a recent comment when he said that we are wired as humans to be lazy, and it’s easier to just believe what is being pumped out to us on all the digital channels we consume every day than to do our own research, understand the half truths being fed to us, and draw our own conclusions (especially when Math, where the US is now 35th in the OECD PISA rankings, is concerned).

But it doesn’t stop there, not only are we plagued with:

Laziness: Overworked workers being tasked with the nigh-impossible on a daily basis with limited TQ don’t want to design systems, especially when that’s what the vendor is being paid for.

We also have to deal with greed and stupidity making matters worse.

Greed: Investors and rich big company CEOs don’t want workers who want to be paid fair wages, as then they have to deal with worker’s rights (for now at least, but maybe not for much longer in the USA at the rate the government is being dismantled), maternity and sick leave, paid overtime, etc. when they are being promised a software robot that will work 24/7/365 without complaint for a “small” annual fee.

Stupidity: The zealots at many vendors have adopted tech as their religion and messiah and refuse to learn the domain and how to solve a problem with a human centric point of view, believing that, with just a little more development, the tech will magically get there.

And this is why we have so many people blinded by the hype and so many people buying into it.

This isn’t to say that there aren’t real vendors with real AI-backed technology that actually works (because there are, such as ForeStreet, which we just covered). It just means that unless you find one of these vendors (which are now in the minority, but SI WILL cover these vendors as it identifies them), hire intelligent, hard working people who WANT to solve problems, and give them the necessary resources to identify these vendors and properly implement and configure these solutions, you’re not going to get results. Just false promises.

Now that you have the unfiltered answer, do you need to keep asking the question? 😉

Blind AI “Agents” Will Only Worsen Any Situation!

THE PROPHET recently posted that The AI Overton Window is Open in Government Procurement and that makes the doctor scared for you. The damage they can do in private situations is bad. The damage they can do in public situations is much, much worse.

The following obvious outcomes that the doctor already noted in his rebuttal are just the tip of the iceberg:

  • biased awards
  • overpriced awards to holdings of the billionaires that provide the tech
  • non-compliant awards because submitting a form is NOT verifying quality
  • billions lost to fraud as foreign bad actors use their AI to game our AI and direct Billions to accounts that will quickly be emptied to offshore accounts and then untraceable crypto!

For those of you that haven’t figured it out yet, all AI is biased, as it is trained to repeat the patterns found in the training data provided, and all of that data is biased to existing providers and the decision patterns of biased award judges who find sneaky ways to direct contracts to the recipients they want to give the business to (whether or not they are the best value for the taxpayer’s money). If your President and his DOGE are telling you the truth, fraud (and thus bias) is rampant, and “AI” will just perpetuate that.

Since there are only a few players who are big enough to handle the data volumes and computational workload that would be required to support the US Federal Government, they have an effective monopoly. As a result, they can charge pretty much whatever they want and get it. (And we have already seen how overpriced this technology is. Total OpenAI funding to date: 17.9B [Tracxn] compared to total DeepSeek funding to date: 1B [Pitchbook]. The DeepSeek model is more or less as good as the OpenAI model at less than 1/18th the cost (although there is the issue of the controlling company and country), and the next iteration will probably be built for under 100M. Just don’t expect any improvements in performance: there are inherent limitations in the underlying model/technology they keep building on, we don’t have anything better, and given that it is usually decades between real breakthroughs in research, we likely won’t have anything better until the late 2030s.) The end result is that the government will probably end up paying twenty (20) to one hundred (100) times what the technology itself is worth because of the lock the big players have on the US market.

Applications can only process the data given to them; they cannot confirm its validity. All a supplier has to do is lie on a form, or get a third party to (electronically) sign a false form (with a small bribe), and, voila, the AI thinks the supplier meets all the requirements. As long as the supplier is the lowest cost and/or highest score on other metrics (which can be achieved through the submission of false data that matches what the algorithm is looking for), it gets the award. And the taxpayer suffers.

Taking this one step further, if awards come with an up-front payment, all a foreign actor has to do is register a fake front company on American soil, bribe third parties to help it submit a lot of false forms, game the system, get the award, get the up-front payment, wire it to an untraceable offshore account, and disappear. If that up-front payment is millions of US dollars, it’s easy money. Now, if the government is smart and insists that there is no payment until delivery, the trick still works wherever cheap knockoffs can be produced at a fraction of the price (without the reliability, lifespan, etc.): after a few large shipments are delivered, and before the poor quality products break down, the supplier can suddenly close shop and disappear. And if even that doesn’t work, foreign actors who are training their AI to generate realistic-looking data to be fed into America’s AI need only fake a delivery receipt to accompany an invoice for goods not delivered, collect that first payment, and then disappear. This is just the tip of the iceberg of obvious fraud opportunities (and every worst case hypothetical situation in your espionage movies and books will come to pass, and more).

In other words, only bad things will happen if you try to deploy AI “agents” to do a human’s job!

We need to stop this ridiculous focus on AI Agents and instead focus on AI helpers. We need to end these bullsh!t claims that we are going to achieve full artificial intelligence and instead focus on augmented intelligence and build tools that enable white collar workers to become super human in their jobs and do the work that used to take ten people. Because that IS possible today (and has been for a while, especially since that was the route we were going down before “chat, j’ai pété” came along with its false promises of artificial intelligence, reasoning, etc.).

All we have to do is, for every problem, apply our human intelligence (HI) to design, or redesign, the process to solve it so that all of the tactical data processing (the thunking the machines can do a Billion times better than us) is separated from the strategic decision making (the thinking the machine cannot do). The machine then automatically does all of the data processing and thunking that needs to be done at each step so that we have the knowledge (processed data) we need to make the right decision, and be confident in it, helped by a well designed interface that allows us to quickly absorb the summary, identify factors that might change the typical decision, and dive into the knowledge and underlying data.

In other words, we shouldn’t be doing the same analysis and running the same reports over and over again; the machine should automate all of that [as well as various outlier analyses] and present us with the summary, whether it is typical or atypical, the decisions and actions we typically make in similar situations, and the results typically achieved. In many cases, a well-designed process and properly encoded knowledge will result in the machine making the right suggestion, and all we will have to do is verify that suggestion. When it’s wrong, the system should still have the appropriate decision encoded as an alternate the majority of the time, and we should just have to select that. And in the exceptional situation we never thought of, or for which it has no data, we will still be able to alter the process, encode our reasoning, and recode the system to suggest the right action the next time the situation arises, meaning that we will not only start off being ten times as productive, but get more productive over time.
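
The verify-a-suggestion loop described above might be sketched as follows (the playbook, situations, and actions are all hypothetical illustrations of encoded knowledge):

```python
def suggest_action(summary: dict, playbook: dict) -> tuple:
    """Machine does the thunking: match the processed-data summary against
    encoded knowledge and return (suggestion, alternates) for human review."""
    key = (summary["situation"], summary["severity"])
    options = playbook.get(key, ["escalate to human"])  # unknown situation -> a human thinks
    return options[0], options[1:]

# Hypothetical encoded knowledge: typical decisions for typical situations,
# with the alternates a human most often selects when the default is wrong
playbook = {
    ("supplier late", "low"):  ["send reminder", "expedite partial shipment"],
    ("supplier late", "high"): ["activate backup supplier", "expedite air freight"],
}

suggestion, alternates = suggest_action(
    {"situation": "supplier late", "severity": "high"}, playbook
)
print(suggestion)  # the human verifies, picks an alternate, or recodes the playbook
```

When the human encodes a new situation into the playbook, the system suggests the right action the next time it arises, which is how productivity compounds over time.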

The only real constraints we have are on the data we can leverage due to

  1. the lack of good, clean, verified data (and AI will NOT fix that) in most organizations (private and public)
  2. the lack of proper tools to do an office job in the modern age!

For example, if you give me the right modelling, analytics, optimization, and RPA tools, I can leverage ALL the data at my disposal to arrive at the optimal decision (given the time to do so). But how many Procurement personnel have access to all of these tools? Moreover, what percentage of those personnel would know how to fully leverage those tools (considering you need advanced degrees in mathematics and computer science to do so today)? And what percentage still would have the time to do so? The percentage can be expressed by a single digit in industry (if you round up). It’s worse in government! But properly designed tools that embed best practice and human intelligence on top of these tools, and bring the knowledge requirements down to what an average Procurement professional has, would allow them to be ten times as productive in their analysis and make the right decision every time.
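
For example, the “lowest total cost of ownership at the maximum acceptable risk level” decision discussed earlier reduces, in its simplest form, to a constrained selection; a minimal sketch with hypothetical bids and risk scores:

```python
def best_award(bids, max_risk):
    """Pick the lowest total-cost-of-ownership bid within the risk ceiling."""
    eligible = [b for b in bids if b["risk"] <= max_risk]
    return min(eligible, key=lambda b: b["tco"]) if eligible else None

bids = [  # hypothetical supplier bids: TCO in dollars, risk score on a 0-1 scale
    {"supplier": "A", "tco": 950_000, "risk": 0.6},
    {"supplier": "B", "tco": 990_000, "risk": 0.2},
    {"supplier": "C", "tco": 920_000, "risk": 0.8},
]
print(best_award(bids, max_risk=0.5))  # not the cheapest bid, the cheapest acceptable one
```

A real award decision adds many more constraints and a real optimization engine, but a tool that embeds even this much best practice stops the “lowest price wins” mistake cold.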

Moreover, the compliance slowdown that people are grumbling about is due to lack of good tools (RPA platforms that walk the users through the process) and people to do the work that HAS to be done manually. (And AI is NOT going to fix the fact that health, safety, quality, and oversight inspectors, where you don’t have enough qualified people to begin with, can be fired in droves and further increase backlogs.)

And guess what? We still handle unstructured data better than AI as some of the BS it continues to spit out in what they call “edge cases” is astounding! (the doctor really hopes the maverick doesn’t go mad in his conversations with DeepSeek — it almost drove the doctor mad just reading them!)

In other words, the core of any business function MUST continue to be HUMANs applying HUMAN INTELLIGENCE (HI!), and modern technology must AUGMENT (not replace) every function. Properly (human) designed and (human) implemented systems that use the right Augmented Intelligence technology (not the hype of the day) to supercharge a human-driven process can make the human easily ten times more efficient in some cases. (But left to their own devices, interacting AI agents will, more-or-less, as Meta found out in multiple forays last decade and this decade, self destruct.)