Category Archives: AI

You Really Don’t Need to Read Another State of Procurement Report for Five Years!

Just read this 34-part series and you can ignore the 10+ surveys / studies / reports that will be collectively released by every major ProcureTech consultancy and analyst firm this year (which will likely include, but not be limited to: Capgemini, Deloitte, Everest Group, EY, Hackett, McKinsey, PwC, and many, many more)! We say this with certainty because we reviewed all of the reports they put out over the last 5 years, and the vast majority of the content was the same year after year and firm after firm. You can practically count on any survey/study that tackles barriers, risks, and concerns to overlap at least 80% with the following, and on these being the most significant barriers, risks, and concerns. In fact, in five years, only one concern will have changed, and that's the tech-du-jour, because that's all that was really different between 2025, 2020, 2015, etc.

You’re welcome!

You Don’t Need To Read Another State of Procurement Study for the Next 5 Years!

Top Barriers to Success

Breaking Down The Major Procurement Risks with High or Moderate Impact

Primary Concerns for Procurement Leaders

BONUS

If You Think You’re Ready for AI, You’re Not Ready for AI!

All of the Big Analyst Firms, Consultancies, and Vendors are telling you that you need AI, that it’s the only technology that’s going to allow you to get with the digital times, and that everyone else is using it, so you should too.

But the reality is that you probably don’t need AI, it’s not the only technology that can bring you up to date in the digital age, and while many people are using it, 94%/95% are FAILING.

The only hope you have to succeed is to be brutally honest, to ADMIT what you don’t know, that you’re only chasing AI because of FOMO and FUD, and that real progress has always been methodical and one step at a time.

More specifically, from where you are starting, not from where the market pretends you are.

The only organizations that have been successful at AI are those that:

  • honestly assess where they are today
  • determine their readiness for change
  • identify the most time-consuming processes they are willing to change
  • identify the appropriate automation one process at a time, which is often just simple workflows/RPA/built-in automations in existing platforms, and other times ML/A-RPA
  • monitor and tweak those automations until they run smoothly and reliably
  • use modern meta-workflows/A-RPA/AI to connect the individual automations together where, and only where, it makes sense
  • only slap guard-railed semantic tech / focused SLMs on top to provide a natural language interface that processes inputs and outputs fixed action requests where appropriate
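The roadmap above can be sketched in a few lines of code. This is a minimal, hypothetical illustration (all function and field names are invented for the example): each process gets its own small, independently proven automation, and a meta-workflow only chains them together afterwards.

```python
# Hypothetical sketch: automate one process at a time as an
# independent step, then connect the proven steps with a
# meta-workflow only where it makes sense.

def normalize_invoice(inv):
    """Step 1: a simple built-in-style automation (field cleanup)."""
    return {k.strip().lower(): v for k, v in inv.items()}

def flag_exceptions(inv, limit=10_000):
    """Step 2: a rule-based check; large amounts go to a human."""
    inv["needs_review"] = inv.get("amount", 0) > limit
    return inv

def meta_workflow(record, steps):
    """Chain individually monitored-and-tweaked automations, in order."""
    for step in steps:
        record = step(record)
    return record

result = meta_workflow({" Amount ": 12_500},
                       [normalize_invoice, flag_exceptions])
print(result)   # {'amount': 12500, 'needs_review': True}
```

The point of the sketch is the shape, not the steps: each automation stands alone, runs reliably on its own, and the connecting layer is trivially simple, which is exactly the opposite of a big-bang project.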

Successful companies don’t go all in on unproven tech, don’t try to do big bang projects (as those only result in big failures, and sometimes the greatest supply chain disasters of all time), and definitely don’t take the advice of the BIG X that promote multi-year modernization mega-projects with no successes they can point to.

In other words, the only companies that have succeeded with AI (the 5% to 6%, depending on whether you’d rather go with McKinsey or MIT) are those that learned from the mega-ERP disasters of yesteryear and did a sequence of successive mini-projects that each built on the last and slowly ramped up to mega success.

In other words, they understand that you have to crawl before you can walk and walk before you can run. And if you can’t even crawl, you’re not ready to try and run at the Olympics, which is the level of tech maturity you have to be at to HOPE to succeed with AI.

Primary ProcureTech Concern: (Gen-)AI Integration/Impact

The non-stop hype coming straight from the A.S.S.H.O.L.E. is continuing to cause market confusion and utter chaos.

Why?

Gen-AI is on the concerns list because it’s the tech-du-jour. Five years ago it was (advanced) (predictive) analytics. Ten years ago it was the fluffy magic cloud. Fifteen years ago it was SaaS. Twenty years ago it was the World Wide Web. And so on.

But not one of these technologies, all sold as the panacea that would solve all your woes, actually solved your problems, because all of the promised capabilities were just silicon snake oil, and Gen-AI is no different. The hype cycle may be slowly coming to an end, but it will quickly be replaced by Some-BS-World-Model-Adjacent-Agentic-AGI that will be sold as the AI that finally solves all your problems but, in reality, still won’t be anything close. (If narrowly applied in the right domains, where the client has sufficient data, it might actually work quite well, but it won’t do anything reliably in general, and the failure rate will still be 80%+, which is the average tech failure rate for the last 25 years. And SI knows, because the doctor has been following tech failure for over 25 years.)

Not only is Gen-AI no different from the previously over-hyped tech-du-jour offerings of the last two decades, but with a failure rate of 94%+ (McKinsey) to 95% (MIT), it’s arguably the worst yet! And, as per our predictions, it’s not going to get much better. If the failure rate gets as low as 90% this year, it will be the closest thing to a tech miracle that we can conceivably get. Like every other tech before it, Gen-AI will only solve a relatively small set of problems.

Just like

  • The Web only solves remote connectivity
  • SaaS only allows solutions to be built in the cloud
  • Analytics only provides insight where you have the right, sufficient, data and the right algorithms to get useful insights
  • Gen-AI is just a next-gen probabilistic deep neural net that often does
    • better semantic processing
    • better search
    • better summarization
    • better potential pattern identification (but only if you can learn how to prompt it to do so and only if you have it trained on the right data subsets, not the entire web which is now more than half AI slop)

    but does so at the additional expense of

    • hallucinations
    • intentional falsehoods
    • thoughtless reinforcement
    • cognitive atrophy
    • etc. etc. etc.

As a result of this, as far as I’m concerned, the AI bubble can’t burst fast enough! It’s all hype, buzzwords, and hallucinatory bullcr@p. And, frankly, any (claims of) agentic AI built on it are fraudulent. (After all, we’ve already seen what happens when you let AI run your vending machine. The last thing you want is it buying for you!)

Especially when, on top of hallucinations, we have plenty of documented examples of the intentional falsehoods, thoughtless reinforcement, and cognitive atrophy listed above.

We’ve said many times that LLMs are not helpful and ChatGPT (in particular) is not your friend, that if you have a headache you definitely shouldn’t take an aspirin or query an LLM, and that, frankly, you’d be better off with a drunken plagiarist intern because that’s the best case result from an LLM. Most are worse.

Frankly, it’s time to stop falling for the artificial intimidation, fight back against AI Slop, and remember cutting edge tech is NOT defined by the C-Suite or the incessant marketing from the A.S.S.H.O.L.E. that is targeting the C-Suite on a daily basis!

Impact Potential

Huge! Companies will continue to waste millions individually, and hundreds of billions collectively, on next-generation tech that, with a probability of 90%+, will generate a (huge) loss.

Major Challenges/Risks

The major challenge is not the tech; it’s helping companies realize that they’re probably not ready for the tech. The reason the tech failure rate has averaged 80%+ over the last twenty years is that consultancies keep promoting, vendors keep selling, and companies keep buying advanced leading-edge tech they are not ready for. The reality is that unless you are in the top 10% of tech buyers, already on the latest tech, and have sufficiently mastered that tech, you are not ready for Gen-AI. (Gen-AI should not have left the research lab when it did and, in all honesty, should still be in the research lab: it still only works in a small number of well-defined scenarios, and it is so bad that every year a couple of AI founders turn away from AI because of it. Yann LeCun, for example, walked away from Meta and LLMs and reverted to world models, which can be thought of as next-generation (Semantic) Web 3.0 models augmented with [deterministic and dependable] automated reasoning and, hopefully, very little dependence on hallucinatory probabilistic models [beyond what’s needed to semantically parse an input].)

The only place you should be using Gen-AI is where a non-Gen-AI solution doesn’t exist, the task is well defined, and you can build a custom in-house model that works reasonably well in the majority of situations and can be implemented with guard-rails. But that’s something you can only do if you have a high TQ (Technical Quotient) and have mastered last-generation tech.

Right now, you should be tripling down on E-MDMA and Advanced Analytics, as this tech has improved to the point where it can allow you to optimize processes, spending, schedules, and anything else you can think of with high accuracy and low cost, using only basic analytics skills, since so much comes pre-packaged and the visualizations and drill-downs are much more intuitive than they were a decade ago. Plus, these firms have figured out how to use multiple forms of AI to classify your data with high accuracy, minimizing the work required by you to fix errors and reclassify to your preferred schemas. It’s literally drag-and-drop compared to the complex rule-building that used to be required.

In addition, you should be looking for the mature A-RPA (Advanced Robotic Process Automation) solutions that are highly customizable and capable of “self-learning”: the parameters that trigger exceptions adjust over time based upon user acceptance or rejection of recommended actions, and the platform automatically encodes new processing rules based upon the users’ actions on an exception. Much better than Artificial Idiocy that decides everything based on hallucinations.
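To make the “self-learning” A-RPA idea concrete, here is a minimal sketch (all names hypothetical, not any vendor’s actual API): the tolerance that triggers a human-review exception widens when users keep accepting the recommended action, and tightens when they keep rejecting it.

```python
# Hypothetical sketch of self-adjusting exception parameters in an
# A-RPA-style platform: user accept/reject feedback moves the
# threshold that decides what gets escalated to a human.

class ExceptionThreshold:
    def __init__(self, tolerance=0.05, step=0.01,
                 min_tol=0.01, max_tol=0.25):
        self.tolerance = tolerance   # current deviation tolerance
        self.step = step             # adjustment per feedback event
        self.min_tol = min_tol       # never fully stop escalating
        self.max_tol = max_tol       # never fully stop automating

    def is_exception(self, deviation):
        """Flag for human review when deviation exceeds tolerance."""
        return abs(deviation) > self.tolerance

    def record_feedback(self, accepted):
        """Learn from the user's accept/reject of the recommendation."""
        if accepted:   # users keep approving -> automate more
            self.tolerance = min(self.tolerance + self.step, self.max_tol)
        else:          # users keep rejecting -> escalate more
            self.tolerance = max(self.tolerance - self.step, self.min_tol)

t = ExceptionThreshold()
for _ in range(5):          # five straight user approvals
    t.record_feedback(accepted=True)
print(t.is_exception(0.08))  # an 8% deviation is no longer flagged: False
```

Note that this is deterministic and auditable rule adjustment, not a probabilistic model, which is exactly why it is reliable enough to run unattended.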

THE FINAL WORD

If you haven’t mastered all of the tech that existed before Gen-AI, including classical machine learning AI that has been studied, optimized, and proven to work for over a decade, you’re not ready for Gen-AI, should treat it like the drug it is (as it does more damage to your cognitive abilities than many illegal drugs), and JUST SAY NO!

There is NO Infinite Compression – The Latest DeepSeek Paper is BullCr@p!

Every decade or so, some idiots who never studied Huffman coding or Information Theory believe they have cracked the problem of infinite compression, and this linked paper is just the latest example of this lunacy. I really hope this was a joke paper authored by AI because it’s all bullcr@p!

On average, a text token in an LLM should require 20 bits or less (since 17 bits already support a 131,000+ word vocabulary), while a vision token can require 16,384 bits (1,024 dimensions at 16 bits each), because it takes a lot of bits to represent the pixelation of a square in a 2-D image! That means you can store about 820 text tokens in the space it takes to store one vision token. Or, you can store the entire text (lossless) in 48K bits, versus the 4M bits it would take to store the 250 (very lossily compressed) vision tokens required in the paper. A LOT of people can’t do basic math if this is being praised as revolutionary!
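The arithmetic is easy to check yourself, under the stated assumptions (20 bits per text token, 1,024-dimensional vision tokens at 16 bits per dimension):

```python
# Back-of-the-envelope check of the token-storage arithmetic.
# Assumptions: ~20 bits per text token (2**17 = 131,072 already
# covers a 131K vocabulary) and 16 bits per dimension for a
# 1,024-dimensional vision token.

TEXT_TOKEN_BITS = 20
VISION_TOKEN_BITS = 1024 * 16               # 16,384 bits per vision token

# Text tokens that fit in the space of ONE vision token:
print(VISION_TOKEN_BITS // TEXT_TOKEN_BITS)  # 819 (~820)

# 250 vision tokens vs. the same document as ~2,400 text tokens:
print(250 * VISION_TOKEN_BITS)               # 4,096,000 bits (~4M)
print(2400 * TEXT_TOKEN_BITS)                # 48,000 bits (~48K)
```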

Moreover, the raw text, which maintains the full context if the tokens are kept in order, is not only fully lossless but can be compressed using a modified Lempel-Ziv algorithm to an average of less than 2 bits per character (achieving up to an 80% compression rate). Given that the average word in typical text is 5 characters, plus one character for the space, 2,500 words would be 15,000 characters, storable in 30,000 bits, or under 4 KB. In other words, this paper is trying to pass off a more-than-HUNDRED-FOLD increase in space requirements (4M bits versus 30K bits) as a space saving! Pure lunacy!
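The Lempel-Ziv claim is also trivial to sanity-check with Python's standard library. zlib's DEFLATE is LZ77 plus Huffman coding; the repetitive sample below compresses far better than typical prose (which usually lands in the 60-80% range), but the point, that text compression is a solved, lossless, decades-old problem, stands either way.

```python
import zlib

# Demonstrate lossless LZ-family compression of plain English text.
text = ("the quick brown fox jumps over the lazy dog " * 500).encode()
compressed = zlib.compress(text, level=9)

savings = 1 - len(compressed) / len(text)
print(f"{len(text)} bytes -> {len(compressed)} bytes ({savings:.0%} saved)")

# And, unlike lossy vision-token "compression", it round-trips exactly:
assert zlib.decompress(compressed) == text
```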

In other words, if someone is claiming something that sounds too good to be true, it is! Don’t fall for it, or for the sure-to-follow claims that DeepSeek OCR is revolutionary because of this. (Since every document is different, you can’t even begin to estimate the true information loss from a 90% vision token reduction!)

CEOs are hugely expensive. Why not automate them?

As per Will Dunn, as published on The New Statesman

Especially when hiring a CEO who doesn’t understand what makes the business profitable loses Billions:

Starbucks Loses 30 Billion

and when hiring a CEO who doesn’t understand that some costs are so critical to the company’s product that they can never be cut, no matter how high they look on the spreadsheet, because the net result is not only product failure, but the grounding/banning of your product and expensive lawsuits that cost Billions:

Boeing lost 11.8 Billion in 2024

After all, if we’re hiring CEOs without any relevant experience, actual business intelligence, or even logic, then why not use Artificial Idiocy? It’s not like the occasional hallucinations will be any worse than an average CEO’s these days (who believes investing Billions in empty promises is a good idea) … and the actual compute costs, even if in the six figures, will still be a tenth (or [much {much}] less) of what a CEO salary and benefits package actually costs!

So if you insist on creating fictional “AI Employees”, why not kick off 2026 by starting with a job that, sadly, Gen-AI agents can actually do?