Monthly Archives: August 2025

There are NO Simple Answers Because CONTEXT MATTERS!

If you’ve been following along, over the past few months we’ve had to complain about a number of recurring issues.

While these may seem like completely different situations that have to be (continually) (re)addressed on their own merits, they really aren’t. They are all interconnected (and taken together they help to define the 88%+ technical project failure rate in our ProcureTech space), and they all share the same issue at their core: they all try to oversimplify, which is something you cannot do in any field of technology because CONTEXT MATTERS!

Analyst Firm and Influencer Maps and flashy graphical comparisons on a few randomly selected “data points” are useless because context matters. You can’t create a shortlist of potential solutions without understanding, at a minimum:

  • who the company is and what the department does
  • the platform and skill topography
  • the problems that the existing topography is not solving

Because sourcing is not sourcing is not sourcing, procurement is not procurement is not procurement, and analytics is not analytics is not analytics. Indirect Finished Goods vs. Direct Materials vs. Services are sourced differently; catalogs vs. one-time buys vs. on-contract inventory replenishments are handled completely differently; and there are reports vs. drill-down cubes vs. data federation, and each brings different insights. API, interface, and integration requirements differ across platforms; core vs. nice-to-have shifts based upon what’s in the ERP, AP, and SCP systems. And the maturity level has a great impact on what will, vs. will not, be used.

It’s NOT SIMPLE! And anytime someone says “keep it simple, just give them a list”, it means they don’t understand the reality of the situation: that, while it is not complex or hard (and yes, Procurement can be really easy), it’s NOT simple. Context is needed to make the right recommendations and the right decisions.

There is NO Autonomous AI Agent and anyone peddling one is selling the new silicon snake oil. (First of all, remember that there’s no such thing as Artificial Intelligence, and it’s still the case that since a computer can’t take responsibility for a critical decision, it should NOT make one.) For an agent to be autonomous, it would need to have, or be able to retrieve, all the data it needs; connect with all relevant internal and external systems; get information that is not on the web through traditional means (ask people); separate truth from lies; have the ability to adapt to any situation; and have the intelligence to know when a decision can be made and when it can’t. Not only does no agent have the intelligence, but no software agent in existence meets the rest of these requirements either. (The best that can be created is a support agent that does all of the data processing, standard analysis, workflow automation, and decision suggestion using Augmented Intelligence, allowing it to act as a useful personal assistant that multiplies your productivity. But ONLY if the agent has the right context — and guess what, YOU have to work with a partner to custom build that agent with YOUR context. It won’t be delivered out of the box and magically trained just by feeding it your data. The myth of emergence has already been debunked. Please stop falling for it.)

There is no Best-In-Class process or methodology guaranteed to work for you. Unless you are lifting it from a company in the same business buying and selling the same products for the same consumer base that is structured the same way and more-or-less does the same thing as you, that best-in-class process or methodology may not even be close to what you need (and no amount of adaptation will get you there). Best-in-Class always works within a context (which includes your maturity level as an organization), and until that is understood, no consultant or analyst can make the right recommendations for where you are today.

So next time someone says it’s simple, and that their map, chart, or infographic will solve all your problems — delete it, because unless they also take the time to qualify the context in which that map, chart, or infographic applies, it is worse than useless for you (and doubly so if it presents a dangerous and dysfunctional dashboard) and may even cause organizational damage if blindly followed.

Finally, just remember, just because it ain’t simple, that doesn’t mean it ain’t easy. It just requires a bit of brainpower and effort to get it right, and, moreover, an amount thereof that is well within our capability!

Why Are We Inundated By AI Slop?

And I don’t just mean the slop produced by AI, which we should all know by now is 100% slop, but all of the human and “expert” guidance, produced or co-produced by real people, that isn’t much better!

In one way the answer is simple: there is a considerable lack of knowledge and understanding about AI, even among the firms and practitioners who are touted as, or claim to be, “the experts”. There is a failure both to realize this and to admit it.

But let’s back up. Recently, in response to a Gartner post (screenshots below, because Gartner has a habit of deleting posts where THE REVELATOR asks hard questions or points out major issues), THE REVELATOR asked for my “thoughts” on an infographic that referenced a two-year-old paper. A two-year-old paper that didn’t even mention a number of critical concepts that should have been discussed in reference to the AI capability and tooling breakdown the infographic presented, and all but one of those concepts should have been mentioned if it was a serious evaluation of AI technology at the time.

My thoughts on the matter would be obvious to anyone who’s read more than a handful of my articles, but I decided to step back and assume the real question was not “is this bad” but “why does this keep happening”: why do Gartner, and almost every other analyst and consulting firm (because it’s not just Gartner, so they shouldn’t be singled out), keep producing content that just doesn’t cut it — that doesn’t address the core issues, outline the challenges, discuss the plethora of failures (with an 88% tech project failure rate in the last published study, and indications it could now be as high as 92% in AI), or provide any deep understanding of AI technology and how to differentiate between offerings?

The reason is two-fold. At best, the big firms have only a handful of employees with a real understanding of the technology, but they have

  1. 100 times as many analysts and consultants taking advice on the matter from vendors (who, as we have already told you, have lured big analyst firms astray) and from clients who know even less, and this is the workforce powering
  2. the relentless marketing machine (powered by AI content writers) that believes it has to pump out multiple articles a day to be relevant (even though not one of those articles has an original thought, insight, or suggestion on how to better use this technology, because all AI bots can do is regurgitate someone else’s ideas and content).

The reality is that very few people understand advanced technology, especially new (or recently sexy) advanced technology. To truly understand this technology, you need the equivalent of a PhD — either years studying it in an academic environment or the equivalent number of years studying it in R&D labs or proof-of-concept implementation pilots.

A few years of “prompt engineering” an LLM, or of configuring pre-built scikit-learn models that “work the majority of the time for the use cases they tested”, doesn’t cut it. Not even close!

You need to understand the core algorithms and the fundamental mathematics that underlies them, and that’s not easy. Even classical curve-fitting, nearest-neighbor, clustering, regression, and knowledge-graph techniques can be much more intricate than you think. The complexity intensifies when you migrate to multi-layer (feedback) (deep) neural networks, semantic technology built on ML(F)(D)NNs, and now LLMs, which don’t just use very advanced statistical processing to map an input of a fixed type to an output in a fixed set (that can be computed with mathematical confidence), but map an arbitrary input to a generated output using layered feedback statistical calculations on parts of the input that are statistically stitched together (like Frankenstein’s monster, but worse) to make parts of the output. This means that hallucinations are a core feature of these platforms (as well as behavior that is much, much worse). Furthermore, if you’re trying to put it all together, then, unless you understand the limitations in the interplay between different algorithms and models … good luck. (And, unless you understand the underlying mathematical models and their strengths and limitations, good luck with that too!)
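To make the “fixed output set, with mathematical confidence” distinction concrete, here is a deliberately tiny sketch of a classical nearest-neighbor classifier (the toy data and the vote-fraction confidence measure are illustrative assumptions, not from any particular product or library):

```python
from collections import Counter
from math import dist

# Toy training data: (feature vector, label). Note the labels form a
# FIXED, finite output set, unlike an LLM's open-ended generated output.
TRAINING = [
    ((1.0, 1.0), "low-risk"), ((1.2, 0.9), "low-risk"),
    ((5.0, 5.2), "high-risk"), ((4.8, 5.5), "high-risk"),
]

def knn_classify(point, k=3):
    """Return (label, confidence): the majority label among the k nearest
    neighbors, with confidence = fraction of those neighbors that agree,
    a simple but mathematically well-defined measure."""
    neighbors = sorted(TRAINING, key=lambda t: dist(point, t[0]))[:k]
    votes = Counter(label for _, label in neighbors)
    label, count = votes.most_common(1)[0]
    return label, count / k

label, confidence = knn_classify((1.1, 1.0))
print(label, confidence)  # a label from the fixed set, plus its confidence
```

Even this trivial classifier rewards real understanding: the choice of k, the distance metric, and the feature scaling all change the answer, which is exactly the kind of intricacy the paragraph above is pointing at.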

And this isn’t easy, especially when you need to start asking questions about computability (and decidability).

To put this in perspective: I have an earned PhD in Computer Science (specializing in data structures and computational geometry, though my studies also included late-90s “AI”, covering ML, Expert Systems, and Neural Networks). When you earn one of these degrees, don’t wimp out and try to stick to coding or “software engineering”; take all of the logic and theory courses (cross-listed with Mathematics). At least when I studied, that meant the classics: fundamental algorithms, automata, P vs NP, and computability and decidability. If you do well in these advanced courses, you leave with the nagging feeling that you still don’t really understand what you studied (and were tested on) — and you don’t! For example, it’s not just P vs NP; it’s P vs NP-Hard vs NP-Complete. And P isn’t always P, because if the runtime is n^8, well, that might as well be NP-Hard for practical purposes! And categorization in NP is way harder in practice than it is in theory. And advanced algorithms often perform no better than stupid simple ones, and it takes years to “see” why. And so on.
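To put a number on the n^8 point, here is a back-of-the-envelope sketch (the 10^9 operations per second figure is an illustrative assumption, not a benchmark):

```python
# How long would an n^8 "polynomial-time" algorithm actually take?
# Assumes a machine performing a (generous) 1e9 operations per second.

OPS_PER_SECOND = 1e9
SECONDS_PER_YEAR = 60 * 60 * 24 * 365

def years_to_run(n: int, exponent: int = 8) -> float:
    """Rough wall-clock time, in years, for an n**exponent algorithm."""
    return (n ** exponent) / OPS_PER_SECOND / SECONDS_PER_YEAR

for n in (10, 100, 1000):
    print(f"n = {n:>4}: ~{years_to_run(n):.2e} years")
# At n = 1000 the answer is already tens of millions of years: in P,
# but NP-Hard for all practical purposes.
```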

It takes years to get a grip on and really understand the fundamentals, which is what you need to understand to get a grip on what you can and can’t do with advanced algorithms in the fields of optimization, predictive analytics, and AI — which each take additional years of study, research, development and implementation experience to understand what they can and can’t do and evaluate new developments from technical papers, not marketing BS and fairy tales weaved by master storytellers that would leave PT Barnum in awe.

Script kiddies, “prompt patsies” (they are not prompt engineers; that is utter BS), consultants, and analysts with no formal background in CS or appropriate areas of STEM, and with limited experience beyond installing someone else’s software and doing a few parametric modifications, don’t understand this. Not even close! (And they don’t even have the background to understand where their understanding is [most] limited!) But this is what most of the firms are asking of their consultants and analysts every day, which is why we get so much AI slop that completely misses the point.

You have too many people without the deep background and experience being told that everything they do has to be “AI” (even if they have no clue what it means) because of all of the funding being poured into it, too many more “influencers” (or should I say silicon snake oil peddlers) trying to take advantage of the confusion, not enough deep understanding, and almost no one willing to cut through the noise and say “wait a minute; the AI they are selling is not the AI you are looking for”.

That, and everyone forgetting, as happens every hype cycle, that context matters. But that’s another article, because context doesn’t matter if you don’t know what you’re doing.


[Screenshots: the original post.]




Listen to Tom and Jon. Say what needs to be said. Especially if you can’t smile when saying it.

Procurement is not just about savings (and the cost avoidance that the C-Suite continually demands but refuses to recognize during the performance reviews). Nor is it just about supply assurance, which is most definitely critical in direct industries. It’s not even about risk management, even though that’s a big part, because the organization likely has a role dedicated to that.

It’s about value generation. While corporate and the other departments like to propagate the myth, Procurement is not a cost-center! With the exception of headcount and supporting software, it’s not spending its own money; it’s helping the other departments and budget holders spend their money more wisely, in a manner that generates additional value, whatever that value may be. Sometimes it’s lower cost, sometimes it’s higher quality, sometimes it’s lower risk, sometimes it’s higher service.

This will require a lot more than just standing up and refusing to endorse a large contract that did not go through a proper selection process and/or was not properly vetted. (Emphasis on large. If the contract is small, and does not require procurement vetting [which will often be the case in Marketing, Legal, etc.], it’s probably not even worth the cost of review. But if a department wants to hand out a multi-million dollar contract with no bid and no vetting, BIG RED FLAG!)

A big thank you to Tom Mills for reminding us of this in his recent post on how Procurement’s job is not to smile and nod, which reminded me of a post by THE REVELATOR Jon Hansen from about a year ago on How It’s Procurement’s Job To Speak The Unthinkable (a post he credits to something Tom wrote around that time for inspiring it).

Because Procurement has to stand up to decisions that will have a significant negative impact on the organization, such as

  • outsourcing critical functions (with no mechanism to capture knowledge and bring the function back when a temporary crisis has been averted),
  • changing providers due to temporary geopolitical conditions without proper long-term planning, and/or
  • attempting to replace employees with AI (vs. augment them for maximum performance).

While we can say that all of this will make you EXCEEDINGLY UNPOPULAR with the CXO who is pushing for this (even more so than just telling the CEO to essentially f*ck 0ff, which, I can tell you from personal experience, they really don’t like to hear), you have to do it because, as we all know, none of the I-can-manage-off-a-spreadsheet MBAs or @ss-k1ss3rs will! But all of this is absolutely vital to organizational success and the value Procurement can bring, because no one understands better

  • the cost of lost knowledge,
  • the full impact of a rush decision to change suppliers and all of the organizational and supply-chain wide fallout that will occur for months (and maybe years) to come, and
  • the true value of a knowledgeable employee (vs. the true cost of a bad decision left to AI)

than Procurement. Procurement is about identifying, realizing, and protecting value. And if Procurement pros don’t speak up when they need to, then value will be lost. After all, it’s not like you can’t be very polite when doing it (unless the project leader keeps cutting you off, in which case you have another problem to speak up about).

Even in the age of “AI”, SaaS Startup Valuation Isn’t That Hard

The Prophet recently penned a long LinkedIn post on The New Diligence Questions for SaaS in an “AI”-dominated world that, on a first read, makes it sound like diligence is going to get insanely difficult unless you’re backing AI (because, apparently, AI is going to replace everything and everyone).

The reality is that AI doesn’t really complicate the equation, especially if you already realized that a lot of software is becoming a commodity, and that making the right investment is all about focusing on what’s not commodity and then, within that subset of potential investments, determining which is the most user friendly. And you can narrow down to a good potential investment pretty quickly with just 3 short questions:

  1. What data is being captured, created, or curated?
     Tech replicates quickly, and is easier to build now than ever, but good data is scarcer and scarcer.
  2. What unique algorithmic capabilities does the platform possess that can’t be accomplished by today’s, and likely tomorrow’s, AI?
     Orchestration, workflow, NLP, etc.? Sorry, but that’s all pretty commonplace. We’ve had web-based middleware since a year after the world wide web was invented (and orchestration is just middleware 3.0), workflow for decades longer, and NLP for decades (although LLMs now make it easier to use and more accessible). You need to look for unique algorithmic capability that can’t be plugged and played from open source components or learned by dumb AI (like advanced optimization, new types of mathematically sound predictive analytics algorithms, etc.).
  3. Does the platform enable users, through Augmented Intelligence capabilities, to be 10X as productive as they would be without it?
     I.e., where data collection, processing, workflow, etc. can be fully automated, is it? And does it employ NLP interfaces to the extent possible for non-technical users?
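The three questions above amount to a gating checklist: if any gate fails, the tech story fails. A trivial sketch (the gate names and the all-or-nothing rule are my own simplification of the questions, not a formula from any framework):

```python
# The three diligence gates, in the order discussed above.
GATES = ("unique_data", "unique_algorithms", "10x_productivity")

def failed_gates(answers: dict[str, bool]) -> list[str]:
    """Return the gates that FAIL; an empty list means the tech passes
    and you can move on to business and market analysis."""
    return [gate for gate in GATES if not answers.get(gate, False)]

# A platform with real data and real algorithms but no productivity story:
print(failed_gates({"unique_data": True,
                    "unique_algorithms": True,
                    "10x_productivity": False}))  # → ['10x_productivity']
```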

This is what defines winning software, not plugging in overhyped 3rd party LLMs and AI tech that is still, more-or-less, experimental, hallucinatory, and fundamentally flawed.

Once you have successfully answered these questions, chances are that there is nothing else super significant to answer about the tech (beyond the standard due diligence process, including security and privacy reviews where needed) and you can focus on the business and market questions: does the market exist, and does the business have the right people, processes, and support to capture it?

So, in other words, if the platform captures unique data, possesses unique algorithmic capability, and makes its users dramatically more productive, the SaaS play has value, and you can move on to the business and market analysis.

The only real question will be how to define the market, and the market value, in an age of (temporarily) overhyped AI / Agentic plays (which, as we have pointed out many times, are not new, just better) so that you can determine a real valuation when you are being flooded with nonsense.

And of course, as pointed out by The Prophet,

  • going beyond pure S2P,
  • easy agentic co-worker interfaces, and
  • playing well with “AI”

will increase value, but that’s not the core of what you’re looking for.

Governance IS the Agent No One is Talking About

Joel is right — The Procurement AI Agent That No One is Talking About is Governance. It’s the agent that is needed the most, and, moreover, it’s one of the few agents, especially among the AI Agents (a group that includes the felon roster), that can actually be implemented predictably and reliably, if you define its role properly.

In Joel’s post, he asks:


What happens AFTER you go live?

  • Users start tweaking workflows without documentation
  • Agents get duplicated as teams grow
  • Logic gets lost when staff turnover happens
  • Nobody remembers why decisions were made

And then tells you the answer:

It’s the same mess we created with ERP and S2P systems!

And then he goes on to say

Here’s what we need:

  • Automated workflow documentation
  • Change tracking with rationale capture
  • Duplicate detection and consolidation
  • Impact analysis before modifications
  • Knowledge retention across team changes

And he’s very close here, except what we really, really need (and really, really want) is

  • Impact assessment before initial implementation (as well as modifications),
  • Workflow documentation up-front and not just on changes, and
  • Documentation of every decision made, whether or not it changes the workflow, as well as who made it and who approved it.

In other words, knowledge capture and retention is ongoing, change tracking is also decision tracking, and analysis is continual.
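At its core, “documentation of every decision, who made it, and who approved it” is just an append-only log written at decision time. A minimal illustrative sketch (the field names are my own assumptions, not from any governance product):

```python
from dataclasses import dataclass, field
from datetime import datetime, timezone

@dataclass(frozen=True)
class DecisionRecord:
    """One immutable entry: what was decided, why, by whom, approved by whom."""
    decision: str
    rationale: str
    made_by: str
    approved_by: str
    recorded_at: str = field(
        default_factory=lambda: datetime.now(timezone.utc).isoformat())

class DecisionLog:
    """Append-only: entries can be added and searched, never edited."""
    def __init__(self) -> None:
        self._records: list[DecisionRecord] = []

    def record(self, **fields) -> DecisionRecord:
        entry = DecisionRecord(**fields)
        self._records.append(entry)
        return entry

    def search(self, keyword: str) -> list[DecisionRecord]:
        return [r for r in self._records
                if keyword in r.decision or keyword in r.rationale]

log = DecisionLog()
log.record(decision="Route invoices over $50k to manual review",
           rationale="Duplicate-payment incidents in Q2",
           made_by="j.doe", approved_by="cpo.office")
```

The point is not the twenty lines of Python; it is that the rationale is captured at decision time, whether or not the workflow changes, so the knowledge survives staff turnover.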

However, when it comes to duplicate detection and consolidation, good luck with that!

While it would be nice to automatically detect (and quash) duplicate agents — if they are acting on API pulls through third party systems, how do you know they exist? When users in multiple departments go rogue, and do their own thing (especially if they are unaware there’s already an agent-based app for that), how do you know? You don’t!

So, instead, what you should really be focused on, especially from a GRC viewpoint, is

  • Access tracking and access control: ensure only authorized, validated requests get through to systems and agents. While you can’t track every agent on your system, approved or felonious, you can ensure access control to data if you replace the (open) APIs that have no access control or access tracking with an agent that intercepts every request and does both.
  • Risk assessment: continuously monitor data sources, internal and external, for KRIs and alert the right person when a potential risk situation is detected.
  • Compliance enforcement: ensure that any company, industry, or government protocols are followed in access control, data collection, decision making, and reporting.
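The access tracking and access control idea above (intercept every request, check authorization, and log the attempt) can be sketched as a thin gateway in front of the data layer. The policy contents and names here are illustrative assumptions:

```python
from datetime import datetime, timezone

# Illustrative policy: which roles may touch which resources.
POLICY = {
    "supplier_master": {"procurement_agent", "risk_agent"},
    "payment_terms": {"procurement_agent"},
}

ACCESS_LOG = []  # every request, allowed or denied, gets recorded

def gateway(requester: str, role: str, resource: str) -> str:
    """Intercept a request: log it, then allow or deny per the policy."""
    allowed = role in POLICY.get(resource, set())
    ACCESS_LOG.append({
        "when": datetime.now(timezone.utc).isoformat(),
        "who": requester, "role": role,
        "resource": resource, "allowed": allowed,
    })
    if not allowed:
        raise PermissionError(f"{requester} ({role}) denied: {resource}")
    return f"{resource}: OK"  # stand-in for the real data fetch

gateway("sourcing-bot-7", "procurement_agent", "payment_terms")
try:
    gateway("rogue-agent", "unknown_role", "supplier_master")
except PermissionError:
    pass  # denied, but the attempt is still on the access log
```

You still can’t see agents that never call you, which is the duplicate-detection problem, but every request that does touch your data is now both controlled and tracked.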

Considering that all of this can be accomplished via well-defined workflows, you could build very reliable agents and solve the un-cool problem that everyone needs a solution to. And I think that would be cool. Don’t you want to be someone who’s cool?