Category Archives: rants

AI Employees Aren’t Real! Don’t Believe The Lunacy!

This should be so obvious that it shouldn’t need to be said, but with multiple companies still promising to (soon) deliver “AI Employees”, it apparently needs to be said.

First of all, why it should be obvious:

  1. There is no Artificial Intelligence. The tools are as dumb as a doorknob. The best you can get is Augmented Intelligence, which, by the way, is what you really need because it can provide almost instantaneous insights that would take a traditional analyst with traditional tools days to weeks (or months) to discover.
  2. An employee is a person. A PERSON! Not a piece of software.
    (As we don’t have AI, we don’t have autonomous robots, so we can’t even have the theoretical argument about whether or not a robot should be recognized as a person for legal purposes.)

Secondly, we’ve already seen how autonomous software agents don’t work (because they are not intelligent, and they are not people). Klarna, one of the first companies to fire the majority of its customer support team on the false claim that AI could do the job, quickly found out it really can’t and now has to hire back hundreds of support agents, because what the AI was really doing was the work of 700 really bad agents! And their customers didn’t want to talk to these bots (essentially because of how dumb and useless they were).

Thirdly, there has been no, nor will there be with existing algorithms, stacks, and technologies, any magical emergence that will suddenly allow these “AI agents” to become intelligent and perform their tasks autonomously. Because

  1. if Neural Networks were the right models, today’s models (constrained to commercial compute capacity) would put them on par with a pond snail, maybe a sea slug;
  2. compute power doesn’t double year over year anymore; Moore’s law is quickly becoming a historical footnote due to quantum limits; and
  3. there’s no more data to train them on — the big AI tech plays have already illegally stolen all of the copyrighted data on the internet, and that’s still not enough (and AI-generated data just worsens performance because it’s not real, or good, data).

Moreover, we don’t need AI Employees. What we need are more productive employees! Employees that don’t waste up to 80% (or more) of their time doing more-or-less nothing but data wrangling, trying to turn data into knowledge and knowledge into the insights needed to make a decision. These tasks are purely tactical calculations and conversions, which are precisely what computers were built for. Computers can do trillions of calculations a second error-free, while we can only do a few, and not necessarily error-free.

Which means what we really need are Augmented Intelligent Agent Assistants that do the computational tasks we need done and either

  1. automate processing that we would do almost thoughtlessly if we determined that the appropriate conditions were met or
  2. present us with the data, knowledge, and insights we need to make a decision and take action, including suggestions for that action if there are standard response patterns.
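The two behaviors above boil down to a simple dispatch rule. Here is a minimal sketch (all names, conditions, and tasks below are invented for illustration, not any particular product’s API): execute automatically only when explicit, pre-agreed conditions hold; otherwise surface the context plus a suggested action and leave the decision to a human.

```python
from dataclasses import dataclass

@dataclass
class Task:
    description: str
    conditions_met: bool   # e.g. amount under threshold, supplier pre-approved
    suggestion: str        # standard response pattern, if one exists

def handle(task: Task) -> tuple[str, str]:
    if task.conditions_met:
        # Case 1: processing we would do almost thoughtlessly anyway
        return ("auto-executed", task.suggestion)
    # Case 2: present the data and the suggestion; a person decides
    return ("needs-human-decision", task.suggestion)

print(handle(Task("reorder stock", True, "replenish from contracted supplier")))
print(handle(Task("new supplier deal", False, "shortlist 3 vetted vendors")))
```

The point of the sketch: the human stays the decision maker by design, and the software only acts alone where the human has already decided it may.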

Because, when the time-wasting tactical data processing that eats up 80% of our day is taken off of our plates, we will be at least 5 times as effective with these Augmented Intelligent Agent Assistants, and that is what will propel organizations forward. Not dumb tech, and definitely not false promises of fake AI Employees that do not, and fundamentally cannot, exist.
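The arithmetic behind that “at least 5 times” claim is simple enough to check (the 80% share is the figure from the text; the rest is just division):

```python
# If 80% of an analyst's day goes to tactical data wrangling, only 20%
# is left for actual decision work. Automate the wrangling away and the
# same day yields five times the decision work.
wrangling_share = 0.80                 # share of time lost to data wrangling
productive_share = 1.0 - wrangling_share

effectiveness_multiplier = 1.0 / productive_share
print(round(effectiveness_multiplier, 2))   # 5.0
```

And if the wrangling share is higher than 80%, as the text allows, the multiplier only goes up.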

There are NO Simple Answers Because CONTEXT MATTERS!

If you’ve been following along, over the past few months we’ve had to complain about a number of recurring issues.

While these may seem like completely different situations that have to be (continually) (re)addressed on their own merits, they really aren’t. They are all interconnected (and taken together they help to define the 88%+ technical project failure rate in our ProcureTech space), and they all have one and the same issue at their core: they all try to oversimplify, which is something you cannot do in any field of technology because CONTEXT MATTERS!

Analyst Firm and Influencer Maps and flashy graphical comparisons on a few randomly selected “data points” are useless because context matters. You can’t create a shortlist of potential solutions without understanding, at a minimum:

  • who the company is and what the department does
  • the platform and skill topography
  • the problems that the existing topography is not solving

Because sourcing is not sourcing is not sourcing, procurement is not procurement is not procurement, and analytics is not analytics is not analytics. Indirect Finished Goods vs. Direct Materials vs. Services are sourced differently; catalogs vs. one-time buys vs. on-contract inventory replenishments are handled completely differently; and there are reports vs. drill-down cubes vs. data federation, each bringing different insights. API, interface, and integration requirements differ across platforms; core vs. nice-to-have features shift based upon what’s in the ERP, AP, and SCP systems. And the maturity level has a great impact on what will, vs. will not, be used.

It’s NOT SIMPLE! And anytime someone says “keep it simple, just give them a list”, it means they don’t understand the reality of the situation: while it is not complex, and not hard (and yes, Procurement can be really easy), it’s NOT simple. Context is needed to make the right recommendations and the right decisions.

There is NO Autonomous AI Agent, and anyone peddling one is selling the new silicon snake oil. (First of all, remember that there’s no such thing as Artificial Intelligence, and it’s still the case that since a computer can’t take responsibility for a critical decision, it should NOT make one.) For an agent to be autonomous, it would need to have, or be able to retrieve, all the data it needs; connect with all relevant internal and external systems; get information not on the web through traditional means (i.e. ask people); separate truth from lies; have the ability to adapt to any situation; and have the intelligence to know when a decision can be made and when it can’t. Not only does it not have the intelligence, but no software agent in existence meets the rest of these requirements either. (The best that can be created is a support agent that does all of the data processing, standard analysis, workflow automation, and decision suggestion using Augmented Intelligence, allowing it to act as a useful personal assistant that multiplies your productivity. But ONLY if the Agent has the right context — and guess what, YOU have to work with a partner to custom build that agent with YOUR context. It won’t be delivered out of the box and magically trained just by feeding it your data. The myth of emergence has already been debunked. Please stop falling for it.)

There is no Best-In-Class process or methodology guaranteed to work for you. Unless you are lifting it from a company in the same business buying and selling the same products for the same consumer base that is structured the same way and more-or-less does the same thing as you, that best-in-class process or methodology may not even be close to what you need (and no amount of adaptation will get you there). Best-in-Class always works within a context (which includes your maturity level as an organization), and until that is understood, no consultant or analyst can make the right recommendations for where you are today.

So next time someone says it’s simple, and that their map, chart, or infographic will solve all your problems — delete it, because unless they also take the time to qualify the context in which that map, chart, or infographic applies, it is worse than useless for you (and doubly so if it presents a dangerous and dysfunctional dashboard) and may even cause organizational damage if blindly followed.

Finally, just remember, just because it ain’t simple, that doesn’t mean it ain’t easy. It just requires a bit of brainpower and effort to get it right, and, moreover, an amount thereof that is well within our capability!

Why Are We Inundated By AI Slop?

And I don’t just mean the slop produced directly by AI, which we should all know by now is 100% slop, but all of the human and “expert” guidance produced, or co-produced, by real people that isn’t much better!

In one way the answer is simple: there is a considerable lack of knowledge and understanding about AI, even among the firms and practitioners who are touted as, or claim to be, “the experts”. There is a failure both to realize this and to admit it.

But let’s back up. Recently, in response to a Gartner post, THE REVELATOR asked for my “thoughts” on an infographic that referenced a two-year-old paper (screenshots below, because Gartner has a habit of deleting posts where THE REVELATOR asks hard questions or points out major issues). A two-year-old paper that didn’t even mention a number of critical concepts that should have been discussed in reference to the AI capability and tooling breakdown the infographic presented, and all but one of those concepts should have been mentioned if it was a serious evaluation of AI technology at the time.

My thoughts on the matter would be obvious to anyone who’s read more than a handful of my articles, but I decided to step back and assume the real question was not “is this bad” but “why does this keep happening”: why do Gartner, and almost every other analyst and consulting firm (because it’s not just Gartner, so they shouldn’t be singled out), keep producing content that just doesn’t cut it — content that doesn’t address the core issues, outline the challenges, discuss the plethora of failures (with an 88% tech project failure rate in the last published study, and indications it could now be as high as 92% in AI), or provide any deep understanding of AI technology and how to differentiate between offerings?

The reason is two-fold. At best, the big firms have only a handful of employees with a real understanding of the technology, but they have

  1. 100 times as many analysts and consultants taking advice on the matter from vendors (who, as we have already told you, have lured big analyst firms astray) and from clients who know even less, and this is the workforce powering
  2. the relentless marketing machine (powered by AI content writers) that believes it has to pump out multiple articles a day to stay relevant (even though not one of those articles contains an original thought, insight, or suggestion on how to better make use of this technology, because all AI bots can do is regurgitate someone else’s ideas and content).

The reality is that very few people understand advanced technology, especially new (or recently sexy) advanced technology. To truly understand this technology, you need the equivalent of a PhD — either years studying it in an academic environment or the equivalent number of years studying it in R&D labs or proof-of-concept implementation pilots.

A few years of “prompt engineering” an LLM or configuring pre-built models in scikit-learn that “work the majority of the time for the use cases they tested” doesn’t cut it. Not even close!

You need to understand the core algorithms and the fundamental mathematics that underlies them, and that’s not easy. Even classical curve-fitting, nearest neighbor, clustering, regression, and knowledge graphs can be much more intricate than you think. The complexity intensifies when you migrate to multi-layer (feedback) (deep) neural networks, semantic technology built on ML(F)(D)NNs, and now LLMs. LLMs don’t just use very advanced statistical processing to map an input of a fixed type to an output in a fixed set (which can be computed with mathematical confidence); they map an arbitrary input to a generated output using layered feedback statistical calculations on parts of the input that are statistically stitched together (like Frankenstein’s monster, but worse) to make parts of the output. This means that hallucinations are a core feature of these platforms (as well as behavior that is much, much worse). Furthermore, if you’re trying to put it all together, then, unless you understand the limitations in the interplay between different algorithms and models … good luck. (And, unless you understand the underlying mathematical models and their strengths and limitations, good luck with that too!)
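To make the “statistically stitched together” point concrete, here is a deliberately crude toy — nothing like a real transformer — a bigram sampler with invented probabilities. Each token is chosen purely for statistical plausibility given the previous token; nothing anywhere checks whether the stitched-together output is true, which is the structural root of hallucination:

```python
import random

# Toy bigram "language model" with invented probabilities (illustration only).
bigram_probs = {
    "the":  [("cat", 0.5), ("moon", 0.5)],
    "cat":  [("sat", 0.7), ("flew", 0.3)],   # "the cat flew": fluent nonsense
    "moon": [("sat", 0.2), ("flew", 0.8)],
    "sat":  [("<end>", 1.0)],
    "flew": [("<end>", 1.0)],
}

def generate(start: str, rng: random.Random) -> str:
    """Stitch an output together one statistically sampled token at a time."""
    tokens = [start]
    while tokens[-1] in bigram_probs:
        choices, weights = zip(*bigram_probs[tokens[-1]])
        tokens.append(rng.choices(choices, weights=weights)[0])
    return " ".join(t for t in tokens if t != "<end>")

# Every run is statistically plausible; no run is checked for truth.
print(generate("the", random.Random(0)))
```

Real LLMs replace the lookup table with billions of learned parameters and condition on far more context, but the generation loop — sample the statistically likely next piece, append, repeat — is the same in kind, which is why fluency and truthfulness come apart.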

And this isn’t easy, especially when you need to start asking questions about computability (and decidability).

To put this in perspective: I have an earned PhD in Computer Science (specializing in data structures and computational geometry, though my studies also included late-90s “AI”: ML, Expert Systems, and Neural Networks). When you earn one of these degrees and don’t wimp out (by trying to stick to coding or “software engineering”), and instead take all of the logic and theory courses (cross-listed with Mathematics), you study, at least when I studied, the classics in fundamental algorithms, automata, P vs NP, computability, and decidability. If you do well in these advanced courses, you leave with the nagging feeling that you still don’t really understand what you studied (and were tested on) — and you don’t! For example, it’s not just P vs NP — it’s P vs NP-Hard vs NP-Complete. And P isn’t always P, because if it’s n^8, well, that might as well be NP-Hard for practical purposes! And categorization in NP is way harder in practice than it is in theory. And advanced algorithms often perform no better than stupid simple ones, and it takes years to “see” why. And so on.
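The n^8 remark is easy to sanity-check with back-of-the-envelope arithmetic (the machine speed below is a generous, hypothetical round number):

```python
# An O(n^8) algorithm is polynomial on paper. At a generous 10^12 simple
# operations per second, watch what happens as inputs grow:
ops_per_second = 10**12

for n in (10, 100, 1000):
    seconds = n**8 // ops_per_second
    print(n, seconds)
# n=10   -> under a second
# n=100  -> 10^16 / 10^12 = 10,000 seconds (almost 3 hours)
# n=1000 -> 10^24 / 10^12 = 10^12 seconds (over 30,000 years)
```

Technically in P, practically hopeless — which is exactly why "polynomial time" on a marketing slide tells you nothing without the exponent.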

It takes years to get a grip on and really understand the fundamentals, which is what you need to understand what you can and can’t do with advanced algorithms in the fields of optimization, predictive analytics, and AI — each of which takes additional years of study, research, development, and implementation experience to understand what it can and can’t do and to evaluate new developments from technical papers, not from marketing BS and fairy tales woven by master storytellers who would leave P.T. Barnum in awe.

Script kiddies, “prompt patsies” (they are not prompt engineers; that is utter BS), consultants, and analysts with no formal background in CS or appropriate areas of STEM, and with limited experience beyond installing someone else’s software and doing a few parametric modifications, don’t understand this. Not even close! (And they don’t even have the background to understand where their understanding is [more] limited!) But yet, this is what most of the firms are asking of their consultants and analysts every day, which is why we get so much AI slop that completely misses the point.

You have too many people without the deep background and experience being told that everything they do has to be “AI” (even if they have no clue what it means) because of all of the funding being poured into it, too many more “influencers” (or should I say silicon snake oil peddlers) trying to take advantage of the confusion, not enough deep understanding, and almost no one willing to cut through the noise and say “wait a minute; the AI they are selling is not the AI you are looking for”.

That, and everyone forgetting, as happens every hype cycle, that context matters. But that’s another article, because context doesn’t matter if you don’t know what you’re doing.






Listen to Tom and Jon. Say what needs to be said. Especially if you can’t smile when saying it.

Procurement is not just about savings (and the cost avoidance that the C-Suite continually demands but refuses to recognize during the performance reviews). Nor is it just about supply assurance, which is most definitely critical in direct industries. It’s not even about risk management, even though that’s a big part, because the organization likely has a role dedicated to that.

It’s about value generation. While corporate and the other departments like to propagate the myth, Procurement is not a cost-center! With the exception of headcount and supporting software, it’s not spending its own money; it’s helping the other departments and budget holders spend their money more wisely, in a manner that generates additional value, whatever that value may be. Sometimes it’s lower cost, sometimes it’s higher quality, sometimes it’s lower risk, sometimes it’s higher service.

This will require a lot more than just standing up and refusing to endorse a large contract that did not go through a proper selection process and/or was not properly vetted. (Emphasis on large. If the contract is small, and does not require procurement vetting [which will often be the case in Marketing, Legal, etc.], it’s probably not even worth the cost of review. But if a department wants to hand out a multi-million dollar contract with no bid and no vetting, BIG RED FLAG!)

A big thank you to Tom Mills for reminding us of this in his recent post on how Procurement’s job is not to smile and nod, which reminded me of a post by THE REVELATOR Jon Hansen from about a year ago on How It’s Procurement’s Job To Speak The Unthinkable (a post Jon credits to something Tom wrote around that time).

Because Procurement has to stand up to decisions that will have a significant negative impact on the organization, such as

  • outsourcing critical functions (with no mechanism to capture knowledge and bring the function back when a temporary crisis has been averted),
  • changing providers due to temporary geopolitical conditions without proper long-term planning, and/or
  • attempting to replace employees with AI (vs. augment them for maximum performance).

While we can say that all of this will make you EXCEEDINGLY UNPOPULAR with the CXO who is pushing for this (even more so than just telling the CEO to essentially f*ck 0ff, which, I can tell you from personal experience, they really don’t like to hear), you have to do it because, as we all know, none of the I-can-manage-off-a-spreadsheet MBAs or @ss-k1ss3rs will! But all of this is absolutely vital to organizational success and the value Procurement can bring, because no one understands better

  • the cost of lost knowledge,
  • the full impact of a rush decision to change suppliers and all of the organizational and supply-chain wide fallout that will occur for months (and maybe years) to come, and
  • the true value of a knowledgeable employee (vs. the true cost of a bad decision left to AI)

than Procurement. Procurement is about identifying, realizing, and protecting value. And if Procurement pros don’t speak up when they need to, then value will be lost. After all, it’s not like you can’t be very polite when doing it (unless the project leader keeps cutting you off, in which case you have another problem to speak up about).

There is No Super-Selection Map for Source-to-Pay

In a post comparing the Hansen Fit Score to other analyst ranking maps and methodologies, THE REVELATOR asks “which would you choose, and why”, to which the doctor responds that THE REVELATOR has to be a lot more specific because, depending on your context, there could be three choices:

1) The Hackett Group Inc. KPIs for zeroing in on what type of technology you should choose for the biggest boost to your business as there’s no arguing with their book of numbers. But this doesn’t give you a shortlist.

2) Spend Matters, A Hackett Group Company Solution Map for deep tech assessments, allowing you to qualify tech for consideration before doing a deep-dive assessment of business needs (and we all know that most people can’t do this effectively). Once you know what module, or modules, you need, SolutionMap will give you a qualified list of the best-rated vendors with those modules.

3) The Jon W. Hansen Fit Score for sieving the shortlist of relevant vendors who make the tech cut down to the 3 most likely to be the best organizational fit to invite to the RFP, where they can prove their worth AND their interest in actually making your organization successful.

However, the optimal route, if you have the time and money, is 1, 2, 3 … (and let’s face it, since this could save you millions, you likely do). Why? When you use

1) you focus in on the specific problem set/module (set) to attack first for the biggest impact

2) you filter down to those providers who have the tech to do it

3) you filter down to those that would be right for your business on the other dimensions that 1 and 2 do not address.
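The three steps above amount to a funnel, which can be sketched in a few lines. To be clear, the vendor names, ratings, and thresholds below are entirely made up for illustration; real SolutionMap and Fit Score data would replace them:

```python
# Step 1 output: the module with the biggest business impact (per the KPIs).
target_module = "sourcing"

# Hypothetical vendor universe with invented ratings.
vendors = [
    {"name": "A", "modules": {"sourcing"}, "tech_rating": 4.5, "fit": 0.90},
    {"name": "B", "modules": {"sourcing"}, "tech_rating": 4.0, "fit": 0.60},
    {"name": "C", "modules": {"ap"},       "tech_rating": 4.8, "fit": 0.80},
    {"name": "D", "modules": {"sourcing"}, "tech_rating": 3.0, "fit": 0.95},
]

# Step 2: keep only vendors whose tech qualifies for the target module.
qualified = [v for v in vendors
             if target_module in v["modules"] and v["tech_rating"] >= 4.0]

# Step 3: rank the qualified vendors by organizational fit; invite the top 3.
shortlist = sorted(qualified, key=lambda v: v["fit"], reverse=True)[:3]
print([v["name"] for v in shortlist])   # ['A', 'B']
```

Note how the order matters: vendor D has the best fit but fails the tech cut, and vendor C has the best tech but the wrong module — which is exactly why fit scoring alone, or tech scoring alone, gives the wrong shortlist.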

However, none of these approaches can

0) perform a gap analysis, determine what problems you need to solve, and help you center your analysis on the right metrics or numbers or

4) take the short-list you are left with after using Spend Matters Tech Match (built on the Spend Matters SolutionMap) or the Hansen Fit Score and construct a proper RFX to help you determine which vendor will provide you with more than a license and will actually work with you to implement, and execute, a proper solution.

And that’s why there’s no super selection map for source-to-pay!

(And please remember, never use a big analyst firm quadrant map because vendors have lured big analyst firms astray.)