Monthly Archives: June 2025

Are ProcureTech Analysts Doing Their Jobs Anymore?

Very good question. Let’s get down to definitions.

Analyst: a person who conducts analysis

Analysis: a detailed examination of the elements or structure

There are two key words here: “detailed examination”. At the major analyst firms (e.g., Forrester, Gartner, Hackett, IDC, etc.), is this happening? And to what extent?

Those following LinkedIn will have seen a lot of posts putting down the major analyst firms (and one firm in particular) over the last few months.

And you have to wonder: are they doing a “detailed examination”?

Because:

  • as THE REVELATOR and the doctor have repeatedly pointed out, doubling down does not mean detailed inquiry, and technology-first is (as it always has been) a recipe for disaster;
  • if firms are claiming a map is no longer relevant, then either the map is not analyzing the technology (enough) or is not based on a proper analysis of actual marketplace needs for that technology; and
  • the founder of one of the most significant supply chain analyst groups in existence is saying the most recent event was a tornado echo chamber of buzzword bingo and a vicious cycle of recycled hype (analysts feeding vendors, vendors feeding analysts), with no one challenging the status quo, no myth-busting, no dragon-slaying, no industry policing, just a milquetoast cycle with no actual analysis in sight.

If all that is true, then it seems actual analysis has flown the coop in at least one of these big shops (if not two or three). And if that’s the case, then what’s the point of these shops employing ProcureTech analysts?

Because an analyst should be

  • doing detailed technology examinations
  • giving their totally unbiased opinions, for better or worse,
  • telling buying organizations what’s important in analyzing vendor solutions and what’s not, and
  • telling vendors what they should be focussed on to serve the buying organizations they want to sell to

and should not be

  • defining arbitrary market parameters as to who can be considered for a technology evaluation and who cannot (when it should come down to whether or not the vendor has a module that meets the core technology requirements from a stack and functional viewpoint),
  • analyzing AND scoring very subjective factors (“innovation”, “vision”, “sales strategy”, etc. etc. etc.),
  • repeating vendor soundbites and BS marketing ad nauseam, and
  • accepting money to repeat vendor soundbites and BS marketing ad nauseam!!!

So while real ProcureTech analysts are sorely needed, the doctor also has to wonder if many of the existing ProcureTech analysts are doing their jobs anymore!

 

AI-Enabled, AI-Enhanced, AI-Backed, AI-Powered, AI-Driven, or AI-Native?

It DOES NOT matter. It’s ALL AI-Bullcr@p! Every last instance!

Vendors still won’t admit that AI is the new gold standard for tech failure, including ProcureTech, as evidenced by the fact that tech failure rates have shot up to an all-time high of 88% (see Two and a Half Decades of Project Failure). Nor will they admit that, even if they have tech that works, it’s not the be-all and end-all (because, as far as they are concerned, it’s going to slice, dice, and make virtual julienne fries of your data just like a good AI should) and may not be the right solution for you.

Meanwhile, those vendors with any modern tech at all know that a lot of vendors out there claiming “AI” don’t have anything close to deserving the AI label, figure they can blame all the failures on those vendors (because those vendors are obviously the new silicon snake oil salesmen, right?), and are now trying to win the AI marketing war by claiming that whatever phrasing their competition is using, or not using, proves their opponent doesn’t have good tech, and definitely doesn’t have AI.

But it’s all bullcr@p, because all of the phrasing is bullcr@p, most of the vendors don’t have anything close to what should be considered AI, and, most of the time, it doesn’t matter whether or not the vendor has AI, only if the vendor’s tech solves your problems.

To make this clear, let’s look at each term, what some vendors say the term means, and why their definition is meaningless.

  • AI-Enabled. Vendor definition: core features incorporate AI. What it actually means: the vendor has injected a few analytical algorithms, with no guarantee they are actually advanced or anything close to what you should expect from AI.
  • AI-Enhanced. Vendor definition: AI is added to the interface to give you the AI experience. What it actually means: the vendor has wrapped a Gen-AI LLM (like Chat-GPT) to give you a meaningless conversational interface.
  • AI-Backed. Vendor definition: AI is at the core of one or more functions. What it actually means: one or more parts of the app are built around an algorithm the vendor is calling AI.
  • AI-Powered. Vendor definition: external AI has been integrated to power the tech. What it actually means: the vendor has wrapped Chat-GPT and integrated it directly into their app (letting unpredictable and undependable code run parts of the app).
  • AI-Driven. Vendor definition: AI has been built into the workflow and runs (part of) the app. What it actually means: the vendor has decided to let AI control the application (for better or worse) and determine what algorithms to run, when, and why.
  • AI-Native. Vendor definition: the entire infrastructure was built to support AI. What it actually means: the vendor has built the entire application to support integration with AI systems (and may not have built any actual functionality).

Moreover, if you read any statements about how an infrastructure needs to be purpose-built from the ground up to “serve data to AI models”, that’s an even bigger pile of bullcr@p, because no application works unless it can serve data to the models it is based on, whether classical, modern, or “AI”. All applications take in data, process it, and spit it out, so claiming that you need to build a special architecture to support AI is complete and utter bullcr@p.

Always remember the reality that:

  • true AI doesn’t exist (as no software is intelligent)
  • advanced algorithms do exist, but just slapping an AI label on an algorithm doesn’t make it any more advanced than it was yesterday
  • not just any advanced algorithm will do, it has to be appropriate to your problem
  • you don’t always need an advanced algorithm, you need one that gets the results you need

And then you can see through the vendor bullcr@p and focus on finding a vendor with a solid solution that actually solves your problem, regardless of whether the vendor claims AI or not.

With Suites, What you are Sold Vs. What You Get Vs. What You Need are Three VERY Different Things!

A while back, Dan Gianfreda published a piece on LinkedIn on how what you need is not what you are sold when you buy a shiny, “all-in-one” procurement platform that is 10X bigger than what you will actually use (on a multi-year contract with a massive implementation that takes months longer than promised and ensures you don’t have the majority of the functionality you need until the contract is almost up), and he was right. But his piece missed the full picture. The reality is that not only are you sold 10 times more than you will use, but what you will use doesn’t cover what you need, and, with a poor selection, might only be one tenth of what you actually need!

In other words, you need to see the full picture:

As outlined in the response post, just because a suite has a module, there’s no guarantee that module is anywhere close to what the organization really needs, especially when the capabilities can vary greatly (and the definitions even more so). Sourcing can be a simple RFX or a multi-staged integrated RFX/Auction platform with embedded strategic sourcing decision optimization. We still see canned reporting modules sold as “modern spend analysis” when they are anything but. And most AI claims are pure BS (or an indication that you should probably run for the hills if that’s the only selling point).

Even if the suite theoretically has the core/must-have functionality the organization needs, that’s only meaningful if that functionality is implemented in a way that supports the organizational processes and policies. If approval chains are required, tamper-proof audit logs need to be in place, or validated process steps are needed for public sector compliance, and the suite has none of those, it doesn’t matter how user-friendly, integrated, or “powerful” it is, because the organization will NOT be able to use it.

Moreover, the core functionality differs by organizational type, and since most platforms only do one of indirect, direct, services, capex projects, or tailspend well, selecting the wrong suite will render it totally useless for the majority of sourcing/procurement projects, adding insult to the injury of the huge cash outlay you agreed to (for an ROI that will never, ever, materialize).

Moreover, as previously indicated, you can NEVER assume that all (or sometimes, even any) of the solution providers will:

  • ask the right questions to understand the challenges
  • do the right due diligence to ensure their solution will solve those challenges
  • be honest about their capabilities (or, outside of the dev team, even understand those capabilities)

because, chances are, as I have indicated many times, everyone in the ecosystem exists to make money off of YOU, but not necessarily to help you. (Especially when too many vendors took too much money and are now under extreme pressures to fulfill ridiculous growth requirements in just a few years or risk massive layoffs, being folded into a bigger player, or getting dropped from the portfolio entirely before going bankrupt.) There’s no time to do it right, just to sell, sell, sell. (Which is why we keep advocating employing an independent consultant to help you with selection, project planning, and project assurance — since their remuneration depends on helping you, not someone else.)

So remember this before you start looking at big suites, as there is a good chance you’ll be paying 10 times what you should be (based on what you are using) while still only getting 25% of what you actually need in the best case. (And there’s nothing wrong with building your own Best-of-Breed ecosystem, even if you need to add an orchestration player to that mix, if that is what maximizes the return on every dollar spent.)

(Supplier) Diversity is Dead!

Editor’s Note: This is an extended version of a comment that was made in response to an inquiry by THE REVELATOR on LinkedIn about the progression of supplier diversity.

The simple fact of the matter is this: diversity threatens fascists who want authoritarian dictatorships. This means that as long as far right wing politicians keep getting elected in first world countries (which has been happening more often than not over the last decade), not only is DEI (Diversity, Equity, and Inclusion) not going anywhere, it is going to be rolled back, and rolled back faster than most policies that came before it, especially in countries which equated diversity progress with measurable outcomes.

The sad reality of the situation is that as soon as the board/chief/president of an organization or governmental department concluded that you were not diverse if you did not have x% of whatever minority the board/chief/president thought you should have by time y, and started equating diversity success with measurable outcomes, “equal opportunity” was replaced with the “minority designated role”. And instead of being a further step in the right direction, it was often a step backwards. Under equal opportunity, if two candidates are roughly equal for a role, the role goes to the minority candidate, and that’s a good thing. However, under a “minority designated role”, non-minorities are banned from consideration, and that is not a good thing if there are no qualified minority candidates available for the role. A senior role that should demand a full University degree (Bachelor’s or higher), a decade of experience, and one or more certifications may end up going to someone who has just a two-year associate’s degree, only 3 years of (barely relevant) work experience, and no certifications, because that is the most qualified person who applied.

What many firms fail to take into account when considering diversity mandates is the number of qualified candidates in the minority who are actually in the vicinity of the position, actually interested in it, and willing to take it on. For example, if you were to demand that half of your coding team be women, good luck with that when only 25% of STEM graduates in North America are female. (So if you did get 50%, a lot of other companies wouldn’t get any female hires.) Or if you demand that 1/5th of your workforce be hispanic, to mirror the US population distribution, but it’s an in-office job in a major city in an expensive neighbourhood where 95% of the local population is white, good luck with that too. You might meet your quota, but you know that the vast majority of those hires are not going to be qualified for the role.

And DEI didn’t stop there at some organizations and institutions in North America. As soon as people figured out that a DEI program or a particular minority designation could be used to exclude people of certain religion(s) they didn’t like, it went from a tool of inclusion to a tool of subversive discrimination. (So much for equity and inclusion!) Then came the backlash; the labelling of anything even remotely related to DEI, equal opportunity, or humanity as woke; and a full on assault by the fascists and authoritarians.

More specifically, in countries where they have enough power in the government, the authoritarians are dismantling any and all programs they have control over, barring any third party organizations with such policies from doing business with their government, and doing whatever they can to overturn all DEI and Equal Opportunity legislation they can, as far back as they can.

Moreover, given that these far right wing parties are being well funded by donations from the tech bros who spend more time meddling in global politics than running their own ventures, there are not many options for progression of ANY diversity on the global stage.

Agentic AI is a Fraud — And Arguments to the Contrary are BS!

We’ve been over this many times here and on LinkedIn. And we are going to go over many of the reasons yet again, because it seems we have to. But first, we’re going to back up and give you a fundamental answer as to why Agentic AI cannot exist with today’s technology.

The dictionary definition of an agent is generally either “a person who acts on behalf of another” or “a person or group that takes an active role or produces a specific effect”. It’s about “doing, performing, or managing” for the express purpose of a “desired outcome”.

Classically, an agent is a person; a person is an individual human; and a human, unlike a thing, is intelligent, which today’s technology is not. That should be Q.E.D.

However, some “modern” or “progressive” dictionaries (and there are at least 26 publishing houses publishing dictionaries, as per Publishers Global) will allow an “agent” to be a “thing”, as the term has been used in technical diagrams and patents. So if you define AI as a thing (which is being nice), then this refutation is not enough, and we have to consider the rest of the definition.

As for “doing, performing, or managing”, like any automated system, it does, unfortunately, do something. However, unlike past “automations”, which took fixed inputs and produced predictable outputs, today’s Gen-AI takes an input and produces an unpredictable output that depends on how it was trained, parameterized, implemented, interacted with (down to the specific wording and format of the prompt, as these systems are designed to continuously “learn” [which is a huge misnomer, as they do not learn, but simply evolve their state]), fed data, etc.

As a result, two systems fed the same inputs at any point in time are not guaranteed to give the same, or even similar, output, and are NOT guaranteed to produce the “desired outcome”, as even one single “bit” of difference between data sets, configuration, and historical usage can completely change an outcome, just like one parameter can completely change an optimal solution in a strategic sourcing decision optimization model. While this can be argued to be unlikely (and it is, for run-of-the-mill scenarios), it’s not infeasible; in fact, it has a statistically significant chance of happening the further an input is from the norm, and even 1/100 is significant if you plan on pushing [tens or hundreds of] thousands of processing tasks through the agent.

Just like one wrong cost (off by a factor of 100 due to a decimal error) can completely change the recommended award in strategic sourcing decision optimization, one wrong input, in the right situation, can completely change the expected outcome, because these are essentially super-sized multi-layer deep learning neural networks (with recurrent, embedding, attention, feed-forward, and maybe even feed-back layers) based on non-linear and/or statistical activation functions whose parameters change with every activation and, thus, every input.
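The decimal-error point can be made concrete with a trivial sketch. The suppliers, costs, and demand below are all hypothetical, and this is a toy cost-minimization, not any real optimization engine, but it shows how a single misplaced decimal in one unit cost is enough to flip the recommended award:

```python
# Hypothetical illustration: one decimal error flips a cost-minimizing award.

def recommend_award(unit_costs):
    """Return the supplier with the lowest total cost for the demand."""
    demand = 10_000  # units required (assumed)
    totals = {supplier: cost * demand for supplier, cost in unit_costs.items()}
    return min(totals, key=totals.get)

# Supplier B's true unit cost is 2.50...
correct = {"Supplier A": 2.45, "Supplier B": 2.50}
# ...but a decimal slip during data entry records it as 0.0250.
decimal_error = {"Supplier A": 2.45, "Supplier B": 0.0250}

print(recommend_award(correct))        # Supplier A wins on the true costs
print(recommend_award(decimal_error))  # the typo hands the award to Supplier B
```

The same sensitivity argument applies to Gen-AI models, except there the "decimal error" can be any stray difference in training data, configuration, or prompt wording, and there is no ledger to audit it against.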

As for the arguments that “they often work as well as people, so why not?”, that’s equivalent to arguing that “dynamite often works as well as excavators, so why not?” or “loop antennas work just fine for taking direction bearings on aircraft, so why not?”. In the dynamite case, the “why not?” is that even the best expert can’t always predict the full extent, direction, or result of the explosion (which, FYI, could also create a shockwave that damages or takes down nearby structures). In the loop antenna case, ask Amelia Earhart how reliable they are if the short range is too short. Oh wait, you can’t! (And that’s why modern planes have magnetic compasses and GPS in addition to radio-based navigation.)

As for the arguments that the issues aren’t unique to AI and show up in other systems or people, while that is fundamentally true, there is no other technology ever invented that collectively has all of the issues we have so far identified in Gen-AI … and I’m sure we aren’t done yet!

[ Moreover, when you think about it logically, it makes no sense that we are so intent on pursuing this technology after having spent almost a century designing and building computing systems to be more accurate and reliable than we could ever be (with stress-tested design techniques, hardware, and generations of error-correcting coding to get to the point where a computer can be expected to do trillions of calculations without fail, whereas an average human, who can do an average of one simple calculation a second, might not get through 10 without an error), while continually enhancing their capacity to the point that the largest supercomputer is now one quintillion times more powerful at math than we are! ]

As per a previous post, for starters, this technology:

1. Lies. It hallucinates every second, and often makes up fake summaries based on fake articles written by fake people with fake bios that it generates in response to your query, because it’s trained to satisfy (even though The X-Files made it clear in “Rm9sbG93ZXJz” how bad an idea this was back in 2018, four months before GPT-1 was released). And that’s just the tip of the iceberg.

2. Doesn’t Do Math. These models don’t do math, as they are built to combine the most statistically relevant inputs, not to do standard arithmetic. Plus, they aren’t even guaranteed to properly recognize an equation, especially when words are used. So they will miscalculate, and sometimes even misread numbers and shift decimal points. (Just ask DOGE, even though they won’t admit it.)

3. Opens Doors For Hackers. Over 70% of AI-generated code has been found to be riddled with KNOWN security holes. (So imagine how many more unknown holes it’s introducing!) It doesn’t generate code better than an average developer, and sometimes generates code that’s even worse than that of a drunken plagiarist intern, who can at least identify a good code example to base their code on.

4. Puts Your Entire Network at Risk. If you’re using it to automate code updates, cross-platform integration and orchestration, or system tasks, all it has to do is generate one wrong command and it could lock up and shut down your entire network, no Crosslake required!

5. Helps Your Employees Commit Fraud. They can generate receipts that look 100% real, especially if they do their own math (and ensure all the subtotals and totals add-up and the prices are actual menu prices), look up the restaurant name and tax codes, and have a real receipt example to go off of. (And as for the claims by Ramp and other T&E firms that they can detect fake receipts generated by Chat-GPT, good luck with that, because all the user has to do is strip the embedded metadata/fingerprint by taking an image of the image or running it through a utility that strips the metadata/fingerprint or converts the image format to one that loses the metadata.)

Can you name another technology that comes with all of these severe negatives (with more being discovered regularly, including addiction and a decrease in cognition as a result of using it)? (We can’t! Not even close!)

[ And going back to the “people do all of this too” argument, it’s true we collectively do, but the vast majority of humans are not narcissistic sociopathic psychopathic robber barons with delusions of grandeur and no moral code or ethics. (Most criminals and con artists have a code or a line they won’t cross. To date, Gen-AI has proven such a concept is beyond it.) ]

And for those of you who believe Gen-AI is emergent, that has been refuted multiple times. There is no emergence in these models whatsoever. They were, are, and will always be dumb as a doorknob, while being much less reliable. (At least a doorknob, when turned sufficiently, will always open a door.) If you want to believe, go find religion (and keep your religion out of technology), or at least restrain yourself to the paranormal. When it comes to Gen-AI, THERE IS NOTHING THERE.

As for “it doesn’t have to be smart to be useful”, that is one of the most useless statements ever, because it’s context-free and could be 100% true or 100% false or anything in between, all depending on the context you are referring to. By definition, technology does not have to be smart to be useful. Every piece of technology we use today is, by definition, dumb, because it all lacks intelligence. (And some of it is so useful we’d never live without it, like the control systems for energy grids and water works, air traffic, and modern communications.)

However, all of the dumb technology that we have developed that is useful is ONLY useful with a precise context that defines the problem to be solved, the inputs it will expect, and the outputs that can be generated. An online order system is useless in a nuclear power plant control station, for example. (And while you think that is far-fetched, many of the areas vendors are claiming you can apply Gen-AI to are even more far-fetched.)

Even in the situations where the task you want done ultimately boils down to the digestion, search, summarization, and/or generation of outputs based on a very, very large corpus of data (which is essentially what LLMs were designed to do), they are still only useful as a starting point, suggestion generator, or guide. They are not perfect, and are totally capable of misunderstanding the question, being too broad in their interpretation of one or more inputs, being incorrect in their interpretation of the prompt for the desired output, generating fake data despite requests not to, excluding key documents or data from a result, or including irrelevant or wild-goose documents or data in the result.

In the best case, an LLM query will be about equivalent to a traditional Google search (which weights web pages based on keywords, context, link weight, freshness, etc., and returns only real links in its results) of potentially relevant data. In the worst case, it’s a mix of real relevant, real irrelevant, and a lot of made-up results, and you have no clue which is which until you check every one. (Which means it is ultimately less useful than a drunken plagiarist intern, who will only refer to and copy existing references when sent on a research project; all you will have to do is filter out the irrelevant from the relevant, as such an intern will be too lazy to make anything up completely.)
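To make the contrast concrete, here is a minimal sketch, with a toy, entirely hypothetical corpus, of the kind of keyword-weighted ranking a traditional search engine performs (real engines add link weight, freshness, and much more). The key property is structural: every result is a real document pulled from the index, so nothing in the result list can be fabricated.

```python
# Toy keyword-overlap ranking over a hypothetical three-document index.
corpus = {
    "doc1": "procurement analytics platform overview",
    "doc2": "recipe for julienne fries",
    "doc3": "spend analysis and sourcing optimization platform",
}

def rank(query, docs):
    """Score each real document by keyword overlap; return matches, best first."""
    terms = set(query.lower().split())
    scores = {
        doc_id: sum(word in terms for word in text.lower().split())
        for doc_id, text in docs.items()
    }
    # Only documents that actually matched are returned; none are invented.
    return [d for d, s in sorted(scores.items(), key=lambda kv: -kv[1]) if s > 0]

print(rank("spend analysis platform", corpus))  # ['doc3', 'doc1']
```

An LLM, by contrast, synthesizes its answer token by token, so there is no such guarantee that anything in its output corresponds to a real document.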

Since Gen-AI LLMs are what is powering all of the Agentic AI claims, it should now be quite clear as to why Agentic AI is a fraud!

Now, this isn’t saying Gen-AI is bad (it will have some solid use cases where it is dependable with further research; as with all generations of AI tech before it, it just needs more time), nor is it saying that we won’t have smarter software agents in the future that can do considerably more than they can today. It’s just that we aren’t anywhere close to being there yet, and those agents won’t be based on Gen-AI as it exists today.

So for now, as Sourcing Innovation has always advocated, switch your focus to Augmented Intelligence and systems that make your humans significantly more productive than they are today. The right systems that automatically

  1. collect, summarize, and perform standard analysis on all of the data available;
  2. make easily modified suggestions based on those analyses, similar situations, and typical organizational processes (that can then be easily accepted); and then
  3. invoke traditional rules-based automations based on those accepted scenarios and invoke the right processes to implement the defined workflow

can, depending on the task, make a human three (3), five (5), and even ten (10) times more productive than they are today, allowing a team of two (2) to do the work that used to require a team of six (6), ten (10), or even twenty (20). For example, consider the best AI-based invoice processing applications that can automatically process standard form invoices, break out and classify all the data, auto-match to (purchase) orders and receipts, auto-grab missing index/supplier data, auto-accept and process if everything matches within tolerance, auto-reject (and send back with the reason and correction needed) in the case of a significant mismatch, and automate a dispute resolution process. When these, with the right configuration and training, get most organizations to 95% touch-less processing (with zero significant errors), that’s a 10X improvement on invoice processing.
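The auto-accept/auto-reject step described above is, at its core, rules-based three-way matching. A minimal sketch follows; the field names, 2% tolerance, and records are all hypothetical assumptions for illustration, not any vendor’s actual implementation:

```python
# Hypothetical three-way match: invoice vs. purchase order vs. goods receipt.
TOLERANCE = 0.02  # 2% price variance allowed (assumed policy)

def process_invoice(invoice, po, receipt):
    """Auto-accept, auto-reject with a reason, or route to dispute resolution."""
    if invoice["po_number"] != po["po_number"]:
        return ("reject", "invoice references the wrong purchase order")
    if invoice["qty"] != receipt["qty_received"]:
        return ("dispute", "billed quantity does not match goods receipt")
    variance = abs(invoice["unit_price"] - po["unit_price"]) / po["unit_price"]
    if variance <= TOLERANCE:
        return ("accept", "three-way match within tolerance")
    return ("reject", f"price variance {variance:.1%} exceeds tolerance")

po = {"po_number": "PO-1001", "unit_price": 10.00}
receipt = {"qty_received": 50}
good = {"po_number": "PO-1001", "qty": 50, "unit_price": 10.10}  # 1% variance
bad = {"po_number": "PO-1001", "qty": 50, "unit_price": 12.50}   # 25% variance

print(process_invoice(good, po, receipt))  # auto-accepted
print(process_invoice(bad, po, receipt))   # auto-rejected with a reason
```

Note that every rule here is deterministic and auditable, which is exactly why this class of automation can safely run touch-less at scale while a Gen-AI “agent” cannot.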

While not all tasks/functions will achieve the high efficiency of invoice processing, depending on the time required to collect and analyze data before strategic decisions can be made, most functions can easily see a 3X improvement with the right Augmented Intelligence technology. With the right tech (especially with supercomputing capabilities in modern cloud data centres), we have finally reached the age where you can truly do more with less … and all you need to do is NOT get Blinded By The Hype.

And for those interested, this latest rant was inspired by one of THE REVELATOR‘s Triple-Play Thursday LinkedIn posts and comments thereon (which have some deeper explanations in the comments if you want to dig in even deeper).