Category Archives: Technology

Agentic AI is a Fraud — And Arguments to the Contrary are BS!

We’ve been over this many times here and on LinkedIn. And we are going to go over many of the reasons yet again, because it seems we have to. But first, we’re going to back up and give you a fundamental answer as to why Agentic AI cannot exist with today’s technology.

The dictionary definition of an agent is generally either “a person who acts on behalf of another” or “a person or group that takes an active role or produces a specific effect”. It’s about “doing, performing, or managing” for the express purpose of a “desired outcome”.

Classically, an agent is a person; a person is an individual human; and a human, unlike a thing, is intelligent, which today’s technology is not. That alone should be Q.E.D.

However, some “modern” or “progressive” dictionaries (and there are at least 26 publishing houses publishing dictionaries, as per Publishers Global) will allow an “agent” to be a “thing”, as the term has been used in technical diagrams and patents. So if you define AI as a thing (which is being nice), this refutation is not enough, and we have to consider the rest of the definition.

As for “doing, performing, or managing”: like any automated system, it does, unfortunately, do something. However, unlike past “automations”, which took fixed inputs and produced predictable outputs, today’s Gen-AI takes an input and produces an unpredictable output that depends on how it was trained, parameterized, implemented, interacted with (down to the specific wording and format of the prompt, as these systems are designed to continuously “learn” [a huge misnomer, as they do not learn, but simply evolve their state]), fed data, etc. As a result, two systems fed the same inputs at any point in time are not guaranteed to give the same, or even similar, output, and are NOT guaranteed to produce the “desired outcome”: even a single “bit” of difference between data sets, configuration, and historical usage can completely change an outcome, just as one parameter can completely change an optimal solution in a strategic sourcing decision optimization model. While this can be argued to be unlikely (and it is, for run-of-the-mill scenarios), it’s not infeasible; in fact, it has a statistically significant chance of happening the further an input is from the norm, and even 1/100 is significant if you plan on pushing [tens or hundreds of] thousands of processing tasks through the agent. Just as one wrong cost (off by a factor of 100 due to a decimal error) can completely change the recommended award in strategic sourcing decision optimization, one wrong input, in the right situation, can completely change the expected outcome, because these are essentially super-sized multi-layer deep learning neural networks (with recurrent, embedding, attention, feed-forward, and maybe even feed-back layers) based on non-linear and/or statistical activation functions whose parameters change with every activation and, thus, every input.
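To make that sensitivity concrete, here is a deliberately tiny toy sketch (the weights, labels, and steepness are entirely made up for illustration; real models have billions of parameters) showing how a steep nonlinear activation can flip a decision on two near-identical inputs:

```python
import math

def tiny_net(x):
    # one "hidden layer" with a steep nonlinear (tanh) activation;
    # the weight of 50.0 and the 0.5 threshold are arbitrary toy values
    h = [math.tanh(50.0 * (x - 0.5)), math.tanh(-50.0 * (x - 0.5))]
    scores = {"approve": h[0], "reject": h[1]}
    # the decision is whichever output scores highest
    return max(scores, key=scores.get)

# inputs differing by 0.002 land on opposite sides of the decision
print(tiny_net(0.501))  # approve
print(tiny_net(0.499))  # reject
```

The point is not this toy function but the shape of the behavior: near a decision boundary of a nonlinear system, an input perturbation far smaller than measurement noise can completely change the outcome.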

As for the argument that “they often work as well as people, so why not?”, that’s equivalent to arguing that “dynamite often works as well as excavators, so why not?” or “loop antennas work just fine for direction bearing on aircraft, so why not?”. In the dynamite analogy, the “why not?” is that even the best expert can’t always predict the full extent, direction, or result of the explosion (which, FYI, could also create a shockwave that damages or takes down nearby structures). In the loop antenna analogy, ask Amelia Earhart how reliable they are when the range is too short. Oh wait, you can’t! (And that’s why modern planes have magnetic compasses and GPS in addition to radio-based navigation.)

As for the arguments that the issues aren’t unique to AI and show up in other systems or people, while that is fundamentally true, there is no other technology ever invented that collectively has all of the issues we have so far identified in Gen-AI … and I’m sure we aren’t done yet!

[ Moreover, when you think about it logically, it makes no sense that we are so intent on pursuing this technology after having spent almost a century designing and building computing systems to be more accurate and reliable than we could ever be (with stress-tested design techniques, hardware, and generations of error-correcting coding to get to the point where a computer can be expected to do trillions of calculations without fail, whereas an average human, who can do roughly one simple calculation a second, might not get through 10 without an error), while continually enhancing their capacity to the point that the largest supercomputer is now one quintillion times more powerful at math than we are! ]

As per a previous post, for starters, this technology:

1. Lies. It hallucinates every second, and often makes up fake summaries based on fake articles written by fake people with fake bios that it generates in response to your query, because it’s trained to satisfy (even though the X-Files made it clear in Rm9sbG93ZXJz how bad an idea this was back in 2018, four months before GPT-1 was released). And that’s just the tip of the iceberg.

2. Doesn’t Do Math. These models don’t do math as they are built to combine the most statistically relevant inputs, not to do standard arithmetic. Plus, they aren’t even guaranteed to properly recognize an equation, especially when words are used. So it will miscalculate, and sometimes even misread numbers and shift decimal points. (Just ask DOGE, even though they won’t admit it.)

3. Opens Doors For Hackers. Over 70% of AI code has been found to be riddled with KNOWN security holes. (So imagine how many more holes it’s introducing!) It doesn’t generate code better than an average developer, and sometimes generates code that’s even worse than that of a drunken plagiarist intern, who can at least identify a good code example to base the generated code on.

4. Puts Your Entire Network at Risk. If you’re using it to automate code updates, cross-platform integration and orchestration, or system tasks, all it has to do is generate one wrong command and it could lock up and shut down your entire network, no CrowdStrike required!

5. Helps Your Employees Commit Fraud. They can generate receipts that look 100% real, especially if they do their own math (and ensure all the subtotals and totals add up and the prices are actual menu prices), look up the restaurant name and tax codes, and have a real receipt example to go off of. (And as for the claims by Ramp and other T&E firms that they can detect fake receipts generated by Chat-GPT, good luck with that, because all the user has to do is strip the embedded metadata/fingerprint by taking an image of the image or running it through a utility that strips the metadata/fingerprint or converts the image format to one that loses the metadata.)
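To see why arithmetic-based detection fails against a careful fake, consider this sketch of a hypothetical consistency check (not any actual T&E vendor’s method; the line items and tax rate are invented): it catches sloppy fakes whose totals don’t add up, but an internally consistent fabricated receipt sails right through:

```python
def receipt_consistent(line_items, subtotal, tax_rate, total):
    """Return True if the receipt's arithmetic is internally consistent."""
    calc_subtotal = round(sum(qty * price for qty, price in line_items), 2)
    calc_total = round(calc_subtotal * (1 + tax_rate), 2)
    return calc_subtotal == round(subtotal, 2) and calc_total == round(total, 2)

items = [(2, 14.50), (1, 9.00)]  # hypothetical line items

# a sloppy fake whose totals don't match its own line items gets flagged
print(receipt_consistent(items, 40.00, 0.13, 45.20))  # False

# a careful fake that "does its own math" passes every arithmetic check
print(receipt_consistent(items, 38.00, 0.13, 42.94))  # True
```

Once the math is right and the metadata is stripped, an arithmetic check has nothing left to detect.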

Can you name another technology that comes with all of these severe negatives (with more being discovered regularly, including addiction and a decrease in cognition as a result of using it)? (We can’t! Not even close!)

[ And going back to the “people do all of this too” argument, it’s true we collectively do, but the vast majority of humans are not narcissistic sociopathic psychopathic robber barons with delusions of grandeur and no moral code or ethics. (Most criminals and con artists have a code or a line they won’t cross. To date, Gen-AI has proven such a concept is beyond it.) ]

And for those of you who believe Gen-AI is emergent, that claim has been refuted multiple times. There is no emergence in these models whatsoever. They were, are, and will always be dumb as a doorknob while being much less reliable. (At least a doorknob, when turned sufficiently, will always open a door.) If you want to believe, go find religion (and keep your religion out of technology) or at least restrain yourself to the paranormal. When it comes to Gen-AI, THERE IS NOTHING THERE.

As for “it doesn’t have to be smart to be useful”, that is one of the most useless statements ever, because it’s context-free and could be 100% true, 100% false, or anything in between, all depending on the context you are referring to. By definition, technology does not have to be smart to be useful. Every piece of technology we use today is, by definition, dumb, because it all lacks intelligence. (And some of it is so useful we’d never live without it, like control systems for energy grids and water works, air traffic, and modern communications.)

However, all of the dumb technology that we have developed that is useful is ONLY useful with a precise context that defines the problem to be solved, the inputs it will expect, and the outputs that can be generated. An online order system is useless in a nuclear power plant control station, for example. (And while you think that is far-fetched, many of the areas vendors are claiming you can apply Gen-AI to are even more far-fetched.)

Even in the situations where the task you want ultimately boils down to the digestion, search, summarization, and/or generation of outputs based on a very, very large corpus of data, which is essentially what LLMs were designed to do, they are still only useful as a starting point, suggestion generator, or guide. They are not perfect; they are totally capable of misunderstanding the question, being too broad in their interpretation of one or more inputs, misinterpreting the prompt for the desired output, generating fake data despite requests not to, excluding key documents or data from a result, or including irrelevant or wild-goose documents or data in the result.

In the best case, an LLM query will be about equivalent to a traditional Google search (that weights web pages based on key words, context, link weight, freshness, etc. and returns only real links in its results) of potentially relevant data. In the worst case, it’s a mix of real relevant, real irrelevant, and a lot of made up results, and you have no clue which is which until you check every one. (Which means it is ultimately less useful than a drunken plagiarist intern who will only refer to and copy existing references when sent on a research project, and all you will have to do is filter out the irrelevant from the relevant, as such an intern will be too lazy to make anything up completely.)
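The practical consequence is that every reference an LLM returns must be triaged before use. A minimal sketch of that triage (the DOI strings and the idea of a local trusted index are purely hypothetical, for illustration):

```python
def triage_citations(llm_citations, trusted_index):
    """Split LLM-returned citations into 'found in a trusted index' and
    'unverified -- must be checked by hand before use'."""
    verified, suspect = [], []
    for citation in llm_citations:
        (verified if citation in trusted_index else suspect).append(citation)
    return verified, suspect

# hypothetical trusted index of known-real identifiers
index = {"doi:10.1000/real-paper", "doi:10.1000/other-paper"}

verified, suspect = triage_citations(
    ["doi:10.1000/real-paper", "doi:10.9999/made-up"], index)
print(verified)  # ['doi:10.1000/real-paper']
print(suspect)   # ['doi:10.9999/made-up']
```

With a traditional search, every result is at least a real link; with an LLM, everything lands in the “suspect” bucket until a human (or an external index) clears it.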

Since Gen-AI LLMs are what are powering all of the Agentic AI claims, it should now be quite clear as to why Agentic AI is a fraud!

Now, this isn’t saying Gen-AI is bad (with further research it will have some solid use cases where it is dependable; as with all generations of AI tech before it, it needs more time), nor is it saying that we won’t have smarter software agents in the future that can do considerably more than they can today. It’s just saying that we aren’t anywhere close to being there yet, and that those agents won’t be based on Gen-AI as it exists today.

So for now, as Sourcing Innovation has always advocated, switch your focus to Augmented Intelligence and systems that make your humans significantly more productive than they are today. The right systems that automatically

  1. collect, summarize, and perform standard analysis on all of the data available;
  2. make easily modified suggestions based on those analyses, similar situations, and typical organizational processes (that can then be easily accepted); and then
  3. invoke traditional rules-based automations based on those accepted scenarios and invoke the right processes to implement the defined workflow
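A minimal sketch of how those three steps chain together (the threshold, field names, and “auto-approve” rule are invented for illustration, not any real product’s logic); note that the rules-based automation only fires after a human accepts the suggestion:

```python
def summarize(invoices):
    """Step 1: collect and perform standard analysis on the available data."""
    return {"count": len(invoices), "total": sum(i["amount"] for i in invoices)}

def suggest(summary, threshold=10_000):
    """Step 2: an easily modified suggestion the human can accept or reject."""
    return "auto-approve batch" if summary["total"] < threshold else "route for review"

def run_workflow(invoices, accepted_by_human):
    """Step 3: invoke traditional rules-based automation on acceptance."""
    action = suggest(summarize(invoices))
    if accepted_by_human:
        return f"executed: {action}"
    return "awaiting human decision"

batch = [{"amount": 1200}, {"amount": 3400}]
print(run_workflow(batch, accepted_by_human=True))   # executed: auto-approve batch
print(run_workflow(batch, accepted_by_human=False))  # awaiting human decision
```

The key design choice: the system proposes and automates, but the human stays the gate between suggestion and execution.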

can, depending on the task, make a human three (3), five (5), or even ten (10) times more productive than they are today, allowing a team of two (2) to do the work that used to require a team of six (6), ten (10), or even twenty (20). For example, consider the best AI-based invoice processing applications that can automatically process standard-form invoices, break out and classify all the data, auto-match to (purchase) orders and receipts, auto-grab missing index/supplier data, auto-accept and process if everything matches within tolerance, auto-reject (and send back with the reason and correction needed) in the case of a significant mismatch, and automate a dispute resolution process. When these, with the right configuration and training, get most organizations to 95% touch-less processing (with zero significant errors), that’s a 10X improvement on invoice processing.
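The auto-accept/auto-reject logic at the heart of such an application is classic three-way matching. A simplified sketch (the field names and the 2% tolerance are illustrative assumptions):

```python
def process_invoice(invoice, po, receipt, tolerance=0.02):
    """Three-way match: invoice vs purchase order vs goods receipt.
    Auto-accept within tolerance, auto-reject with a reason otherwise."""
    if invoice["qty"] != receipt["qty"]:
        return ("reject", "quantity billed does not match quantity received")
    if abs(invoice["unit_price"] - po["unit_price"]) > tolerance * po["unit_price"]:
        return ("reject", "unit price outside tolerance of PO price")
    return ("accept", "matched within tolerance")

inv = {"qty": 100, "unit_price": 10.10}
po = {"qty": 100, "unit_price": 10.00}
rec = {"qty": 100}
print(process_invoice(inv, po, rec))  # ('accept', 'matched within tolerance')
```

Everything here is deterministic rules, which is exactly why it can be trusted for touch-less processing; the exceptions it rejects are what land in the human queue.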

While not all tasks/functions will achieve the high efficiency of invoice processing, depending on the time required to collect and analyze data before strategic decisions can be made, most functions can easily see a 3X improvement with the right Augmented Intelligence technology. With the right tech (especially with supercomputing capabilities in modern cloud data centres), we have finally reached the age where you can truly do more with less … and all you need to do is NOT get Blinded By The Hype.

And for those interested, this latest rant was inspired by one of THE REVELATOR‘s Triple-Play Thursday LinkedIn posts and comments thereon (which have some deeper explanations in the comments if you want to dig in even deeper).

We’ll Say It Again. Analyst Firm 2*2s Are NOT Appropriate for Tech Selection!

Last year, while ranting about the plethora of utterly useless logo maps (which includes the Mega Map the doctor created to demonstrate the extreme futility of these maps), we also did a dive into why analyst firm 2*2s are NOT appropriate for tech selection. This is coming up again as a certain firm is really pushing All AI all-the-time and you can tell it’s about to infuse all their maps. Plus, the biggest firms are really pushing their quadrants, waves, and marketscapes, and most of these are showing the same solutions they showed last year and the year before that and the year before that and so on (going back a decade in some cases).

That, and a number of people are lamenting their lack of usefulness on LinkedIn, with one person even creating yet another logo map to highlight the “significant solutions that matter” (but we’ll save that rant for another day), so it’s time to make it clear that these maps are not appropriate (on their own) for tech selection. For example, in a discussion on my post on how your standard sourcing doesn’t work for direct, Thomas Audibert correctly states that static quadrants, in any form, do not work. (He then went on to correctly note that if you say there are, for instance, 80 sourcing solutions, it means that there are at least 20 niche (geographic, industry, customer size, …) categories of interest, and that, unless they are catered to within 20 different quadrants, “this makes no sense to me.”)

And it doesn’t, because all a map can do, in the best situation, is give you a set of more-or-less comparable solutions that each serve a specific function (so you don’t end up trying to compare a Strategic Sourcing solution to a catalog-based e-Procurement solution to an Accounts Payable solution, which, of course, serve three completely different functions). If it’s a good map, and by that I mean focussed on two things max, like the Spend Matters Solution Map that only scores tech (on one axis) and only presents tech vs average customer scores (on the other axis), then you can use it to verify that one or two of your key requirements are met (such as the tech is solid and the customers are generally happy), but that’s it. (But if it’s a map that squishes 16 different scores into 2 dimensions, it’s useless … you don’t know what is contributing to the scores. What’s most important to you could be the lowest score in that mish-mash of a number that looks above average.)
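Here is the squished-scores problem in miniature (the vendor names, dimensions, and equal weights are made up for illustration): two very different vendors collapse to the identical aggregate, and the one dimension that matters most to you vanishes:

```python
def aggregate(scores, weights):
    """Collapse multiple analyst scores into one map coordinate."""
    return sum(scores[k] * weights[k] for k in scores)

weights = {"tech": 0.25, "support": 0.25, "analytics": 0.25, "usability": 0.25}
vendor_a = {"tech": 9, "support": 9, "analytics": 9, "usability": 1}  # unusable!
vendor_b = {"tech": 7, "support": 7, "analytics": 7, "usability": 7}  # solid all round

print(aggregate(vendor_a, weights))  # 7.0 -- looks identical to ...
print(aggregate(vendor_b, weights))  # 7.0 -- ... a well-balanced vendor
```

If usability is your make-or-break requirement, the map coordinate tells you precisely nothing; you would need the underlying per-dimension scores, which the 2*2 has thrown away.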

Moreover, at the end of the day, all an analyst can do that is useful is rate a vendor on one or more business independent objective dimensions that can be scored easily and, more importantly, give a customer comfort that the vendor does well on this dimension and they don’t have to worry about it in their evaluation. (For example, if a vendor does well in Spend Matters Solution Map, you know you don’t have to evaluate the underlying technical foundations, which is something most companies aren’t good at.) However, that’s not enough for a selection.

When it comes to tech, it’s important that:

  1. it’s solid
  2. it fills the need you are searching for
  3. it is easy for the majority of the users to use for the functions they will be performing the majority of the time

And, guess what, an analyst can only verify the first requirement. Why? An analyst doesn’t know your needs, you do. Moreover, they don’t know the TQ (technical quotient) of your users, the functions they do daily, or the processes they follow. You do. So how can you expect an analyst to produce a map that tells you that?

But, if you’ve been paying attention, the solution to your problem is not tech. It’s process. And until you nail that, and then select the tech that matches that process, tech alone will NEVER solve your problem. NEVER.

And since analysts don’t know your business, or your

  • business size, Procurement department size, maturity
  • culture
  • risk tolerance
  • innovation level/comfort
  • current processes / required processes
  • customer service needs
  • etc. etc. etc.

or even how these slide on a scale across different companies of different sizes across industries, there’s no way they can produce a map that tells you all of this. Or even a fraction of this.

That’s why you need an analyst or independent consultant that truly understands the solution space you are searching in, what those solutions should do, and how to help you identify the subset that is not only technically solid but is also likely to meet your business requirements. (And remember, it’s the Analyst, not the analyst firm. If the analyst hasn’t reviewed dozens of vendors in the space you are searching in that offer the type of solution you are searching for, doesn’t know the must vs. should vs. nice-to-have requirements, and, most importantly, doesn’t have the technical chops to validate the solution technically (which is the weakness of every non-IT / non-Engineering business department), he’s not the analyst for you!)

How AI Enhances 10 Common Procurement Challenges Part II

A recent CIO article drew my ire because it claimed that AI Overcomes 10 Common Procurement Challenges as it oversimplified the problems and overstated the benefits of AI. Let’s finish them one-by-one.

Legacy Systems Complicate the Adoption of New Technology: The article claims AI streamlines integration by assessing system compatibility, automating migration, and reducing downtime. While two out of three ain’t bad, it ain’t good when the critical requirement of assessing system compatibility cannot be met by AI — since simple text matching isn’t helpful if the interface of a legacy system isn’t specified in a standard format (as otherwise it’s essentially field-name matching, which is no different than human guesswork). The reality is that humans still have to define/verify the mappings before the AI can take over.

Letting AI do the mappings is fraught with errors. And it’s even worse when you let it automatically connect systems, pull and push data, and replicate incorrectly mapped and bad data across systems, “fixing” data that was actually correct on system integration because the “bad” data in one system is used to overwrite the good data in another just because it appeared to be more recent. Because it’s automated, AI can propagate and exacerbate errors at an unprecedented rate and, in a matter of seconds, make a mess that can take months to repair.
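To see why name matching is little better than guesswork, here is a sketch of the naive approach using Python’s standard difflib (the field names and cutoff are invented for illustration): anything below the similarity cutoff comes back as None and must be mapped by a human, and even the “matches” need human sign-off before any data moves:

```python
import difflib

def propose_mappings(source_fields, target_fields, cutoff=0.5):
    """Naive name-similarity field mapping. Every proposal still needs
    human verification; None means a human must map it from scratch."""
    proposals = {}
    for field in source_fields:
        match = difflib.get_close_matches(field, target_fields, n=1, cutoff=cutoff)
        proposals[field] = match[0] if match else None
    return proposals

src = ["vendor_name", "inv_total", "ship_dt"]
tgt = ["supplier_name", "invoice_total", "delivery_date"]
mappings = propose_mappings(src, tgt)
print(mappings)
```

On these names, "inv_total" pairs up with "invoice_total", but "ship_dt" falls below the cutoff and comes back unmapped; and nothing in string similarity tells you whether the mapped fields even hold semantically compatible data. That judgment remains a human job.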

Managing Supplier Risks is a Growing Concern: AI can continuously monitor supplier performance, predict risks, and ensure compliance. This is one situation where they were almost perfectly correct, but when they say vendor evaluation can be time-intensive and imply that AI can speed it up, they overlook the fact that evaluations still have to be done by humans, and tech can’t speed that up.

Moreover, if you think you can augment your data with third party data to speed up the evaluations, you’re just fooling yourself. You just make bad decisions faster.

Manual Procurement Processes Drain Resources: AI can definitely automate repetitive tasks, reduce human error, increase efficiency, and free your team to focus on strategic initiatives, but only for tasks that are well defined, typically free from exceptions, and capable of being processed by standard rules. However, this can’t be done until the repetitive tasks are identified, processing rules defined, standard exceptions identified, and additional rules defined. Only then can the AI automate enough to be useful.

Moreover, using a next-gen LLM with chain-of-compute to try to break the requirements of a task down into subtasks, execute those subtasks automatically, and automate a process without any human intervention is just as likely to go wrong as it is to go right.

Demand Forecasting is Often Inaccurate: AI can improve demand forecasting, but only if you have the right data — it’s not a magic box, just a black box that you need to understand.

It’s not just demand trends based on utilization / point-of-sale data; it’s also market conditions, which can sharply change a demand curve overnight. The traditional curve fitting / machine learning that most “AI” is based on cannot detect a change in market conditions or a political situation that can cause a rapid change in demand.
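The limitation is easy to demonstrate: fit a trend to history and it will cheerfully extrapolate straight through a structural break. A toy sketch (synthetic numbers, with ordinary least squares standing in for whatever curve-fitting the tool uses):

```python
def fit_trend(history):
    """Ordinary least-squares line through (t, demand) points."""
    n = len(history)
    xs = range(n)
    mean_x = sum(xs) / n
    mean_y = sum(history) / n
    slope = (sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, history))
             / sum((x - mean_x) ** 2 for x in xs))
    intercept = mean_y - slope * mean_x
    return lambda t: intercept + slope * t

# steady demand history: 100, 102, 104, ... units per week
history = [100 + 2 * t for t in range(10)]
forecast = fit_trend(history)
print(round(forecast(10)))  # 120 -- fine while market conditions hold

# then a market shock halves real demand overnight; the model has no idea
actual_next_week = 60
print(round(forecast(10)) - actual_next_week)  # forecast is off by 60 units
```

No amount of historical curve-fitting encodes the external event; detecting it requires either external signals in the data or a human watching the market.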

Procurement Remains Transactional Rather Than Strategic: AI DOES NOT transform procurement into a strategic function that optimizes spend, improves supplier collaboration, and aligns purchasing decisions with your business! Only people-powered Human Intelligence (HI!) can do that. Remember — transforming Procurement requires defining a strategy, defining appropriate processes, identifying the right people to transform it, and then, and only then, identifying the right technologies.

Assuming that you can slap in AI and transform a tactical function into a strategic one is worse than a pipe dream, it’s a recipe for disaster. Running fast and hard doesn’t get you any closer to the finish line if it’s not in the right direction. For more details, see the dozens of posts about AI in the archives.

Again, we’re not saying that AI is bad. Technology is neither good nor bad. But, like any technology, it has to be ready for prime time, correctly identified, correctly implemented, and correctly used — and that requires a lot of Human Intelligence (HI!) and planning, and the right processes put in place. Shoving it in and expecting a miracle is dangerous. And this is yet another article that implies you can just shove it in and get results. And you can’t. Especially if it’s the wrong technology, which can enhance your problem instead of shrinking it. That’s the problem. This article, like many others, doesn’t tell you about the dangers and downfalls and what you have to do to avoid them.

How AI Enhances 10 Common Procurement Challenges Part I

A recent CIO article drew my ire because it claimed that AI Overcomes 10 Common Procurement Challenges as it oversimplified the problems and overstated the benefits of AI. Let’s take them one-by-one.

Procurement Takes Too Long, Slowing Innovation: According to the article, AI-driven platforms can generate RFPs, accelerate sourcing, automate approvals, and reduce cycle times … which is mostly true. Properly applied, AI can accelerate sourcing, reduce cycle times, and automate approvals … but not all approvals. As for RFP generation, that’s very limited — LLMs can generate RFPs with a simple prompt, but not necessarily a good RFP. The best RFPs are designed by humans (and then automation, which may or may not use AI, can pull in data from supporting documents as needed), and as for acceleration, it depends on the project — it can’t speed up supplier qualification where humans need to inspect the products and verify the requirements.

Moreover, a rush to AI can make things worse, and not better. Letting AI generate an RFP that misses a key requirement in terms of required certifications, performance criteria, production capacity, etc. can entirely invalidate an RFP process and lead to months of wasted effort if no human realizes that this key requirement was missed until an award is offered and a request for the certification, capacity, etc. is delivered and a “sorry, we don’t have / can’t do that” is returned.

Legal and Budget Complexities Create Bottlenecks: Budget tracking systems and rules-based automation allow for instantaneous budgetary approvals. Contract negotiation software can automate redlining, compliance checks, etc., but cannot handle a complex negotiation for a complex project where each side has a lot of requirements and multiple parties to satisfy. AI speeds up the technical drudgery, but not the human interaction.

Moreover, if you turn over negotiations to software, you have no idea what the end result will be. If you let it negotiate based on market data, and the cost data is off, you could be committing to a bad deal. If you let it predict timeframes based on how it expects prices to rise/stay high, but it’s off by two years, it could lock you into a three year deal when you only need a one year deal. And so on.

CIOs Need to Upskill Their Teams in AI and Cybersecurity: Just because “AI” can simplify processes with guided intelligence, that doesn’t mean the team is upskilled in the process. The reality is, there is no incentive for users to learn anything if they think the system will guide them in everything they need to do.

Thus, if you over-invest in AI (especially the kind that guides users in every task, works quite well on the basic tasks they do daily, and doesn’t screw up the first half dozen or so moderately complex tasks), the user will believe the system is almost flawless, start to trust it implicitly, stop questioning it as time goes on, start believing there is no need to learn anything else because the system knows it, and, over time, stop thinking. And then, instead of performance improving, it will decline … and that decline might be accompanied by a major financial loss if a bad contract is signed or a major risk is ignored.

Data Inaccuracy Leads to Poor Procurement Decisions: While it’s true that over three quarters of organizations struggle with unreliable data, AI doesn’t magically fix the problem. It can help with cleansing, validation, and procurement trend analysis, but ask any spend analysis vendor who has tried to apply an LLM to unclassified vendors about the classification accuracy (which tends to top out around 70%): good data still requires manual cleansing and classification, even where the system reports high confidence. It can definitely help, but it doesn’t take the onus off of the human.

In other words, if you believe that you can plug in a magic AI black box and that it will fix your data, you are gravely mistaken. Sure, it will tell you that it has cleansed, classified, and validated all of your data, but if it’s only 70% accurate, it has only made matters worse if you trust the data 100% and don’t know which 30% is inaccurate. When you base your decisions on data, and the data is bad, you are bound to make a bad decision. The question is: how bad? You don’t know. And that’s a big problem!
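The defensible pattern keeps the human in the loop via confidence gating. A minimal sketch (the threshold, labels, and record tuples are illustrative assumptions, not any vendor’s actual pipeline): accept only what the model is genuinely confident about, and queue everything else for a human instead of letting it flow silently into your spend cube:

```python
def route_classifications(results, min_confidence=0.95):
    """Auto-accept only high-confidence classifications; route the
    rest to a human review queue instead of into the data."""
    auto, review = [], []
    for record, label, confidence in results:
        if confidence >= min_confidence:
            auto.append((record, label))
        else:
            review.append(record)
    return auto, review

results = [
    ("ACME Corp", "Office Supplies", 0.98),
    ("Beta LLC", "IT Services", 0.71),   # below the bar: human must classify
    ("Gamma Inc", "Logistics", 0.99),
]
auto, review = route_classifications(results)
print(len(auto), len(review))  # 2 1
```

Even this only helps if the model’s reported confidence is itself calibrated, which, as noted above, is exactly where these systems fall down; the review queue is the safety net, not the confidence score.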

B2B Software Selection is Increasingly Complex: Moreover, despite the claims, AI-powered vendor analysis doesn’t really help that much — see Pierre Mitchell’s crazy conversations with DeepSeek-R1. Note how it not only recommends inappropriate vendors, but also recommends vendors that don’t even exist anymore. It can help you discover potential vendors, but you still need human reviews and deep pricing intelligence (from expert SaaS optimizers).

Trusting AI to select your software is worse than trusting an analyst firm map! And we know all of the problems those maps contain. (First of all, they only mention the same 10 to 20 vendors year after year, ignoring the dozens of other vendors that might be more appropriate for you.) AI cannot understand your needs, cannot truly map needs to requirements, cannot truly map requirements to features, and cannot truly assess how relevant a solution is, and definitely can’t assess how well a provider’s culture will match yours.

Come back Thursday for Part II!

Why Are You Still Buying That Fancy New Piece of Software That

  • Could Get You Sued?
  • Increases The Chance You Will Be Hacked!
  • Could Result in a $100 Million Processing Error?
  • Could Shut Down Your Organization’s Systems for Days!
  • Helps Your Employees Commit Fraud?

If someone told you this when evaluating a piece of software, and asked if you wanted to buy it, I’m sure the vast majority of you would say HELL NO!

In which case I want you to please tell me, why are you all still riding the AI Hype Train, Buying, and Using Gen-AI everywhere?

It has already resulted in lawsuits and losses!
The Air Canada lawsuit over the Gen-AI chatbot is just one notable well publicized example.

AI systems are AI coded, and AI code has a much greater security risk
as it generates code using training repositories that contain large amounts of untested, unverified, and high-risk code — generating code so full of security holes it’s a hacker’s dream! (See this great piece in the ACM on The Drunken Plagiarists.)

AI systems negotiate based on the data they have
and a single decimal point error could leave you paying 10X what you need to. Not to mention, they don’t always translate right. Remember, the experimental AI DOGE used claimed an $8 Billion savings on an $8 Million contract!

Bad data generated by an AI system and fed into a legacy system with poor data validity checks can shut it down.
Plus, Gen-AI can also push out bad updates faster than any human can, and you could easily have your own CrowdStrike situation!

Now it’s being used by employees to generate fake receipts
that look so real that, if the employee does a few seconds of research (to get the restaurant info, current menu prices, tax code, etc.), you can’t distinguish the generated image from the real thing. And, before you say “Ramp solves this”, well, it only does if the employee is lazy (which, let’s face it, is human nature, so you’ll catch about 90% of it). But what happens when a user strips the metadata, which, FYI, can be as easy as taking a picture of the picture … oops! (And if you’re a hacker, running it through a metadata stripper/replacement routine is even easier, as you’re just hotkeying a background task.)

AI is good. Gen-AI has its [limited] uses. But unrestricted and unhinged mass adoption of untested, unverified AI for inappropriate uses is bad. So why do you keep doing it?

Especially since it’s now proven it’s worse for you than some illegal drugs! (Source)