Category Archives: AI

Agentic AI is a Fraud — And Arguments to the Contrary are BS!

We’ve been over this many times here and on LinkedIn. And we are going to go over many of the reasons yet again, because it seems we have to. But first, we’re going to back up and give you a fundamental answer as to why Agentic AI cannot exist with today’s technology.

The dictionary definition of an agent is generally either “a person who acts on behalf of another” or “a person or group that takes an active role or produces a specific effect”. It’s about “doing, performing, or managing” for the express purpose of a “desired outcome”.

Classically, an agent is a person; a person is an individual human; and a human, unlike a thing, is intelligent, which today’s technology is not. That should be Q.E.D.

However, some “modern” or “progressive” dictionaries (and there are at least 26 publishing houses publishing dictionaries, as per Publishers Global) will allow an “agent” to be a “thing”, as the term has been used in technical diagrams and patents. So if you define AI as a thing (which is being generous), this refutation alone is not enough, and we have to consider the rest of the definition.

As for “doing, performing, or managing”, like any automated system, it does, unfortunately, do something. However, unlike past “automations”, which took fixed inputs and produced predictable outputs, today’s Gen-AI takes an input and produces an unpredictable output that depends on how it was trained, parameterized, implemented, interacted with (down to the specific wording and format of the prompt, as these systems are designed to continuously “learn” [a huge misnomer, as they do not learn, but simply evolve their state]), fed the data, etc.

As a result, two systems fed the same inputs at any point in time are not guaranteed to give the same, or even similar, output, and are NOT guaranteed to produce the “desired outcome”. Even a single “bit” of difference between data sets, configuration, and historical usage can completely change an outcome, just like one parameter can completely change an optimal solution in a strategic sourcing decision optimization model. While this can be argued to be unlikely (and is, for run-of-the-mill scenarios), it’s not infeasible. In fact, it has a statistically significant chance of happening the more an input deviates from the norm, and even 1/100 is significant if you plan on pushing [tens or hundreds of] thousands of processing tasks through the agent.

Just like one wrong cost (off by a factor of 100 due to a decimal error) can completely change the recommended award in strategic sourcing decision optimization, one wrong input, in the right situation, can completely change the expected outcome, because these are essentially super-sized multi-layer deep learning neural networks (with recurrent, embedding, attention, feed-forward, and maybe even feed-back layers) based on non-linear and/or statistical activation functions whose parameters change with every activation and, thus, every input.
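To make the sensitivity claim concrete, here is a toy sketch (with made-up weights, not any real model): a single logistic unit of the kind these networks are stacked from, where nudging one input by 0.01 flips a thresholded decision.

```python
import math

def neuron(x, w, b):
    # A single logistic unit: the basic building block of the
    # multi-layer networks described above.
    z = sum(wi * xi for wi, xi in zip(w, x)) + b
    return 1.0 / (1.0 + math.exp(-z))

w, b = [4.0, -3.0], 0.5  # illustrative weights, chosen to sit near a boundary

def decide(x):
    return "approve" if neuron(x, w, b) >= 0.5 else "reject"

print(decide([1.00, 1.50]))  # z = 0.0, p = 0.5 -> approve
print(decide([1.00, 1.51]))  # one input off by 0.01 -> p < 0.5 -> reject
```

A real model has billions of such units, so the surfaces where a tiny input change flips the output are everywhere, even if most everyday inputs sit far from one.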

As for the argument that “they often work as well as people, so why not”, that’s equivalent to the argument that “dynamite often works as well as excavators, so why not” or “loop antennas work just fine for direction bearing on aircraft, so why not”. In the dynamite case, the “why not?” is that even the best expert can’t always predict the full extent, direction, or result of the explosion (which, FYI, could also create a shockwave that damages or takes down nearby structures). In the loop antenna case, ask Amelia Earhart how reliable they are when you’re just outside their short range. Oh wait, you can’t! (And that’s why modern planes have magnetic compasses and GPS in addition to radio-based navigation.)

As for the arguments that the issues aren’t unique to AI and show up in other systems or people, while that is fundamentally true, there is no other technology ever invented that collectively has all of the issues we have so far identified in Gen-AI … and I’m sure we aren’t done yet!

[ Moreover, when you think about it logically, it makes no sense that we are so intent on pursuing this technology after having spent almost a century designing and building computing systems to be more accurate and reliable than we could ever be (with stress-tested design techniques, hardware, and generations of error-correcting coding, to get to the point where a computer can be expected to do trillions of calculations without fail, whereas an average human, who can do an average of one simple calculation a second, might not get through 10 without an error), while continually enhancing their capacity to the point that the largest supercomputer is now one quintillion times more powerful at math than we are! ]

As per a previous post, for starters, this technology:

1. Lies. It hallucinates constantly, and often makes up fake summaries based on fake articles written by fake people with fake bios that it generates in response to your query, because it’s trained to satisfy (even though The X-Files made it clear in “Rm9sbG93ZXJz” how bad an idea this was back in 2018, four months before GPT-1 was released). And that’s just the tip of the iceberg.

2. Doesn’t Do Math. These models don’t do math, as they are built to combine the most statistically relevant inputs, not to perform standard arithmetic. Plus, they aren’t even guaranteed to properly recognize an equation, especially when it is expressed in words. So they will miscalculate, and sometimes even misread numbers and shift decimal points. (Just ask DOGE, even though they won’t admit it.)

3. Opens Doors For Hackers. Over 70% of AI-generated code has been found to be riddled with KNOWN security holes. (So imagine how many more holes it’s introducing!) It doesn’t generate code better than an average developer, and sometimes generates code that’s even worse than that of a drunken plagiarist intern, who can at least identify a good code example to base their code on.

4. Puts Your Entire Network at Risk. If you’re using it to automate code updates, cross-platform integration and orchestration, or system tasks, all it has to do is generate one wrong command and it could lock up and shut down your entire network, no CrowdStrike required!

5. Helps Your Employees Commit Fraud. They can generate receipts that look 100% real, especially if they do their own math (and ensure all the subtotals and totals add up and the prices are actual menu prices), look up the restaurant name and tax codes, and have a real receipt example to work from. (And as for the claims by Ramp and other T&E firms that they can detect fake receipts generated by ChatGPT, good luck with that, because all the user has to do is strip the embedded metadata/fingerprint by taking an image of the image, running it through a utility that strips the metadata, or converting the image to a format that loses the metadata.)
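To see why the fingerprint defense is so fragile, here is a deliberately toy model (plain Python, hypothetical field names, no real image library): the “fingerprint” lives in metadata alongside the pixels, and re-capturing the image copies only what is visible.

```python
def generate_receipt_image():
    # A generated "image": pixel data plus a metadata block where a
    # generator might embed a provenance fingerprint (names are hypothetical).
    pixels = b"\x89fake-receipt-pixel-data"
    metadata = {"generator": "some-genai-model", "fingerprint": "c2a1..."}
    return {"pixels": pixels, "metadata": metadata}

def screenshot(image):
    # Taking an image of the image copies only the visible pixels;
    # any embedded metadata/fingerprint never makes the trip.
    return {"pixels": image["pixels"], "metadata": {}}

original = generate_receipt_image()
laundered = screenshot(original)

print(laundered["pixels"] == original["pixels"])  # True: looks identical
print(laundered["metadata"])                      # {}: provenance is gone
```

Real detectors may also look at pixel-level statistics, but anything carried purely in metadata is lost the moment the image is re-captured or re-encoded.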

Can you name another technology that comes with all of these severe negatives (with more being discovered regularly, including addiction and a decrease in cognition as a result of using it)? (We can’t! Not even close!)

[ And going back to the “people do all of this too” argument, it’s true we collectively do, but the vast majority of humans are not narcissistic sociopathic psychopathic robber barons with delusions of grandeur and no moral code or ethics. (Most criminals and con artists have a code or a line they won’t cross. To date, Gen-AI has proven such a concept is beyond it.) ]

And for those of you who believe Gen-AI is emergent, that claim has been refuted multiple times. There is no emergence in these models whatsoever. They were, are, and will always be dumb as a doorknob, while being much less reliable. (At least a doorknob, when turned sufficiently, will always open a door.) If you want to believe, go find religion (and keep your religion out of technology), or at least restrain yourself to the paranormal. When it comes to Gen-AI, THERE IS NOTHING THERE.

As for “it doesn’t have to be smart to be useful”, that is one of the most useless statements ever, because it’s context-free and could be 100% true, 100% false, or anything in between, depending on the context you are referring to. By definition, technology does not have to be smart to be useful. Every piece of technology we use today is, by definition, dumb, because it all lacks intelligence. (And some of it is so useful we’d never live without it, like control systems for energy grids and water works, air traffic, and modern communications.)

However, all of the dumb technology that we have developed that is useful is ONLY useful with a precise context that defines the problem to be solved, the inputs it will expect, and the outputs that can be generated. An online order system is useless in a nuclear power plant control station, for example. (And while you think that is far-fetched, many of the areas vendors are claiming you can apply Gen-AI to are even more far-fetched.)

Even in the situations where the task you want performed ultimately boils down to the digestion, search, summarization, and/or generation of outputs based on a very, very large corpus of data, which is essentially what LLMs were designed to do, they are still only useful as a starting point, suggestion generator, or guide. They are not perfect, and are totally capable of misunderstanding the question, being too broad in their interpretation of one or more inputs, being incorrect in their interpretation of the prompt for the desired output, generating fake data despite requests not to, excluding key documents or data from a result, or including irrelevant or wild-goose documents or data in the result.

In the best case, an LLM query will be about equivalent to a traditional Google search (which weights web pages based on keywords, context, link weight, freshness, etc., and returns only real links in its results) of potentially relevant data. In the worst case, it’s a mix of real relevant, real irrelevant, and a lot of made-up results, and you have no clue which is which until you check every one. (Which means it is ultimately less useful than a drunken plagiarist intern who will only refer to and copy existing references when sent on a research project, since all you will have to do is filter out the irrelevant from the relevant, as such an intern is too lazy to make anything up completely.)

Since Gen-AI LLMs are what are powering all of the Agentic AI claims, it should now be quite clear as to why Agentic AI is a fraud!

Now, this isn’t saying Gen-AI is bad (it will have some solid use cases where it is dependable with further research; as with all generations of AI tech before, it needs more time), nor is it saying that we won’t have smarter software agents in the future that can do considerably more than they can today. It’s just that we aren’t anywhere close to being there yet, and that those agents won’t be based on Gen-AI as it exists today.

So for now, as Sourcing Innovation has always advocated, switch your focus to Augmented Intelligence and systems that make your humans significantly more productive than they are today. The right systems that automatically

  1. collect, summarize, and perform standard analysis on all of the data available;
  2. make easily modified suggestions based on those analyses, similar situations, and typical organizational processes (that can then be easily accepted); and then
  3. invoke traditional rules-based automations based on those accepted scenarios and invoke the right processes to implement the defined workflow

can, depending on the task, make a human three (3), five (5), and even ten (10) times more productive than they are today, allowing a team of two (2) to do the work that used to require a team of six (6), ten (10), or even twenty (20). For example, consider the best AI-based invoice processing applications, which can automatically process standard form invoices, break out and classify all the data, auto-match to (purchase) orders and receipts, auto-grab missing index/supplier data, auto-accept and process if everything matches within tolerance, auto-reject (and send back with the reason and correction needed) in the case of a significant mismatch, and automate a dispute resolution process. When these, with the right configuration and training, get most organizations to 95% touch-less processing (with zero significant errors), that’s a 10X improvement on invoice processing.
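A minimal sketch of the rules-based three-way match behind that touch-less processing (the field names and the 2% tolerance are illustrative assumptions, not any specific product’s logic):

```python
TOLERANCE = 0.02  # auto-accept if the invoice is within 2% of the PO amount

def process_invoice(invoice, purchase_order, receipt):
    # Three-way match: invoice vs. purchase order vs. goods receipt.
    if invoice["po_number"] != purchase_order["po_number"]:
        return ("reject", "invoice does not reference this PO")
    if invoice["qty"] != receipt["qty_received"]:
        return ("reject", "billed quantity does not match goods received")
    diff = abs(invoice["amount"] - purchase_order["amount"])
    if diff <= TOLERANCE * purchase_order["amount"]:
        return ("auto-accept", "matched within tolerance")
    return ("dispute", f"amount off by {diff:.2f}; route to resolution workflow")

po = {"po_number": "PO-1001", "amount": 5000.00}
receipt = {"qty_received": 100}
print(process_invoice({"po_number": "PO-1001", "amount": 5049.00, "qty": 100}, po, receipt))
print(process_invoice({"po_number": "PO-1001", "amount": 6200.00, "qty": 100}, po, receipt))
```

Note that every rule here is deterministic and auditable; the (optional) AI contribution is limited to extracting and classifying the fields up front, with the match itself handled by dumb, reliable code.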

While not all tasks/functions will achieve the high efficiency of invoice processing, depending on the time required to collect and analyze data before strategic decisions can be made, most functions can easily see a 3X improvement with the right Augmented Intelligence technology. With the right tech (especially with supercomputing capabilities in modern cloud data centres), we have finally reached the age where you can truly do more with less … and all you need to do is NOT get Blinded By The Hype.

And for those interested, this latest rant was inspired by one of THE REVELATOR‘s Triple-Play Thursday LinkedIn posts and comments thereon (which have some deeper explanations in the comments if you want to dig in even deeper).

How AI Enhances 10 Common Procurement Challenges Part II

A recent CIO article drew my ire because it claimed that AI Overcomes 10 Common Procurement Challenges as it oversimplified the problems and overstated the benefits of AI. Let’s finish them one-by-one.

Legacy Systems Complicate the Adoption of New Technology: The article claims AI streamlines integration by assessing system compatibility, automating migration, and reducing downtime. While two out of three ain’t bad, it ain’t good when the critical requirement of assessing system compatibility cannot be met by AI, since simple text matching isn’t helpful if the interface of a legacy system isn’t specified in a standard format (otherwise it’s essentially field-name matching, which is no different from human guesswork). The reality is that humans still have to define/verify the mappings before the AI can take over.

Letting AI do the mappings is fraught with errors. And it’s even worse when you let it automatically connect systems, pull and push data, and replicate incorrectly mapped and bad data across systems, “fixing” data that was actually correct because the “bad” data in one system appears more recent and is used to overwrite the good data in another. Because it’s automated, AI can propagate and exacerbate errors at an unprecedented rate and, in a matter of seconds, make a mess that can take months to repair.
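To sketch the recommended division of labor (field names here are hypothetical): a human defines and verifies the mapping once, and plain deterministic code applies it, failing loudly on anything unmapped instead of guessing.

```python
# Human-verified mapping: legacy field -> new-system field.
VERIFIED_MAPPING = {
    "VND_NM":    "supplier_name",
    "VND_TAXID": "tax_id",
    "INV_TOT":   "invoice_total",
}

def migrate_record(legacy_record):
    unmapped = set(legacy_record) - set(VERIFIED_MAPPING)
    if unmapped:
        # Fail loudly and route to a human, which is exactly the step
        # AI-guessed mappings skip (and where they go wrong).
        raise ValueError(f"unmapped legacy fields need human review: {unmapped}")
    return {VERIFIED_MAPPING[k]: v for k, v in legacy_record.items()}

print(migrate_record({"VND_NM": "Acme Corp", "VND_TAXID": "12-345", "INV_TOT": 199.99}))
```

The boring lookup table is the point: once a human has signed off on it, the migration becomes repeatable and auditable, with no chance of a statistically plausible but wrong mapping silently corrupting both systems.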

Managing Supplier Risks is a Growing Concern: AI can continuously monitor supplier performance, predict risks, and ensure compliance. This is one situation where they were almost perfectly correct, but, when they say vendor evaluation can be time intensive and imply that AI can speed it up, they overlook the fact that evaluations still have to be done by humans and tech can’t speed that up.

Moreover, if you think you can augment your data with third party data to speed up the evaluations, you’re just fooling yourself. You just make bad decisions faster.

Manual Procurement Processes Drain Resources: AI can definitely automate repetitive tasks, reduce human error, increase efficiency, and free your team to focus on strategic initiatives, but only for tasks that are well defined, typically free from exceptions, and capable of being processed by standard rules. However, this can’t be done until the repetitive tasks are identified, processing rules defined, standard exceptions identified, and additional rules defined for those exceptions. Only then can the AI automate enough to be useful.

Moreover, using a next-gen LLM with chain-of-compute to try to break the requirements of a task down into subtasks, execute those subtasks automatically, and automate a process without any human intervention is just as likely to go wrong as it is to go right.

Demand Forecasting is Often Inaccurate: AI can improve demand forecasting, but only if you have the right data — it’s not a magic box, just a black box that you need to understand.

It’s not just the demand trend based on utilization / point-of-sale data; it’s also market conditions, which can sharply change a demand curve overnight. The traditional curve fitting / machine learning that most “AI” is based on cannot detect a change in market conditions or a political situation that can cause a rapid change in demand.
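A toy illustration of that blind spot: a simple trailing-average forecast (standing in for any model that extrapolates purely from past observations) tracks stable demand nicely, then badly misses an overnight shock it has never seen in its history window.

```python
def forecast(history, window=4):
    # Trailing average: a stand-in for curve-fitting models that
    # extrapolate purely from past observations.
    recent = history[-window:]
    return sum(recent) / len(recent)

demand = [100, 102, 98, 101, 99, 100]  # stable weekly demand
print(forecast(demand))                 # 99.5: close to actual, looks great

demand.append(240)                      # market shock: demand jumps overnight
# The forecast for the shock week was still 99.5, an error of ~140 units;
# the model only "catches up" once the shock is already in its window.
print(forecast(demand))                 # 135.0: still lagging the new level
```

No amount of fitting to the old history would have predicted the jump; detecting it requires outside signal (market, political, supply) that the fitted curve never sees.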

Procurement Remains Transactional Rather Than Strategic: AI DOES NOT transform procurement into a strategic function that optimizes spend, improves supplier collaboration, and aligns purchasing decisions with your business! Only people-powered Human Intelligence (HI!) can do that. Remember — transforming Procurement requires defining a strategy, defining appropriate processes, identifying the right people to transform it, and then, and only then, identifying the right technologies.

Assuming that you can slap in AI and transform a tactical function into a strategic one is worse than a pipe dream; it’s a recipe for disaster. Running fast and hard doesn’t get you any closer to the finish line if you’re not running in the right direction. For more details, see the dozens of posts about AI in the archives.

Again, we’re not saying that AI is bad. Technology is neither good nor bad. But, like any technology, it has to be ready for prime time, correctly identified, correctly implemented, and correctly used, and that requires a lot of Human Intelligence (HI!), planning, and the right processes put in place. Shoving it in and expecting a miracle is dangerous. And this is yet another article that implies you can just shove it in and get results. You can’t. Especially if it’s the wrong technology, which can amplify your problem instead of shrinking it. That’s the problem: this article, like many others, doesn’t tell you about the dangers and downfalls, or what you have to do to avoid them.

How AI Enhances 10 Common Procurement Challenges Part I

A recent CIO article drew my ire because it claimed that AI Overcomes 10 Common Procurement Challenges as it oversimplified the problems and overstated the benefits of AI. Let’s take them one-by-one.

Procurement Takes Too Long, Slowing Innovation: According to the article, AI-driven platforms can generate RFPs, accelerate sourcing, automate approvals, and reduce cycle times … which is mostly true. Properly applied, AI can accelerate sourcing, reduce cycle times, and automate approvals … but not all approvals. As for RFP generation, that’s very limited — LLMs can generate RFPs with a simple prompt, but not necessarily a good RFP. The best RFPs are designed by humans (and then automation, which may or may not use AI, can pull in data from supporting documents as needed), and as for acceleration, it depends on the project — it can’t speed up supplier qualification where humans need to inspect the products and verify the requirements.

Moreover, a rush to AI can make things worse, not better. Letting AI generate an RFP that misses a key requirement in terms of required certifications, performance criteria, production capacity, etc. can entirely invalidate an RFP process and lead to months of wasted effort if no human realizes the key requirement was missed until an award is offered, a request for the certification, capacity, etc. is delivered, and a “sorry, we don’t have / can’t do that” is returned.

Legal and Budget Complexities Create Bottlenecks: Budget tracking systems and rules-based automation allow for instantaneous budgetary approvals. Contract negotiation software can automate redlining, compliance checks, etc., but cannot handle a complex negotiation for a complex project where each side has a lot of requirements and multiple parties to satisfy. AI speeds up the technical drudgery, but not the human interaction.
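The budgetary half of that claim really is just rules, as this minimal sketch shows (the department budgets and the escalation threshold are illustrative assumptions):

```python
BUDGETS = {"IT": 50_000.00, "Marketing": 20_000.00}
AUTO_APPROVE_LIMIT = 5_000.00  # above this, a human must sign off

def check_request(department, amount, spent_to_date):
    # Instantaneous, deterministic budgetary approval: no AI required.
    if department not in BUDGETS:
        return "reject: unknown budget line"
    if spent_to_date + amount > BUDGETS[department]:
        return "reject: would exceed budget"
    if amount > AUTO_APPROVE_LIMIT:
        return "escalate: human approval required"
    return "approve"

print(check_request("IT", 1_200.00, 30_000.00))   # approve
print(check_request("IT", 9_000.00, 30_000.00))   # escalate: human approval required
print(check_request("IT", 25_000.00, 30_000.00))  # reject: would exceed budget
```

Every outcome is explainable in one sentence, which is precisely what a probabilistic model cannot promise for a spend approval.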

Moreover, if you turn negotiations over to software, you have no idea what the end result will be. If you let it negotiate based on market data, and the cost data is off, you could be committing to a bad deal. If you let it predict timeframes based on how it expects prices to rise or stay high, but it’s off by two years, it could lock you into a three-year deal when you only need a one-year deal. And so on.

CIOs Need to Upskill Their Teams in AI and Cybersecurity: Just because “AI” can simplify processes with guided intelligence, that doesn’t mean the team is upskilled in the process. The reality is, there is no incentive for users to learn anything if they think the system will guide them in everything they need to do.

Thus, if you over-invest in AI, especially the kind that guides users in every task they have to do, works quite well on the basic tasks they do daily, and doesn’t screw up the first half dozen or so moderately complex tasks, the user will believe the system is almost flawless. They will start to trust it implicitly, stop questioning it as time goes on, start believing there is no need to learn anything else because the system knows it, and, over time, stop thinking. And then, instead of performance improving, it will decline … and that decline might be accompanied by a major financial loss if a bad contract is signed or a major risk is ignored.

Data Inaccuracy Leads to Poor Procurement Decisions: While it’s true that over three quarters of organizations struggle with unreliable data, AI doesn’t magically fix the problem. It can help with cleansing, validation, and procurement trend analysis, but ask any spend analysis vendor who has tried to apply an LLM to unclassified vendors about the classification accuracy (which tends to top out around 70%): good data still requires manual cleansing and classification, especially where the system reports good confidence. AI can definitely help, but it doesn’t take the onus off of the human.

In other words, if you believe that you can plug in a magic AI black box and that it will fix your data, you are gravely mistaken. Sure, it will tell you that it has cleansed, classified, and validated all of your data, but if it’s only 70% accurate, it has only made matters worse if you trust the data 100% and don’t know which 30% is inaccurate. When you base your decisions on data, and the data is bad, you are bound to make a bad decision. The question is, how bad? You don’t know. And that’s a big problem!
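The arithmetic here is worth spelling out, along with the standard mitigation: only trust classifications above a confidence threshold and route the rest to a human (the 70% figure comes from the discussion above; the threshold is an illustrative assumption).

```python
records = 100_000
accuracy = 0.70
bad = round(records * (1 - accuracy))
print(f"silently wrong records if trusted blindly: {bad:,}")  # 30,000

# Mitigation: only auto-accept high-confidence classifications,
# and send everything else to a human reviewer.
CONFIDENCE_THRESHOLD = 0.90

def route(classification, confidence):
    if confidence >= CONFIDENCE_THRESHOLD:
        return ("auto-accept", classification)
    return ("human-review", classification)

print(route("Office Supplies", 0.97))  # ('auto-accept', 'Office Supplies')
print(route("Office Supplies", 0.55))  # ('human-review', 'Office Supplies')
```

Even then, confidence scores can be miscalibrated, so periodic human spot-checks of the auto-accepted bucket are still warranted.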

B2B Software Selection is Increasingly Complex: Despite the claims, AI-powered vendor analysis doesn’t really help that much — see Pierre Mitchell’s crazy conversations with DeepSeek-R1. Note how it not only recommends inappropriate vendors, but also recommends vendors that don’t even exist anymore. It can help you discover potential vendors, but you still need human reviews and deep pricing intelligence (from expert SaaS optimizers).

Trusting AI to select your software is worse than trusting an analyst firm map! And we know all of the problems those maps contain. (First of all, they only mention the same 10 to 20 vendors year after year, ignoring the dozens of other vendors that might be more appropriate for you.) AI cannot understand your needs, cannot truly map needs to requirements, cannot truly map requirements to features, and cannot truly assess how relevant a solution is, and definitely can’t assess how well a provider’s culture will match yours.

Come back Thursday for Part II!

We Finally Know the Source of the AI Buzzword Bullsh!t!

The Agentic Software Service Hyper Optimized Learning Engine custom built for drowning the World Wide Web in soundbite and buzzword marketing bullsh!t centered on AI, or the A.S.S.H.O.L.E. for short! (With fervent thanks to the esteemed Arthur Mesher for delving deep into the depths to uncover the source of this madness!)

Technology Project Failure is at an all-time high, boosted by recent AI failure rates (which are on the rise, as almost half of AI initiatives are being scrapped mid-process; see CIO Dive), and while the hype should be subsiding (and shifting to the next hype cycle), it’s now hitting us harder and faster, in what should be its death throes, than any hype cycle that has come before.

The AI marketing onslaught is coming so hard and fast that it’s impossible to imagine how so much new soundbite, buzzword, FOMO, and FUD content can be produced so quickly and in such overwhelming volume; it seems humanly impossible. And that’s because it is. It’s not coming from humans; it’s coming from the A.S.S.H.O.L.E. As we have indicated in our previous posts on Gen-AI LLMs, one of the valid uses for Gen-AI is mass content digestion, search, summarization, and generation.

It appears that one of these systems was customized to ingest all of the initial human-generated AI BS and trained to spew out marketing soundbites, social media posts, articles, and other forms of web content ad nauseum and to continually ingest new content on the subject to create even more content, including AI-generated BS content from other AI systems that tried to copy the original A.S.S.H.O.L.E..

And even though it doesn’t matter, since apparently every LLM can be trained to emulate the original, the only questions that remain are: who currently owns the source engine, what LLM was it originally built on, and what LLM is it running on now? This is obviously the industry’s best kept secret. I hope someone who has gotten to the bottom of this will let us know the full story of the A.S.S.H.O.L.E. Considering the intellectual and financial pain and suffering it has caused, we deserve to know the truth!

For those interested, since I’m sure LinkedIn will disappear Art’s post if it hasn’t already, here’s the original. (And the Gartner rant ain’t half bad either!)

The Best Way to Survive the AI-Powered Apocalypse? Go Old School!

If you’ve been following along, you know that a great purge is coming on two fronts. All the pundits agree on that! On the first front, a large number of vendors are going bye bye, as we’ve been telling you since our first post on the Marketplace Madness. On the second front, they took ‘er jobs. Except it’s not they, it’s AI.

So doesn’t this mean that, if you want to survive the days ahead, you should find the most advanced AI provider that isn’t going to get purged in the near future, adopt the tech, replace as much staff as you can with AI, find a way to survive the hardship, and come out ahead when everyone else decides that’s what they have to do?

Well, for the vast majority of the analysts and pundits, it is exactly what you should do — and do it right now. It’s AI overload all the time. And just when most hype cycles start to die down, this one gets a second wind of hurricane proportions.

But, in fact, it’s the last thing you should do. You should instead implement a Gen-AI ban and an Agentic AI ban immediately, identify classic ML-powered AI augmented intelligence tech that can supercharge your team, acquire it, and train your team on it immediately. Because you can get the same results as any Agentic AI if you employ the right classic ML-powered, human-driven AI technology with the right algorithms, analytics, optimization, etc. Sure, a human might be a little bit slower than an algorithm that can work 24/7/365 without a break, but a human who is appropriately skilled and trained will make up for this with something the AI doesn’t have: true intelligence.

You see, the thing about Gen-AI and Agentic AI is that it works great until it doesn’t. As per our recent post, Gen-AI is full of problems. In that post, we noted that Gen-AI can:

  • get you sued
  • increase the chance you will be hacked
  • result in Million/Billion-Plus processing errors
  • shut down your organization’s systems for days
  • help your employees commit fraud

And those are the good side effects of its hallucinations. There are much worse side effects that can happen. If you refer back to our posts on the valid uses for Gen AI and the valid uses for Gen AI in Procurement, you’ll also recall that:

  • the embedded biases, that you might not even be aware of, could result in decisions diametrically opposed to what you are expecting
  • when it computes two options that are equally likely to generate the same end result for the company relative to the KPI it is using, there’s no guarantee it will select the right option — and there’s always a right option, especially if one option for cost savings is a longer term contract so the supplier can upgrade equipment and the other option is forcing the supplier to cut an already razor thin margin 50%
  • the hallucinations eventually become real, as the systems get so advanced that they not only create super-realistic evidence to back up their recommendations, but take over your entire system in the background, so that you don’t know that a web request to verify a claim is actually being handled by the AI itself
  • it starts negotiations and cutting contracts you haven’t even authorized yet
  • it becomes you … and you get blamed for all its mistakes

In other words, ignore the Gen-AI and Agentic-AI technologies that are not the miracle cures they are promised to be. The miracle cures are the last generation of ML-based AI technology that was just about to transform your operations under the expert fingers of your leading practitioners, not some probabilistic monstrosity that requires an entire data center to run, to generate an output no one can verify, using a system no one understands. Hone your chops on those and you’ll get the results you need, without having to deal with unexpected, possibly catastrophic, failures along the way.

After all, when we told you about all of the great advancements that were coming in Source To Pay in our classic series (indexed here), none of it required Gen-AI to achieve!