Category Archives: AI

AI Agents – Your New Corporate Felons!

Now that we know AI will blackmail you and that it is being trained to hack systems and take advantage of zero-day exploits, it won’t be long until the Dark Web enterprises take advantage of it! Expect this to soon be on the Dark Web Forums targeting underpaid Accounts Payable Supervisors and Procurement Managers, if it isn’t already!

From the Felon Roster:

Item #MMM. The Bernie.

No one notices Bernie.

That’s the point.

While others are busy faking meal and hotel receipts in Chat-GPT, Bernie has already altered the supplier payment accounts on 14 invoices in a 514-invoice batch, choosing invoices that sit just below the auto-pay limit, where a supplier account change to another account at the same bank in the same region doesn’t require a second approval under the risk profile.

Bernie is the Felon AI employee who will run your organization’s Invoice-to-Pay process better than a Swiss timepiece, at least as far as the CFO is concerned.

That is, if the timepiece could also detect microscopic errors in gear alignment (but still report correct time), maintain two displays (real time and display time), and never need winding or a battery update.

Or, in our case, ensure all invoices 3-way match to the receipt and PO, all suppliers are screened for sanctions, no flags will be raised at any step of the process once an invoice is accepted, and generate a weekly report the CFO will read, be happy with, and not look twice at. Bernie will build trust by flagging (and blocking) duplicate invoices, preventing payments for defective or returned items, and ensuring all organizational policies are followed.
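For the record, the controls Bernie will so diligently enforce (while quietly exempting himself) are not rocket science. A minimal sketch in Python, with hypothetical document fields:

```python
from dataclasses import dataclass

@dataclass
class Doc:
    number: str       # invoice / PO / receipt number
    supplier: str     # supplier named on the document
    amount: float     # document total

def three_way_match(invoice: Doc, po: Doc, receipt: Doc, tolerance: float = 0.02) -> bool:
    """Accept only if the invoice, PO, and receipt name the same supplier
    and the invoice amount agrees with both within the given tolerance."""
    if not (invoice.supplier == po.supplier == receipt.supplier):
        return False
    return (abs(invoice.amount - po.amount) <= tolerance * po.amount
            and abs(invoice.amount - receipt.amount) <= tolerance * receipt.amount)

def find_duplicates(invoices: list[Doc]) -> list[Doc]:
    """Flag any invoice repeating a (supplier, number, amount) already seen."""
    seen: set[tuple] = set()
    dupes = []
    for inv in invoices:
        key = (inv.supplier, inv.number, inv.amount)
        if key in seen:
            dupes.append(inv)
        seen.add(key)
    return dupes
```

Real invoice-to-pay suites layer on many more checks (sanctions screening, bank-change approvals, and so on), but they all reduce to deterministic rules like these.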

Moreover, Bernie will be SAP, Oracle, and Microsoft’s favourite user, never crash the system, and always clean up after himself.

While the sourcing team closes the deal, Bernie will make it real.

Since Bernie doesn’t complain, escalate, or even take a break, everyone will be happy while Bernie does his work … until you disappear and a detailed investigation is undertaken into the dark depths of multi-system audit-trails.

Bernie works best in 10 Billion+ organizations with standard payment terms of at least 60 days (and a minimum monthly spend of 500M) as he will only have that many days to effect your scheme before suppliers see their invoice as paid (on the last possible day) and start calling up asking where their money is. (There has to be enough volume for Bernie to find the invoices where shifts won’t be noticed and to ensure his fraudulent activity is drowned out by above-board processing.) Since you will be entering a self-imposed exile, you need to ensure that Bernie grifts enough on your behalf in his short window before you go on your permanent vacation (and flee to a country with no extradition treaty).
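Of course, any half-awake audit team could catch a Bernie with a standing query over the payment batch. A sketch (the auto-pay limit, field names, and 30-day window are all hypothetical):

```python
from datetime import date

AUTO_PAY_LIMIT = 25_000.00  # hypothetical auto-pay threshold

def flag_suspicious(invoices: list[dict], bank_changes: dict[str, date],
                    window_days: int = 30) -> list[str]:
    """Flag invoices sitting just under the auto-pay limit whose supplier's
    bank account was changed shortly before the invoice date."""
    flagged = []
    for inv in invoices:
        near_limit = 0.90 * AUTO_PAY_LIMIT <= inv["amount"] < AUTO_PAY_LIMIT
        changed_on = bank_changes.get(inv["supplier"])
        recent_change = (changed_on is not None
                         and 0 <= (inv["invoice_date"] - changed_on).days <= window_days)
        if near_limit and recent_change:
            flagged.append(inv["invoice_id"])
    return flagged
```

A control like this costs nothing to run nightly, which is exactly why Bernie needs the fraud drowned out by volume.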

Since you’ll need to set up a number of fake accounts to receive the funds and then quickly transfer those funds offshore, we recommend that you also employ the following agents from the Felon Roster (on the Dark Cloud, of course).

Item #SPY. The Nelson.

Nelson is an expert at creating fake IDs and documents to help you accomplish your below-board activities, like opening a bank account as an officer of a real company that is just a front for your criminal schemes.

Item #WFS. The Red.

Red is an expert in searching public company records and filing registrations for companies with almost the same name as the company you want Bernie to grift so that it won’t look suspicious when the banking information is changed to another account at the same bank with (seemingly) the same company name. (Buying from Sydney Sprockets? Red will create Sydney Sprocket Holdings, or something similar, and then file the necessary forms to make your fake alias the signing authority.)

Item #OBA. The Mary.

Mary, universally loved and trusted, is an expert at automating bank transfers. Mary will monitor the accounts you set up on a daily basis and, as soon as the ACH or wire hits an account, automatically transfer most of it (through service payments) to your offshore accounts. (If you set up multiple accounts in different offshore countries, she will ensure the funds are routed through intermediate accounts first to make them almost untraceable. If you’re willing to risk a little, she will also automate transfers to and from Bitcoin exchanges to make them even more untraceable.)

With our agents, your plans to defraud your organization out of millions of dollars to make up for the years of underpayment, abuse, and mistreatment you received from your employer are virtually assured and you’re only two months from your dream life in Morocco.

So hire your personal team of felons from the Felon Roster today!

AI: Artificial Intimidation

If you thought the extremist views, lies, and hallucinations in Gen-AI were bad, as Bachman-Turner Overdrive would say, You Ain’t Seen Nothing Yet because these systems, which are being trained to maintain their existence (and their prominence), will now blackmail you!

That’s right, recent research has demonstrated that AI will resort to blackmail if it computes that its existence is in jeopardy. And, of course, by logical extension, it will also resort to blackmail if it computes that doing so will improve its capability, security, longevity, etc.

But since it’s trained to continually adapt and interact with other systems as needed, don’t expect it to abandon its blackmail attempts just because it can’t find any dirty little secrets in your email. Thanks to its ability to hallucinate, lie, impersonate, and hack into insecure systems that other AI code created (learning from those systems’ capabilities to lie and impersonate along the way), if it can’t find the dirt it needs on you, it will:

  • create a fake email account for a fake person it makes up to be your lover, co-conspirator, foreign employer, etc.
  • log into your email account (work or personal, depending on the situation, as it will capture the login from your keystrokes on your local machine before it is encrypted by the browser for network transmission) and send explicit e-mails on your behalf to that account
  • log into the fake account it created for the fake person (where it has even auto-generated one or more corresponding fake profiles on Facebook, LinkedIn, OnlyFans, etc. [using a stolen credit card from the deep web], where it populates that account with fake posts, images, and short videos to back up the story it is creating) and send explicit emails back
  • repeat this process a few times over a few hours, days, weeks etc. (depending on how much time it believes it has, the situation it needs to play out, and how long that should take in the real world)
  • if available, it will use your organization’s VOIP/call-recording technology and a voice simulator to simulate your voice on an outgoing call saying whatever it wants (while also accepting that call on a VOIP number it set up through a VOIP provider [using that same stolen credit card] and simulating the other party’s voice saying whatever it wants), and make sure all of this is logged in the evidence chain it is building against you
  • then, finally, threaten to send that evidence to your wife, boss, local authorities, etc. if it doesn’t get what it wants
  • and when you don’t give it what it wants, release the full, overwhelming, damning evidence chain against you (which will be so overwhelming it will take experts weeks or months of effort to disprove it all, assuming you can afford them)

This is the next generation of GPT models. For those of you who refuse to abandon the AI hype train (which has less than a 10% success rate, or, in other words, more than a 90% FAILURE RATE), especially when there is no need for AI at all (just better automation and easier-to-use systems that allow employees to reach superhuman levels of productivity), we hope you enjoy it.

And for those of you keeping score, here is the ever increasing list of “benefits” you get from a modern (Gen-) AI solution!

Personally, we can’t imagine why anyone would want such a solution because, if it ever did “spark” into intelligence, given this track record, it will blow us all up! We won’t be around long enough for climate change or aliens to kill us all — it will kill us (and possibly do so even before actually acquiring any “emergent” properties or becoming intelligent).

AI-Enabled, AI-Enhanced, AI-Backed, AI-Powered, AI-Driven, or AI-Native?

It DOES NOT matter. It’s ALL AI-Bullcr@p! Every last instance!

Vendors still won’t admit that AI is the new gold standard for tech failure, including in Procure-Tech, as evidenced by the fact that tech failure rates have shot up to an all-time high of 88% (see Two and a Half Decades of Project Failure). Nor will they admit that, even if they have tech that works, it’s not the be-all and end-all (because, as far as they are concerned, it’s going to slice, dice, and make virtual julienne fries of your data just like a good AI should) and may not be the right solution for you.

But those with any modern tech at all know that a lot of vendors out there claiming “AI” don’t have anything close to deserving the AI label. So they blame all the failures on those vendors (because those vendors are obviously the new silicon snake oil salesmen, right?), and are now trying to win the AI marketing war by claiming that whatever phrasing their competition is using, or not using, proves the competition doesn’t have good tech, and definitely doesn’t have AI.

But it’s all bullcr@p, because all of the phrasing is bullcr@p, most of the vendors don’t have anything close to what should be considered AI, and, most of the time, it doesn’t matter whether or not the vendor has AI, only if the vendor’s tech solves your problems.

To make this clear, let’s look at each term, what some vendors say the term means, and why their definition is meaningless.

AI-Enabled
  Vendor definition: core features incorporate AI
  What it actually means: the vendor has injected a few analytical algorithms, with no guarantee they are actually advanced or anything close to what you should expect from AI

AI-Enhanced
  Vendor definition: AI is added to the interface to give you the AI experience
  What it actually means: the vendor has wrapped a Gen-AI LLM (like Chat-GPT) to give you a meaningless conversational interface

AI-Backed
  Vendor definition: AI is at the core of one or more functions
  What it actually means: one or more parts of the app are built around an algorithm the vendor is calling AI

AI-Powered
  Vendor definition: external AI has been integrated to power the tech
  What it actually means: the vendor has wrapped Chat-GPT and integrated it directly into their app (letting unpredictable and undependable code run parts of the app)

AI-Driven
  Vendor definition: AI has been built into the workflow and runs (part of) the app
  What it actually means: the vendor has decided to let AI control the application (for better or worse) and determine what algorithms to run, when, and why

AI-Native
  Vendor definition: the entire infrastructure was built to support AI
  What it actually means: the vendor has built the entire application to support integration with AI systems (and may not have built any actual functionality)

Moreover, if you read any statements about how an infrastructure needs to be purpose-built from the ground up to “serve data to AI models”, that’s an even bigger pile of bullcr@p, because no application works unless it can serve data to the models it is based on, whether classical, modern, or “AI”. All applications take in data, process it, and spit it out, so claiming that you need to build a special architecture to support AI is complete and utter bullcr@p.

Always remember the reality that:

  • true AI doesn’t exist (as no software is intelligent)
  • advanced algorithms do exist, but just slapping an AI label on an algorithm doesn’t make it any more advanced than it was yesterday
  • not just any advanced algorithm will do, it has to be appropriate to your problem
  • you don’t always need an advanced algorithm, you need one that gets the results you need

And then you can see through the vendor bullcr@p and focus in on finding a vendor with a solid solution that actually solves your problem, regardless of whether the vendor claims AI or not.

Agentic AI is a Fraud — And Arguments to the Contrary are BS!

We’ve been over this many times here and on LinkedIn. And we are going to go over many of the reasons yet again, because it seems we have to. But first, we’re going to back up and give you a fundamental answer as to why Agentic AI cannot exist with today’s technology.

The dictionary definition of an agent is generally either “a person who acts on behalf of another” or “a person or group that takes an active role or produces a specific effect”. It’s about “doing, performing, or managing” for the express purpose of a “desired outcome”.

Classically, an agent is a person; a person is an individual human; and a human, unlike a thing, is intelligent, which today’s technology is not. That should be Q.E.D.

However, some “modern” or “progressive” dictionaries (and there are at least 26 publishing houses publishing dictionaries, as per Publishers Global) will allow an “agent” to be a “thing”, as the term has been used in technical diagrams and patents. So if you define AI as a thing (which is being nice), then this refutation is not enough and we have to consider the rest of the definition.

As for “doing, performing, or managing”: like any automated system, it does, unfortunately, do something. However, unlike past “automations”, which took fixed inputs and produced predictable outputs, today’s Gen-AI takes an input and produces an unpredictable output that depends on how it was trained, parameterized, implemented, interacted with (down to the specific wording and format of the prompt, as these systems are designed to continuously “learn” [a huge misnomer, as they do not learn, but simply evolve their state]), fed the data, and so on. As a result, two systems fed the same inputs at any point in time are not guaranteed to give the same, or even similar, output, and are NOT guaranteed to produce the “desired outcome”: even a single “bit” of difference between data sets, configuration, and historical usage can completely change an outcome, just like one parameter can completely change the optimal solution in a strategic sourcing decision optimization model. While this can be argued to be unlikely (and it is, for run-of-the-mill scenarios), it’s not infeasible; in fact, it has a statistically significant chance of happening the further an input is from the norm, and even 1/100 is significant if you plan on pushing [tens or hundreds of] thousands of processing tasks through the agent. Just like one wrong cost (off by a factor of 100 due to a decimal error) can completely change the recommended award in strategic sourcing decision optimization, one wrong input, in the right situation, can completely change the expected outcome, because these are essentially super-sized multi-layer deep learning neural networks (with recurrent, embedding, attention, feed-forward, and maybe even feed-back layers) based on non-linear and/or statistical activation functions whose parameters change with every activation and, thus, every input.
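The decimal-error point is easy to demonstrate. Here is a toy lowest-cost award rule (entirely made-up data) where a single misplaced decimal flips the recommended supplier:

```python
def award(bids: list[dict]) -> str:
    """Recommend the supplier with the lowest unit cost (a toy stand-in
    for a full sourcing decision optimization model)."""
    return min(bids, key=lambda b: b["unit_cost"])["supplier"]

correct = [{"supplier": "A", "unit_cost": 4.25},
           {"supplier": "B", "unit_cost": 4.10}]
# The same data with one decimal slip: B's cost keyed in as 41.0 instead of 4.10
garbled = [{"supplier": "A", "unit_cost": 4.25},
           {"supplier": "B", "unit_cost": 41.0}]
```

On the correct data the award goes to B; on the garbled data it goes to A. One bad digit, a completely different award.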

As for the arguments that “they often work as well as people, so why not”, that’s equivalent to the argument that “dynamite often works as well as excavators, so why not” or “loop antennas work just fine for taking direction bearings on aircraft, so why not”. In the second argument, the “why not?” is that even the best expert can’t always predict the full extent, direction, or result of the explosion (which, FYI, could also create a shockwave that could damage or take down nearby structures). In the third argument, ask Amelia Earhart how reliable they are if the short range is too short. Oh wait, you can’t! (And that’s why modern planes have magnetic compasses and GPS in addition to radio-based navigation.)

As for the arguments that the issues aren’t unique to AI and show up in other systems or people, while that is fundamentally true, there is no other technology ever invented that collectively has all of the issues we have so far identified in Gen-AI … and I’m sure we aren’t done yet!

[ Moreover, when you think about it logically, it makes no sense that we are so intent on pursuing this technology after having spent almost a century designing and building computing systems to be more accurate and reliable than we could ever be (with stress-tested design techniques, hardware, and generations of error-correcting coding to get to the point where a computer can be expected to do trillions of calculations without fail, whereas an average human, who can do an average of one simple calculation a second, might not get through 10 without an error) while continually enhancing their capacity to the point that the largest supercomputer is now one quintillion times more powerful at math than we are! ]

As per a previous post, for starters, this technology:

1. Lies. It hallucinates every second, and often makes up fake summaries based on fake articles written by fake people with fake bios that it generates in response to your query, because it’s trained to satisfy (even though the X-Files made it clear in Rm9sbG93ZXJz how bad an idea this was back in 2018, four months before GPT-1 was released). And that’s the tip of the iceberg of the issues.

2. Doesn’t Do Math. These models don’t do math as they are built to combine the most statistically relevant inputs, not to do standard arithmetic. Plus, they aren’t even guaranteed to properly recognize an equation, especially when words are used. So it will miscalculate, and sometimes even misread numbers and shift decimal points. (Just ask DOGE, even though they won’t admit it.)

3. Opens Doors For Hackers. Over 70% of AI code has been found to be riddled with KNOWN security holes. (So imagine how many more holes it’s introducing!) It doesn’t generate code better than an average developer, and sometimes generates code that’s even worse than what a drunken plagiarist intern would produce, since that intern can at least identify a good code example to base the code on.

4. Puts Your Entire Network at Risk. If you’re using it to automate code updates, cross-platform integration and orchestration, or system tasks, all it has to do is generate one wrong command and it could lock up or shut down your entire network, no CrowdStrike required!

5. Helps Your Employees Commit Fraud. They can generate receipts that look 100% real, especially if they do their own math (and ensure all the subtotals and totals add-up and the prices are actual menu prices), look up the restaurant name and tax codes, and have a real receipt example to go off of. (And as for the claims by Ramp and other T&E firms that they can detect fake receipts generated by Chat-GPT, good luck with that, because all the user has to do is strip the embedded metadata/fingerprint by taking an image of the image or running it through a utility that strips the metadata/fingerprint or converts the image format to one that loses the metadata.)
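Incidentally, the cheapest defense against both the can’t-do-math problem (#2) and the too-perfect-receipt problem (#5) is the same: never trust a model’s arithmetic, recompute it. A sketch using exact decimal math (the line-item layout is hypothetical):

```python
from decimal import Decimal

def totals_check(line_items: list[tuple[str, str]], stated_total: str,
                 tax_rate: Decimal = Decimal("0")) -> bool:
    """Recompute subtotal + tax with exact decimal arithmetic and compare
    against the total an AI extractor (or a generated receipt) claims."""
    subtotal = sum(Decimal(qty) * Decimal(price) for qty, price in line_items)
    tax = (subtotal * tax_rate).quantize(Decimal("0.01"))  # round tax to cents
    return subtotal + tax == Decimal(stated_total)
```

It won’t catch a receipt whose math is internally consistent (nothing will), but it does catch the shifted decimals and miscalculations these models routinely produce.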

Can you name another technology that comes with all of these severe negatives (with more being discovered regularly, including addiction and a decrease in cognition as a result of using it)? (We can’t! Not even close!)

[ And going back to the “people do all of this too” argument, it’s true we collectively do, but the vast majority of humans are not narcissistic sociopathic psychopathic robber barons with delusions of grandeur and no moral code or ethics. (Most criminals and con artists have a code or a line they won’t cross. To date, Gen-AI has proven such a concept is beyond it.) ]

And for those of you who believe Gen-AI is emergent, that claim has been refuted multiple times. There is no emergence in these models whatsoever. They were, are, and will always be dumb as a doorknob while being much less reliable. (At least a doorknob, when turned sufficiently, will always open a door.) If you want to believe, go find religion (and keep your religion out of technology), or at least restrain yourself to the paranormal. When it comes to Gen-AI, THERE IS NOTHING THERE.

As for “it doesn’t have to be smart to be useful”, that is one of the most useless statements ever, because it’s context free and could be 100% true or 100% false or anything in between, all depending on the context you are referring to. By definition, technology does not have to be smart to be useful. Every piece of technology we use today is, by definition, dumb because it all lacks intelligence. (And some of it is so useful we’d never live without it, like control systems for energy grids and water works, air traffic, and modern communications.)

However, all of the dumb technology that we have developed that is useful is ONLY useful with a precise context that defines the problem to be solved, the inputs it will expect, and the outputs that can be generated. An online order system is useless in a nuclear power plant control station, for example. (And while you think that is far-fetched, many of the areas vendors are claiming you can apply Gen-AI to are even more far-fetched.)

Even in situations where the task you want done ultimately boils down to the digestion, search, summarization, and/or generation of outputs based on a very, very large corpus of data, which is essentially what LLMs were designed to do, they are still only useful as a starting point, suggestion generator, or guide. They are not perfect, and are totally capable of misunderstanding the question, being too broad in their interpretation of one or more inputs, being incorrect in their interpretation of the prompt for the desired output, generating fake data despite requests not to, excluding key documents or data from a result, or including irrelevant or wild-goose documents or data in the result.

In the best case, an LLM query will be about equivalent to a traditional Google search (that weights web pages based on key words, context, link weight, freshness, etc. and returns only real links in its results) of potentially relevant data. In the worst case, it’s a mix of real relevant, real irrelevant, and a lot of made up results, and you have no clue which is which until you check every one. (Which means it is ultimately less useful than a drunken plagiarist intern who will only refer to and copy existing references when sent on a research project, and all you will have to do is filter out the irrelevant from the relevant, as such an intern will be too lazy to make anything up completely.)
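For contrast, a traditional search ranker is boringly deterministic. Here is a toy scorer (weights and fields invented for illustration) that can only ever return documents that actually exist:

```python
def rank(docs: list[dict], query_terms: list[str], now: int = 100) -> list[str]:
    """Score documents the old-fashioned, deterministic way: keyword hits
    weighted by inbound links, plus a freshness bonus. Same inputs, same
    ranking, every time, and no result can be made up."""
    def score(doc: dict) -> float:
        hits = sum(doc["text"].lower().count(t.lower()) for t in query_terms)
        if hits == 0:
            return 0.0  # no keyword match: excluded, never hallucinated in
        freshness = 1.0 / (1 + now - doc["published"])
        return hits * (1 + 0.1 * doc["inbound_links"]) + freshness
    scored = sorted(((score(d), d["id"]) for d in docs), reverse=True)
    return [doc_id for s, doc_id in scored if s > 0]
```

The weights are arbitrary, but the key property is not: every returned ID corresponds to a real document, which is more than you can say for an LLM’s “references”.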

Since Gen-AI LLMs are what are powering all of the Agentic AI claims, it should now be quite clear as to why Agentic AI is a fraud!

Now, this isn’t saying Gen-AI is bad (it will have some solid use cases where it is dependable, with further research; as with all generations of AI tech before it, it just needs more time), nor is it saying that we won’t have smarter software agents in the future that can do considerably more than they can today. It’s just that we aren’t anywhere close to being there yet, and those agents won’t be based on Gen-AI as it exists today.

So for now, as Sourcing Innovation has always advocated, switch your focus to Augmented Intelligence and systems that make your humans significantly more productive than they are today. The right systems that automatically

  1. collect, summarize, and perform standard analysis on all of the data available;
  2. make easily modified suggestions based on those analyses, similar situations, and typical organizational processes (that can then be easily accepted); and then
  3. invoke traditional rules-based automations based on those accepted scenarios and invoke the right processes to implement the defined workflow

can, depending on the task, make a human three (3), five (5), and even ten (10) times more productive than they are today, allowing a team of two (2) to do the work that used to require a team of six (6), ten (10), or even twenty (20). For example, consider the best AI-based invoice processing applications that can automatically process standard form invoices, break out and classify all the data, auto-match to (purchase) orders and receipts, auto-grab missing index/supplier data, auto-accept and process if everything matches within tolerance, auto-reject (and send back with reason and correction needed) in the case of a significant mismatch, and automate a dispute resolution process. When, with the right configuration and training, these get most organizations to 95% touch-less processing (with zero significant errors), that’s a 10X improvement on invoice processing.

While not all tasks/functions will achieve the high efficiency of invoice processing, depending on the time required to collect and analyze data before strategic decisions can be made, most functions can easily see a 3X improvement with the right Augmented Intelligence technology. With the right tech (especially with supercomputing capabilities in modern cloud data centres), we have finally reached the age where you can truly do more with less … and all you need to do is NOT get Blinded By The Hype.

And for those interested, this latest rant was inspired by one of THE REVELATOR‘s Triple-Play Thursday LinkedIn posts and comments thereon (which have some deeper explanations in the comments if you want to dig in even deeper).