
Blind AI “Agents” Will Only Worsen Any Situation!

THE PROPHET recently posted that The AI Overton Window is Open in Government Procurement, and that makes the doctor scared for you. The damage these agents can do in private-sector situations is bad. The damage they can do in public-sector situations is much, much worse.

The following obvious outcomes that the doctor already noted in his rebuttal are just the tip of the iceberg:

  • biased awards
  • overpriced awards to holdings of the billionaires that provide the tech
  • non-compliant awards because submitting a form is NOT verifying quality
  • billions lost to fraud as foreign bad actors use their AI to game our AI and direct billions to accounts that will quickly be emptied into offshore accounts and then into untraceable crypto!

For those of you who haven’t figured it out yet, all AI is biased because it is trained to repeat the patterns found in the training data it is given, and all of that data is biased toward existing providers and the decision patterns of biased award judges who find sneaky ways to direct contracts to the recipients they want to give the business to (whether or not they are the best value for the taxpayer’s money). If your President and his DOGE are telling you the truth, fraud (and thus bias) is rampant, and “AI” will just perpetuate that.
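A minimal sketch of why this happens, using invented award-history data (the categories, suppliers, and records below are illustrative, not from any real system): a model that simply learns the dominant pattern in past awards will keep recommending the incumbents those patterns favour.

```python
from collections import Counter

# Hypothetical historical award data: past awards skewed toward incumbents.
history = [
    {"category": "IT services", "winner": "MegaCorp"},
    {"category": "IT services", "winner": "MegaCorp"},
    {"category": "IT services", "winner": "MegaCorp"},
    {"category": "IT services", "winner": "SmallCo"},  # the rare upset
    {"category": "logistics",   "winner": "BigFreight"},
    {"category": "logistics",   "winner": "BigFreight"},
]

def train(history):
    """'Training' here is just memorizing the dominant pattern per category."""
    by_category = {}
    for record in history:
        by_category.setdefault(record["category"], Counter())[record["winner"]] += 1
    return {cat: counts.most_common(1)[0][0] for cat, counts in by_category.items()}

model = train(history)
# The "AI" now recommends the incumbent every time, regardless of current value:
print(model["IT services"])  # -> MegaCorp
print(model["logistics"])    # -> BigFreight
```

Real models are vastly more sophisticated than a frequency table, but the failure mode is the same: if the historical data encodes biased awards, the learned patterns encode the bias too.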

Since there are only a few players who are big enough to handle the data volumes and computational workload that would be required to support the US Federal Government, they have an effective monopoly. As a result, they can charge pretty much whatever they want and get it. (And we have already seen how overpriced this technology is. Total OpenAI funding to date: 17.9B [Tracxn] compared to total DeepSeek funding to date: 1B [Pitchbook]. The DeepSeek model is more or less as good as the OpenAI model at less than 1/18th the cost (although there is the issue of the controlling company and country). The next iteration will probably be built for under 100M. Just don’t expect any improvements in performance: there are inherent limitations in the underlying model/technology they keep building on, we don’t have anything better, and given that it is usually decades between real breakthroughs in research, we likely won’t have anything better until the late 2030s.) The end result is that the government will probably end up paying twenty (20) to one hundred (100) times what the technology itself is worth because of the lock on the market the big players have in the US.

Applications can only process the data given to them; they cannot confirm its validity. All a supplier has to do is lie on a form, or get a third party to (electronically) sign a false form (with a small bribe), and, voilà, the AI thinks the supplier meets all the requirements. As long as the supplier is the lowest cost and/or highest score on the other metrics (which can be achieved through the submission of false data that matches what the algorithm is looking for), it gets the award. And the taxpayer suffers.
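To see just how thin the “evaluation” is, here is a deliberately naive sketch of an automated award pipeline. The field names, weights, and bids are all invented for illustration; the point is that nothing in the pipeline verifies a single claim.

```python
# A hypothetical (and deliberately naive) automated award pipeline.
# Field names and the scoring formula are illustrative assumptions.

def score_bid(bid: dict) -> float:
    """Scores a bid purely on what the supplier *claims* on the form.
    Nothing here verifies that any claim is true."""
    compliant = bid["certified_compliant"]    # a checkbox on a form
    quality   = bid["self_reported_quality"]  # a number the supplier typed in
    price     = bid["price"]
    if not compliant:
        return 0.0
    return quality / price  # best *claimed* quality per dollar "wins"

honest_bid = {"certified_compliant": True, "self_reported_quality": 80, "price": 100.0}
false_bid  = {"certified_compliant": True, "self_reported_quality": 99, "price": 90.0}

# The fraudulent bid wins, because lying on the form is free:
winner = max([honest_bid, false_bid], key=score_bid)
print(winner is false_bid)  # -> True
```

The missing step is independent verification of the inputs, and that is exactly the step an “agent” that can only process the data it is given cannot perform.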

Taking this one step further, if awards come with an up-front payment, all a foreign actor has to do is register a fake front company on American soil, bribe third parties to help it submit a lot of false forms, game the system, get the award, get the up-front payment, wire it to an untraceable offshore account, and disappear. If that up-front payment is millions of US dollars, it’s easy money. Now, if the government is smart and insists that there is no payment until delivery, the trick still works for any deliverable where cheap knockoffs can be produced at a fraction of the price (knockoffs that don’t have the reliability, lifespan, etc. of the real thing): after a few large shipments are delivered, and before the poor-quality products break down, the supplier can suddenly close shop and disappear. And if that doesn’t work, since the foreign actors are training their AI to generate realistic-looking data to feed into America’s AI, it’s just a matter of faking a delivery receipt to accompany an invoice for goods not delivered, collecting that first payment, and disappearing. This is just the tip of the iceberg of obvious fraud opportunities (and every worst-case hypothetical situation in your espionage movies and books will come to pass, and more).

In other words, only bad things will happen if you try to deploy AI “agents” to do a human’s job!

We need to stop this ridiculous focus on AI Agents and instead focus on AI helpers. We need to end these bullsh!t claims that we are going to achieve full artificial intelligence and instead focus on augmented intelligence and build tools that enable white collar workers to become super human in their jobs and do the work that used to take ten people. Because that IS possible today (and has been for a while, especially since that was the route we were going down before “chat, j’ai pété” came along with its false promises of artificial intelligence, reasoning, etc.).

All we have to do is, for every problem, apply our human intelligence (HI) to design, or redesign, the process to solve it so that all of the tactical data processing (the thunking the machines can do a billion times better than us) is separated from the strategic decision making (the thinking the machine cannot do). The machine then automatically does all of the data processing and thunking that needs to be done at each step, so that we have the knowledge (processed data) we need to make the right decision, and be confident in it, through a well-designed interface that allows us to quickly absorb the summary, identify factors that might change the typical decision, and dive into the knowledge and underlying data.
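As a concrete (and purely illustrative) sketch of that separation, consider a spend-review step. The invoice amounts, the outlier threshold, and the function names below are all assumptions for illustration; the point is only the division of labour: the machine does the thunking, the human does the thinking.

```python
import statistics

# Hypothetical invoice amounts for one supplier; values are illustrative.
invoices = [10_250.0, 9_980.0, 10_100.0, 10_400.0, 48_750.0, 10_050.0]

def tactical_processing(amounts):
    """The machine's job: crunch all the data, automatically, every time."""
    mean = statistics.mean(amounts)
    stdev = statistics.stdev(amounts)
    outliers = [a for a in amounts if abs(a - mean) > 2 * stdev]
    return {"mean": round(mean, 2), "stdev": round(stdev, 2), "outliers": outliers}

def strategic_decision(summary):
    """The human's job: judge what the processed knowledge actually means."""
    print(f"Summary: {summary}")
    if summary["outliers"]:
        print("Flagged for review: decide whether this is fraud, a data error,")
        print("or a legitimate one-off, then encode the reasoning for next time.")

strategic_decision(tactical_processing(invoices))
```

The machine never decides anything here; it processes everything and surfaces what matters, and the human renders the judgment.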

In other words, we shouldn’t be doing the same analysis and running the same reports over and over again; the machine should automate all of that [as well as various outlier analyses] and present us with the summary, whether the situation is typical or atypical, the decisions and actions we typically take in similar situations, and the results typically achieved. In many cases, a well-designed process and properly encoded knowledge will result in the machine making the right suggestion, and all we will have to do is verify that suggestion. When it’s wrong, the system should still have the appropriate decision encoded as an alternate the majority of the time, and we should just have to select that. And in the exceptional situation we never thought of, or for which it has no data, we will still be able to alter the process, encode our reasoning, and recode the system to suggest the right action the next time the situation arises, meaning that we will not only start off being ten times as productive, but get more productive over time.
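To make “encoded knowledge” and “recoding the system” concrete, here is a minimal sketch. The playbook structure, situation labels, and decisions are hypothetical; a real system would be far richer, but the suggest-verify-encode loop is the same.

```python
# A sketch of "encoded knowledge" as suggestion rules, with a feedback loop.
# The rule keys, situations, and decisions are all hypothetical.

playbook = {
    "typical":       ["approve"],
    "price_outlier": ["hold for negotiation", "approve with note"],  # ranked alternates
}

def suggest(situation: str) -> list[str]:
    """Machine suggests the typical decision(s); the human picks or overrides."""
    return playbook.get(situation, [])

def encode_exception(situation: str, decision: str) -> None:
    """When the human hits a case the system has never seen, their reasoning
    is encoded so the right action is suggested the next time it arises."""
    playbook.setdefault(situation, []).insert(0, decision)

print(suggest("price_outlier"))      # -> ranked suggestions to verify or select from
print(suggest("sanctioned_region"))  # -> [] : never seen before, so the human decides...
encode_exception("sanctioned_region", "escalate to compliance")
print(suggest("sanctioned_region"))  # -> ['escalate to compliance'] from now on
```

Note that the machine never acts on its own: it ranks suggestions, and the human’s verification (or correction) is what makes the system smarter over time.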

The only real constraints we have are on the data we can leverage, due to:

  1. the lack of good, clean, verified data in most organizations, private and public (and AI will NOT fix that; see the sketch after this list)
  2. the lack of proper tools to do an office job in the modern age!
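What “good, clean, verified data” means in practice is mundane but critical. Here is a minimal sketch of the kind of basic audit most organizations still can’t pass; the record fields, thresholds, and example problems are invented for illustration.

```python
# A sketch of the basic data verification most organizations still lack.
# Record structure and the checks themselves are illustrative assumptions.

records = [
    {"supplier_id": "S-001", "amount": 12500.00, "currency": "USD"},
    {"supplier_id": "S-001", "amount": 12500.00, "currency": "USD"},  # duplicate
    {"supplier_id": "",      "amount": 900.00,   "currency": "USD"},  # missing ID
    {"supplier_id": "S-002", "amount": -450.00,  "currency": "USD"},  # negative
]

def audit(records):
    """Flags duplicates, missing identifiers, and impossible values."""
    issues, seen = [], set()
    for i, r in enumerate(records):
        key = (r["supplier_id"], r["amount"], r["currency"])
        if key in seen:
            issues.append((i, "duplicate record"))
        seen.add(key)
        if not r["supplier_id"]:
            issues.append((i, "missing supplier_id"))
        if r["amount"] <= 0:
            issues.append((i, "non-positive amount"))
    return issues

for index, problem in audit(records):
    print(f"record {index}: {problem}")
```

None of this requires “AI”; it requires someone to design the checks, run them relentlessly, and fix what they find.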

For example, if you give me the right modelling, analytics, optimization, and RPA tools, I can leverage ALL the data at my disposal to arrive at the optimal decision (given the time to do so). But how many Procurement personnel have access to all of these tools? Moreover, what percentage of those personnel would know how to fully leverage those tools (considering you need advanced degrees in mathematics and computer science to do so today)? And what percentage of those would still have the time to do so? The percentage can be expressed by a single digit in industry (if you round up). It’s worse in government! But properly designed tools that embed best practice and human intelligence on top of these tools, and bring the knowledge requirements down to what an average Procurement professional has, would allow them to be ten times as productive in their analysis and make the right decision every time.
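To ground what an “optimization tool” actually does, here is a deliberately tiny sketch of the decision such tools solve at scale. The suppliers, bids, demand figures, and the single business rule are all invented; real sourcing optimization handles thousands of items, capacity constraints, and discount structures, which is exactly why the average professional needs the tool rather than the math degree.

```python
from itertools import product

# Hypothetical per-unit bids from two suppliers across two items.
bids = {
    ("widgets", "SupplierA"): 4.10, ("widgets", "SupplierB"): 3.95,
    ("gadgets", "SupplierA"): 7.25, ("gadgets", "SupplierB"): 7.60,
}
demand = {"widgets": 1000, "gadgets": 500}
suppliers = ["SupplierA", "SupplierB"]

# A business rule a human encoded: never single-source everything.
def feasible(plan):
    return len(set(plan.values())) > 1

# Exhaustive search works at toy scale; real tools use solvers for the same idea.
best_cost, best_plan = float("inf"), None
for combo in product(suppliers, repeat=len(demand)):
    plan = dict(zip(demand, combo))
    if not feasible(plan):
        continue
    cost = sum(bids[(item, plan[item])] * qty for item, qty in demand.items())
    if cost < best_cost:
        best_cost, best_plan = cost, plan

print(best_plan, best_cost)  # -> the optimal award split under the encoded rule
```

Note that the human supplies the rule (no single-sourcing) and judges the answer; the machine merely grinds through the combinations a billion times faster than we can.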

Moreover, the compliance slowdown that people are grumbling about is due to a lack of good tools (RPA platforms that walk the users through the process) and a lack of people to do the work that HAS to be done manually. (And AI is NOT going to fix the fact that health, safety, quality, and oversight inspectors, roles where you don’t have enough qualified people to begin with, can be fired in droves, further increasing backlogs.)

And guess what? We still handle unstructured data better than AI, as some of the BS it continues to spit out in what they call “edge cases” is astounding! (the doctor really hopes the maverick doesn’t go mad in his conversations with DeepSeek — it almost drove the doctor mad just reading them!)

In other words, the core of any business function MUST continue to be HUMANS applying HUMAN INTELLIGENCE (HI!), and modern technology must AUGMENT (not replace) every function. Properly (human) designed and (human) implemented systems that use the right Augmented Intelligence technology (not the hype of the day) to supercharge a human-driven process can easily make the human ten times more efficient in some cases. (But left to their own devices, interacting AI agents will more or less self-destruct, as Meta found out in multiple forays last decade and this decade.)