
What’s Wrong With 22% of Organizations? Why Do They Trust AI?

In a recent Horses for Sources piece, The HFS AI Trust Curve: AI isn’t failing … leadership is, the headline claim is that 78% of organizations do not trust their AI.

What the h3ll? 100% of organizations should not trust their AI when:

  1. only 6% of organizations are seeing success (MIT, McKinsey) and
  2. there is no true Artificial Intelligence.

As a result, AI should NOT be trusted!

However, properly designed adaptive robotic automation, Machine Learning, and appropriately gated and guard-railed AI should be deployed. Such a system sends an exception to a human whenever the rules don’t cover the situation, the gap is beyond what should be handled automatically with no approved precedent, or the only resolution you can trust is a human one. While it might not be 100% perfect, it can still be applied with confidence because the guardrails will ensure no significant failures.
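To make that gating concrete, here is a minimal sketch of the pattern in Python. Everything in it is illustrative: the invoice-matching task, the supplier rule set, the precedent store, and the 0.98 confidence floor are assumptions for the sake of the example, not anything from HFS or a specific product.

```python
from dataclasses import dataclass

# Illustrative guardrail sketch; all names and thresholds are assumptions.
CONFIDENCE_FLOOR = 0.98  # below this, a human decides

@dataclass
class Invoice:
    po_amount: float
    invoice_amount: float
    supplier: str

APPROVED_SUPPLIERS = {"acme", "globex"}   # stand-in for the explicit rule set
PRECEDENTS = {("acme", "2-way match")}    # human-approved resolutions

def process(inv: Invoice, confidence: float) -> str:
    # Guardrail 1: the rules must cover the situation.
    if inv.supplier not in APPROVED_SUPPLIERS:
        return "exception: no rule covers this supplier"
    # Guardrail 2: no approved precedent, no automatic action.
    if (inv.supplier, "2-way match") not in PRECEDENTS:
        return "exception: no approved precedent"
    # Guardrail 3: the model must be confident, or a human decides.
    if confidence < CONFIDENCE_FLOOR:
        return "exception: confidence below floor"
    return "executed automatically"

print(process(Invoice(100.0, 100.0, "acme"), confidence=0.99))     # executes
print(process(Invoice(100.0, 150.0, "initech"), confidence=0.99))  # exception
```

The point of the ordering is that every path not explicitly covered falls through to a human; the automation can only act inside the fence.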

In other words, while I don’t agree that Agentic AI should be embraced to make decisions, because IBM had it right back in 1979:

a computer can never be held accountable, therefore a computer must never make a management decision
 

I do agree that the vast majority of back-office tasks are just bit pushing and can be appropriately defined with flexible, parameterized rules, paired with machine learning that learns the tolerances over time (see the sketch below). That means agentic AI should be widely applied throughout the back office, and organizations that don’t embrace this level of AI are going to fall behind. But the trust in technology should not extend to decision making. Just decision execution.
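As a rough sketch of what a flexible, parameterized rule with a learned tolerance could look like: here the two-standard-deviation heuristic and the invoice-matching example are placeholders for a real ML model, not a claim about how any particular platform works.

```python
import statistics

def learn_tolerance(approved_variances: list[float]) -> float:
    """Learn the tolerance from variances a human previously approved.
    Mean + two standard deviations is a simple stand-in for a real
    ML model that tightens or loosens the band over time."""
    mu = statistics.mean(approved_variances)
    sigma = statistics.stdev(approved_variances)
    return mu + 2 * sigma

def within_tolerance(po: float, invoice: float, tolerance: float) -> bool:
    """The parameterized rule: execute only inside the learned tolerance."""
    return abs(invoice - po) / po <= tolerance

history = [0.001, 0.004, 0.002, 0.003, 0.005]  # past approved variances
tol = learn_tolerance(history)                 # adapts as history grows

print(within_tolerance(1000.00, 1003.50, tol))  # True: execute the match
print(within_tolerance(1000.00, 1080.00, tol))  # False: exception for a human
```

The rule stays fixed and auditable; only its parameter moves, and only in response to decisions humans already made.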

And if 78% of organizations don’t trust their agentic systems to execute decisions, then that is a problem. They are going to fall behind, they won’t embrace SaaS (Software-as-a-Service) where it makes sense, their overhead costs will stay high, and they’ll get crushed by leaner competitors who can actually sell in a tight economy.

In other words, despite HFS’ implications, organizations should NEVER trust Agentic AI to make decisions, but they absolutely need to trust the AI to execute the decision. If they don’t, they’re in trouble.

Part of the problem might be the framing of the last step of the current HFS Enterprise Adoption Journey.

Stage 1: Can the AI Model Work?
This is where you start. You have to find a viable model.

Stage 2: Do We Believe the Inputs?
This is where you progress to. You need valid inputs.

Stage 3: Will People Act on It?
This is the next step. If you don’t have organizational readiness, the initiative has failed before it begins.

Stage 4: Is the AI Allowed to Influence Outcomes?
Since there is no such thing as Artificial Intelligence, and a computer should never make a decision, the AI should never be allowed to influence outcomes. It should INFORM outcomes. It’s a slight difference, but an important one. Moreover, it doesn’t really change how the AI should be implemented. You’re still implementing with the goal that the AI will eventually automate at least 99% of all instances of the task(s) it is designed to execute. The only difference is that you decide what to do with an exception and train the AI to execute your decisions, rather than being trained by it to accept anything it recommends as gospel.
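Here is a hedged sketch of that inform-but-not-influence loop; the precedent store and the review function are hypothetical stand-ins for a real exception queue, not anyone’s shipping design.

```python
# The AI proposes; a human disposes. Once a human has decided a case,
# the decision is stored so the system can execute it automatically
# the next time the same case signature appears.
precedents: dict[str, str] = {}  # case signature -> approved action

def ask_human(signature: str, proposal: str) -> str:
    # Stand-in for a real review queue; the human may accept or override.
    print(f"review {signature}: AI proposes '{proposal}'")
    return proposal  # assume the human accepted, for this demo

def handle(signature: str, proposed_action: str) -> str:
    if signature in precedents:
        # Decision execution: replay the human-approved decision.
        return f"executed: {precedents[signature]}"
    # Decision making stays human: the AI's proposal only informs.
    decision = ask_human(signature, proposed_action)
    precedents[signature] = decision  # train the system on the decision
    return f"executed after human decision: {decision}"

print(handle("duplicate-invoice", "reject"))  # human decides the first time
print(handle("duplicate-invoice", "reject"))  # automated thereafter
```

Run it twice on the same case and the shift is visible: the first pass informs a human decision, and every pass after that merely executes it.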

This minor change reframes the trust matrix you adopt, and puts you on the path to proper Agentic AI automation that can make your workforce up to 10X as productive. Augmented Intelligence, be it in-house or through SaaS, is the true future. The tech is there for many tasks now, and you don’t have to wait for a promise that won’t materialize within our lifetime.