Category Archives: AI

STOP USING AI FOR WRITING NOW! (BEFORE IT’S TOO LATE FOR YOU!)

If you think AI writing is becoming excellent, that’s the SIGN you should STOP using AI for writing immediately.

The reasons you think this are multifold, and none of them are good.

1) AI is too agreeable (and sycophantic)

(Source)

2) This increases your dependence on it

(Source)

3) Which leads not only to cognitive decline

(Source)

4) but cognitive surrender

(Source)

And as for the “I know how to use it, I’m in control” argument, that’s all BS. It’s an illusion because frequent use of AI BREAKS THE MOST RATIONAL OF THINKERS!

(Source)

You might think you’re guiding it, but it’s brainwashing you to accept without question the same derivative cr@p it always spits out because that’s all it can do. Remember, just because the LLM has ingested enough tokens to generate grammatically proper English 99.999% of the time, that doesn’t mean there’s any logic or meaning to what it generates!

And yes, the doctor saw all of this coming, as he understood early on exactly what LLMs were and were not. That’s why SI has had a formal NO AI policy (as well as a NO AI BS policy) for a long time now (and never used AI)!

(You have to remember that, as humans, there is a relatively significant chance we will end up in a nursing home at some point in our lives in North America, with some estimates now putting that chance over 50%, and an even greater chance that when we end up there, it will be [partially] due to mental decline, dementia, and similar conditions. We’re also suffering population stagnation, if not decline, in most western countries. As a result, it’s in our best interest to do everything we can to keep our mental faculties about us for as long as we can, because there are barely enough health care workers to care for those who already need care as it is. Think seriously about what’s going to happen if, en masse, society goes all-in on technology that is essentially turning us into drooling mindless idiots and greatly increasing the chances we become unable to care for ourselves immediately upon entering retirement … )

While Your Supply Chains Are Impacted by War, They Are Not At War!

And just because autonomous AI has become a standard tool of the current conflicts, that doesn’t mean that autonomous AI should be a standard tool in your supply chains. AI, defined properly, most definitely should, but not autonomous AI. And even then, only with human oversight!

This rant is inspired by THE PROPHET who tells us that The War in Iran is an AI War. Your Procurement and Supply Chain War Should Be as Well. And, despite parts of it appearing in LinkedIn comments, it is being expanded and reposted now to emphasize our previous article (on Friday) that essentially stated YOU SHOULD NEVER TRUST YOUR AI.

First of all, procurement and supply chain management isn’t a war. It’s a tense conflict between buyer needs and supplier leverage, but not a war.

Secondly, the fact that AI “never stops for a coffee break or to complain about leave not being granted” is not, on its own, a valid justification for using it.

Because, by the same token, it also doesn’t care if a strike accidentally hits a school and murders hundreds of innocent children. (Al Jazeera, BBC, and Haaretz)

Nor does it care if multiple civilians get killed in a drone strike just to spare a human soldier a guilty conscience, since the soldier didn’t personally order the killing of the target or make the decision that resulted in civilian deaths. (NPR, The Guardian, The Times of Israel)

Given that AI has no ethics and no real intelligence to evaluate a situation beyond the data it is provided and the question it is asked, is it really good enough to plan an operation on its own? I’d say it is not. (And also that it was applied without a full understanding of its weak points and how to use it properly.) (And if you want a great post about how critical human command decisions are, check out Michael Salehi’s post on how the right decision always requires judgement, experience, and accountability, which an AI does not have.)

This is why Anthropic wants some safeguards, why you should too, and why you should be just as careful about where and how you use it in your supply chain. There are two realities with AI:

Properly applied augmented intelligence is a gift from heaven.

If you take the augmented intelligence approach, it can process all the data, give you recommendations, give you a synopsis of the reasoning, and allow you to dig into that reasoning, ask questions about risk and indirect ramifications, and explore the broader picture when you need to.

AI is not human, not ethical, not flawless, and not responsible.

You still need to review the synopsis, dig in when something appears to be off (and even if it’s just an uneasy feeling — your “intuition” can often be just as valid as the AI output), and verify the decision. And often these tools will allow what would take weeks to be done in minutes. But sometimes you’ll find there isn’t enough data, and you won’t be able to act confidently right away.
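To make that concrete, here’s a minimal sketch of the augmented intelligence contract just described (in Python; Recommendation, its fields, and the gate are purely illustrative names, not a real API): the tool recommends and exposes its reasoning, and nothing executes until a human has reviewed it.

```python
from dataclasses import dataclass

# Illustrative sketch only: names and fields are hypothetical, not a real API.
@dataclass
class Recommendation:
    action: str                   # what the tool suggests doing
    synopsis: str                 # short summary of the reasoning
    reasoning_trace: list[str]    # step-by-step detail you can dig into
    risk_notes: list[str]         # flagged risks and indirect ramifications
    confidence: float             # the tool's own confidence, 0.0 to 1.0
    human_verified: bool = False  # nothing executes until a human signs off

def execute(rec: Recommendation) -> None:
    # The human review gate: if something appears off (or you just have an
    # uneasy feeling), you dig into reasoning_trace before flipping this flag.
    if not rec.human_verified:
        raise PermissionError("recommendation must be reviewed by a human first")
    print(f"Executing: {rec.action}")
```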

Now, THE PROPHET didn’t like my response, and countered with a number of questions, which I gladly answered and will repeat here because two of those questions missed the point, and including them helps illustrate what the real questions are.

“Would you take action?”: Yes!
(I don’t care if you agree or disagree with my viewpoint, or THE PROPHET‘s viewpoint, as this is not the point.)

“Would you use all tools available?”: YES!
(Again, I don’t care if you agree or disagree with my viewpoint, or THE PROPHET’s viewpoint, as this is not the point either.)

“Would you trust the tools blindly?“: No!

“Would you rush them into deployment without proper field testing and safeguards?”: NO!

That’s the point. All the hype and promises are resulting in an implicit trust of AI when it should be “Trust … But Verify!“. It’s usually the omission of just one extra step, often just a few minutes of extra human review, that makes the difference between success and accuracy on the one hand and failure and widespread destruction on the other. And this is true both in war and in business decisions that impact your supply chain.

This is why I continue to so strongly caution against the use of “autonomous AI” when it is largely built on systems that are flawed at the core, where hallucinations are part of the core function, and one subtle change in a prompt or query can result in a completely different output.

The reality is that, while you need modern tech platforms, constant intelligence monitoring, and pre-defined mitigation strategies just to survive, you usually don’t need AI. (Or at least not the “AI” they are selling … which, as you guessed, isn’t “AI” at all.)

What you do need to do is prepare for AI. If you do that, which involves:

  • getting your data under control
  • building an infrastructure for connectivity, process, and data integration
  • updating your processes for modern environments
  • training your talent accordingly

You will find that you have

  • put data at the core of not just category strategy, but overall operations
  • expanded your definition of risk to include price, partners, and related information flows
  • identified where automation fits; where optimization, analytics, and machine learning fit; and where “AI” doesn’t actually add any additional value
  • figured out that the best model is employees backed by Augmented Intelligence, working alongside agents whose automation privileges escalate as they learn from those humans, but remain restricted in critical situations (see the sketch below)
  • developed a much better understanding of multi-tier exposure
  • begun the process of transitioning to a new, alert, organizational state where you are continually monitoring, optimizing, and re-planning your supply chain in response to emerging disruptive threats … and, as Koray Köse (who we may have to start calling The Oracle due to the insightful nature of his posts) points out, this is where you need to be

… and this is everything THE PROPHET says you need. Most importantly, all of this just might be accomplished without any modern AI (and definitely no BS AI Employees) at all!
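And as for those agents with escalating, but restricted, automation privileges, here’s a minimal sketch of the idea (the class name, thresholds, and categories are purely hypothetical): privileges grow only with a track record of human-approved actions, and critical situations always escalate to a human.

```python
# Hypothetical sketch: all names, thresholds, and categories are illustrative.
CRITICAL_CATEGORIES = {"single-source", "regulated", "strategic"}

class Agent:
    def __init__(self) -> None:
        self.approved_history = 0  # actions a human reviewed and approved

    def spend_limit(self) -> int:
        # Automation privileges escalate with demonstrated reliability.
        if self.approved_history < 50:
            return 0           # everything goes to a human at first
        if self.approved_history < 500:
            return 10_000
        return 100_000

    def can_auto_execute(self, category: str, amount: int) -> bool:
        if category in CRITICAL_CATEGORIES:
            return False       # never autonomous where it matters most
        return amount <= self.spend_limit()
```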

What’s Wrong With 22% of Organizations? Why Do They Trust AI?

In a recent Horses for Sources piece on The HFS AI Trust Curve: AI isn’t failing … leadership is, the byline is that 78% of organizations do not trust their AI.

What the h3ll? 100% of organizations should not trust their AI when

  1. only 6% of organizations are seeing success (MIT, McKinsey) and
  2. there is no true Artificial Intelligence.

As a result, AI should NOT be trusted!

However, there is an AI that should be deployed: properly designed adaptive robotic automation, Machine Learning, and appropriately gated and guard-railed AI that sends exceptions to humans whenever the rules don’t cover the situation, the gaps are beyond what should be dealt with automatically with no approved precedents, or the only resolution you can trust is a human one. While it might not be 100% perfect, it can still be applied with confidence, as the guardrails will ensure no significant failures.
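The gating logic itself is simple. In this illustrative sketch, rules, precedents, and human_queue are assumed interfaces standing in for whatever your platform actually provides: automate only when a rule covers the situation and an approved precedent exists, and escalate everything else.

```python
# Illustrative sketch: rules, precedents, and human_queue are assumed
# interfaces, not a real library.
def handle(case, rules, precedents, human_queue):
    rule = rules.match(case)
    if rule is None:
        human_queue.append((case, "no rule covers this situation"))
    elif not precedents.approved(rule, case):
        human_queue.append((case, "no approved precedent for this gap"))
    else:
        rule.apply(case)  # both gates passed: safe to automate
```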

In other words, while I don’t agree that Agentic AI should be embraced to make decisions, because IBM had it right back in 1979:

a computer can never be held accountable, therefore a computer must never make a management decision

I do agree that the vast majority of back office tasks are just bit pushing and can be appropriately defined with flexible, parameterized rules, backed by machine learning that learns the tolerances over time. This means that agentic AI should be widely applied throughout a back office, and that organizations that don’t embrace this level of AI are going to fall behind. But the trust in technology should not extend to decision making. Just decision execution.
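For example, here’s a minimal sketch of a flexible, parameterized rule whose tolerance is learned over time (InvoiceMatchRule and the percentile choice are purely illustrative): the tolerance band is derived from the variances humans have historically approved, and anything outside it becomes an exception for a human.

```python
import statistics

# Hypothetical sketch: the rule learns its tolerance from human approvals.
class InvoiceMatchRule:
    def __init__(self, fallback_tolerance: float = 0.0) -> None:
        self.approved_variances: list[float] = []
        self.fallback = fallback_tolerance

    def record_human_approval(self, variance: float) -> None:
        self.approved_variances.append(variance)

    def tolerance(self) -> float:
        # Learn the tolerance as the 95th percentile of approved variances.
        if len(self.approved_variances) < 30:
            return self.fallback  # not enough history yet: stay strict
        return statistics.quantiles(self.approved_variances, n=20)[18]

    def auto_approve(self, invoice_amount: float, po_amount: float) -> bool:
        variance = abs(invoice_amount - po_amount) / po_amount
        return variance <= self.tolerance()  # outside tolerance = human exception
```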

And if 78% of organizations don’t trust their agentic systems to execute decisions, then that is a problem: they are going to fall behind, they won’t embrace SaS (Services-as-Software) where it makes sense, their overhead costs will stay high, and they’ll get crushed by competitors who can run leaner and actually sell in a tight economy.

In other words, despite HFS’ implications, organizations should NEVER trust Agentic AI to make decisions, but they absolutely need to trust the AI to execute the decision. If they don’t, they’re in trouble.

Part of the problem might be the framing of the last step of the current HFS Enterprise Adoption Journey.

Stage 1: Can the AI Model Work?
This is where you start. You have to find a viable model.

Stage 2: Do we Believe the Inputs?
This is where you progress to. You need valid inputs.

Stage 3: Will People Act on it?
This is the next step. If you don’t have organizational readiness, the initiative has failed before it begins.

Stage 4: Is the AI allowed to influence outcomes?
Since there is no such thing as Artificial Intelligence, and a computer should never make a decision, the AI should never be allowed to influence outcomes. It should INFORM outcomes. It’s a slight difference, but an important one. Moreover, it doesn’t really affect how the AI should be implemented. You’re still implementing with the goal that the AI will eventually automate at least 99% of all instances of the task(s) it is designed to execute; the only difference is that you are deciding what to do with an exception and training the AI to execute your decisions, not being trained by it to accept anything it recommends as gospel.
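A minimal sketch of that difference (model, task, and decision_log are assumed interfaces for illustration): routine instances are executed automatically, and exceptions are presented as information only, with the human decision logged so the AI learns from you, not the other way around.

```python
# Illustrative sketch only: model, task, and decision_log are hypothetical.
def process(task, model, decision_log):
    result = model.assess(task)
    if result.routine:                 # rules and precedent cover this instance
        return model.execute(task)     # the ~99% the AI is built to automate
    # Exception: the AI INFORMS (facts and options); the human decides.
    print(f"Exception on {task.id}: {result.facts}")
    decision = input("Your decision: ")
    decision_log.append((task, decision))  # training data: the human trains the AI
    return decision
```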

This minor change creates the trust matrix you adopt, and puts you on the path to proper Agentic AI automation that will allow your workforce to be up to 10X as productive. Augmented Intelligence, be it in-house or through SaS, is the true future. The tech is there for many tasks now, and you don’t have to wait for a promise that won’t materialize within our lifetime.

Phil’s new HfS Services-as-Software FlyWheel Is Right On the Mark From a Customer-Centric Viewpoint

… but hides the full support required on the back-end!

This is important to point out for two reasons:

  • Gen-AI Hype-mongers will use this as another excuse to claim most white-collar functions will be entirely eliminated when, in fact, it strengthens the need for true back-office white-collar workers and real software engineers
  • Expert human support becomes more critical at each stage of the process (while bit pushers become less and less useful)

But let’s back up. In his most recent piece where he (re-)introduced the SaS Flywheel, Phil made one critical statement which is constantly overlooked by the industry: Stop treating FDE as optional: Your AI Flywheel will not spin without it.

As Phil astutely points out, the hard question nobody is answering is this: who actually wires AI into your live systems, governs it in production, and makes it keep working when the AI software vendors leave the room. The answer is, of course, your Forward Deployed Engineer (FDE), and if your transformation strategy does not include one, you are building an AI theatre, not an AI operating model. (Which, FYI, is what most companies are building, and, as Stephen Klein astutely points out, putting on puppet shows. Great for entertainment, but not so great for getting anything done. Especially since they all overlook what AI can actually do.)

Now, a forward deployed engineer alone will not get you out of pilot purgatory, but it is an essential condition — just like you can’t climb out of a deep wide hole with smooth 90° vertical surfaces on all sides without a rope or a ladder, you can’t fly your way out of a pilot without a working plane, which you don’t have without an engineer to keep it running.

As Phil continues, FDE is not implementation; it is the engineering layer that makes AI governable. This is because FDE teams build ontologies that reflect how the enterprise actually operates, wire models into real data with real permissions, and design the governance architecture that keeps autonomous systems accountable, which will, for quite some time into the future, include wiring in non-overridable human oversight, approval, and review.

Phil goes on to list a few key things that LLMs cannot do on their own. (It’s in no way a complete list, but hopefully enough to get executives questioning all the AI-BS from the AI-Hype-mongers presenting grandiose claims that likely won’t be a reality within most of our professional lifetimes.) Even better, Phil points out that Agentic AI without FDE governance is not transformation. It is risk accumulation! He then points out five key requirements of workable AI that can’t be achieved without an FDE. (There are more, but again, these should be enough key points to help executives realize that not only are LLMs sorely insufficient for almost every task they are being promoted for, but they aren’t even usable at all without the help of an FDE team.)

Phil also does us a great service by pointing out that while vibe coding creates velocity, FDE prevents it from becoming chaos, which is what happens every single time you employ vibe coding without FDEs (and a real engineering team, but we’ll get to that).

Vibe coding is simultaneously one of the biggest boons to software development and one of its greatest destroyers, especially since it is almost universally misunderstood and misapplied. For example, while Phil’s statement that business analysts can express intent and receive working agent code in return is technically correct, it’s not practically correct. That’s because vibe coding produces code that is insecure, inefficient, and not appropriate for enterprise software. In fact, just about every startup that tried to launch an enterprise app on vibe-coding alone has lost hundreds of thousands (or more) attempting to do so; see this great post from Alex Turnbull.

Vibe coding is super useful because, with the help of an FDE team with a good business analyst, the end user organization can quickly create functional prototypes that demonstrate precisely what they are looking for, which are much more powerful functional specifications than traditional functional specification documents with text descriptions of required functionality and PowerPoint mockups. Plus, these prototype specifications can be created in a fraction of the time. But that’s all they are: prototypes. Real applications still need to be built by real software engineering teams who can build optimized, bug-free, secure code, as opposed to the unoptimized, buggy (especially at the boundaries), and insecure code regularly generated by AI-based vibe coding tools (where, depending on what source you access, 53% to 78% of generated code has serious security issues).

In other words, it’s a great article, from a customer-centric viewpoint and written for customer executives. From a back-end, provider perspective, it’s missing one key step — the development step that takes vibe coding prototypes and produces real (AI-backed) enterprise applications.

Moreover, it centralizes the FDE activities when, in reality, they are ongoing throughout the entire cycle.

  1. they activate, and put the foundation in place
  2. they train the users on how to properly use the LLMs for accelerated research and are always on call for help
  3. they maintain the orchestration layer, and improve (and correct) it as necessary
  4. they work with the end users to vibe code prototypes
  5. they work with the development team to build the next generation (or iteration) of the enterprise apps in the SaS model

In other words, AI can enhance SaS, but it cannot replace the need for skilled humans on the provider side (for development, implementation, maintenance, and improvement) or the buyer side (for process definition, improvement, decision criteria, etc.).

At the end of the day, AI can only replace bit-pushers who do tactical data processing tasks which should have been automated by machines 30 years ago (when it was promised), but it can’t replace anyone who needs to make a (strategic) decision. This is true regardless of the model, and the right model, like Phil’s SaS flywheel, actually exemplifies the need for the right, skilled, talent.

Dear Graduate, Don’t Skip the Internship … You Need a Gateway to an Apprenticeship!

A number of AI enthusiasts are advising soon-to-be and recent graduates to skip the internship and instead become proficient with AI because that’s how they are going to get a job. And, as you should know by now, it’s bullcr@p. Being able to write a prompt for a Gen-AI LLM that will return a convincing (but not necessarily sound) result is not going to get you a job. The only skill that’s going to get you a job is competence!

As with every over-hyped tech-du-jour that came before ([predictive] analytics, the fluffy magic cloud, SaaS, the WWW, etc), AI is not a silver bullet that’s going to solve all of an organization’s problems and grant magical status to those who have mastered it.

The only thing you’ll master with Gen-AI is the art of the con, since whatever it spits out is so well written (compared to the literary skill of the average high school, and even university, graduate these days) and so convincing that, without expert guidance, a person who doesn’t know better will be convinced it must be right. But that’s not a skill most organizations are going to hire you for (outside of sales and marketing), even if the organization is known for questionable ethics.

Organizations don’t need clueless idiots. They need experts who can assess situations, determine options, decide on the best option, and implement the decision. Someone who knows the analysis to run, the data to collect, the tools to use, the reports to create, the logs to keep, and the contracts to write.

And while you can’t graduate an expert, you can graduate with the skills to start you on the path to becoming one — the traditional skills of math, logic, critical reasoning, project planning, project management, and relevant domain knowledge — not creative crafting of perilous prompts for a flakey LLM that will eventually fail you no matter how much time and effort you put into that prompt.

And if you get an internship and prove yourself, maybe that will lead to a full-time job where you can apprentice under a master in the real world and gain the experience you need to go from an adept (with the core knowledge and skills but not the wisdom needed to succeed in the real world) to a practitioner (who has gained enough wisdom and experience to manage standard tasks and functions on their own, and who only needs guidance for new or complex situations not yet encountered) and, eventually, to an expert who becomes the new organizational mentor and the one that new hires turn to for help.

And organizations need (future) experts because only an expert knows when

  • the AI only has wrong/incomplete data (which will prevent it from ever working)
  • an analysis/outcome is wrong based on math fundamentals
    (like when an LLM-based AI multiplied by -1 because you told it to deliver savings instead of asking it to find the best opportunities based on price variability, lowest price, market trends, and differential analysis; see the sketch after this list)
  • reasoning is correlative, not causative (which is a failure of not just LLMs, but many people as well)
  • an analysis is incomplete (because only they have specific insight that was not available to the machine or another analyst)
  • etc.
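To illustrate that math-fundamentals point, here’s the kind of sanity check an expert runs (a purely illustrative sketch) on an AI-reported savings figure: the number must equal baseline minus proposed spend, and a sign flip must be caught, not trusted.

```python
# Illustrative sketch: the simplest check an expert would apply to an
# AI-reported "savings" number before acting on it.
def sanity_check_savings(baseline: float, proposed: float, reported: float) -> list[str]:
    issues = []
    expected = baseline - proposed  # savings = current spend minus proposed spend
    if abs(reported - expected) > 0.01:
        issues.append(f"reported {reported:,.2f} != baseline - proposed = {expected:,.2f}")
    if expected != 0 and abs(reported + expected) < 0.01:
        issues.append("sign flipped: the tool may have multiplied by -1 to 'deliver savings'")
    return issues

# e.g. baseline 1,000,000 and proposed 1,050,000 is a 50,000 COST INCREASE;
# a tool that reports 50,000 in "savings" gets flagged on both checks.
```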

That’s why, if you want to become a true master of your craft, you need to forget AI mastery and instead land an internship where you can apply the real skills you learned in your degree program to stand out, earn an apprenticeship, learn how things work in the real world, and acquire the real-world mastery you need to get the job you want. Only then will you be able to work your way up to becoming the leader, and expert, you want to be.

There is no Artificial Intelligence (just Artificial Idiocy) and organizations will always need top talent. Automation, and well designed applications that solve real problems efficiently and effectively, will reduce the number of back-office employees an organization needs, and any employee whose only skill is pushing bits will be eliminated. However, the need for talented employees will only increase, as they will be needed not only to oversee the tools and handle the exceptions, but to correctly analyze increasingly complex real-world situations and make the right decisions.

At the end of the day, AI tool mastery is meaningless if you can’t logically and holistically analyze the outputs with respect to math fundamentals and a real-world scenario!