Monthly Archives: March 2026

Phil’s new HfS Services-as-Software Flywheel Is Right On the Mark From a Customer-Centric Viewpoint

… but hides the full support required on the back-end!

This is important to point out for two reasons:

  • Gen-AI Hype-mongers will use this as another excuse to claim most white-collar functions will be entirely eliminated when, in fact, it strengthens the need for true back-office white-collar workers and real software engineers
  • Expert human support becomes more critical at each stage of the process (while bit pushers become less and less useful)

But let’s back up. In his most recent piece, where he (re-)introduced the SaS Flywheel, Phil made one critical statement that is constantly overlooked by the industry: Stop treating FDE as optional: Your AI Flywheel will not spin without it.

As Phil astutely points out: the hard question nobody is answering is this: who actually wires AI into your live systems, governs it in production, and makes it keep working when the AI software vendors leave the room. The answer is, of course, your Forward Deployed Engineer (FDE), and if your transformation strategy does not have one, you are building an AI theatre, not an AI operating model. (Which, FYI, is what most companies are building and, as Stephen Klein astutely points out, putting on puppet shows. Great for entertainment, but not so great for getting anything done. Especially since they all overlook what AI can actually do.)

Now, a forward deployed engineer alone will not get you out of pilot purgatory, but one is a necessary condition. Just as you can’t climb out of a deep, wide hole with smooth vertical walls on all sides without a rope or a ladder, you can’t fly your way out of a pilot without a working plane, and you don’t have a working plane without an engineer to keep it running.

As Phil continues, FDE is not implementation; it is the engineering layer that makes AI governable. This is because FDE teams build ontologies that reflect how the enterprise actually operates, wire models into real data with real permissions, and design the governance architecture that keeps autonomous systems accountable. For quite some time into the future, that will mean wiring in non-overridable human oversight, approval, and review.
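The non-overridable oversight described here can be sketched in a few lines. This is a hypothetical illustration (not any vendor’s actual API, and all the names are made up): every high-risk action an agent proposes is blocked unless a named human approves it, and every decision is logged for accountability.

```python
from dataclasses import dataclass


@dataclass
class ProposedAction:
    """An action an autonomous agent wants to take."""
    description: str
    risk_tier: str  # e.g. "low" or "high"


class HumanApprovalGate:
    """Hypothetical sketch of a non-overridable oversight layer:
    high-risk actions never execute without a named human approver,
    and every decision (including auto-approvals) is audit-logged."""

    def __init__(self):
        self.audit_log = []  # (description, approver, approved)

    def execute(self, action, approver=None, approved=False):
        if action.risk_tier == "low":
            # Low-risk actions run automatically, but are still logged
            self.audit_log.append((action.description, "auto", True))
            return "executed: " + action.description
        # High-risk actions cannot bypass human review
        self.audit_log.append((action.description, approver, approved))
        if approver is None or not approved:
            return "blocked: " + action.description
        return "executed: " + action.description


gate = HumanApprovalGate()
gate.execute(ProposedAction("reorder safety stock", "low"))
gate.execute(ProposedAction("pay disputed invoice", "high"))  # blocked: no approver
gate.execute(ProposedAction("pay disputed invoice", "high"),
             approver="AP Manager", approved=True)
```

The point of the sketch is that the gate sits outside the model: no prompt, however clever, can make a high-risk action skip the human step.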

Phil goes on to list a few key things that LLMs cannot do on their own. (It’s in no way a complete list, but hopefully enough to get executives questioning all the AI-BS from the AI-Hype-mongers presenting grandiose claims that likely won’t be a reality within most of our professional lifetimes.) Even better, Phil points out that Agentic AI without FDE governance is not transformation. It is risk accumulation! He then lists five key requirements of workable AI that can’t be achieved without an FDE. (There are more, but again, these should be enough key points to help executives realize that not only are LLMs sorely insufficient for almost every task they are being promoted for, but they aren’t even usable at all without the help of an FDE team.)

Phil also does us a great service by pointing out that while vibe coding creates velocity, FDE prevents it from becoming chaos, which is what happens every single time you employ vibe coding without FDEs (and a real engineering team, but we’ll get to that).

Vibe coding is simultaneously one of the biggest boons to software development and one of its greatest destroyers, especially since it is almost universally misunderstood and misapplied. For example, while Phil’s statement that business analysts can express intent and receive working agent code in return is technically correct, it’s not practically correct. That’s because vibe coding produces code that is insecure, inefficient, and not appropriate for enterprise software. In fact, just about every startup that tried to launch an enterprise app on vibe coding alone has lost hundreds of thousands (or more) attempting to do so; see this great post from Alex Turnbull.

Vibe Coding is super useful because, with the help of an FDE team with a good business analyst, the end-user organization can quickly create functional prototypes that demonstrate precisely what they are looking for. These are much more powerful functional specifications than traditional functional specification documents with text descriptions of required functionality and PowerPoint mockups, and they can be created in a fraction of the time. But that’s all they are: prototypes. Real applications still need to be built by real software engineering teams who can produce optimized, bug-free, secure code, as opposed to the unoptimized, buggy (especially at the boundaries), and insecure code regularly generated by AI-based vibe coding tools (where, depending on what source you access, 53% to 78% of generated code has serious security issues).

In other words, it’s a great article, from a customer-centric viewpoint and written for customer executives. From a back-end, provider perspective, it’s missing one key step — the development step that takes vibe coding prototypes and produces real (AI-backed) enterprise applications.

Moreover, it presents the FDE activities as a single stage when, in reality, they are ongoing throughout the entire cycle:

  1. they activate and put the foundation in place
  2. they train the users on how to properly use the LLMs for accelerated research and are always on call for help
  3. they maintain the orchestration layer, and improve (and correct) it as necessary
  4. they work with the end users to vibe code prototypes
  5. they work with the development team to build the next generation (or iteration) of the enterprise apps in the SaS model

In other words, AI can enhance SaS, but it cannot replace the need for skilled humans on the provider side (for development, implementation, maintenance, and improvement) or the buyer side (for process definition, improvement, decision criteria, etc.).

At the end of the day, AI can only replace bit-pushers who do tactical data processing tasks which should have been automated by machines 30 years ago (when it was promised), but it can’t replace anyone who needs to make a (strategic) decision. This is true regardless of the model, and the right model, like Phil’s SaS flywheel, actually exemplifies the need for the right, skilled talent.

Tired of Geopolitical Chaos? You Wouldn’t Be if You Were Prepared!

In a recent article, Koray Köse pointed out that Geopolitics Now Lives in the P&L because it can re-price your inputs, trap working capital, and/or change who you are allowed to buy from or sell to, all with the stroke of a pen by a single individual entrusted with too much power.

And, as Koray points out, most organizations are structurally unprepared. This is partially because fewer than half of companies have visibility beyond tier-one suppliers, but mostly because the majority of organizations have to scramble and allocate resources to figure out whether or not the event has changed cost, liquidity, access, or structural dependency.

And, as Koray points out, organizations that don’t know the real impact of major events on them will:

  • panic dual- (or tri-) source and increase cost without reducing real risk (as sometimes they’ll source from another distributor or supplier with the same risk in the same region subject to the same events)
  • knee-jerk re-shore, waste 18 to 36 months, and increase costs without addressing the core issue
  • sign emergency renewals at premiums for risks that never materialize
  • continually react in a manner that achieves nothing

and, simply, burn time and value by not doubling down on the events that really matter to them. Because they don’t know what those events are.

That’s because they haven’t

  • identified their key product lines,
  • broken them down into components,
  • identified those that have limited supply items or rely on rare earths or other limited substances,
  • mapped the supply chains for those limited items, rare earths, or other limited substances, and
  • marked the supply chains they (and their current suppliers) are currently using

so that, when their constant 24/7/365 global monitoring solution detects a significant event, they can quickly determine

  • what active supply chains it impacts,
  • what substances, rare earths, or items could be impacted,
  • to what extent they are relying on those substances, rare earths, or items,
  • what components they are in,
  • what product lines are impacted and to what degree, and
  • what alternatives the organization has

This way you instantly know

  • what the impact is,
  • what other options you have, and
  • what the cost of those would be

If the event impacts a supply that is easily obtainable from other, unaffected regions, that is only used in a couple of low-revenue (and lower-profit) product lines, or that can be replaced simply by shifting supply to other suppliers with which you have existing relationships (and contracts), you can simply ignore it. But if the event could cut off a key substance, rare earth, or part, and you were sole sourcing, you need to leap into action immediately to contract another source of supply (before your competition does and it’s gone).
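At its core, the mapping described above is a simple data structure plus a lookup. Here is a minimal sketch, with entirely made-up products, materials, and regions, of how a monitoring solution could turn a detected regional event into an instant impact answer:

```python
# Hypothetical mini model of the mapping described above:
# product line -> component -> {material, active regions, alternates}.
SUPPLY_MAP = {
    "smart_thermostat": {
        "pcb":     {"material": "neodymium",   "regions": ["CN"], "alternates": ["AU"]},
        "housing": {"material": "abs_plastic", "regions": ["MX"], "alternates": ["US", "CA"]},
    },
    "basic_sensor": {
        "housing": {"material": "abs_plastic", "regions": ["MX"], "alternates": ["US"]},
    },
}


def assess_event(region, supply_map=SUPPLY_MAP):
    """Given an event in a region, return the impacted product lines,
    the components and materials at risk, and the known alternates."""
    impact = []
    for product, components in supply_map.items():
        for component, src in components.items():
            if region in src["regions"]:
                impact.append({
                    "product": product,
                    "component": component,
                    "material": src["material"],
                    "alternates": src["alternates"],
                    # No other active region and no alternate = leap into action
                    "sole_sourced": len(src["regions"]) == 1 and not src["alternates"],
                })
    return impact
```

With the map built in advance, `assess_event("CN")` immediately tells you the thermostat PCB (and its neodymium) is at risk but an alternate exists, while an event in an unmapped region returns an empty list and can be ignored. The hard work is building and maintaining the map, not the lookup.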

The only way you can do this is if you did a proper risk assessment of each major component, raw material, and item, and tracked your current and potential sourcing options. In other words, if you did proper risk mitigation planning.

But if you take the time to do proper category assessment and risk mitigation planning, you’ll be well on your way to Köse’s Sophisticated Simplicity that will allow you to identify the one or two events that really matter, address those, and get on with business while the world burns around you. (Or, you can continue to react blindly and burn with it. Your choice. Either way, follow Koray. You can’t manage supply without being aware of what threatens it.)

Dear Graduate, Don’t Skip the Internship … You Need a Gateway to an Apprenticeship!

A number of AI enthusiasts are advising soon-to-be and recent graduates to skip the internship and instead become proficient with AI because that’s how they are going to get a job. And, as you should know by now, it’s bullcr@p. Being able to write a prompt for a Gen-AI LLM that will return a convincing (but not necessarily sound) result is not going to get you a job. The only skill that’s going to get you a job is competence!

As with every over-hyped tech-du-jour that came before ([predictive] analytics, the fluffy magic cloud, SaaS, the WWW, etc.), AI is not a silver bullet that’s going to solve all of an organization’s problems and grant magical status to those who have mastered it.

The only thing you’ll master with Gen-AI is the art of the con, since whatever it spits out is so well written (compared to the average literary skill of an average high school, and even University, graduate these days) and so convincing that, without expert guidance, an average person who doesn’t know better will believe it must be right. But that’s not a skill most organizations are going to hire you for (outside of sales and marketing), even if the organization is known for questionable ethics.

Organizations don’t need clueless idiots. They need experts who can assess situations, determine options, decide on the best option, and implement the decision. Someone who knows the analysis to run, the data to collect, the tools to use, the reports to create, the logs to keep, and the contracts to write.

And while you can’t graduate an expert, you can graduate with the skills to start you on the path to becoming one — the traditional skills of math, logic, critical reasoning, project planning, project management, and relevant domain knowledge — not creative crafting of perilous prompts for a flaky LLM that will eventually fail you no matter how much time and effort you put into that prompt.

And if you get an internship and prove yourself, maybe that will lead to a full-time job where you can apprentice under a master in the real world and gain the experience you need to go from an adept (with the core knowledge and skills but not the wisdom needed to succeed in the real world) to a practitioner (who has gained enough wisdom and experience to manage standard tasks and functions on their own, and who only needs guidance for new or complex situations not yet encountered) and, eventually, to an expert, where you become the new organizational mentor and the one that new hires turn to for help.

And organizations need (future) experts because only an expert knows when

  • the system only has wrong/incomplete data (which will prevent an AI from ever working)
  • an analysis/outcome is wrong based on math fundamentals
    (or when an LLM-based AI multiplied by -1 because you told it to deliver savings instead of finding the best opportunities based on price variability, lowest price, market trends, and differential analysis)
  • reasoning is correlative, not causative (which is a failure of not just LLMs, but many people as well)
  • an analysis is incomplete (because only they have specific insight that was not available to the machine or another analyst)
  • etc.

That’s why, if you want to become a true master of your craft, you need to forget AI mastery and instead land an internship where you can apply the real skills you learned in your degree program to stand out, get an apprenticeship, learn how things work in the real world, and acquire the real-world mastery you need to get the job you want. Only then will you be able to work your way up to becoming the leader, and expert, you want to be.

There is no Artificial Intelligence (just Artificial Idiocy) and organizations will always need top talent. Automation, and well designed applications that solve real problems efficiently and effectively, will reduce the number of back-office employees that an organization needs, and any employee whose only skill is pushing bits will be eliminated. However, the need for talented employees will only increase, not only to oversee the tools and handle the exceptions, but to correctly analyze increasingly complex real-world situations and make the right decisions.

At the end of the day, AI tool mastery is meaningless if you can’t logically and holistically analyze the outputs with respect to math fundamentals and a real-world scenario!

The King is Dead. Long Live the King!

Learn the phrase, because you will soon be living it in every aspect of your life — it’s not only the new fashion in western politics, but the new fashion in enterprise tech tripling down on the AI hype when the big AI vendors are losing money faster than ever before (as compute costs skyrocket, competition heats up, and a lot of people are getting fed up with a total lack of return on their investments)!

However, in the meantime, as the hype wave makes its way through the mass market, a slew of startups emerge building on LLMs and fake AGI offerings, and the marketing mania takes over. Expect the e-Procurement is Dead, Sourcing is Dead, and Contract Management is Dead rhetoric to hit all-time highs as these new players cr@p out their new apps as fast as they can, with new, natural-language-centric interfaces, more automation, and instant gratification. (At least when these apps work as desired.)

As these offerings get adopted at a rapid pace in organizations that are just adopting modern solutions (which make up half of the space, or more), replace first-generation apps from the noughts in organizations that decided anti-complex is the way to go, and start to get noticed, the rhetoric picks up the pace and echoes.

But that’s all it is — rhetoric amplified through a microphone. Sourcing, Procurement, and Contract Management are not dead, the fundamental requirements are not changing, and these systems are not being adopted en-masse. Not just because they don’t always work very well, but because they don’t fit. (And even when they do, they are just replacing one interface with another.)

First of all, in the public sector, you have to follow rules and frameworks even for tail spend. These systems have no guardrails, and by their very nature can’t guarantee the rules will always be followed. So these systems can’t be adopted.

Secondly, in many large private organizations, very large investments have been made in big suite models (which still have long-term subscriptions in place). So, unless the new AI solution enables functionality (regardless of interface) that does not exist in the current platform, or allows a considerable number of seat-based licenses to be dropped on renewal (for a similar or smaller number in the new, cheaper, and more functional app), it’s not even going to be considered. Even if buyers get blinded by the hype, the CFO is going to say no.

But yes, some organizations will be in a position to adopt these systems, echo that SaaS is dead, hail the new Agents / AI as king, and go back to doing the same old thing through a shiny new interface.

So while THE PROPHET might find it fun to pontificate who killed the e-Procurement king, the reality is that no one killed the king, because the king will die a death by a thousand paper cuts, and then his clone will be put on the throne.

Why? Well, using THE PROPHET‘s examples:

  • most intake/orchestration platforms just put lipstick on the pig you are already using (and the pig isn’t very happy about it), and the king you will get is Merkimer’s clone
  • ERP vendors will do what they always do, acquire what their customers are already using (and this time do it in fire sales as investors who paid 10X for suites get desperate for anything back as growth in these suite companies stalls), and the king you will get is the next CEO, who will be picked to clone the current CEO in form and function
  • people will see through the BS of “concierge AI employees” when they falter on more complex purchases, overspend on basic items, and allow Sony PlayStations to be charged to the snack budget (because the only AI employees that perform are those based in India), and they’ll keep the king they have until he nominates his successor (whom he expects to be just like him)
  • the viper strikes from fed up merchants being overloaded with RFIs and RFQs to quote items in their public catalogs at non-discount volumes will be laced with poison, and the only way the king will survive is to back down …
  • data aggregators and intermediaries will thrive, and they’ll help to select the next king, but they won’t be king

The King is Dead. Long Live the King!

This Should Be Obvious But Expert in the Loop …

… is Human in the Loop. Not another (AI) system in the loop, no matter how specialized that system is or how well it is trained!

The future is Augmented Intelligence, NOT Artificial Intelligence (which doesn’t exist and won’t exist until brilliant researchers come up with a few more insights that get us closer to understanding

  1. what intelligence actually is and
  2. how to model it.)

The algorithms might be getting more accurate in average use cases, but the illusion of intelligence, no matter how grand, is still NOT intelligence. (And, even worse, The Wizard of Oz has been replaced by a very poor digital facsimile.)

Done right, Augmented Intelligence will still let your organization reduce its non-value-add tactical workforce by 80% to 90%, because the right tools will enable the strategic experts to be 3, 5, 7, and even 10 times as productive and oversee all the tactical work that needs to be done using an exception-based approach. Under this approach, every instruction that is given forms a rule that allows the system to automatically deal with the same, and similar, exceptions should they arise again in the future, in a predictable and repeatable fashion.
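The exception-based approach can be illustrated with a toy sketch (the rule store and exception types here are hypothetical): the first time an exception type appears it is escalated to the expert, the expert’s resolution is captured as a rule, and the same exception type is then handled automatically ever after.

```python
class ExceptionHandler:
    """Sketch of exception-based automation: each expert resolution
    becomes a reusable rule, so the expert only ever sees an
    exception type once."""

    def __init__(self):
        self.rules = {}  # exception_type -> stored resolution

    def handle(self, exception_type, expert_resolution=None):
        if exception_type in self.rules:
            # Seen before: apply the stored rule, no human needed
            return ("auto", self.rules[exception_type])
        if expert_resolution is None:
            # New exception type, no rule yet: route to the expert
            return ("escalate", None)
        # Capture the expert's instruction as a rule for next time
        self.rules[exception_type] = expert_resolution
        return ("learned", expert_resolution)


handler = ExceptionHandler()
handler.handle("invoice_price_mismatch")                       # escalated
handler.handle("invoice_price_mismatch", "approve if <5% variance")  # learned
handler.handle("invoice_price_mismatch")                       # now automatic
```

The expert workload shrinks over time because each decision is made once and replayed thereafter, which is exactly why a handful of experts can oversee what used to take a floor of tactical staff.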

Instead of having to oversee a team of tactical grunts who just take up space (because they don’t have the education, experience, or raw capability required to make good strategic decisions, manage projects, and identify value), a strategic expert can instead focus her time on value-centric activities and on training a protege or two who possess the right mix of EQ and TQ to grow into, and take over, her expert role (when she moves on and up).

In the near future, there will be no more bodies in seats just to push bits around, because that’s what software does best. Number crunching and thunking. NOT analyzing strategically and thinking. (I admit most humans don’t do that well either, especially these days, because they are too attracted to the principle of least action and/or enjoying the cognitive decline from ChatGPT, but those willing to practice strategic thinking daily still do it way better than a machine ever will based on our current approaches to AI). [And while there might be fewer of us each year that are willing to think, there are still enough of us to get the job done if you let us select tools that work. Not necessarily AI. Tools that work.]