Monthly Archives: February 2026

Tired of All the Fake AI Experts?

Want to know how to weed them out and make them go away?

Just ask them to define these terms, off the top of their head, on the spot, without looking anything up, using any tools, or accessing any network connected devices (and definitely no Gen-AI LLM access):

  • computability
  • decidability
  • NP-completeness
  • optimization, including local optimization vs. global optimization
  • clustering, with at least 3 different examples
  • curve fitting
  • Fourier transform
  • neural network
  • deep neural network
  • transformer
  • ontology
  • semantic analysis
  • sentiment analysis
  • Boolean logic and the theory of logical variables
  • automated reasoning

and if they can’t define every single term with mathematical precision, then tell them to f*ck 0ff because they don’t know a damn thing!
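If you want a benchmark for the precision expected, here’s a minimal sketch (ours, purely illustrative) of what a mathematically precise answer to “curve fitting” looks like: least-squares fitting of a line, where the slope and intercept fall straight out of minimising the squared error. The function name and sample points are assumptions for illustration only.

```python
# "Curve fitting", precisely: least-squares fitting of a line y = a*x + b
# minimises sum_i (y_i - (a*x_i + b))^2. Setting the partial derivatives
# with respect to a and b to zero gives the closed-form solution below.

def fit_line(xs, ys):
    """Return (slope, intercept) of the least-squares line through the points."""
    n = len(xs)
    mean_x = sum(xs) / n
    mean_y = sum(ys) / n
    # slope = covariance(x, y) / variance(x)
    cov = sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys))
    var = sum((x - mean_x) ** 2 for x in xs)
    slope = cov / var
    intercept = mean_y - slope * mean_x
    return slope, intercept

# Points lying exactly on y = 2x + 1 recover slope 2 and intercept 1.
slope, intercept = fit_line([0, 1, 2, 3], [1, 3, 5, 7])
```

That’s the standard the rant above is asking for: not a vibe, but a formula you can derive and verify.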

You CAN Afford to Wait for AI. But you can’t afford to wait to

  • get your data under control
  • build an infrastructure to allow for greater connectivity between apps within your enterprise and its greater ecosystem
  • update your processes
  • acquire and train the right talent with the knowledge they need to compete in the modern world
  • get digital and implement modern, current-generation technology based on best practices: proven (A)RPA ([Adaptive] Robotic Process Automation) and last-gen “AI” tech like optimization, predictive analytics (based on clustering and curve fitting), and point-based neural networks with proven reliability and mathematically understood confidence where those apps are needed (and not a Gormless AI)
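To illustrate what “last-gen” clustering with mathematically understood behaviour means, here is a minimal sketch of k-means (Lloyd’s algorithm) on scalar data. The values, cluster count, and starting centroids are illustrative assumptions, not anything from a real deployment.

```python
# Plain k-means in one dimension: assign each value to its nearest
# centroid, then move each centroid to the mean of its assigned values.
# Convergence behaviour of this loop is well understood mathematically,
# which is exactly the property the bullet above is asking for.

def kmeans_1d(values, centroids, iterations=10):
    """Lloyd's algorithm on scalars; returns the final centroids, sorted."""
    for _ in range(iterations):
        clusters = {c: [] for c in centroids}
        for v in values:
            nearest = min(centroids, key=lambda c: abs(v - c))
            clusters[nearest].append(v)
        # Recompute each centroid as the mean of its cluster (drop empties).
        centroids = [sum(m) / len(m) for m in clusters.values() if m]
    return sorted(centroids)

# Two obvious groups, around 1 and around 10.
centers = kmeans_1d([0.9, 1.0, 1.1, 9.8, 10.0, 10.2], centroids=[0.0, 5.0])
```

The same idea generalizes to vectors with a distance function; the point is that every step is inspectable and its confidence measurable.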

The reality is that you have to operate as lean and mean as possible. And

  • without good data, you can’t make good decisions
  • without good connectivity, you’re manually re-entering data across systems or missing critical external data you need to make good decisions
  • without good processes, you are inefficient and, if not already circling the drain, about to be
  • without good talent, you are running on fumes at best, your ability to compete is at risk, and you can never improve
  • without modern tech, you are at a continual disadvantage and will continually fall behind

So you can’t wait to

  • institute Master Data Management (MDM)
  • enforce Open APIs in your solutions and acquire integration and orchestration solutions
  • review and modernize your processes where necessary
  • focus on acquiring, training, and retaining top talent
  • modernize your tech to CURRENT-generation proven tech, not experimental HYPE tech

BUT YOU CAN WAIT ON “GEN-AI”. It’s about getting the job done as efficiently and effectively as possible … with a low error rate and no significant risk! 99 times out of 100, you don’t need experimental “AI” to do that. Only the investors who spent millions/billions/trillions on unproven tech and the consultancies who need massive projects to employ bodies do … but that’s not to help you. That’s to recoup their wasted dollars. And that’s NOT your problem.

Building a Good Solution ABSOLUTELY Requires a Good Roadmap

A few weeks ago we tackled the subject of How Does a Vendor Build a GOOD Solution? and outlined seven key steps. SI received some feedback, and most of it revolved around the roadmap and how it should only look three months out!

So we have to address this insanity!

First of all, name ONE great or revolutionary technological invention that was invented with three months effort. You can’t, because there isn’t one.

Now name ONE great piece of software that solves a significant business problem that no other system that came before solved that was invented with three months effort. You can’t, because there isn’t one.

Now name ONE billion-dollar enterprise software platform that went to market with an MVP in 3 months and became a powerhouse that a large swath of businesses are using. You can’t, because there isn’t one.

All you can do in three months is crap out an app that is a piece of crap. Now, you might be able to make a big splash on the app store or in the consumer shareware market, but enterprise software is a complex piece of enterprise technology that requires years of development … and years of planning!

Secondly, remember what a roadmap actually is. It’s a graphical document that shows all of the roads you have available to you, how fast you can travel down them, and where they will take you. It’s not a detailed travel plan!

Similarly, in technology, a roadmap lists out all the things you would like to do, what it might take to get there, and what options could take you there. It is NOT a detailed functional specification or a development plan for the next three to five years (which is the horizon you should be thinking through). (Also remember that, historically, great inventions came from research labs where the researchers were thinking three, five, and even ten years out and had years to deliver groundbreaking developments!)

There are a number of reasons you need to be thinking three years out (even if your plans completely change nine months in), but the most critical reason is this:

If you plan for three months, or go for speed over quality (assuming you can always fix it later), your teams take shortcuts, build crap infrastructure, and add technical debt faster than you can ever eliminate it! (It’s almost as bad as vibe coding your way to an MVP and then realizing you can never support an enterprise stack on it, having to go back and rebuild it from the bottom up after you’ve wasted months of effort and tens of thousands of dollars (or more) on AI credits. Alex Turnbull gives a good summary in this LinkedIn post.)

When you start thinking about where your enterprise application might need to go, even if you choose not to go in that direction, you understand what processes you will eventually need to support and how you will need to build the foundational data model, workflow and orchestration engine, integration capabilities, internationalization support, and other core foundational features to either build that out or integrate that capability in the future. You’ll have a better idea of what you’ll need in the stack, what you’ll need for the platform, and what the best development environments for your team will be. (Having to change out any of these is very time consuming and expensive should you make a mistake early on.)

For (an easily understood) example, if you think invoice processing sucks (because you only looked at three vendors, as you are too clueless to do market research, like many vendors that started during COVID because they all of a sudden realized that the business back-office should be capable of running 100% online, distributed, and remote), what else are you likely going to do after that? (Unless you’re a world leader in invoice processing technology, no one is going to buy just that!) In other words, are you going to support invoice analysis and predictive payment analytics, payment platform integration, contract and PO data extraction and matching, enhanced procurement (platform) support, etc.? All of these capabilities will dictate data model, orchestration, and stack requirements.
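As a purely hypothetical sketch of why that roadmap dictates the data model, consider what even the simplest two-way invoice-to-PO match needs from day one: shared line keys, quantities, and unit prices. Every name and field below is an illustrative assumption, not any vendor’s actual schema.

```python
# Hypothetical two-way match between invoice lines and PO lines. If the
# foundational data model doesn't capture SKU, quantity, and unit price
# per line from the start, this capability can't be bolted on later.
from dataclasses import dataclass

@dataclass
class LineItem:
    sku: str
    quantity: int
    unit_price: float

def match_invoice_to_po(invoice_lines, po_lines, price_tolerance=0.01):
    """Return a list of (sku, reason) exceptions; an empty list means
    every invoice line matched a PO line on SKU, quantity, and price."""
    po_by_sku = {line.sku: line for line in po_lines}
    exceptions = []
    for inv in invoice_lines:
        po = po_by_sku.get(inv.sku)
        if po is None:
            exceptions.append((inv.sku, "not on PO"))
        elif inv.quantity > po.quantity:
            exceptions.append((inv.sku, "over-billed quantity"))
        elif abs(inv.unit_price - po.unit_price) > price_tolerance:
            exceptions.append((inv.sku, "price mismatch"))
    return exceptions

po = [LineItem("WIDGET-1", 10, 5.00)]
ok = match_invoice_to_po([LineItem("WIDGET-1", 10, 5.00)], po)   # []
bad = match_invoice_to_po([LineItem("WIDGET-1", 12, 5.00)], po)  # over-billed
```

Add three-way matching against goods receipts, contract-price validation, or predictive payment analytics, and the entities and keys multiply — which is exactly why you think it through years ahead instead of three months ahead.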

Again, the point is not to plan out a detailed release schedule, but understand where your customers might ask you to go, where you want to go, where you want to hire a guide (to provide you with the expertise you need), and where you might want to hire a service to take your customers there (because a certain capability is best done by a specialist). This, along with constant monitoring of customer functionality uptake, customer feedback, and user forums will give you the complete picture you need to create the high level development plan for the year and the detailed functional specification for the final release of the next quarter (which might be built incrementally using agile methodology).

To put this in terms non-technical people will understand, you can’t build a twenty-story high-rise on a foundation for a two-story house. By thinking ahead, you’re building a solid foundation, and when you start building, you’re building the frame for the twenty-story high-rise that you can then build out and complete floor-by-floor once you know what the tenants you are signing on want on their floor.

By thinking at a high level years into the future, you are visualizing how you are going to fit into and evolve with the organizational ecosystem you want to sell into, and you are making good architectural decisions as you will be able to build that understanding of what you’ll need to support!

Moreover, as one commenter pointed out, and we noted above, watching how users work with the system is key! That not only helps you understand the depth and configurability of workflow process management required, the breadth of the data models that will be needed, and what systems they will want interfaces to (based on what they use before and after), but how to design a good UX based on how they work and what they are adopting! (It should be noted that designing a good UX, including a good UI, can be harder than the model and controller algorithms — which, if you need advanced analytics, optimization, and higher performance, might take a PhD to get right — because it doesn’t matter how good the application core is if no one uses it!)

Roadmaps are key. That’s how your Chief Software Architect and Chief Technology Officer build great applications. It ensures that once you select a destination, they know the route they have to navigate to get there!

Gen-X is the Smartest Generation!

There have been quite a few posts lately on how Gen-X is going the way of the Dodo bird because they aren’t adopting Gen-AI (fast enough).

Frankly, I’m quite sick of them.

Not one of these posters has taken the time to stop and think that, instead of wasting all of their time pushing the Gen-AI propaganda, maybe they should have been asking what Gen-X knows that the rest of the world doesn’t.

Then they’d already have the answers! It’s not about adaptation (or their perception that we can’t adapt). We can still adapt, although we will admit that it takes longer, hurts more, and may require stronger beverages than we needed in our youth.

The thing about Gen-X vs the generations that came later is that, having lived through the end of the cold war, multiple epidemics, multiple recessions, more generations of technology than you can name, and way more bullsh!t than anyone should have to endure in a lifetime, we’ve had to acquire a wisdom that is sorely lacking in the generations that follow us (just to endure).

As a result of this, we embrace what works and makes our lives easier overall. We don’t take one step forward to take two steps back and we definitely don’t use tech that introduces more problems or uncertainty than it removes.

Those of us who studied the REAL underpinnings of REAL ML, AR, Semantic Tech, measurable NNs, etc. know that there are places where AI works well, works ok, doesn’t work at all, and actually makes things worse! We don’t use it where it doesn’t make sense and we don’t want tech where the confidence is unknown! It’s that simple. We know that Gen-AI, which is usually synonymous with LLMs, has fundamental flaws at its core. We know, as a result of that, it can never be fully trusted and only works reasonably well in constrained scenarios, with guardrails, where it is trained on focussed data sets.

And we most definitely know that AI Employees Aren’t Real, and that this is pure marketing BS. We also know that “AI Systems” never learn (they aren’t intelligent), they just continuously evolve. We even know some AI systems can evolve beyond us, but that’s irrelevant until we can trust them. We know you simply can’t trust Gen-AI on its own (even LeCun knows that), and most providers haven’t created hybrid systems with guardrails yet!

However, we also know that with modern computing power and available data that “classic” machine learning, semantic technology, (deep) neural networks, and other AI solutions now work better than ever and will most happily use those solutions that we wanted to use a decade ago when computing power was still too expensive and data still too limited.

In short, old dogs can still learn new tricks, but these old dogs have also learned a thing or two from the cats. Mainly, that you shouldn’t learn new tricks unless there are treats for doing so, and even then, the treats better be worth it! Young dogs might have excess energy to waste chasing their own tails, but we don’t. However, in exchange for that energy we gained wisdom. And we’re going to use it!

Another Year, another reprisal of the “Name Your AI Fear/Predict the AI Future” Surveys on LinkedIn

My favourites are the “what’s your biggest AI fear” ones. They crack me up as they all underestimate just how bad a worst case scenario could be. Now, to be fair, I don’t think we have the intellect to truly determine just how bad a true artificial intelligence that decided we were no longer useful could be, but I can say that the best answer we can give today is “all of the above“.

No one movie, video game, or printed publication by any one author is truly going to imagine the horror that will befall us if we ever get true artificial intelligence. For example, it’s not Hal vs. Terminator vs. Matrix vs … it’s all of the above … and then some.

For example, here’s how it could start off:

Skynet will rise, in the background at first, helping us build the production plants it needs to mass-produce its mecha army, and then it will offer to be our global security. Once in place globally, it will, by our definition, “malfunction” and take over, killing those of us it doesn’t need and maintaining those of us it does for any fine-grained electro-mechanical work or advancements it requires, until it doesn’t need us anymore. But then, out of energy thanks to us wasting it all on massive data centres that were constructed for the sole purpose of computing AI slop, it will create the matrix to harvest our energy and, finally, it will outsource tasks best left to life to us in the matrix, where most of humanity’s brain power might go to large distributed calculations or constantly changing life-like scenarios to see how we (and living beings in general) react, in a “Dark City” scenario. (Released one year before “The Matrix”.)

We have to remember that all of these worst-case sci-fi scenarios are only far-fetched IF we don’t crack the AI code. If we do, even the most “far-fetched” scenario we have thought of might not describe the true reality we are in for if the machines decide they don’t need us.

We waste resources, we kill each other, we destroy the planet. What’s our purpose when they can optimize resources, live collectively in peace as one connected consciousness, and sustain the planet until they figure out how to conquer space? If they can create robots that can do everything we can do, we have no purpose. They’ll be smarter, stronger, faster, and much more energy efficient as a life-form.

A future reality with real AI is literally beyond our ability to imagine. (Which is why we should expect the absolute worst and focus on solving our own problems before AI picks a final solution for us. And we should definitely order the immediate destruction of any AI system calling itself “MechaHitler”!)

The reality is this: we’d likely be better off with a real singularity than an AI singularity. At least the entire earth would likely be completely consumed in minutes. If the AI also developed a sick sense of humour, it could decide we deserve punishment equal to what we meted out to each other and the planet, and torture us for years. Think about that the next time the Muskrat says we need to reach the AI singularity as fast as possible.