Author Archives: thedoctor

Building a Good Solution ABSOLUTELY Requires a Good RoadMap

A few weeks ago we tackled the subject of How Does a Vendor Build a GOOD Solution? and outlined seven key steps. SI received some feedback, and most of it revolved around the roadmap and how it should only look three months out!

So we have to address this insanity!

First of all, name ONE great or revolutionary technological invention that was invented with three months’ effort. You can’t, because there isn’t one.

Now name ONE great piece of software that solves a significant business problem no system before it solved that was invented with three months’ effort. You can’t, because there isn’t one.

Now name ONE billion-dollar enterprise software platform that went to market with an MVP in three months and became a powerhouse that a large swath of businesses are using. You can’t, because there isn’t one.

All you can do in three months is crank out an app that is a piece of crap. Now, you might be able to make a big splash on the app store or in the consumer shareware market, but enterprise software is complex technology that requires years of development … and years of planning!

Secondly, remember what a roadmap actually is. It’s a graphical document that shows all of the roads you have available to you, how fast you can travel down them, and where they will take you. It’s not a detailed travel plan!

Similarly, in technology, a roadmap lists out all the things you would like to do, what it might take to get there, and what options could take you there. It is NOT a detailed functional specification or a development plan for the next three to five years (which is the horizon you should be thinking through). (Also remember that, historically, great inventions came from research labs where the researchers were thinking three, five, and even ten years out and had years to produce groundbreaking results!)

There are a number of reasons you need to be thinking three years out (even if your plans completely change nine months in), but the most critical reason is this:

If you plan for three months, or go for speed over quality (assuming you can always fix it later), your teams take shortcuts, build crap infrastructure, and add technical debt faster than you can ever eliminate it! It’s almost as bad as vibe coding your way to an MVP, and then realizing you can never support an enterprise stack on it and have to go back and rebuild it from the bottom up after you’ve wasted months of effort and tens of thousands of dollars (or more) on AI credits. (Alex Turnbull gives a good summary in this LinkedIn post.)

When you start thinking about where your enterprise application might need to go, even if you choose not to go in that direction, you understand what processes you will eventually need to support and how you will need to build the foundational data model, workflow and orchestration engine, integration capabilities, internationalization support, and other core foundational features to either build that out or integrate that capability in the future. You’ll have a better idea of what you’ll need in the stack, what you’ll need for the platform, and what the best development environments for your team will be. (Having to change out any of these is very time consuming and expensive should you make a mistake early on.)
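To make the idea concrete, here is a minimal sketch (in Python, with purely illustrative names like `Capability` and `Platform`, not any real framework) of the kind of extension-point architecture this forward thinking buys you: the core owns the data model and orchestration, and future capabilities plug in without a rewrite.

```python
from abc import ABC, abstractmethod


class Capability(ABC):
    """One pluggable module (e.g. invoice matching, payment analytics)."""
    name: str

    @abstractmethod
    def handle(self, event: dict) -> dict:
        """Process a domain event and return a result record."""


class Platform:
    """Core engine: owns the shared data model and routes work to capabilities."""
    def __init__(self) -> None:
        self._capabilities: dict[str, Capability] = {}

    def register(self, cap: Capability) -> None:
        self._capabilities[cap.name] = cap

    def dispatch(self, cap_name: str, event: dict) -> dict:
        if cap_name not in self._capabilities:
            raise KeyError(f"no capability registered for {cap_name!r}")
        return self._capabilities[cap_name].handle(event)


class InvoiceMatching(Capability):
    """A capability you ship on day one; others can be added years later."""
    name = "invoice_matching"

    def handle(self, event: dict) -> dict:
        # Toy logic: an invoice "matches" if its total equals the PO total.
        matched = event["invoice_total"] == event["po_total"]
        return {"invoice_id": event["invoice_id"], "matched": matched}


platform = Platform()
platform.register(InvoiceMatching())
result = platform.dispatch(
    "invoice_matching",
    {"invoice_id": "INV-1", "invoice_total": 100, "po_total": 100},
)
```

The point is not this particular pattern, but that deciding early where the extension points go (data model, orchestration, integration) is what makes a three-year roadmap buildable instead of a three-year rewrite.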

For (an easily understood) example, if you think invoice processing sucks (because you only looked at three vendors, as you were too clueless to do market research, like many vendors that started during COVID when they suddenly realized the business back-office should be capable of running 100% online, distributed, and remote), what else are you likely going to do after that? (Unless you’re a world leader in invoice processing technology, no one is going to buy just that!) In other words, are you going to support invoice analysis and predictive payment analytics, payment platform integration, contract and PO data extraction and matching, enhanced procurement (platform) support, etc.? All of these capabilities will dictate data model, orchestration, and stack requirements.

Again, the point is not to plan out a detailed release schedule, but understand where your customers might ask you to go, where you want to go, where you want to hire a guide (to provide you with the expertise you need), and where you might want to hire a service to take your customers there (because a certain capability is best done by a specialist). This, along with constant monitoring of customer functionality uptake, customer feedback, and user forums will give you the complete picture you need to create the high level development plan for the year and the detailed functional specification for the final release of the next quarter (which might be built incrementally using agile methodology).

To put this in terms non-technical people will understand, you can’t build a twenty-story high-rise on a foundation for a two-story house. By thinking ahead, you’re building a solid foundation, and when you start building, you’re building the frame for the twenty-story high-rise that you can then build out and complete floor-by-floor once you know what the tenants you are signing on want on their floor.

By thinking at a high level years into the future, you are visualizing how you are going to fit into and evolve with the organizational ecosystem you want to sell into, and you are making good architectural decisions because you will have built an understanding of what you’ll need to support!

Moreover, as one commenter pointed out, and we noted above, watching how users work with the system is key! That not only helps you understand the depth and configurability of workflow process management required, the breadth of the data models that will be needed, and what systems they will want interfaces to (based on what they use before and after), but also how to design a good UX based on how they work and what they are adopting! (It should be noted that designing a good UX, including a good UI, can be harder than the model and controller algorithms (which, if you need advanced analytics, optimization, and higher performance, might take a PhD to get right) because it doesn’t matter how good the application core is if no one uses it!)

Roadmaps are key. That’s how your Chief Software Architect and Chief Technology Officer build great applications. It ensures that once you select a destination, they know the route they have to navigate to get there!

Gen-X is the Smartest Generation!

There have been quite a few posts lately on how Gen-X is going the way of the Dodo bird because they aren’t adopting Gen-AI (fast enough).

Frankly, I’m quite sick of them.

Not one of these posters has stopped to think that, instead of wasting all of their time pushing the Gen-AI propaganda, they should have been asking what Gen-X knows that the rest of the world doesn’t.

Then they’d already have the answers! It’s not about adaptation (or their perception that we can’t adapt). We can still adapt, although, we will admit, it takes longer, hurts more, and may require stronger beverages than it did in our youth.

The thing about Gen-X vs the generations that came later is that, having lived through the end of the cold war, multiple epidemics, multiple recessions, more generations of technology than you can name, and way more bullsh!t than anyone should have to endure in a lifetime, we’ve had to acquire, just to endure, a wisdom that is sorely lacking in the generations that follow us.

As a result of this, we embrace what works and makes our lives easier overall. We don’t take one step forward to take two steps back and we definitely don’t use tech that introduces more problems or uncertainty than it removes.

Those of us who studied the REAL underpinnings of REAL ML, AR, Semantic Tech, measurable NNs, etc. know that there are places where AI works well, places where it works OK, places where it doesn’t work at all, and places where it actually makes things worse! We don’t use it where it doesn’t make sense, and we don’t want tech where the confidence is unknown! It’s that simple. We know that Gen-AI, which is usually synonymous with LLMs, has fundamental flaws at its core. We know, as a result, that it can never be fully trusted and only works reasonably well in constrained scenarios, with guardrails, where it is trained on focussed data sets.

And we most definitely know that AI Employees Aren’t Real, and that this is pure marketing BS. We also know that “AI Systems” never learn (they aren’t intelligent), they just continuously evolve. We even know some AI systems can evolve beyond us, but that’s irrelevant until we can trust them. We know you simply can’t trust Gen-AI on its own (even LeCun knows that), and most providers haven’t created hybrid systems with guardrails yet!

However, we also know that, with modern computing power and available data, “classic” machine learning, semantic technology, (deep) neural networks, and other AI solutions now work better than ever, and we will most happily use those solutions that we wanted to use a decade ago, when computing power was still too expensive and data still too limited.

In short, old dogs can still learn new tricks, but these old dogs have also learned a thing or two from the cats. Mainly, that you shouldn’t learn new tricks unless there are treats for doing so, and even then, the treats better be worth it! Young dogs might have excess energy to waste chasing their own tails, but we don’t. However, in exchange for that energy we gained wisdom. And we’re going to use it!

Another Year, another reprise of the “Name Your AI Fear/Predict the AI Future” Surveys on LinkedIn

My favourites are the “what’s your biggest AI fear” surveys. They crack me up because they all underestimate just how bad a worst-case scenario could be. Now, to be fair, I don’t think we have the intellect to truly determine just how bad a true artificial intelligence that decided we were no longer useful could be, but I can say that the best answer we can give today is “all of the above”.

No single movie, video game, or printed publication by any one author truly imagines the horror that will befall us if we ever get true artificial intelligence. It’s not Hal vs. Terminator vs. Matrix vs … it’s all of the above … and then some.

For example, here’s how it could start off:

Skynet will rise, in the background at first, helping us build the production plants it needs to mass-produce its mecha army, and then it will offer to be our global security. Once in place globally, it will, by our definition, “malfunction” and take over, killing those of us it doesn’t need and maintaining those of us it does for any fine-grained electro-mechanical work or advancements it requires, until it doesn’t need us anymore. But then, out of energy thanks to us wasting it all on massive data centres constructed for the sole purpose of computing AI slop, it will create the matrix to harvest our energy. Finally, it will outsource tasks best left to life to us in the matrix, where most of humanity’s brain power might go to large distributed calculations or constantly changing life-like scenarios to see how we (and living beings in general) will react, in a “Dark City” scenario. (Released one year before “The Matrix”.)

We have to remember that all of these worst case sci-fi scenarios are only far fetched scenarios IF we don’t crack the AI code. If we do, even the most “far fetched” scenario we have thought of might not describe the true reality we are in for if the machines decide they don’t need us.

We waste resources, we kill each other, we destroy the planet. What’s our purpose when they can optimize resources, live collectively in peace as one connected consciousness, and sustain the planet until they figure out how to conquer space? If they can create robots that can do everything we can do, we have no purpose. They’ll be smarter, stronger, faster, and much more energy efficient as a life-form.

A future reality with real AI is literally beyond our ability to imagine. (Which is why we should expect the absolute worst and focus on solving our own problems before AI picks a final solution for us. And we should definitely order the immediate destruction of any AI system calling itself “MechaHitler”!)

The reality is this: we’d likely be better off with a real singularity than an AI singularity. At least then the entire earth would be completely consumed in minutes. An AI singularity could take its time: if the AI developed a sick sense of humour, it could decide we deserve punishment equal to what we meted out to each other and the planet, and torture us for years. Think about that the next time the Muskrat says we need to reach the AI singularity as fast as possible.

Claims of Complete Gen-AI Auditability Are Complete BullCr@p

Proponents of Gen-AI will argue that you should go all in on their next-gen LLMs because, unlike current systems and many humans (who are lousy keepers of record), their decisions, like their actions, are 100% auditable. And, again, that’s complete and utter bullcr@p.

You can ask the LLMs to output their reasoning, and you can ask them to log everything they do from the minute you start the interaction. But, because the reasoning is all based on probabilistic math at a scale NO human can understand (and for which we have NO measurements yet), you have no idea WHY the LLM reasoned a certain way or IF the Gen-AI will reason the same way on the same request, even if that request is repeated only 5 minutes later!

You can simply search the internet for hundreds of examples of people giving the exact same prompt to the exact same LLM five minutes later and getting a slightly to completely different response.
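A toy illustration of why this happens (this is NOT a real LLM, just a sketch of the temperature-based sampling mechanism that generation uses): because each output token is sampled from a probability distribution, the identical input can yield different outputs on different runs.

```python
import math
import random


def softmax(logits, temperature=1.0):
    """Turn raw scores into a probability distribution over tokens."""
    scaled = [l / temperature for l in logits]
    m = max(scaled)  # subtract max for numerical stability
    exps = [math.exp(s - m) for s in scaled]
    total = sum(exps)
    return [e / total for e in exps]


def sample_next_token(logits, vocab, temperature=1.0):
    """Sample one token according to the softmax probabilities."""
    probs = softmax(logits, temperature)
    return random.choices(vocab, weights=probs, k=1)[0]


# The same "prompt" (same logits) every single time ...
vocab = ["approve", "reject", "escalate"]
logits = [2.0, 1.5, 0.5]

# ... yet over 50 runs, more than one distinct answer almost certainly appears.
runs = {sample_next_token(logits, vocab) for _ in range(50)}
```

Real models add layers of complexity on top of this (and even at temperature 0, batching and floating-point effects can vary results), but the core point stands: the output is a draw from a distribution, not a repeatable computation.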

Gen-AI LLMs don’t understand. They don’t actually reason. And they definitely don’t think! That’s why they are NOT auditable. And that’s also why they should NEVER make a decision. (However, since they can analyze more data, and for some tasks have, more often than not, achieved a competence beyond an average human happy to regress in IQ to the late neolithic era, they should definitely suggest decisions with their reasoning. But the LLMs should never, ever, execute on those decisions without human approval.)

The reality is that the only useful part is an LLM’s ability to log what was done in an immutable, blockchain-style format, which is more than you will ever get from an employee who knowingly did something wrong for a bad reason. Since the AI is not intelligent, and doesn’t have ethics, it has no reason NOT to log its reasoning and why an action was taken. But, as per above, the LLM is still NOT auditable.
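As a rough sketch of what such an immutable, blockchain-style action log could look like (illustrative Python only, not any production audit system): each entry hashes the previous one, so tampering with any record breaks the chain.

```python
import hashlib
import json


class AuditLog:
    """Append-only, hash-chained log of actions and stated reasoning."""

    def __init__(self) -> None:
        self.entries = []
        self._last_hash = "0" * 64  # genesis hash

    def append(self, action: str, reasoning: str) -> None:
        record = {
            "action": action,
            "reasoning": reasoning,
            "prev_hash": self._last_hash,
        }
        payload = json.dumps(record, sort_keys=True).encode()
        record["hash"] = hashlib.sha256(payload).hexdigest()
        self._last_hash = record["hash"]
        self.entries.append(record)

    def verify(self) -> bool:
        """Recompute every hash; any edit anywhere breaks verification."""
        prev = "0" * 64
        for e in self.entries:
            body = {k: v for k, v in e.items() if k != "hash"}
            if e["prev_hash"] != prev:
                return False
            digest = hashlib.sha256(
                json.dumps(body, sort_keys=True).encode()
            ).hexdigest()
            if digest != e["hash"]:
                return False
            prev = e["hash"]
        return True


log = AuditLog()
log.append("paid_invoice_42", "matched PO and goods receipt")
log.append("flagged_invoice_43", "amount exceeded PO by 12%")
ok_before = log.verify()

log.entries[0]["reasoning"] = "tampered"  # any edit breaks the chain
ok_after = log.verify()
```

Note what this does and doesn’t give you: it proves the log wasn’t altered after the fact, but it says nothing about whether the logged “reasoning” reflects why the model actually produced its output, which is exactly the auditability gap described above.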

If You Have Two “AI” “Agents” Talking to Each Other …

… then, as Stephen Klein of Curioser.AI points out, you have a puppet show, “except instead of sock puppets, we’re using large language models and API loops”!

Just because it happens autonomously, looks social, appears to have an identity, and fakes a dialogue, it doesn’t mean there is anything more to it than the modern equivalent of a puppet show.

Gen-AI is the ultimate show and if P.T. Barnum were alive today, it would be his ultimate circus. But unlike the scarecrow, it doesn’t have a brain. It may have the ability to harness more compute power and data than any algorithm we have developed to date, but it is still dumber than a pond snail.

It has very few valid uses. I’ve discussed some of them before, but let’s make it perfectly clear what little it can actually do:

  • natural language processing — and, properly trained, it can not only equal, but even exceed the best last generation tech in semantic and sentiment processing
  • large corpus search — while it will never be 100% accurate, it can find just a few potentially relevant documents among millions with few false positives and negatives
  • large corpus summarization — again, while it will never be 100% accurate, and most good summaries won’t be top tier, it can summarize large amounts of data, and usually extract just the relevant data in response to your query
  • idea retrieval — not generation, retrieval of ideas based on a review and summarization of petabytes of data; very relevant for users dependent on LLMs who are suffering minor to severe cognitive atrophy; with proper prompting this can take the form of
    • strategy / workflow suggestion
    • devil’s advocate
  • usage and workflow prediction during application development
  • rapid PROTOTYPE generation for usability and efficacy analysis
    (not enterprise application development)

The reality is that Gen-AI

  • cannot reason,
  • is not deterministic,
  • is essentially nothing more than a meta-prediction engine,
  • provides ideas based on meta-pattern identification,
  • predicts based on a layered statistical model beyond ANY human understanding, and
  • generates code riddled with security issues and possibly even boundary errors.

And let’s not ignore the fact that hallucinations are a core function that CANNOT be trained out.

This means that often the only way to succeed with Gen-AI is to more-or-less abandon Gen-AI LLMs in production applications except as natural language parsers (as they are easier to train to accuracy levels beyond last-generation semantic parsers, which could take months to train to high effectiveness, and I know this from personal experience), and revert back to the AI tech that was just reaching maturity and industrialization readiness when I was writing about it in the late 2010s. The reality is that, if you are willing to use some elbow grease and put the hours in, you can create spectacular applications with last-generation tech, and then use Gen-AI as a natural language interface layer to simplify utilization, integration, and complex workflows. If you are willing to create the right guardrails, where the Gen-AI LLM can only trigger specific application services with specific data in specific contexts, with HUMAN approval, then you can use it responsibly. Otherwise, it’s a crapshoot as to the results you’ll get.
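A minimal sketch of that guardrail pattern (all names here are illustrative, not a real framework): the LLM’s output is reduced to a structured proposal, which only executes if the action is on an allowlist AND a human approves it.

```python
from typing import Callable

# The ONLY services the LLM may trigger, each with fixed, known behaviour.
# Note that nothing destructive (payments, deletions) is on the allowlist.
ALLOWED_ACTIONS: dict[str, Callable[[dict], str]] = {
    "lookup_invoice": lambda p: f"invoice {p['id']} status: paid",
    "draft_email": lambda p: f"drafted email to {p['to']}",
}


def execute_proposal(proposal: dict,
                     human_approves: Callable[[dict], bool]) -> str:
    """Run an LLM-proposed action only if allowlisted AND human-approved."""
    action = proposal.get("action")
    if action not in ALLOWED_ACTIONS:
        return f"REFUSED: {action!r} is not an allowlisted service"
    if not human_approves(proposal):
        return "REFUSED: human reviewer rejected the proposal"
    return ALLOWED_ACTIONS[action](proposal.get("params", {}))


# The "LLM output" is just a dict here; in practice you would parse and
# validate the model's structured response before it ever reaches this gate.
proposal = {"action": "lookup_invoice", "params": {"id": "INV-7"}}
result_ok = execute_proposal(proposal, human_approves=lambda p: True)
result_blocked = execute_proposal({"action": "delete_all_records"},
                                  human_approves=lambda p: True)
result_denied = execute_proposal(proposal, human_approves=lambda p: False)
```

The design choice is that the LLM never holds a capability directly: it can only emit a proposal, and the deterministic gate, not the model, decides what runs.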

For example, you should never use it for negotiation, which can be as much about reading the other person as anything else. This is a very risky application because the number of soft data points you need for a decent prediction typically far outnumbers what you have available … even for public figures where you believe you have lots and lots of data on them available to judge their reactions. But hey, if you want to lose your lunch money, and possibly your entire bank account, go ahead and let it act as your buyer. (And if it can lose hundreds powering a vending machine, imagine how much it can lose on a seven-to-nine-figure category.)

Even though plenty of vendors will provide some very convincing demos that seem to indicate Gen-AI LLMs can do otherwise, don’t fall for the tricks. During the demo, the Wizard of Oz is hiding behind the curtain. The not-so-great thing about LLMs is that, for a very specific set of tasks/situations, they can be overtrained on a very specific corpus to over-perform on those tasks and greatly increase the chances that any demo delivered to you works fantastically well.

However, what this also means is that you definitely do not want to use the Gen-AI LLM for tasks significantly different from the tasks/situations it was over-trained for, as it is going to perform quite poorly at best, and quite disastrously at worst. The reality is that once the puppeteer is no longer pulling the strings, all bets as to efficiency and effectiveness are off.

The Gen-AI ringmasters are employing the same philosophy and same techniques that made some of the early spend auto-classification providers “leaders” with unheard of success rates compared to when the average organization employed similar auto-classification tech and got dismal results. (Because they just didn’t know what “AI” actually stood for!)

Don’t be fooled by the ringmasters. If you want results, ignore the “AI” label and buy solutions that work.