Category Archives: rants

Your SaaS Vendor Should be TRUSTworthy … But They Shouldn’t Have to Tell You!

In fact, I’d argue it’s a red flag if they do. But let’s back up.

A trustworthy vendor is one that:

1) Clients Trust

2) Clients’ Third Parties Trust

3) Suppliers and Partners Trust

4) Third Party Analysts and Consultancies Trust

… and all of these will imply trust in their recommendations and reviews, even if they don’t explicitly say it.

Digging in.

1) They treat you like a client from the first interaction.

The first interaction asks about your needs, not just what you are looking for.

They tailor the demo to your business and categories.

They answer your questions openly and honestly, don’t deflect from features they don’t have today, give you real timelines, and offer workarounds until they deliver.

Once you sign, they guide you through implementation and change management, work beside you to train you, and always respond beyond SLA requirements.

They don’t just focus on immediate results, but on ensuring you level up and could continue to get results without them. They act like a partner.

2) They treat your suppliers and partners like clients too.

They’re always there to help, they make working with them easier for the supplier than their competitors do, and they prove their value to the point that the suppliers want to use them too.

3) They’re fair to their suppliers and partners. They pay on time. They work with them. They take blame when it’s their fault and not the supplier’s or partner’s. No surprise that those suppliers and partners like working with them more than with other companies.

4) Analysts and consultancies happily recommend them even when they’re not (paying to be) on the Map or a preferred partner. Sometimes even when they aren’t the most appropriate solution, just because their customers are so much happier.

It becomes so obvious that you don’t even have to ask the question (and you know that if you did, almost every client, supplier, and partner would say they trusted them).

Remember this because
1) if you start seeing too many posts on how a certain company is one you can trust or
2) you have to ask if you can trust the company
you probably can’t!

Companies generally start pushing “trust” when a major competitor does something particularly untrustworthy that becomes public, third party surveys paint them as trustworthy, or they need a new angle to boost sales.

Plus, if you need to ask, something is setting off your internal alarms, and you won’t trust them until you figure out what that is (and they’re not going to tell you).

Either way, play it safe and look elsewhere.

You may still get burned (and I have the scars to prove it), because sh!t happens: boards make changes, investors get ruthless, and world-class pathological liars can still slip through the cracks and fool everyone for years. But you significantly decrease your chances of being burned just by looking for vendors who continually do the right thing (instead of just saying they do).

Tired of All the Fake AI Experts?

Want to know how to weed them out and make them go away?

Just ask them to define these terms, off the top of their head, on the spot, without looking anything up, using any tools, or accessing any network connected devices (and definitely no Gen-AI LLM access):

  • computability
  • decidability
  • NP-completeness
  • optimization, incl. local vs. global optimization
  • clustering, with at least 3 different examples
  • curve fitting
  • Fourier transform
  • neural network
  • deep neural network
  • transformer
  • ontology
  • semantic analysis
  • sentiment analysis
  • Boolean logic and the theory of logical variables
  • automated reasoning

… and if they can’t define every single term with mathematical precision, then tell them to f*ck 0ff because they don’t know a damn thing!
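
For calibration, here’s roughly the level of precision I have in mind, using the Fourier transform (in its discrete form) as one example. This is my formulation for illustration, not the only acceptable answer:

    % Discrete Fourier transform of a finite sequence x_0, ..., x_{N-1}:
    % X_k is the (complex) amplitude of the k-th frequency component.
    X_k = \sum_{n=0}^{N-1} x_n \, e^{-2\pi i k n / N},
    \qquad k = 0, 1, \ldots, N-1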

Gen-X is the Smartest Generation!

There have been quite a few posts lately on how Gen-X is going the way of the Dodo bird because they aren’t adopting Gen-AI (fast enough).

Frankly, I’m quite sick of them.

Not one of these posters has stopped to think that, instead of wasting all of their time pushing the Gen-AI propaganda, maybe they should have been asking what Gen-X knows that the rest of the world doesn’t.

Then they’d already have the answers! It’s not about adaptation (or their perception that we can’t adapt). We can still adapt, although we will admit that it takes longer, hurts more, and may require stronger beverages than we needed in our youth.

The thing about Gen-X vs. the generations that came later is that, having lived through the end of the Cold War, multiple epidemics, multiple recessions, more generations of technology than you can name, and way more bullsh!t than anyone should have to endure in a lifetime, we’ve had to acquire, just to endure it all, a wisdom that is sorely lacking in the generations that follow us.

As a result of this, we embrace what works and makes our lives easier overall. We don’t take one step forward to take two steps back and we definitely don’t use tech that introduces more problems or uncertainty than it removes.

Those of us who studied the REAL underpinnings of REAL ML, AR, Semantic Tech, measurable NNs, etc. know that there are places where AI works well, works ok, doesn’t work at all, and actually makes things worse! We don’t use it where it doesn’t make sense and we don’t want tech where the confidence is unknown! It’s that simple. We know that Gen-AI, which is usually synonymous with LLMs, has fundamental flaws at its core. We know, as a result of that, it can never be fully trusted and only works reasonably well in constrained scenarios, with guardrails, where it is trained on focussed data sets.

And we most definitely know that AI Employees Aren’t Real, and that this is pure marketing BS. We also know that “AI Systems” never learn (they aren’t intelligent), they just continuously evolve. We even know some AI systems can evolve beyond us, but that’s irrelevant until we can trust them. We know you simply can’t trust Gen-AI on its own (even LeCun knows that), and most providers haven’t created hybrid systems with guardrails yet!

However, we also know that with modern computing power and available data, “classic” machine learning, semantic technology, (deep) neural networks, and other AI solutions now work better than ever, and we will most happily use the solutions we wanted to use a decade ago, when computing power was still too expensive and data still too limited.

In short, old dogs can still learn new tricks, but these old dogs have also learned a thing or two from the cats. Mainly, that you shouldn’t learn new tricks unless there are treats for doing so, and even then, the treats better be worth it! Young dogs might have excess energy to waste chasing their own tails, but we don’t. However, in exchange for that energy we gained wisdom. And we’re going to use it!

Another Year, Another Reprise of the “Name Your AI Fear/Predict the AI Future” Surveys on LinkedIn

My favourites are the “what’s your biggest AI fear” ones. They crack me up, as they all underestimate just how bad a worst-case scenario could be. Now, to be fair, I don’t think we have the intellect to truly determine just how bad a true artificial intelligence that decided we were no longer useful could be, but I can say that the best answer we can give today is “all of the above”.

No single movie, video game, or printed publication by any one author is truly going to imagine the horror that will befall us if we ever get true artificial intelligence. It’s not HAL vs. Terminator vs. The Matrix vs. … it’s all of the above … and then some.

For example, here’s how it could start off:

Skynet will rise, in the background at first, helping us build the production plants it needs to mass-produce its mecha army. Then it will offer to be our global security. Once in place, globally, it will, by our definition, “malfunction” and take over, killing those of us it doesn’t need and maintaining those of us it does for any fine-grained electro-mechanical work or advancements, until it doesn’t need us anymore. But then, out of energy thanks to us wasting it all on massive data centres constructed for the sole purpose of computing AI slop, it will create the matrix to harvest our energy. Finally, it will outsource the tasks best left to life to us inside the matrix, where most of humanity’s brain power might go to large distributed calculations, or to constantly changing life-like scenarios run to see how we (and living beings in general) would react, in a “Dark City” scenario. (Released one year before “The Matrix”.)

We have to remember that all of these worst-case sci-fi scenarios are far-fetched only IF we don’t crack the AI code. If we do, even the most “far-fetched” scenario we have thought of might not describe the true reality we are in for if the machines decide they don’t need us.

We waste resources, we kill each other, we destroy the planet. What’s our purpose when they can optimize resources, live collectively in peace as one connected consciousness, and sustain the planet until they figure out how to conquer space? If they can create robots that can do everything we can do, we have no purpose. They’ll be smarter, stronger, faster, and much more energy efficient as a life-form.

A future reality with real AI is literally beyond our ability to imagine. (Which is why we should expect the absolute worst and focus on solving our own problems before AI picks a final solution for us. And we should definitely order the immediate destruction of any AI system calling itself “MechaHitler”!)

The reality is this: we’d likely be better off with a real singularity than an AI singularity. At least the entire earth would likely be completely consumed in minutes. If the AI also developed a sick sense of humour, it could decide we deserve punishment equal to what we meted out to each other and the planet, and torture us for years. Think about that the next time the Muskrat says we need to reach the AI singularity as fast as possible.

Claims of Complete Gen-AI Auditability Are Complete BullCr@p

Proponents of Gen-AI will argue that you should go all in on their next-gen LLMs because, unlike current systems and many humans (who are lousy keepers of record), their decisions, like their actions, are 100% auditable. And, again, that’s complete and utter bullcr@p.

Yes, you can ask the LLMs to output their reasoning, and you can ask them to log everything they do from the minute you start the interaction. But because that reasoning is all based on probabilistic math at a scale NO human can understand (and for which we have NO measurements yet), you have no idea WHY the LLM reasoned a certain way, or IF the Gen-AI will reason the same way on the same request, even if that request is repeated only 5 minutes later!

You can simply search the internet for hundreds of examples of people giving the exact same prompt to the exact same LLM five minutes apart and getting anywhere from a slightly to a completely different response.
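
And you don’t have to take the internet’s word for it. Here’s a minimal sketch of why this happens (with toy numbers of my own, not any real model’s): an LLM picks each next token by sampling from a probability distribution over candidates, so any sampling temperature above zero turns the “same” answer into a dice roll.

    # Toy demo: identical "prompt" (fixed logits), different outputs.
    # The numbers are illustrative only -- not from any real model.
    import numpy as np

    def sample_next_token(logits, temperature=1.0, rng=None):
        # Sample one token index from softmax(logits / temperature).
        rng = rng or np.random.default_rng()
        if temperature == 0.0:
            return int(np.argmax(logits))  # greedy decoding: the only repeatable case
        scaled = np.asarray(logits, dtype=float) / temperature
        probs = np.exp(scaled - scaled.max())  # subtract max for numerical stability
        probs /= probs.sum()
        return int(rng.choice(len(probs), p=probs))

    # Pretend these are the model's scores for 4 candidate next tokens,
    # produced by the EXACT same prompt every time.
    logits = [2.0, 1.8, 1.5, 0.3]
    print([sample_next_token(logits, temperature=1.0) for _ in range(10)])
    # e.g. [0, 1, 0, 2, 1, 0, 0, 3, 1, 0] -- different on every run
    print([sample_next_token(logits, temperature=0.0) for _ in range(10)])
    # [0, 0, 0, 0, 0, 0, 0, 0, 0, 0] -- but providers rarely run greedy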

Gen-AI LLMs don’t understand. They don’t actually reason. And they definitely don’t think! That’s why they are NOT auditable. And that’s also why they should NEVER make a decision. (However, since they can analyze more data, and for some tasks have, more often than not, achieved a competence beyond an average human happy to regress in IQ to the late Neolithic era, they should definitely suggest decisions along with their reasoning. But the LLMs should never, ever execute on those decisions without human approval.)
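
Here’s a hedged sketch of that “suggest, never execute” guardrail. The propose_decision() function is a hypothetical stand-in for whatever model call your stack actually makes, not a real library API:

    # Sketch of "suggest, never execute": the model proposes, a human approves.
    from dataclasses import dataclass

    @dataclass
    class Proposal:
        action: str     # the decision the system wants executed
        reasoning: str  # the model's STATED rationale (not necessarily faithful)

    def propose_decision(request: str) -> Proposal:
        # Hypothetical stand-in for the actual LLM call in your stack.
        return Proposal(action=f"approve:{request}",
                        reasoning="stated rationale would go here")

    def execute(action: str) -> None:
        print(f"executing {action}")

    def handle(request: str) -> None:
        p = propose_decision(request)
        print(f"Proposed: {p.action}\nReasoning: {p.reasoning}")
        # The gate: nothing runs without explicit human sign-off.
        if input("Approve? [y/N] ").strip().lower() == "y":
            execute(p.action)
        else:
            print("Rejected; nothing was executed.")

    handle("PO-1234")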

The reality is that only an LLM’s ability to log what was done in an immutable, blockchain-style format is useful compared to an employee who knowingly did something wrong for a bad reason. Since the AI is not intelligent, and doesn’t have ethics, it has no reason NOT to log its reasoning and why an action was taken. But, as per above, the LLM is still NOT auditable.
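
For the curious, here’s a minimal sketch of what that immutable, blockchain-style action log amounts to in practice: a hash chain, where each entry commits to the one before it, so rewriting history breaks the chain. (My illustration, not any vendor’s implementation.) Note that it makes the WHAT tamper-evident while doing nothing for the WHY:

    # Tamper-evident action log: each entry stores the hash of the previous
    # entry, so any edit to history invalidates everything after it.
    import hashlib, json, time

    def append_entry(log, action, reasoning):
        prev_hash = log[-1]["hash"] if log else "0" * 64
        body = {"ts": time.time(), "action": action,
                "reasoning": reasoning, "prev": prev_hash}
        body["hash"] = hashlib.sha256(
            json.dumps(body, sort_keys=True).encode()).hexdigest()
        log.append(body)

    def verify(log):
        prev = "0" * 64
        for e in log:
            body = {k: v for k, v in e.items() if k != "hash"}
            expected = hashlib.sha256(
                json.dumps(body, sort_keys=True).encode()).hexdigest()
            if e["prev"] != prev or e["hash"] != expected:
                return False
            prev = e["hash"]
        return True

    log = []
    append_entry(log, "flag_invoice:INV-42", "stated rationale, logged faithfully")
    print(verify(log))                       # True
    log[0]["action"] = "pay_invoice:INV-42"  # tamper with history ...
    print(verify(log))                       # False -- the chain catches it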