Category Archives: rants

Tomorrow is International Women’s Day.

So prepare for a massive onslaught of posts by companies large and small, from far and wide, that will lavish heaps of praise on their female (identifying) employees and all the hard work they do … and then prepare to hear absolutely nothing about how great these female employees are for the next year!

Right now, there is a lot of pushback in the US against DEI, and rightfully so. The whole point of DEI — equal opportunity and equity in treatment of all individuals from an employment perspective (future, present, and past) — has been replaced with outcome measures that result in hiring the first person who checks the right mix of race-religion-gender (identifying) boxes, not the first person who qualifies for the job. This results not only in poorer organizational performance but in resentment and backlash when qualified candidates are discriminated against because they don’t check certain boxes (and this includes discrimination against more qualified female applicants who would be rejected in favour of a disabled male Asian Zoroastrian because that checks 3 boxes on the DEI bingo card).

But there isn’t nearly as much pushback against virtue signalling for accepted causes, or, even worse, basic decency. And this is a shame, because
* you don’t recognize your female employees by publicly lavishing praise on them one day a year and then completely ignoring them the other 364 days,
* you don’t respect your female employees by paying them less than their male counterparts because “that’s just how it works”, and
* you definitely don’t honour your female employees by claiming they aren’t suitable for C-Suite positions because they want more family time or you expect them to take a career break to raise the next generation.

Instead
* you recognize your female employees by acknowledging them when they do something significant — no one wants lip service,
* you respect your female employees by paying them as much as you’d pay a man for the same job — especially when these female employees are probably more qualified, and
* you honour your female employees by recognizing that they are probably more capable of a C-Suite job than you are! (Remember, they regularly juggle work life and family management — which typically includes their work schedule, their partner’s schedule, and the schedules of 2 to 3 active kids — when you struggle to schedule your own meetings and make your tee time.)

In other words, if all you are going to do is annual virtue signalling, please don’t. It’s disrespectful and I personally can’t wait for the day the next #metoo movement in the corporate world calls out this hypocrisy.

Last year I penned a long post after IWD asking what you are doing TODAY to help women. Of course, there were NO RESPONSES from any of the companies in our space that did multiple women’s day posts and ads. And in the month that followed, during which I scrolled LinkedIn feeds daily for at least 15 minutes looking to see if any of these same corporate feeds recognized a female employee, I came across three posts from three companies doing so — compared to well over 100 posts from over 100 companies claiming to celebrate women on IWD.

I think our resident unwoke/uncancellable anti-virtue signalling crusader Jason Busch needs to take up this cause too! True equality for all! (And no lip service!)

Without Human Smarts, There Will Be No (Usable) AI!

And I’m so happy I’m not the only one pushing this theory. Mr. Stephen Klein recently published a great post on The Age of Pretend.

In the post he notes that:

Everyone assumes AI’s biggest bottleneck is compute. … That assumption is wrong. The real bottleneck … is architecture, specifically, a design decision made in 1945. … The real constraint: the von Neumann bottleneck. Modern computers separate memory and processing. Data has to move back and forth between them. For most software, that’s fine.
For AI, it’s catastrophic.

Some numbers the industry rarely highlights:

  • Accessing off-chip memory consumes ~200× more energy than the computation itself
  • Roughly 80% of Google TPU energy goes to electrical connections, not math
  • A 70-billion-parameter model moves ~140 GB of data just to generate one token

LET THAT SINK IN. We old timers remember “640K ought to be enough for anyone”! The Apollo Guidance Computer — you know, the one that was installed on each Apollo Command Module and Lunar Module in the Apollo missions — had 2K of core RAM and 36K of ROM. Even today, unless you have an iPhone 17, your phone probably only has 128 GB of storage. That means that, even with the processing power of your phone (which dwarfs most computers we old timers have ever owned), you could only move enough data to process ONE token. (Now do you understand why the data center [energy] demands of your Gen-AI chat-bots are destroying the planet? Anyway, we digress …)
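The ~140 GB per-token figure is easy to sanity-check with back-of-the-envelope arithmetic (my sketch, not Stephen’s, and it assumes 16-bit weights and a full read of the model per generated token):

```python
# Back-of-the-envelope check of the ~140 GB per token figure.
# Assumptions (mine): weights stored as FP16 (2 bytes each) and every
# weight read from memory once per generated token, with no caching tricks.
params = 70e9            # 70-billion-parameter model
bytes_per_param = 2      # FP16
gb_per_token = params * bytes_per_param / 1e9
print(f"~{gb_per_token:.0f} GB moved per token")  # → ~140 GB moved per token
```

Different precisions change the constant (FP32 doubles it, 8-bit quantization halves it), but the order of magnitude — and the bottleneck — stays the same.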

This means that (Gen-)AI has hit a wall. Computer architecture supports massive compute at scale and massive storage at scale, but not massive transfers at scale.

So what does this mean?

Do you remember the days of RAM drives? Not only did they speed things up, they kept your machine cooler because, as Stephen noted, it takes less energy to access data in RAM than on disk.

And do you remember the fun of Assembly? (Okay, that’s sarcasm!) Once you learned to maximize register usage (i.e. re-sequencing processing so that you minimized reads from, and writes to, memory), your code got faster still (and machines stayed cooler longer, which was obvious from the lack of noisy fans spinning up).

We’ve known about this problem for decades. (Eight decades, to be exact!) It’s too bad today’s students don’t study the basics and understand that it’s not raw compute power that determines computational speed and energy requirements, it’s data scale — whether the data fits in memory or not, and whether “significant” chunks fit in onboard GPU memory or not. (And, specifically, whether you can scale the data down enough for the efficiency you require.)
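To make that old-school discipline concrete, here’s a minimal sketch (mine, not anything from Stephen’s article) of the basic move: stream data in chunks that fit in memory instead of loading everything at once. The `chunk_bytes` knob is an assumption you’d size to your actual cache/RAM budget.

```python
# Minimal sketch: process a file in fixed-size chunks so working memory
# stays bounded no matter how big the file is. chunk_bytes is an assumed
# tuning knob, not a recommendation.
def stream_sum(path, chunk_bytes=1 << 20):
    total = 0
    with open(path, "rb") as f:
        while block := f.read(chunk_bytes):
            total += sum(block)  # sum of raw byte values in this chunk
    return total
```

The same pattern — bounded working set, sequential access — is exactly what modern accelerators want and exactly what naive “load it all” code throws away.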

But this is still the key point in Stephen’s article:
The next major improvements will likely come from smarter algorithms.

We might need brute force to detect patterns we can’t (yet) see, but the only way to truly advance is to understand those patterns and code optimal, light-weight algorithms that exploit fundamental rules to allow us to process data quickly and efficiently.
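A toy illustration of that point (my example, not Stephen’s): checking whether any two numbers in a list sum to a target. Brute force compares every pair; a one-pass algorithm that exploits set membership gets the same answer with a fraction of the work.

```python
# Brute force: O(n^2) -- compares every pair of elements.
def has_pair_brute(nums, target):
    return any(nums[i] + nums[j] == target
               for i in range(len(nums))
               for j in range(i + 1, len(nums)))

# Exploiting structure: O(n) -- one pass, remembering values seen so far.
def has_pair_smart(nums, target):
    seen = set()
    for n in nums:
        if target - n in seen:
            return True
        seen.add(n)
    return False

nums = [4, 9, 15, 23, 42]
print(has_pair_brute(nums, 19), has_pair_smart(nums, 19))  # → True True
```

Same answer, radically less data touched — which, per the argument above, is where the speed and the energy savings actually come from.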

Until we figure that out, you’ll never have usable AI (and definitely never REAL AI, as not only will it never be intelligent, it will never, ever, get anywhere close).

Your SaaS Vendor Should be TRUSTworthy … But They Shouldn’t Have to Tell You!

In fact, I’d argue it’s a red flag if they do. But let’s back up.

A trustworthy vendor is one that

1) Clients Trust

2) Clients’ Third Parties Trust

3) Suppliers and Partners Trust

4) Third Party Analysts and Consultancies Trust

… and all of these will imply trust in their recommendations and reviews, even if they don’t explicitly say it.

Digging in.

1) They treat you like a client from the first interaction.

The first interaction asks about your needs, not just what you are looking for.

They tailor the demo to your business and categories.

They answer your questions openly and honestly, don’t deflect from features they don’t have today, give you real timelines, and offer workarounds until they deliver.

Once you sign, they guide you through implementation and change management, work beside you to train you, and always respond beyond SLA requirements.

They don’t just focus on immediate results, but on ensuring you level up and could continue to get results without them. They act like a partner.

2) They treat your suppliers and partners like clients too.

They’re always there to help, they make things easier for the supplier than their competitors do, and they prove their value to the point that the suppliers want to use them too.

3) They’re fair to their suppliers and partners. They pay on time. They work with them. They take blame when it’s their fault and not the supplier’s or partner’s … who like working with them more than other companies.

4) Analysts and consultancies happily recommend them even when they’re not (paying to be) on the Map or a preferred partner, sometimes even when they aren’t the most appropriate solution, just because their customers are so much happier.

It becomes so obvious that you don’t even have to ask the question (and you know that if you did, almost every client, supplier, and partner would say they trusted them).

Remember this because
1) if you start seeing too many posts on how a certain company is one you can trust or
2) you have to ask if you can trust the company
you probably can’t!

Companies generally start pushing “trust” when a major competitor does something particularly untrustworthy that becomes public, third party surveys paint them as trustworthy, or they need a new angle to boost sales.

Plus, if you need to ask, something is setting off your internal alarms, and you won’t trust them until you figure out what that is (and they’re not going to tell you).

Either way, play it safe and look elsewhere.

You may still get burned (and I have the scars to prove it), because sh!t happens, boards make changes, investors get ruthless, and world class pathological liars could still slip through the cracks and fool everyone for years, but you decrease your chances of being burned significantly by just looking for vendors who continually do the right thing (instead of just saying they do).

Tired of All the Fake AI Experts?

Want to know how to weed them out and make them go away?

Just ask them to define these terms, off the top of their head, on the spot, without looking anything up, using any tools, or accessing any network connected devices (and definitely no Gen-AI LLM access):

  • computability
  • decidability
  • NP-completeness
  • optimization, inc. local optimization vs. global optimization
  • clustering, with at least 3 different examples
  • curve fitting
  • Fourier transform
  • neural network
  • deep neural network
  • transformer
  • ontology
  • semantic analysis
  • sentiment analysis
  • Boolean logic and theory of logical variables
  • automated reasoning

and if they don’t define every single term with mathematical precision, then tell them to f*ck 0ff because they don’t know a damn thing!

Gen-X is the Smartest Generation!

There have been quite a few posts lately about how Gen-X is going the way of the Dodo because they aren’t adopting Gen-AI (fast enough).

Frankly, I’m quite sick of them.

Not one of these posters has stopped to think that maybe, instead of wasting all their time pushing Gen-AI propaganda, they should have been asking: what does Gen-X know that the rest of the world doesn’t?

Then they’d already have the answers! It’s not about adaptation (or their perception that we can’t adapt). We can still adapt, although we will admit that it takes longer, hurts more, and may require stronger beverages than we needed in our youth.

The thing about Gen-X vs the generations that came later is that, having lived through the end of the Cold War, multiple epidemics, multiple recessions, more generations of technology than you can name, and way more bullsh!t than anyone should have to endure in a lifetime, we’ve had to acquire (just to endure) a wisdom that is sorely lacking in the generations that follow us.

As a result of this, we embrace what works and makes our lives easier overall. We don’t take one step forward to take two steps back and we definitely don’t use tech that introduces more problems or uncertainty than it removes.

Those of us who studied the REAL underpinnings of REAL ML, AR, Semantic Tech, measurable NNs, etc. know that there are places where AI works well, works ok, doesn’t work at all, and actually makes things worse! We don’t use it where it doesn’t make sense and we don’t want tech where the confidence is unknown! It’s that simple. We know that Gen-AI, which is usually synonymous with LLMs, has fundamental flaws at its core. We know, as a result of that, it can never be fully trusted and only works reasonably well in constrained scenarios, with guardrails, where it is trained on focussed data sets.

And we most definitely know that AI Employees Aren’t Real, and that this is pure marketing BS. We also know that “AI Systems” never learn (they aren’t intelligent), they just continuously evolve. We even know some AI systems can evolve beyond us, but that’s irrelevant until we can trust them. We know you simply can’t trust Gen-AI on its own (even LeCun knows that), and most providers haven’t created hybrid systems with guardrails yet!

However, we also know that with modern computing power and available data that “classic” machine learning, semantic technology, (deep) neural networks, and other AI solutions now work better than ever and will most happily use those solutions that we wanted to use a decade ago when computing power was still too expensive and data still too limited.

In short, old dogs can still learn new tricks, but these old dogs have also learned a thing or two from the cats. Mainly, that you shouldn’t learn new tricks unless there are treats for doing so, and even then, the treats better be worth it! Young dogs might have excess energy to waste chasing their own tails, but we don’t. However, in exchange for that energy we gained wisdom. And we’re going to use it!