Category Archives: AI

Open Gen-AI Isn’t Just Dumbing Down Your Business, It’s Killing the Planet!

Open Gen-AI is not just one of the most dangerous technologies we’ve ever invented* (it lulls the uninformed into a false sense of security, and they will come to depend on it to make increasingly critical decisions that could have increasingly disastrous consequences), it’s also about to pose the biggest threat to planetary survival!

As it is, an average data center requires at least 10X the energy consumption of an average American home per square meter, and Open Gen-AI data centers (which require ultra-dense servers with cores running flat out all the time) require even more energy than that. Traditional AI models, including traditional Deep Learning neural nets, can often be optimized post-training to 10% of their original size using techniques developed by MIT researchers (including those described in this article), and are now smaller and more stable than they used to be. Open Gen-AI models, by contrast, just keep expanding exponentially in a futile quest to have them do more, and now require models thousands of times bigger (and more energy-intensive) than traditional models, often to generate output that wouldn’t even net a C grade in a high school class!

Think about that, and read this article by Kate Crawford in Nature on how AI’s environmental costs are soaring. It notes that even OpenAI’s CEO has finally admitted the AI industry is heading towards an energy crisis, as there just isn’t enough power to keep up with the exponential energy demands (ChatGPT already requires more power than 33,000 average American homes; shut down just TWO Open Gen-AI models of that size and you could power an entire small city). Read it before needlessly throwing a solution you don’t understand at a problem you don’t even have, when a better process would eliminate that problem and replace it with a smaller, different problem that traditional technology and a human with just a bit of training could completely solve.
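To put the article’s 33,000-home figure in perspective, here’s the back-of-envelope arithmetic. The per-model figure is the one cited above; the per-home consumption is a rough assumption based on published US averages (about 10,700 kWh per year), so treat the totals as ballpark numbers, not measurements:

```python
# Back-of-envelope: what "more power than 33,000 American homes" means.
AVG_US_HOME_KWH_PER_YEAR = 10_700   # rough US average -- an assumption
HOMES_PER_MODEL = 33_000            # figure cited in the Nature article

model_kwh_per_year = HOMES_PER_MODEL * AVG_US_HOME_KWH_PER_YEAR
print(f"One such model: ~{model_kwh_per_year / 1e6:.0f} GWh/year")

# Shut down two comparable models and you free up power for roughly
# 66,000 homes -- the scale of an entire small city.
homes_freed = 2 * HOMES_PER_MODEL
print(f"Two models shut down: power for ~{homes_freed:,} homes")
```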

Because Open Gen-AI is just NOT ready for prime time. And just because these companies raised billions of dollars on false promises that it would be ready years or decades sooner than AI development has traditionally taken, that doesn’t make it our responsibility to adopt the technology before it’s ready.

* And if a man afraid of nothing acknowledges this, we really should listen! (See this article.)

Forget Consequence Free. I wanna be Gen-AI Free!

To the tune of Consequence Free by Great Big Sea.

Na na-na, na na na-na na na!
Na na-na, na na na-na na na!

Wouldn’t it be great,
if no one ever was redundant?
Wouldn’t it be great,
if we made all the decisions?

I’ve always said,
All the rules are made for bending.
And if I did the right thing,
What’s wrong with that vision?

I wanna be Gen-AI free!
I wanna be where humans always matter.
I wanna be Gen-AI free!
And say: Na na-na, na na na-na na na!
Oh! Na na-na, na na na-na na na!

I could really use,
To lose my ethical conscience.
Cause I’m getting sick,
Of feeling angry all the time.

I won’t abuse it,
Yeah I’ve got the best intentions.
For a little bit of anarchy,
But not the hurting kind.

I wanna be Gen-AI free!
I wanna be where humans always matter.
I wanna be Gen-AI free!
And say: Na na-na, na na na-na na na!
Oh! Na na-na, na na na-na na na!

Oh! I couldn’t sleep at all last night,
‘Cause I had AI on my mind.
Why can’t we leave it all behind,
You know it could be that easy.

It just takes one person
Wouldn’t it be great,
If the CEO made that call
We could do the work,
And we would never get the slip.

Wouldn’t need to worry about illogic or bad data.
We could slip off the edge,
And never worry about the fall.

I wanna be Gen-AI free!
I wanna be where humans always matter.
I wanna be Gen-AI free!
And say: Na na-na, na na na-na na na!
Oh! Na na-na, na na na-na na na!
Oh! Na na-na, na na na-na na na!

the doctor, while an early adopter of SSDO, rule-based RPA, Machine Learning, and other “AI” technologies, is serious here. Gen-AI is garbage at best, bull crap the majority of the time, and toxic waste when it fails. What other technology produces hallucinations, hate speech, and hot (as in stolen) data on a regular basis? What other technology has literally convinced people to commit suicide?

It’s not ready for prime time, and may never be. Go back to carefully constructed NLP solutions on carefully designed data sets that actually work. We don’t need Artificial Idiocy where you need more training in prompting to have a chance at solving a problem than developers need in coding to write a reliable deterministic algorithm that actually solves it. Sure, it seems to work “okay” 90% of the time with normal usage, but what about the 9% of the time it doesn’t, or the 1% where it fails so drastically it could cost you millions of dollars in direct and indirect damages? Is it worth it? (The answer is NO!)
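That 90/9/1 split can be turned into a quick expected-cost check. The dollar figures below are illustrative assumptions (not real data); the point is simply that the rare catastrophic failure dominates the arithmetic:

```python
# Expected cost per use under the 90/9/1 failure split above.
# All dollar values are made-up illustrations.
P_OK, P_MINOR_FAIL, P_CATASTROPHIC = 0.90, 0.09, 0.01
COST_OK = 0                    # works as intended
COST_MINOR = 500               # rework, apologies, manual cleanup
COST_CATASTROPHIC = 2_000_000  # "millions in direct and indirect damages"

expected_cost = (P_OK * COST_OK
                 + P_MINOR_FAIL * COST_MINOR
                 + P_CATASTROPHIC * COST_CATASTROPHIC)
print(f"Expected cost per use: ${expected_cost:,.0f}")
# Note: the 1% catastrophic case alone contributes $20,000 of that total.
```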

Some light reading. More can be found by Googling Gen-AI Fails and similar search terms.

An Absolutely Fabulous Article by Cory Doctorow on the (Gen) AI Bubble …

and how it’s going to pop like every other tech bubble since the first dot com bust!

What Kind of Bubble is AI?
  by Cory Doctorow

Cory doesn’t say it, but he makes it pretty clear that when the bubble pops, like every tech bubble that has come before, there may not be much left to salvage (especially since no one is thinking about what happens when it does pop).

So I’ll clarify:

A lot of people are going to lose a lot of money

(and while stupid investors hyping this bandwagon heading for a cliff probably deserve to lose every penny, all of the pensioners in the pension funds they scammed don’t; so if you run a pension fund, please pull out of ridiculously overvalued Gen AI NOW!)

A lot of people are going to lose their jobs

(and it’s going to be more devastating to the tech sector than this year’s Silicon Valley Bank failure combined with the recession forecast that resulted in over 250K IT jobs being slashed in the USA alone)

A lot of hardware is going to suddenly go idle

and smaller cloud providers are going to go under when the big-name cloud providers suddenly drop their prices to the floor just to keep the revenue coming in (leaving the monopolies of Amazon, Google, and Microsoft controlling most of the servers outside of China and Russia)

The problem is, as Cory clearly lays out, when you take one step back and look at the ridiculous hype from a business/revenue lens, all of the big, exciting use cases for AI are either

a) low dollar [and low-stakes and fault-tolerant] (helping us cheat on our [home]work or generating stock-art for bottom feeders [who won’t pay an artist and don’t mind ripping off the IP from thousands of artists]) or

b) high-dollar but high-stakes and fault-intolerant (self driving cars, radiological cancer detection, worker screening and hiring, etc.)

and when you consider the data center costs of these super-sized models (as these data centers consume MORE energy than a small town), low-dollar AI applications won’t pay the bills and high-dollar AI applications cost MORE to deploy than to just do it the traditional way with an educated and capable human!

E.g. self-driving cars don’t work (and “Cruise” needs to employ 1.5 times as many supervisors as a taxi service would employ drivers to keep their cars, which still hit and critically injure people, relatively safe)

E.g. radiological cancer detection requires a human expert to spend the usual amount of time in diagnosis before consulting the AI, and then, if the AI doesn’t agree, spend that much time again

Not that we’re stopping you from jumping on the (Gen-)AI bandwagon or selling the silicon snake oil that OpenAI and Microsoft AI are selling. We’re just not joining you on the (Gen-)AI bandwagon, as the steering algorithm is defective and it’s heading straight for a very high cliff at a very high speed …

Merry Christmas!

Good Questions to Ask If Procuring Tools With AI, Especially If You’ve Answered the First Question Wrong!

Continuing on with our statement that sometimes you have to listen to a lawyer, a recent article over on Bloomberg Law noted that Companies Should Ask These Risk Questions When Procuring AI Tools and gave us four questions in particular that were good:

Do I Understand the Data

The article gets it right when it says that AI tools are only as robust as the data they’re trained on, and that you need to know what data is collected, how it is collected, and whether all rights are respected when doing so. But what it didn’t get is that the data determines which models and techniques can be used, and which models won’t be effective or reliable. A vendor sales rep will tell you that whatever technique the tool uses is just right for your problem, but the reality is that the sales rep likely doesn’t have anywhere close to the mathematical knowledge to know if it’s appropriate or not, especially since that salesperson may have barely passed remedial junior math (as not all US states require remedial senior math to graduate high school).

Furthermore, there’s no guarantee that even the tech teams know if the model is appropriate. If the company just hired a bunch of developers with maybe a year of university math, gave them access to a bunch of libraries, and all they did was test out various machine learning models until one appeared to work to a sufficient degree of accuracy on the test suites they compiled, that doesn’t mean they understand the model, why it worked, or even the characteristics of the data set that allowed the model to work; it just means they can say that for data sets that look like this, it should work. (But what is “look like”?) You need to understand the data, and find someone who understands which models it is appropriate for.
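The “it worked on data that looks like this” trap is easy to demonstrate. Here’s a minimal sketch (all numbers invented): a linear model fit to data collected from one narrow slice of the input space passes an in-distribution test suite with flying colors, then falls apart the moment real inputs stop resembling the training set:

```python
import numpy as np

rng = np.random.default_rng(0)

# The true process is quadratic, but the team only ever collected data
# on [0, 1], so a linear model "appears to work" on their test suites.
def true_process(x):
    return 3 * x ** 2

x_train = rng.uniform(0, 1, 200)
y_train = true_process(x_train) + rng.normal(0, 0.05, 200)

slope, intercept = np.polyfit(x_train, y_train, 1)  # fit a line

def predict(x):
    return slope * x + intercept

# In-distribution test suite: error is small, so the model "works".
x_in = rng.uniform(0, 1, 200)
err_in = np.mean(np.abs(predict(x_in) - true_process(x_in)))

# Inputs that no longer "look like this": the same model falls apart.
x_out = rng.uniform(4, 5, 200)
err_out = np.mean(np.abs(predict(x_out) - true_process(x_out)))

print(f"mean abs error in-range:     {err_in:.2f}")
print(f"mean abs error out-of-range: {err_out:.2f}")
```

Passing a compiled test suite tells you the model fits that suite, not that anyone understands why, or when it will stop fitting.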

Have I considered Regulatory Scrutiny?

Not only do you have to take note that the Department of Justice, the Federal Trade Commission, and other regulators are focused on whether technology companies and their tools create anti-competitive environments or put consumers at a disadvantage, but many jurisdictions are considering or implementing laws against the use of black-box technology where the output (which determines whether or not a person can get a loan, be insured, or even apply for a job or government program), the logic behind the decisions, and the rules that were applied cannot be explained. You could also be in trouble if the process is fully automated and there isn’t a human in the loop to validate the decision, if the system uses (third-party) data that it has no right to use, or if output generated from protected input data is not itself sufficiently protected and can be reverse-engineered.

Have I Mitigated Security Risks?

It’s not just traditional cyber attacks on the system; it’s well-calculated queries that can slightly perturb the system over time until, after the 10th, 100th, or 1,000th slight, imperceptible perturbation, the system produces an output it never should have given in the first place: approving a ten-million-dollar loan to a high-risk foreigner who will take the money and run, or denying insurance to all people with a genetic defect likely to result in a specific condition down the road that can only be treated by a single drug owned by a single pharmaceutical company that will drive people into bankruptcy for a pill that costs $5 to make.
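A toy sketch of the mechanism, under heavy assumptions: the “model” below is a fixed linear loan scorer that keeps learning from each query (every name and number here is invented for illustration; real attacks target real deployed systems through their query interface alone). Each poisoned query looks like ordinary traffic, and no single one changes the system noticeably, but after enough of them the scorer approves an application it originally, and correctly, denied:

```python
import numpy as np

# Toy online loan scorer: a linear model that does one small SGD update
# per query it receives feedback on. Purely illustrative.
rng = np.random.default_rng(1)
weights = np.array([0.8, -1.2, 0.5])
lr = 0.01

def score(x):
    return float(weights @ x)

target = np.array([0.5, 0.9, 0.4])  # the application we want approved
assert score(target) < 1.0          # correctly denied at the start

# Poisoning loop: each query is a near-duplicate of the target with a
# tiny random jitter, paired with a "good outcome" signal (y = 1.5).
# Each individual update is imperceptible; the drift accumulates.
for _ in range(1000):
    x = target + rng.normal(0, 0.01, 3)     # looks like normal traffic
    error = 1.5 - score(x)                  # squared-loss SGD step
    weights = weights + lr * error * x

print(f"score after poisoning: {score(target):.2f}")  # now clears the 1.0 bar
```

The defense implication is the same as the paragraph above: you have to monitor the model’s behavior over time, not just firewall the servers it runs on.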

Did I include Best Practices in the Contract?

More specifically, did you include the best practices you want followed in the contract? Don’t leave best practices up to the vendor to define however they want. Make sure you cover all necessary security measures, compliance with all government and regulatory guidelines on AI in the regions where you intend to use it (and, if there are none, open standards or guidelines from the UN, the Responsible AI Institute, or something similar), and so on.

And these are great questions, but the first question you should always ask is:

Do I Really Need AI?

And only when you choose the wrong answer, and say yes, do you need to ask the questions above. The reality is that you don’t ever need AI. AI means that you, or the vendor, were just unwilling to take the time to understand the problem and design an appropriate solution. Remember that when you try to jump on the AI bandwagon heading off the cliff (for the sixth decade in a row).

The first jobs lost to OpenAI were at OpenAI? I LOVE IT!

In honour of the first five jobs that were lost to OpenAI, at OpenAI (where it was announced this week that the CEO, president, and 3 senior staff were stepping down and/or being let go).

To the tune of I Love It by Icona Pop (feat. Charli XCX)!

I got this feeling on the winter day when you were gone
You crashed your car into the bridge
I watched, you let it burn
You threw our shit into a bag and pushed it down the stairs
You crashed your car into the bridge

I don’t care, I love it
I don’t care

I got this feeling on the winter day when you were gone
You crashed your car into the bridge
I watched, you let it burn
You threw our shit into a bag and pushed it down the stairs
You crashed your car into the bridge

I don’t care, I love it
I don’t care

I’m on an Earthen road, you’re in the Milky Way
You want me down on earth, but you’re up in space
You’re so damn hard to find, that AI took over
You said it’d take our jobs, but it f*ck3d you over!

I love it
I love it