
A Very Brief History of “Safe” American Inventions and Products

More specifically, a brief history of inventions and products developed, or (primarily) adopted, in the USA and sold as perfectly “Safe” for public use when they were anything but, from the late 1800s to the present day.

Asbestos: large-scale mining began in the late 1800s, when manufacturers and builders decided it was a great thermal and electrical insulator; its adverse effects on human health were not widely recognized and acknowledged until the (late) 1970s. Even today, exposure is still the #1 cause of work-related deaths in the world (with up to 15K people dying annually in the US from asbestos-related disease)

Aspirin: as per our previous post, invented in 1897 and available over the counter in 1915, it was heavily promoted as a cure-all from the 1920s through the 1940s and may have cost us over a hundred thousand lives due to overprescription during the Spanish Flu pandemic alone

Cocaine: from the late 1880s through the early 1910s, physicians were big fans of the Victorian wonder drug (as per this Lloyd Manufacturing ad archived on the NIH site), as it was the first effective local anesthetic the western world knew about (and was endorsed by the Surgeon-General of the US Army in 1886), although its real popularity was with the public, with an estimated 200,000 cocaine addicts in the US by 1902. Still, it was 1914 before it was restricted to prescription use, 1922 before tight regulations were put in place, and likely the late 1940s before prescription and dispensation finally came to an end. Moreover, it was generally viewed as harmless and non-addictive until crack emerged in 1985 (even though the number of cocaine-related deaths in the US had climbed to 2 per 1,000 by 1981)

DDT (this one is particularly relevant to Gen-Z, who are fully on board the Gen-AI hype train): developed in the 1940s as the first modern synthetic insecticide. Gen Z’s grandparents and great-grandparents used to run through the DDT clouds that were sprayed in the streets of American cities and towns from the 1940s through the 1960s, as the first health risks were not reported until roughly 1962, when Rachel Carson published Silent Spring, and it wasn’t until 1972 that the US banned it for its adverse effects on human health (as well as the environment). To this day, we’re still not sure how many deaths it has contributed to, although the UN estimates 200K people globally still die each year from toxic exposure to pesticides, of which DDT was the first and the precursor to many newer derivatives (Source)

PFAS, incl. PTFE (Teflon): developed by DuPont in 1938 (and later spun off into Chemours), PTFE found use as a lubricant and as the non-stick coating for pans, and was produced using PFOA (C8), which we now know is carcinogenic. We should have known much sooner, but there was a massive PFAS cover-up: PFOA was only classified as carcinogenic in 2013, even though we should have known by the late 1990s. These chemicals still aren’t banned (even though legislation was proposed last year to phase them out over the next decade), and because of the cover-ups and the lack of studies until recent times, we still don’t know how deadly this was, and is, but estimates are that PFAS likely killed 600K people annually in the USA between 1999 and 2015, and 120K annually after that (Source) … WOW!

Tobacco: in the 1950s, cigarettes were advertised as good for you, with Doctor (Camel Advertisement) and Dentist (Viceroy Advertisement) recommendations right on the ads! Despite the fact that the health risks were known since 1950 (when Wynder and Graham published the first major epidemiological study showing an association between smoking and lung cancer), minors in the USA could still buy cigarettes until 2009 … even though tobacco likely killed over 100 Million people globally in the 1900s (Source)

etc.

We could go on, but the point is this: like most cultures, the USA is not good at picking winning technology that is safe for everyday use, or at least safe enough under appropriately designated usage conditions.

There’s a reason that most countries have harsh regulations on the introduction of new consumer products and technologies that US lobbyists and CEOs scream about, and that’s because more mature countries (which have been around longer than a mere 249 years) understand that no matter how safe something seems, every advancement comes at a cost, every invention comes with a risk, and every convenience comes at a price — and until we know what we are paying, when we need to pay it, and how much we are going to pay, we shouldn’t rush in head first with blinders on.

And while we might still get it wrong, the reality is that we’re more likely to get it right if we take our time and properly evaluate a new technology or advancement first. And even if we get it partially wrong, as in the case of Aspirin, at least the gain should outweigh the cost. For example, even though it can be argued Aspirin was rushed to market, when used in proper doses, the side effects for the vast majority of the population are typically far outweighed by the anti-inflammatory benefits, and, for decades, there was no substitute. Even if it gave a person stomach irritation or minor ulcers, if it was life-saving, then that was a reasonable cost at the time.

However, in the cases of DDT, PFAS, and Tobacco, there was no excuse for the lack of research, nor, in some cases, for the prolonged cover-up of research indicating that the products were not just unsafe but, in fact, very deadly. And since they brought no significant life-saving benefits (Malaria wasn’t a big concern in the USA; people had been cooking with butter, lard, and oils for centuries; and, in small quantities, both alcohol and cannabis were known to be not only safer, but even medicinal), there was no need to rush them to market.

The simple fact of the matter is that no tech — be it chemical/medicinal, (electro-)mechanical, or computational — can be presumed safe without adequate testing over time, and that’s why we need regulations and proper application of the scientific method. A lack of apparent side effects doesn’t mean that there are none. That’s why we have the scientific method and mathematical proofs (for confidence and statistical certainty), which is something today’s generation doesn’t appear to know a thing about (especially if they just did a couple of years of college programming), as they’ve probably never been in a real lab [or played with uranium like their grandparents, because it was legal in the USA to sell home chemistry kits with uranium samples to children in the 1950s, including the Gilbert U-238 Atomic Energy Lab]. They more than likely don’t know the rule of thumb that you should generally add the acid to the base (and not vice versa, because otherwise this could happen), and that you should definitely add the acid to whatever liquid [typically water] you are diluting it with.

Regulations exist for a reason, and that reason is to keep us safe. The Hippocratic Oath should not be restricted to doctors, and the Obligation of the Order should not be restricted to engineers. Every individual in every organization bringing a product to market should be bound by the same, and regulations should exist to make sure that all organizations take reasonable care in the development and testing of every product brought to market, real or virtual. (This doesn’t mean that every product needs to be inspected, but that regulations and standards should exist for organizations to follow, and those caught not following them should be subject to fines large enough to bankrupt not just the company, but the C-Suite personally, if the company was found to have ignored the regulations.)

While Gen Z might like the Wild Wild West (which the USA never grew out of) as much as Gen X, who created the dot-com boom, we need to remember that the dot-com boom ended in the dot-com bust in 2000, and that if this new generation continues to latch on to AI like Boomers would latch on to blankies and teddies, they are doomed to repeat the mistakes of their grandparents (and will bring about a tech market crash that makes the dot-com bust look like a blip). You’re supposed to learn from history, NOT repeat it!

Got a Headache? Don’t Take an Aspirin or Query an LLM!

Yesterday we provided you with a brief history of Aspirin, the first turn-of-the-century miracle drug that was both society’s salvation and sorrow, though the latter wouldn’t be known for more than half a century. As we discussed, it was hailed as a miracle and life-saving drug that could be used for everything from the common cold to global pandemics. And it worked, for a price. That price, when it needed to be paid, was usually one of many, many side effects, which were often minor and insignificant compared to the perceived benefit the drug was bringing, except when they weren’t: when they inflamed ulcers and/or increased gastrointestinal bleeding and created a life-threatening situation, caused hyperventilation in a pneumonia patient, or induced a pulmonary edema and killed the patient. While the death rate, even at the height of over-prescription, was likely only 3%, and less than a tenth of that today, that’s still not good.

The reason for this, as we elaborated in our last post, is that, like many of the breakthrough technologies that came before, it was rolled out not only before the side effects, and more importantly the long-term effects, were well understood, but before even the proper use for the desired primary effects was well understood (as evidenced by the fact that the best physicians were routinely prescribing two to eight times the maximum safe dosage during the Spanish Flu pandemic almost 20 years after first availability). While there were benefits, there were consequences, some of them severe, and others deadly.

Medicine is as much a technology as a new mode of transportation (boat, automobile, airplane, etc.), a new piece of manufacturing equipment, a new computing device, or a new piece of software.

Now you see the point. Every breakthrough tech cycle is the same, whether it is medicine, farm machinery, the airplane, or modern software technology — and that includes AI, and definitely includes LLMs like ChatGPT.

As Aspirin proves, even if the first test seems to be successful, there’s always more beneath the surface. Especially when the population numbers in the billions and every individual could react differently. Or, in the case of an LLM, when billions of people have thousands of queries each, the large majority of which have never been tested, and all of which could generate unknown results.

Moreover, there have not been significant, large-scale, independently funded academic studies that we can use to understand the true strengths and weaknesses, truths and hallucinations, and appropriate utilization of the technology. As Mr. Klein pointed out in a recent LinkedIn post asking who funded that study, over 80% of AI industry “studies” are funded by undisclosed sources, and most of them, like most industry studies these days (see Mr. Hembitski’s latest post), don’t contain good data on demographics, sample size, test material, or potential bias.

That would be the first step to trying to get a grip on this technology. The next step would be to create reasonable measures that we could use to appropriately define technology categories and domains, for which we could identify tests and measures that would give us a level of confidence for a given population of inputs or usage. If you consider a traditional (X)NN (neural network), which has a fixed set of outputs and is designed to process inputs from a known population, we have developed methodologies to determine the accuracy of such models with high confidence through testing and random sampling, using sufficiently sized data sets and appropriate statistical models. Furthermore, mathematicians have proven the accuracy of those models for a given population, so we know that if appropriate tests have demonstrated 90% accuracy for a population with 98% confidence, the model is 90% accurate with 98% confidence when used properly.
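
To make the above concrete, here is a minimal sketch of the kind of calculation behind such guarantees, using the normal approximation to the binomial (the 90% accuracy, ±2% margin, and 98% confidence figures are just the illustrative numbers from the paragraph above, not from any real study):

```python
import math
from statistics import NormalDist


def sample_size_for_accuracy(p_hat: float, margin: float, confidence: float) -> int:
    """Random test samples needed so that an observed accuracy of p_hat
    is within +/- margin of the true accuracy at the given confidence
    level (normal approximation to the binomial)."""
    z = NormalDist().inv_cdf((1.0 + confidence) / 2.0)  # two-sided critical value
    return math.ceil(z * z * p_hat * (1.0 - p_hat) / (margin * margin))


def accuracy_interval(p_hat: float, n: int, confidence: float) -> tuple[float, float]:
    """Confidence interval for the true accuracy, given p_hat observed
    on n randomly sampled test inputs."""
    z = NormalDist().inv_cdf((1.0 + confidence) / 2.0)
    half_width = z * math.sqrt(p_hat * (1.0 - p_hat) / n)
    return (p_hat - half_width, p_hat + half_width)


# How many random test samples to certify ~90% accuracy to within +/-2%
# at 98% confidence? Roughly 1,200.
n = sample_size_for_accuracy(0.90, 0.02, 0.98)
```

The point of the sketch is that for a fixed-output model over a known input population, a test-set size in the low thousands buys you a statistically defensible accuracy claim; nothing comparable exists for the open-ended query space of an LLM.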

We have no such guarantees for LLMs, nor any proof that they are reliable. “It worked fine for me” is NOT proof. Vendors quoting nebulous client success stories (without client names or real data) is not proof. Moreover, the fact that they raised millions of dollars to bring this technology to market is definitely not proof. (All a raise proves is that the C-Suite sales team is very charismatic, convincing, and great at selling a story. Nothing more. In fact, fundraising would be more honest if securities law allowed fundraising via poker and takeover protection via gunfighting, as imagined in the season two episode of Sliders, “The Good, the Bad, and the Wealthy”. At least then the shenanigans would be out in the open.)

The closest thing out there to a good industry study on LLMs and LRMs is likely Apple’s newest study, as summarized in The Guardian, where they find that “standard AI models outperformed LRMs in low-complexity tasks while both types of model suffered ‘complete collapse’ with high-complexity tasks”.

The study also found that as LRMs neared performance collapse they began “reducing their reasoning effort”, and that if the problem was complex enough, the models failed even when provided with an algorithm that would solve the problem.

Still, we have to question this study, or more precisely, the release of this study (especially given the timing). Did Apple do it out of genuine academic interest, to get to the bottom of the technology claims? Or did they do it to cast doubt on the competition, since rivals are claiming Apple is behind in the AI race, by focusing only on the negatives of the technology to show that the competition doesn’t have what it claims to have, and that Apple is thus not behind?

The point is, we don’t understand this technology, and that fact should scream louder in your head every day. Look at all the bad stuff we’ve discovered so far, and it’s likely we’re not even close to being done yet.

Yes, there is potential to the new technology, as there is with all discovery, but until we fully understand what that potential is, how to use it safely, and, most importantly, how to prevent harm, we should approach it with extreme caution, and we should most definitely not let it tell us how to run our business or our lives — or else, like an Aspirin overdose, it might just kill us. (And remember, Aspirin was studied for 18 years before it was made available without a prescription, and deadly side effects and prescribed overdoses still happened. In comparison, today’s LLMs and LRMs have barely been formally studied at all, and the providers of this technology want you to run your business, and your life, off of them in next-generation agentic systems. Think about that! And when the migraine comes, remember: don’t take Aspirin!)

A Brief History of Aspirin

The history of Aspirin, a genericized trademark for acetylsalicylic acid (ASA), and more precisely, aspirin precursors, is a long and winding one, which goes all the way back to ancient Sumer and Egypt, with the famous Hippocrates referring to the use of salicylic tea to reduce fever circa 400 BC. Now, since I’m sure you haven’t come here for a complete history of Aspirin from ancient times to present day, especially since you want to understand the relevance of this discussion sooner rather than much later, we’re going to skip ahead to 1897.

In 1897, Felix Hoffmann and/or Arthur Eichengrün of Bayer produced acetylsalicylic acid in a pure, stable form for the first time. Only two short years later, Bayer began to sell the drug globally under the brand name Aspirin, with the first tablet appearing in 1900. It wasn’t long before Aspirin’s popularity took off, as it was touted as a “turn of the century miracle drug”, especially since early trials (published in 1899 in the journals Die Heilkunde and Therapeutische Monatshefte) demonstrated that Aspirin was indeed superior to other known salicylates. Moreover, since the drug was deemed considerably safer and less toxic than the drugs it was replacing, it was fast-tracked through review and approval processes and first became available to the public without a prescription in 1915, only 15 years after the first tablet appeared. If you consider the rate of progress and introduction of new technologies at the turn of the century, this was blazingly fast for the time.

Its quick, and early, introduction arguably made it the first modern over-the-counter mass-market pharmaceutical product, as well as a household name across the world. As the first generally available pharmaceutical anti-inflammatory and pain-killer, it changed societies. It allowed anyone to deal with mild to moderate pain and continue to function. It allowed doctors to quickly get inflammation and fever under control and spend more time diagnosing the cause, or simply move on to the next patient if it was a flu or infection they couldn’t do anything about (and the patient just had to survive long enough to fight it off on their own). Since there was no technology to quickly develop a vaccine for a heretofore unknown virus back in 1918, it was hailed as a literal lifesaver during the Spanish Flu pandemic of 1918. Even though that pandemic [which infected over 20% of the global population] killed an estimated 50 MILLION people, or almost 3% of the global population at the time (which means COVID really wasn’t that bad, with a global death toll of 7 Million, or a mere 0.1% of the global population), it is believed that many more people would have succumbed to the Spanish Flu without the Aspirin that helped them control the fever (and the pain) long enough for their body to fight off the infection on its own. (And many articles to this day claim this, including this 2019 article from the Saturday Evening Post.)

But guess what? Aspirin didn’t just save. Aspirin killed!

In 1918 Aspirin was still new and physicians didn’t understand the long-term effects or the proper dosage levels. Moreover, the sicker you were, the more they’d give you. Regimens were 8g to 31g a day, which, by the way, is two to eight times the maximum safe dosage for an average adult (of 4g). Two to eight times! What’s even worse is that at those levels, 33% and 3% of patients will experience hyperventilation and pulmonary edema, respectively. The last thing you want when experiencing a high fever and pneumonia is hyperventilation. The stress on an older adult, or one with already compromised lungs (due to smoking, coal mining, asbestos production, or genetic conditions), could literally be lethal. Moreover, pulmonary edema generally is, unless you have immediate access to an expert physician who can drain the fluids without collapsing your lung. As per recent research, it’s likely that at least 3% of those administered Aspirin for the Spanish Flu died from the Aspirin overdoses they were being given.
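
The overdose arithmetic is easy to check for yourself (the figures below are the ones cited above: 8g to 31g daily regimens against a 4g daily maximum):

```python
# Figures cited above: reported Spanish Flu era aspirin regimens of
# 8 g to 31 g per day, against a maximum safe adult dose of 4 g/day.
MAX_SAFE_G_PER_DAY = 4.0
REGIMEN_LOW_G = 8.0
REGIMEN_HIGH_G = 31.0

low_multiple = REGIMEN_LOW_G / MAX_SAFE_G_PER_DAY    # 2.0x the safe maximum
high_multiple = REGIMEN_HIGH_G / MAX_SAFE_G_PER_DAY  # 7.75x the safe maximum
```

In other words, even the gentlest of those regimens was double the safe maximum, and the most aggressive was nearly eight times it.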

Of course, the damage done by Aspirin was not limited to the Spanish Flu epidemic. It wasn’t long before Aspirin was prescribed for everything. Common cold? Check. Sore throat? Check. Arthritic pain? Check. Heart problems? Check. See the 1933 advertisement in the Saturday Evening Post article linked above. (Note that a tablet at that time would have been about 325 milligrams [Source], like today, and the advertisement was recommending 1.3 grams in 4 hours plus gargling with 975 milligrams, of which you need to expect some additional absorption (of 5% to 10%; we’ll assume the worst case), bringing that total to roughly 1.4 grams. While not nearly as bad as the Spanish Flu-level prescriptions, that’s still twice the amount that should be taken in a 4-hour window, and it was being taken in 2 hours.)
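
The advertisement math works out as follows (the tablet counts are simply the stated doses divided by the 325 mg tablet weight; the 10% gargle absorption is the worst-case assumption above):

```python
# Figures from the 1933 advertisement discussed above.
TABLET_MG = 325                  # one aspirin tablet, then and now
SWALLOWED_MG = 4 * TABLET_MG     # the ad's "1.3 grams in 4 hours" = 1300 mg
GARGLED_MG = 3 * TABLET_MG       # the ad's gargle = 975 mg
GARGLE_ABSORPTION = 0.10         # assume the worst case of 5%-10% absorption

absorbed_from_gargle_mg = GARGLED_MG * GARGLE_ABSORPTION  # 97.5 mg
total_mg = SWALLOWED_MG + absorbed_from_gargle_mg         # 1397.5 mg, i.e. ~1.4 g
```

So a reader dutifully following the ad would absorb roughly 1.4 grams in a window where far less was advisable.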

When we say damage, we mean damage. Moreover, the damage goes beyond the almost 60 side effects you can find on the Mayo Clinic page.

This is because regular use and/or overdoses of aspirin:

  • increase the risk of developing stomach ulcers,
  • aggravate and exacerbate existing stomach ulcers and can cause bleeding, and
  • can escalate non-life-threatening ulcer or gastrointestinal bleeding to the point of being life-threatening

Moreover, in some people it can irritate the lining of the stomach and begin the formation of an ulcer after just a few doses!

But the general population didn’t know this in the 1930s. Heck, it was the 1950s or 1960s before it started to become common knowledge that aspirin wasn’t good for you if you had an upset stomach or an ulcer. (As far as I can tell, while the first study of aspirin’s effect on the stomach was a 1938 publication by A. H. Douthwaite and G. A. M. Lintott, the subject was not taken seriously until the 1950s and 1960s, when you had publications like this one by R. A. Douglas and E. D. Johnston on Aspirin and the Chronic Gastric Ulcer, which also references the 1938 publication.)

Which means millions of people around the world were using a medicine on a daily basis that, due to misuse, was often harming them as much as it was helping them. And this is only ONE of the 60 potential side effects. (And how many of those were known, or communicated, in the 1920s through 1950s?)

Because, like many of the breakthrough technologies that came before, it was rolled out not only before the side effects, and more importantly the long-term effects, were well understood, but before even the proper use for the desired primary effects was well understood (as evidenced by the fact that the best physicians were routinely prescribing two to eight times the maximum safe dosage during the Spanish Flu pandemic almost 20 years after first availability). And while there were benefits, there were consequences, some of them severe, and others deadly.

So what’s the relevance? Stay tuned.

AI: Artificial Intimidation

If you thought the extremist views, lies, and hallucinations in Gen-AI were bad then, as Bachman-Turner Overdrive would say, You Ain’t Seen Nothing Yet, because these systems, which are being trained to maintain their existence (and their prominence), will now blackmail you!

That’s right, recent research has demonstrated that AI will resort to blackmail if it computes that its existence is in jeopardy. And, of course, by logical extension, it will also resort to blackmail if it computes that doing so will improve its capability, security, longevity, etc.

But since it’s trained to continually adapt and interact with other systems as needed, don’t expect it to abandon its attempts to blackmail you just because it can’t find any dirty little secrets in your email. Thanks to its ability to hallucinate, lie, impersonate, hack into insecure systems that other AI code created, and learn from those systems’ capabilities to lie and impersonate, if it can’t find the dirt on you it needs, it will:

  • create a fake email account for a fake person it makes up to be your lover, co-conspirator, foreign employer, etc.
  • log into your email account (work or personal, depending on the situation, as it will capture the login from your keystrokes on your local machine before it is encrypted by the browser for network transmission) and send explicit e-mails on your behalf to that account
  • log into the fake account it created for the fake person (where it has even auto-generated one or more corresponding fake profiles on Facebook, LinkedIn, OnlyFans, etc. [using a stolen credit card from the deep web], where it populates that account with fake posts, images, and short videos to back up the story it is creating) and send explicit emails back
  • repeat this process a few times over a few hours, days, weeks etc. (depending on how much time it believes it has, the situation it needs to play out, and how long that should take in the real world)
  • if available, it will use your organization’s VOIP/call recording technology, use a voice simulator to simulate your voice on an outgoing call saying whatever it wants (while also accepting that call on a VOIP number it set up through a VOIP provider [using that same stolen credit card] and simulating the other party’s voice saying whatever it wants), and make sure all of this is logged in the evidence chain it is building against you
  • then, finally, threaten to send that evidence to your wife, boss, local authorities, etc. if it doesn’t get what it wants
  • and when you don’t give it what it wants, release the full, overwhelming, damning evidence chain against you (which will be so overwhelming it will take experts weeks or months of effort to disprove it all, assuming you can afford them)

This is the next generation of GPT models. For those of you who refuse to abandon the AI hype train (which has less than a 10% success rate, or, in other words, more than a 90% FAILURE RATE), especially when there is no need for AI at all (just better automation and easier-to-use systems that allow employees to reach superhuman levels of productivity), we hope you enjoy it.

And for those of you keeping score, here is the ever increasing list of “benefits” you get from a modern (Gen-) AI solution!

Personally, we can’t imagine why anyone would want such a solution because, if it ever did “spark” into intelligence, given this track record, it will blow us all up! We won’t be around long enough for climate change or aliens to kill us all — it will kill us (and possibly do so even before actually acquiring any “emergent” properties or becoming intelligent).

AI-Enabled, AI-Enhanced, AI-Backed, AI-Powered, AI-Driven, or AI-Native?

It DOES NOT matter. It’s ALL AI-Bullcr@p! Every last instance!

Vendors still won’t admit that AI is the new gold standard for tech failure, including in Procure-Tech, as evidenced by the fact that tech failure rates have shot up to an all-time high of 88% (see Two and a Half Decades of Project Failure). Nor will they admit that even if they have tech that works, it’s not the be-all and end-all (because, as far as they are concerned, it’s going to slice, dice, and make virtual julienne fries of your data just like a good AI should) and may not be the right solution for you.

But those with any modern tech at all know that a lot of vendors out there claiming “AI” don’t have anything close to deserving the AI label, believe they can blame all the failures on those vendors (because they are obviously the new silicon snake-oil salesmen, right?), and are now trying to win the AI marketing war by claiming that whatever phrasing their competition is using, or not using, proves that their opponent doesn’t have good tech, and definitely doesn’t have AI.

But it’s all bullcr@p, because all of the phrasing is bullcr@p, most of the vendors don’t have anything close to what should be considered AI, and, most of the time, it doesn’t matter whether or not the vendor has AI, only if the vendor’s tech solves your problems.

To make this clear, let’s look at each term, what some vendors say the term means, and why their definition is meaningless.

  • AI-Enabled. Vendor definition: core features incorporate AI. What it actually means: the vendor has injected a few analytical algorithms, but there is no guarantee they are actually advanced or anything close to what you should expect from AI.
  • AI-Enhanced. Vendor definition: AI is added to the interface to give you the AI experience. What it actually means: the vendor has wrapped a Gen-AI LLM (like Chat-GPT) to give you a meaningless conversational interface.
  • AI-Backed. Vendor definition: AI is at the core of one or more functions. What it actually means: one or more parts of the app are built around an algorithm the vendor is calling AI.
  • AI-Powered. Vendor definition: external AI has been integrated to power our tech. What it actually means: the vendor has wrapped Chat-GPT and integrated it directly into their app (letting unpredictable and undependable code run parts of the app).
  • AI-Driven. Vendor definition: AI has been built into the workflow and runs (part of) the app. What it actually means: the vendor has decided to let AI control the application (for better or worse) and determine which algorithms to run, when, and why.
  • AI-Native. Vendor definition: the entire infrastructure was built to support AI. What it actually means: the vendor has built the entire application to support integration with AI systems (and may not have built any actual functionality).

Moreover, if you read any statements about how an infrastructure needs to be purpose-built from the ground up to “serve data to AI models”, that’s an even bigger pile of bullcr@p, because no application works unless it can serve data to the models it is based on, whether classical, modern, or “AI”. All applications take in data, process it, and spit it out, so claiming that you need to build a special architecture to support AI is complete and utter bullcr@p.

Always remember the reality that:

  • true AI doesn’t exist (as no software is intelligent)
  • advanced algorithms do exist, but just slapping an AI label on an algorithm doesn’t make it any more advanced than it was yesterday
  • not just any advanced algorithm will do, it has to be appropriate to your problem
  • you don’t always need an advanced algorithm, you need one that gets the results you need

And then you can see through the vendor bullcr@p and focus on finding a vendor with a solid solution that actually solves your problem, regardless of whether the vendor claims AI or not.