Without Human Smarts, There Will Be No (Usable) AI!

And I’m so happy I’m not the only one pushing this theory. Mr. Stephen Klein recently published a great post on The Age of Pretend.

In the post he notes that:

“Everyone assumes AI’s biggest bottleneck is compute. … That assumption is wrong. The real bottleneck … is architecture, specifically, a design decision made in 1945. … The real constraint: the von Neumann bottleneck. Modern computers separate memory and processing. Data has to move back and forth between them. For most software, that’s fine.
For AI, it’s catastrophic.

Some numbers the industry rarely highlights:

  • Accessing off-chip memory consumes ~200× more energy than the computation itself
  • Roughly 80% of Google TPU energy goes to electrical connections, not math
  • A 70-billion-parameter model moves ~140 GB of data just to generate one token”

LET THAT SINK IN. We old timers remember “640K ought to be enough for anyone”! The Apollo Guidance Computer (you know, the one installed in each Apollo Command Module and Lunar Module on the Apollo missions) had 2K of core RAM and 36K of ROM. Even today, unless you have an iPhone 17, your phone probably only has 128 GB of storage. (That ~140 GB figure, by the way, is just 70 billion parameters at two bytes each in fp16, every one of which has to stream through the processor for every single token.) That means, even with the processing power of your phone (which dwarfs most computers we old timers ever owned), your phone couldn’t even hold the data that has to move to generate ONE token. (Now do you understand why the data center [energy] demands of your Gen-AI chat-bots are destroying the planet? Anyway, we digress …)

This means that (Gen-)AI has hit a wall. Today’s computer architecture supports massive compute at scale and massive storage at scale, but not massive data transfer at scale.

So what does this mean?

Do you remember the days of RAM drives? Not only did they speed things up, they kept your machine cooler because, as Stephen noted, it takes far less energy to access data in RAM than on disk.

And do you remember the fun of Assembly? (Okay, that’s sarcasm!) Once you learned to maximize register usage (i.e. re-sequencing your processing to minimize reads from, and writes to, memory), your code got faster still (and machines stayed cooler longer, which was obvious from the lack of noisy fans spinning up).
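If you never had that particular “fun”, here’s a minimal C sketch of the idea (ours, purely for illustration; a modern optimizing compiler does much of this for you, which the old timers had to do by hand):

```c
#include <stddef.h>

/* Memory-chatty version: the running sum is read from and written back
 * to memory (through the pointer) on every single iteration. */
void sum_chatty(const double *data, size_t n, double *total) {
    *total = 0.0;
    for (size_t i = 0; i < n; i++)
        *total += data[i];   /* load *total, add, store *total -- every pass */
}

/* Register-friendly version: the running sum lives in a local variable
 * the compiler can keep in a register for the whole loop, so memory is
 * touched once per element (the read) plus one final write. */
void sum_register(const double *data, size_t n, double *total) {
    double acc = 0.0;
    for (size_t i = 0; i < n; i++)
        acc += data[i];
    *total = acc;
}
```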

We’ve known about this problem for decades. (Eight decades to be exact!) It’s too bad today’s students don’t study the basics and understand that it’s not raw compute strength that determines computational speed and energy requirements, it’s data scale: whether the data fits in memory or not, whether “significant” chunks fit in onboard GPU memory or not. (And, specifically, can you scale the data down enough for the efficiency you require?)
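The same principle scales up from registers to caches. Here’s a minimal C sketch of cache blocking (again ours, not from Stephen’s post); the TILE size is an assumption you’d tune so that a block of the working set actually fits in your cache:

```c
#include <stddef.h>

#define TILE 64   /* 64 x 64 doubles = 32 KB per tile -- an assumed size
                     you would tune to the target machine's cache */

/* Transposes an n x n matrix tile by tile. Walking the matrix in
 * cache-sized blocks keeps the relevant rows of src and columns of dst
 * resident in cache, instead of evicting a cache line on nearly every
 * access the way a naive column-order walk does for large n. */
void transpose_blocked(const double *src, double *dst, size_t n) {
    for (size_t ii = 0; ii < n; ii += TILE)
        for (size_t jj = 0; jj < n; jj += TILE)
            for (size_t i = ii; i < ii + TILE && i < n; i++)
                for (size_t j = jj; j < jj + TILE && j < n; j++)
                    dst[j * n + i] = src[i * n + j];
}
```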

But this is still the key point in Stephen’s article:
“The next major improvements will likely come from smarter algorithms.”

We might need brute force to detect patterns we can’t (yet) see, but the only way to truly advance is to understand those patterns and code optimal, lightweight algorithms that exploit fundamental rules, allowing us to process data quickly and efficiently.
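To make that concrete, here’s a toy C example (our illustration, not Stephen’s): the same question, “do any two numbers in the array sum to a target?”, answered first by brute force in O(n²), then in O(n log n) by exploiting one fundamental rule (order). No bigger machine required, just a smarter algorithm.

```c
#include <stdbool.h>
#include <stdlib.h>

/* Brute force: check every pair -- O(n^2) comparisons. */
bool pair_sum_bruteforce(const int *a, size_t n, int target) {
    for (size_t i = 0; i < n; i++)
        for (size_t j = i + 1; j < n; j++)
            if (a[i] + a[j] == target)
                return true;
    return false;
}

static int cmp_int(const void *p, const void *q) {
    int x = *(const int *)p, y = *(const int *)q;
    return (x > y) - (x < y);
}

/* Smarter: sort once (O(n log n)), then walk inward from both ends
 * (O(n)). Note this version reorders the input array. */
bool pair_sum_sorted(int *a, size_t n, int target) {
    if (n < 2) return false;
    qsort(a, n, sizeof *a, cmp_int);
    size_t lo = 0, hi = n - 1;
    while (lo < hi) {
        int s = a[lo] + a[hi];
        if (s == target) return true;
        if (s < target) lo++;
        else hi--;
    }
    return false;
}
```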

Until we figure that out, you’ll never have usable AI (and definitely never REAL AI, as not only will it never be intelligent, it will never, ever, get anywhere close).

Contract Management for Small Companies is …

James Meads isn’t saying it in this LinkedIn post, but he’s hit the nail on the head with an old-school hammer. (Unlike the shiny new hammer, the old-school hammer actually works.) Most small enterprises don’t need full contract lifecycle management; they need document centralization, visibility, and time-based reminders. That’s it!

This is because they:

  • negotiate by phone and through Word redlining,
  • sign by hand, exchanging scans by email,
  • place orders as standard-format e-docs sent to order-receipt email addresses because they don’t have a fancy e-Procurement system that does integrated P2P,
  • don’t have a modern AP system that can ingest contract metadata, so they still need a clerk to enter the price tables manually,
  • still need to enter non-order commitments manually into their project planning tool,
  • etc.

What they need is old-school document management built on a CMS (Content Management System) tailored for contract documents and Procurement needs. That’s it!
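To underline just how small the “time-based reminders” piece really is, here’s a hypothetical C sketch (the one-line-per-contract CSV format is our invention, not anything James specified) that flags every contract coming due in the next N days:

```c
#include <stdio.h>
#include <time.h>

/* Scans a contracts file with one (hypothetical) line per contract in the
 * format "YYYY-MM-DD,Supplier,Contract name" and prints every contract
 * whose renewal/expiry date falls within the next window_days days. */
void print_upcoming(FILE *contracts, int window_days) {
    char line[512];
    time_t now = time(NULL);
    while (fgets(line, sizeof line, contracts)) {
        int y, m, d;
        if (sscanf(line, "%d-%d-%d", &y, &m, &d) != 3)
            continue;                 /* skip malformed lines */
        struct tm due = {0};
        due.tm_year  = y - 1900;
        due.tm_mon   = m - 1;
        due.tm_mday  = d;
        due.tm_isdst = -1;            /* let mktime resolve DST */
        double days = difftime(mktime(&due), now) / 86400.0;
        if (days >= 0 && days <= window_days)
            printf("DUE in %3.0f days: %s", days, line);
    }
}
```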

This is not a 50K to 250K solution, but a 5K solution … (especially since most CMS platforms are essentially shareware these days)!

Now, once you hit the true mid-market, and start spending 50M to 100M a year or more, you need a lot more advanced capability across the board, and if you’re contract heavy, spending 50K to centralize all of the above and do true automated end-to-end lifecycle management efficiently is peanuts. However, when you’re less than 50M revenue, spending at most 20M externally, and only have a few categories large enough to negotiate significant discounts, you just don’t need advanced S2P solutions, or the price tag. Anything that enables a standard process is all you need. (Even if you are a F500/G1000, the reality is that just having a basic solution that enables a standard process will likely get you 90% of the “savings” the most advanced suites promise at 5X to 10X the price tag. At the end of the day, most firms only have a few [dozen] categories [at most] where a more advanced solution is needed to extract value.)

(And then, as you grow, there are great Mid-Market S2P suites that start in the 50K range, with the best/most extensive maxing out around 250K a year, meaning you don’t need to go to a mega suite and pay millions. But since the Gartner, Forrester, etc. maps will never list them, you do have to look for them. Luckily, you have resources: James’ site, Sourcing Innovation, etc.)

Your SaaS Vendor Should be TRUSTworthy … But They Shouldn’t Have to Tell You!

In fact, I’d argue it’s a red flag if they do. But let’s back up.

A trustworthy vendor is one that

1) Clients Trust

2) Clients’ Third Parties Trust

3) Suppliers and Partners Trust

4) Third Party Analysts and Consultancies Trust

… and all of these will imply trust in their recommendations and reviews, even if they don’t explicitly say it.

Digging in.

1) They treat you like a client from the first interaction.

In the first interaction, they ask about your needs, not just what you think you’re looking for.

They tailor the demo to your business and categories.

They answer your questions openly and honestly, don’t deflect from features they don’t have today, give you real timelines, and offer workarounds until they deliver.

Once you sign, they guide you through implementation and change management, work beside you to train you, and always respond faster than their SLAs require.

They don’t just focus on immediate results, but on ensuring you level up and could continue to get results without them. They act like a partner.

2) They treat your suppliers and partners like clients too.

They’re always there to help, they make things easier for the supplier than their competitors do, and they prove their value to the point that the suppliers want to use them too.

3) They’re fair to their suppliers and partners. They pay on time. They work with them. They take the blame when it’s their fault and not the supplier’s or partner’s … which is why those suppliers and partners like working with them more than with other companies.

4) Analysts and consultancies happily recommend them even when they’re not (paying to be) on the Map or a preferred partner … sometimes even when they aren’t the most appropriate solution, just because their customers are so much happier.

It becomes so obvious that you don’t even have to ask the question (and you know that if you did, almost every client, supplier, and partner would say they trusted them).

Remember this because
1) if you start seeing too many posts on how a certain company is one you can trust or
2) you have to ask if you can trust the company
you probably can’t!

Companies generally start pushing “trust” when a major competitor does something particularly untrustworthy that becomes public, third party surveys paint them as trustworthy, or they need a new angle to boost sales.

Plus, if you need to ask, something is setting off your internal alarms, and you won’t trust them until you figure out what that is (and they’re not going to tell you).

Either way, play it safe and look elsewhere.

You may still get burned (and I have the scars to prove it), because sh!t happens, boards make changes, investors get ruthless, and world class pathological liars could still slip through the cracks and fool everyone for years, but you decrease your chances of being burned significantly by just looking for vendors who continually do the right thing (instead of just saying they do).

Tired of All the Fake AI Experts?

Want to know how to weed them out and make them go away?

Just ask them to define these terms, off the top of their head, on the spot, without looking anything up, using any tools, or accessing any network-connected devices (and definitely no Gen-AI LLM access):

  • computability
  • decidability
  • NP-completeness
  • optimization, incl. local optimization vs. global optimization
  • clustering, with at least 3 different examples
  • curve fitting
  • Fourier transform
  • neural network
  • deep neural network
  • transformer
  • ontology
  • semantic analysis
  • sentiment analysis
  • Boolean logic and the theory of logical variables
  • automated reasoning

and if they don’t define every single term with mathematical precision, tell them to f*ck 0ff because they don’t know a damn thing!
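For calibration, this is the level of precision meant. Two sample definitions, sketched in LaTeX (standard textbook formulations, not anyone’s exact wording):

```latex
% Decidability: a language L over an alphabet \Sigma is decidable iff some
% Turing machine M halts on every input and accepts exactly the members of L.
L \subseteq \Sigma^* \text{ is decidable} \iff
  \exists M \;\forall w \in \Sigma^*:\;
  M(w)\ \text{halts} \;\wedge\; \bigl( M(w)\ \text{accepts} \Leftrightarrow w \in L \bigr)

% NP-completeness: L is NP-complete iff L is in NP and every language in NP
% reduces to L via some polynomial-time computable function f.
L \text{ is NP-complete} \iff
  L \in \mathrm{NP} \;\wedge\;
  \forall L' \in \mathrm{NP}\; \exists f\ \text{poly-time}:\;
  \bigl( w \in L' \Leftrightarrow f(w) \in L \bigr)
```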

You CAN Afford to Wait for AI. But you can’t afford to wait to

  • get your data under control
  • build an infrastructure to allow for greater connectivity between apps within your enterprise and its greater ecosystem
  • update your processes
  • acquire and train the right talent with the knowledge they need to compete in the modern world
  • get digital and implement modern, current-generation technology based on best practices, proven (A)RPA ([Adaptive] Robotic Process Automation), and last-gen “AI” tech like optimization, predictive analytics (based on clustering and curve fitting; see the sketch after this list), and point-based neural networks with proven reliability and mathematically understood confidence where those apps are needed (and not a Gormless AI)
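As flagged in the last bullet, here’s what “proven reliability and mathematically understood confidence” looks like in practice: a minimal C sketch (ours) of ordinary least squares curve fitting. It’s closed-form, deterministic, and comes with error bounds statisticians worked out a century ago.

```c
#include <stddef.h>

/* Ordinary least squares fit of y = a + b*x over n points: the
 * closed-form "last-gen AI" workhorse behind simple predictive
 * analytics. Assumes n >= 2 and that the x values are not all equal
 * (otherwise the denominator below is zero). */
void ols_fit(const double *x, const double *y, size_t n,
             double *a, double *b) {
    double sx = 0, sy = 0, sxx = 0, sxy = 0;
    for (size_t i = 0; i < n; i++) {
        sx  += x[i];
        sy  += y[i];
        sxx += x[i] * x[i];
        sxy += x[i] * y[i];
    }
    double denom = n * sxx - sx * sx;
    *b = (n * sxy - sx * sy) / denom;   /* slope */
    *a = (sy - *b * sx) / n;            /* intercept */
}
```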

The reality is that you have to operate as lean and mean as possible. And

  • without good data, you can’t make good decisions
  • without good connectivity, you’re manually re-entering data across systems or missing critical external data you need to make good decisions
  • without good processes, you are inefficient and, if not circling the drain already, about to be
  • without good talent, you are running on fumes at best, your ability to compete is at risk, and you can never improve
  • without modern tech, you are at a continual disadvantage and will continually fall behind

So you can’t wait to

  • institute Master Data Management (MDM)
  • enforce Open APIs in your solutions and acquire integration and orchestration solutions
  • review and modernize your processes where necessary
  • focus on acquiring, training, and retaining top talent
  • modernize your tech to CURRENT-generation proven tech, not experimental HYPE tech

BUT YOU CAN WAIT ON “GEN-AI”. It’s about getting the job done as efficiently and effectively as possible … with a low error rate and no significant risk! 99 times out of 100, you don’t need experimental “AI” to do that. Only the investors who spent millions/billions/trillions on unproven tech and the consultancies who need massive projects to employ bodies do … but that’s not to help you. That’s to recoup their wasted dollars. And that’s NOT your problem.