Category Archives: AI

Cutting Edge Tech Is NOT Defined by the C-Suite, …

… Financiers, or the Marketers pushing hype (from the A.S.S.H.O.L.E.) at you 24/7.

Nor is it defined by the algorithms it uses, the software stacks it runs on, or the hardware stacks it makes use of.

Cutting edge tech is ANY and ALL tech that

  • solves one or more significant problems that are not being solved by your tech today and
  • does it by automating at least 90% of the tactical data processes that can be automated

It can be based on the latest AI algorithm or twenty year old RPA. It doesn’t matter if it shines a light using a LAMP stack, if it is an edgy MEAN stack, an Austin Powers inspired Grails stack, or even a .NET stack (though the doctor personally shuddered typing those last three caps out). The entire point of enterprise software is to solve your problems.

The point of software is NOT to provide

  • an excuse for a C-Suite to cut his tech-bro buddy a fat check,
  • a new Tulip Market for greedy financiers who think they can score big and get out before the crash, or
  • marketers a platform to pump out pompous poop on a daily basis.

As Mr. Koray Köse penned in a recent piece on LinkedIn on how you are in need of cutting edge technology, the last thing you want to do is take your direction from the VC-pumped C-Levels who do nothing but repeat the marketing garbage they are fed, sometimes changing the baseline of the garbage mid-sentence!

You have to remember:

  • All the VCs and most of the PEs want to create the next unicorn and get rich quick overnight. So most of these VCs and PE firms are pumping huge amounts into companies with little to no product (to support their grand vision that even SAP and Oracle haven’t managed to achieve after 5 decades and trillions of dollars) and directing the majority of that to be spent on buzz-creating sales and marketing (and not real product development). After all, you don’t actually have to create anything beyond buzz to get rich — market crashes throughout history have proven that since Tulip Mania. (What was created there of value? Nothing. But hype made a few men rich and many men poor.)
  • Even though today’s LLMs are dumber than a doorknob (and demonstrate more than any previous generation of the tech that AI should stand for Artificial Idiocy), with performance degrading every iteration (because there is no more data to steal, and the AI engines are now training each other on regurgitated digital garbage), marketers are still taking storytelling to a whole new level (and we mean storytelling because ALL the claims are fake) with a spin that even the Spin Doctors of old would be envious of. (Little Miss Can’t Be Wrong now, right? They want to Make You A Believer so you hand over all your Money when you should Exit … Stage Left and Run To The Hills!)
  • This copy is being pushed onto the C-Suites of all of the other investments in their portfolio with assurances that it’s all true, and being similarly echoed to all of the CXOs that attend the conferences they speak at, the golf outings they are invited to, and the exclusive social gatherings they arrange.

Not one of these groups knows what you need, and two of these groups have absolutely no interest in giving it to you — their interest is all about getting your money, building the hype, inflating the value, and, hopefully, cashing out big before the next hype cycle and/or the inevitable market crash that’s coming.

The technology you need is the technology that is:

  • built with real world problems in mind,
  • tested on real world problems in real companies and proven to deliver, and
  • scalable and extendable to your operations and needs.

This type of tech is built over years and doesn’t use unreliable probabilistic AI at the core. (It runs on traditional, configurable RPA that is 100% reliable and auditable. Now, this tech might employ AI to help with the configuration, analyzing your systems, processes, and self-assessed gap analysis to recommend configurations for you to approve, and that’s okay, because it’s not randomly making decisions; it’s recommending options and letting you confirm or deny. It might also use SLMs for specific problems where they work a high percentage of the time to jump-start searches, documents, and processes for you, and as long as you retain full control and can accept, modify, or reject, that’s okay too. But everything is built on a solid core, with 100% dependable automation for all key data intake, processing, and pushes, done without any manual intervention and appropriately calibrated to your business rules, processes, and goals.)
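The recommend-then-approve pattern described above can be sketched as a small human-in-the-loop loop. This is a minimal illustration, not any vendor's actual API; all names (`Rule`, `recommend_rules`, `apply_with_approval`) are hypothetical:

```python
from dataclasses import dataclass

@dataclass
class Rule:
    """A deterministic, auditable automation rule (the reliable RPA core)."""
    field: str
    condition: str   # e.g. "new_invoice"
    action: str      # e.g. "extract_and_match"

def recommend_rules(observed_processes):
    """Hypothetical AI-assisted step: analyze observed processes and
    *recommend* configuration rules; nothing is applied automatically."""
    recs = []
    for proc in observed_processes:
        if proc["manual_touches"] > 0 and proc["automatable"]:
            recs.append(Rule(field=proc["name"],
                             condition=proc["trigger"],
                             action=proc["suggested_action"]))
    return recs

def apply_with_approval(recommendations, approve):
    """Only rules the human explicitly approves enter the deterministic core."""
    return [r for r in recommendations if approve(r)]

processes = [
    {"name": "invoice_intake", "manual_touches": 3, "automatable": True,
     "trigger": "new_invoice", "suggested_action": "extract_and_match"},
    {"name": "contract_review", "manual_touches": 5, "automatable": False,
     "trigger": "new_contract", "suggested_action": "summarize"},
]
recs = recommend_rules(processes)
approved = apply_with_approval(recs, approve=lambda r: True)  # user confirms each
```

The design point is the separation: the recommendation step may be probabilistic, but nothing reaches the execution core without explicit confirmation, so the automation itself stays 100% deterministic and auditable.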

It’s also built slow, rolled out to a small group of beta customers or development partners, and hardened in the field before being rolled out en masse.

And, most importantly, it’s built by a company that is bootstrapped or frugally running on a shoestring budget from minority SEED investors to get that first version up and running successfully in its first 10 customers before that company goes for any VC funding to scale up. A company that has the time to get it right before being under constant pressure to hit demanding, if not impossible, sales targets.

Moreover, to have any chance of getting this software, you need to know three things:

  1. how to identify what you need that will form the heart and soul of the RFX,
  2. how to write a good technology RFX and analyze the responses, and
  3. how to identify the right companies to invite to the RFX.

What You Need

For Supply Chain, as Mr. Koray Köse points out, if you need help identifying your true needs and cutting through the noise, he can help you out with that with the eyes of a hawk, the skill of a surgeon, and the wit of a Williams (a Robin Williams). For Procurement, Joël Collin-Demers can slice through your organizational landscape like a hot knife through butter and let you know exactly what you need in priority order.

The Technology RFP

Every consultancy and their mascot claims they can help you here, but you need to be very, very careful.

  • many of their consultants are not technology experts and tend to prioritize features over functions, as that’s all they know
  • many of these firms have partnerships with the (mega) suite players, and you don’t maintain sycophant, sorry, Gold/Platinum/Diamond, status unless you direct a LOT of business their way each year, so they tend to try to fit everyone into one of these buckets
  • many follow the old consulting model of “give the client exactly what he thinks he wants” and don’t take the time to figure out what you actually need and educate you, leading to an RFP that is no better than what you would have written yourself, as they spend half their time questioning you, and the other half writing down your responses

For true success, you need someone who is simultaneously:

  • an expert in the domain,
  • an expert in technology, and
  • not incentivized to help any vendor whatsoever, and, preferably,
  • an expert in project assurance (helpful, but not always necessary)

If you need help writing that ProcureTech / Direct Sourcing/Supply Chain RFP, feel free to reach out. This is my expertise. And for some tips, feel free to start with our recent series on How Do You Write A Good RFP?

Vendor Selection

Very few analysts and consultants know more than a handful of vendors. The big firms focus on the big vendors who cut big cheques, which are the 20-ish same vendors you see in their maps year after year after year. They don’t know about the other 700. Only a few of us independent analysts go far and wide and actually know what is out there and how it can help you.

For ProcureTech, SI should be your first stop. It’s the site that gave you the mega map to open your eyes as to just how wide the playing field is. Moreover, if you need something really niche where the doctor doesn’t have the expertise, he’ll find the right expert to refer you to.

For SupplyChain, Mr. Köse knows a lot of the players. But don’t overlook Bob Ferrari of The Ferrari Group. As one of the original supply chain analysts, he knows all the players and what their platforms can and can’t do inside and out.

And if you reach out to the right experts and get the right help, maybe you can get true cutting edge tech that actually helps you!

AI Employees Aren’t Real! Don’t Believe The Lunacy!

This should be so obvious that it shouldn’t need to be said, but with multiple companies still promising to (soon) deliver “AI Employees”, it apparently needs to be said.

First of all, why it should be obvious:

  1. There is no Artificial Intelligence. The tools are as dumb as a doorknob. The best you can get is Augmented Intelligence, which, by the way, is what you really need because it can provide almost instantaneous insights that would take a traditional analyst with traditional tools days to weeks (or months) to discover.
  2. An employee is a person. A PERSON! Not a piece of software.
    (As we don’t have AI, we don’t have autonomous robots, so we can’t even have the theoretical argument about whether or not a robot should be recognized as a person for legal means.)

Secondly, we’ve already seen how autonomous software agents don’t work (because they are not intelligent or people). Klarna, one of the first companies to fire the majority of its customer support team with the false claim that AI could do the job, quickly found out it really can’t and now has to hire back hundreds of support agents, because what AI was really doing was the work of 700 really bad agents! And their customers didn’t want to talk to these bots (essentially because of how dumb and useless they were).

Thirdly, there has been no magical emergence, nor will there be with existing algorithms, stacks, and technologies, that will suddenly allow these “AI agents” to become intelligent and perform their tasks autonomously. Because

  1. if Neural Networks were the right models, today’s models (constrained to commercial compute capacity) would put them on par with a pond snail, maybe a sea slug;
  2. compute power doesn’t double year over year anymore; Moore’s law is quickly becoming a historical footnote due to quantum limits; and
  3. there’s no more data to train them on — the big AI tech plays have already illegally stolen all of the copyrighted data on the internet, and that’s still not enough (and AI generated data just worsens performance because it’s not real, or good, data).

Moreover, we don’t need AI Employees. What we need are more productive employees! Employees that don’t waste up to 80% (or more) of their time doing more-or-less nothing but data wrangling trying to turn data into knowledge and knowledge into the insights needed to make a decision. Tasks that are purely tactical calculations and conversions that are precisely what computers were built for. Computers can do trillions of calculations a second error free, while we can only do a few, and not necessarily error free.

Which means what we really need are Augmented Intelligent Agent Assistants that do the computational tasks we need done and either

  1. automate processing that we would do almost thoughtlessly if we determined that the appropriate conditions were met or
  2. present us with the data, knowledge, and insights we need to make a decision and take action, including suggestions for that action if there are standard response patterns
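The two modes above (automate only when the conditions are unambiguously met, otherwise surface the data plus a suggested action) can be sketched in a few lines; the function and field names here are illustrative only, not any product's interface:

```python
def assist(task, confidence_threshold=0.99):
    """Augmented-intelligence dispatch: automate only the thoughtless cases;
    otherwise hand the human the data, the insight, and a suggested action."""
    if task["conditions_met"] and task["confidence"] >= confidence_threshold:
        # Standard response pattern, conditions verified: safe to automate.
        return {"mode": "automated", "action": task["standard_action"]}
    # Anything less than certain goes to the human, with a suggestion attached.
    return {"mode": "human_decision",
            "data": task["data"],
            "suggestion": task.get("standard_action")}

routine = {"conditions_met": True, "confidence": 1.0,
           "standard_action": "auto_match_po", "data": {"po": 123}}
edge_case = {"conditions_met": False, "confidence": 0.6,
             "standard_action": "flag_for_review", "data": {"po": 456}}

print(assist(routine)["mode"])    # automated
print(assist(edge_case)["mode"])  # human_decision
```

The threshold is the key knob: it encodes your business rules for what counts as "would do almost thoughtlessly," and everything below it stays a human decision.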

Because, when the 80% of time wasted on tactical data processing is taken off of our plates, we will be at least 5 times as effective (if 80% of our time is freed, the capacity for the work that matters grows fivefold) with these Augmented Intelligent Agent Assistants, and that is what will propel organizations forward. Not dumb tech, and definitely not false promises of fake AI Employees that do not, and fundamentally cannot, exist.

Why Are We Inundated By AI Slop?

And I don’t just mean the slop produced by AI, which we should all know by now is 100% AI slop, but all of the human and “expert” guidance produced, or co-produced, by real people that isn’t much better!

In one way the answer is simple: there is a considerable lack of knowledge and understanding about AI, even among the firms and practitioners who are touted as, or claim to be, “the experts”. There is both a failure to realize this as well as admit this.

But let’s back up. Recently, in response to a Gartner post (screenshots below, because Gartner has a habit of deleting posts where THE REVELATOR asks hard questions or points out major issues), THE REVELATOR asked for my “thoughts” on the infographic, which referenced a two-year-old paper. A two-year-old paper that didn’t even mention a number of critical concepts that should have been discussed in reference to the AI capability and tooling breakdown the infographic presented, and all but one of those concepts should have been mentioned if it was a serious evaluation of AI technology at the time.

My thoughts on the matter would be obvious to anyone who’s read more than a handful of my articles, but I decided to step back and assume the real question was not “is this bad” but “why does this keep happening” — why do Gartner, and almost every other analyst and consulting firm (because it’s not just Gartner, so they shouldn’t be singled out), keep producing content that just doesn’t cut it — that doesn’t address the core issues, outline the challenges, discuss the plethora of failures (with an 88% tech project failure rate in the last published study and indications it could now be as high as 92% in AI), or provide any deep understanding of AI technology and how to differentiate between offerings?

The reason is two-fold. At best, the big firms have only a handful of employees with a real understanding of the technology, but they also have

  1. 100 times as many analysts and consultants taking advice on the matter from vendors (who, as we have already told you, have lured big analyst firms astray) and clients who know even less, and this is the workforce powering
  2. the relentless marketing machine (powered by AI content writers) that believes they have to pump out multiple articles a day to be relevant (even though not one of those articles has an original thought, insight, or suggestion on how to better make use of this technology because all AI bots can do is regurgitate someone else’s ideas and content)

The reality is that very few people understand advanced technology, especially new (or recently sexy) advanced technology. To truly understand this technology, you need the equivalent of a PhD — either years studying it in an academic environment or the equivalent number of years studying it in R&D labs or proof-of-concept implementation pilots.

A few years of “prompt engineering” an LLM or configuring pre-built models on Sci-Kit that “work the majority of the time for the use cases they tested” doesn’t cut it. Not even close!

You need to understand the core algorithms and the fundamental mathematics that underlie them, and that’s not easy. Even classical curve-fitting, nearest neighbor, clustering, regression, and knowledge graphs can be much more intricate than you think. The complexity intensifies when you migrate to multi-layer (feedback) (deep) neural networks, semantic technology built on ML(F)(D)NNs, and now LLMs, which don’t just use very advanced statistical processing to map an input of a fixed type to an output in a fixed set (that can be computed with mathematical confidence), but map an arbitrary input to a generated output using layered feedback statistical calculations on parts of the input that are statistically stitched together (like Frankenstein’s monster, but worse) to make parts of the output, which means that hallucinations are a core feature of these platforms (as well as behavior that is much, much worse). Furthermore, if you’re trying to put it all together, then, unless you understand the limitations in the interplay between different algorithms and models … good luck. (And, unless you understand the underlying mathematical models and their strengths and limitations, good luck with that too!)

And this isn’t easy, especially when you need to start asking questions about computability (and decidability).

To put this in perspective, I have an earned PhD in Computer Science (specializing in data structures and computational geometry, but also including study of late-90s “AI”: ML, Expert Systems, and Neural Networks). When you earn one of these degrees and don’t wimp out (by trying to stick to coding or “software engineering”), and instead take all of the logic and theory courses (cross-listed with Mathematics), then, at least when I studied, you studied the classics: fundamental algorithms, automata, P vs NP, computability, and decidability. If you do well in these advanced courses, you leave with the nagging feeling that you still don’t really understand what you studied (and were tested on) — and you don’t! For example, it’s not just P vs NP — it’s P vs NP-Hard vs NP-Complete. And P isn’t always P, because if it’s n^8, well, that might as well be NP-Hard for practical purposes! And categorization in NP is way harder in practice than it is in theory. And advanced algorithms often perform no better than stupid simple ones, and it takes years to “see” why. And so on.
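The “P isn’t always P” point is easy to make concrete: an O(n^8) algorithm is polynomial, and therefore “tractable” in theory, but hopeless in practice on even modest inputs. A back-of-the-envelope check, assuming a generous 10^12 operations per second on a single machine:

```python
ops_per_second = 10**12          # ~1 teraop/s, a generous single-machine rate
seconds_per_year = 60 * 60 * 24 * 365

for n in (10, 100, 1000):
    ops = n ** 8                 # cost of an O(n^8) algorithm at input size n
    seconds = ops / ops_per_second
    years = seconds / seconds_per_year
    print(f"n={n:>4}: {ops:.1e} ops, ~{seconds:.2e} s (~{years:.1e} years)")
```

At n = 1000 that is 10^24 operations, roughly 10^12 seconds, or on the order of thirty thousand years — polynomial on paper, NP-Hard-for-all-practical-purposes in the real world.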

It takes years to get a grip on and really understand the fundamentals, which is what you need to understand to get a grip on what you can and can’t do with advanced algorithms in the fields of optimization, predictive analytics, and AI — which each take additional years of study, research, development and implementation experience to understand what they can and can’t do and evaluate new developments from technical papers, not marketing BS and fairy tales weaved by master storytellers that would leave PT Barnum in awe.

Script kiddies, “prompt patsies” (they are not prompt engineers; that is utter BS), consultants, and analysts with no formal background in CS or appropriate areas of STEM, and with limited experience beyond installing someone else’s software and doing a few parametric modifications, don’t understand this. Not even close! (And they don’t even have the background to understand where their understanding is [more] limited!) Yet this is what most of the firms are asking of their consultants and analysts every day, which is why we get so much AI slop that completely misses the point.

You have too many people without the deep background and experience being told that everything they do has to be “AI” (even if they have no clue what it means) because of all of the funding being poured into it, too many more “influencers” (or should I say silicon snake oil peddlers) trying to take advantage of the confusion, not enough deep understanding, and almost no one willing to cut through the noise and say “wait a minute; the AI they are selling is not the AI you are looking for“.

That, and everyone forgetting, as happens every hype cycle, that context matters. But that’s another article, because context doesn’t matter if you don’t know what you’re doing.


The original post of joy and pain.

Even in the age of “AI”, SaaS Startup Valuation Isn’t That Hard

The Prophet recently penned a long LinkedIn post on The New Diligence Questions for SaaS in an “AI”-dominated world that, on a first read, makes it sound like diligence is going to get insanely difficult unless you’re backing AI (because, apparently, AI is going to replace everything and everyone).

The reality is that AI doesn’t really complicate the equation, especially if you already realized that a lot of software is becoming a commodity, and that making the right investment is all about focusing on what’s not commodity and then, when you find that subset of potential investments, determining which of those is the most user friendly. And you can narrow down to a good potential investment pretty quickly with just 3 short questions:

  1. What data is being captured, created, or curated?
    Tech replicates quickly, and is easier to build now than ever. But good data is scarcer and scarcer.
  2. What unique algorithmic capabilities does the platform possess that can’t be accomplished by today’s, and likely tomorrow’s, AI?
    Orchestration, workflow, NLP, etc.? Sorry, but that’s all pretty commonplace. We’ve had web-based middleware since a year after the world wide web was invented (and orchestration is just middleware 3.0), workflow for decades longer, NLP for decades (although LLMs now make it easier to use and more accessible), etc. You need to look for unique algorithmic capability that can’t be plug-and-play from open source components or learned by dumb AI (like advanced optimization, new types of mathematically sound predictive analytics algorithms, etc.).
  3. Does the platform enable users, through Augmented Intelligence capabilities, to be 10X as productive as they would be without it?
    I.e., where data collection, processing, workflow, etc. can be fully automated, is it? Does it employ NLP interfaces to the extent possible for non-technical users?

This is what defines winning software, not plugging in overhyped 3rd party LLMs and AI tech that is still, more-or-less, experimental, hallucinatory, and fundamentally flawed.

Once you have successfully answered these questions, chances are that there is nothing else super significant to answer about the tech (beyond the standard due diligence process, including security and privacy reviews where needed) and you can focus on the business and market questions: Does the market exist, and does the business have the right people, processes, and support to capture it?

So, in other words, if the platform captures unique data, possesses unique algorithmic capability, and makes its users dramatically more productive, the SaaS play has value, and you can move on to the business and market analysis.

The only real question will be how to define the market and the new market value in an age of (temporarily) overhyped AI / Agentic plays (when, as we have pointed out many times, it’s not new, just better) to determine its real valuation (when you are being flooded with nonsense).

And of course, as pointed out by The Prophet,

  • going beyond pure S2P,
  • offering easy agentic co-worker interfaces, and
  • playing well with “AI”

will increase value, but that’s not the core of what you’re looking for.