
The Gen AI Fallacy

For going on seven (7) decades, AI cult members have been telling us that if they just had more computing power, they’d solve the problem of AI. For going on seven (7) decades, they haven’t.

They won’t as long as we don’t fundamentally understand intelligence, the brain, or what is needed to make a computer brain.

Computing will continue to get exponentially more powerful, but it’s not just a matter of more powerful computing. The first AI program had a single core to run on. Today’s AI programs run on 10,000-core super clusters. The first AI programmer had only his salary and elbow grease to code and train the model. Today’s AI companies have hundreds of employees and billions in funding, and have spent $200M to train a single model … which, upon release to the public, told us we should all eat one rock per day. (Which shouldn’t be unexpected, as the number of cores we have today powering a single model is still less than the number of neurons in a pond snail.)

Similarly, the “models” will get “better”, relatively speaking (just like deep neural nets got better over time), but if they are not 100% reliable, they can never be used in critical applications, especially when you can’t even reliably predict confidence. (Or, even worse, you can’t even have confidence the result won’t be 100% fabrication.)

When the focus was on narrow machine learning and focussed applications, and on accepting the limitations we had, progress was slow, but it was there, it was steady, and the capabilities and solutions improved yearly.

Now the average “enterprise” solution is decreasing in quality and application, which is going to erase decades of building trust in the cloud and reliable AI.

And that’s the fallacy. Adding more cores and more data just accelerates the capacity for error, not improvement.

Even a smart Google Engineer said so. (Source)

Challenging the Data Foundation ROI Paradigm

Creactives SpA recently published a great article Challenging the ROI Paradigm: Is Calculating ROI on Data Foundation a Valid Measure, which was made even greater by the fact that they are technically a Data Foundation company!

In a nutshell, Creactives is claiming that trying to calculate direct ROI on investments in data quality itself as a standalone business case is absurd. And they are totally right. As they say, the ROI should be calculated based on the total investment in data foundation and the analytics it powers.

The explanation they give cuts straight to the point.

It is as if we demand an ROI from the construction of an industrial shed that ensures the protection of business production but is obviously not directly income-generating. ROI should be calculated based on the total investment, that is, the production machines and the shed.

In other words, there’s no ROI on Clean Data or on Analytics on their own.

And they are entirely correct — and this is true whether you are providing a data foundation for spend analysis, supplier discovery and management, or compliance. If you are not actually doing something with that data that benefits from better data and better foundations, then the ROI of the data foundation is ZERO.

Creactives is helping to bring to light three fallacies that the doctor sees all the time in this space. (This is very brave of them, considering that they are the first data foundation company to admit that their value is zero unless embedded in a process that will require other solutions.)

Fallacy #1. A data cleansing/enrichment solution on its own delivers ROI.

Fallacy #2. You need totally cleansed data before you can deploy a solution.

Fallacy #3. Conversely, you can get ROI from an analytics solution on whatever data you have.

And all of these are, as stated, false!

ROI is generated from analytics on cleansed and enriched data. And that holds true regardless of the type of analytics being performed (spend, process, compliance, risk, discovery, etc.).

And that’s okay, because this is a situation where the ROI from both is often exponential, and considerably more than the sum of its parts. Especially since analytics on bad data sometimes delivers a negative return! What the analytics companies don’t tell you is that the quality of the result is fully dependent on the quality, and completeness, of the input. Garbage in, garbage out. (Unless, of course, you are using AI, in which case, especially if Gen-AI is any part of that equation, it’s garbage in, hazardous waste out.)

So compute the return on both. (And it’s easy to partition the ROI by investment. If the data foundation is 60% of the investment, it is responsible for 60% of the return, and its ROI is simply (0.6 × Return) / (0.6 × Investment).)
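The proportional split described above can be sketched in a few lines of Python. (The dollar figures below are hypothetical, purely for illustration.)

```python
def partition_roi(total_return, total_investment, foundation_share):
    """Attribute return to the data foundation in proportion to its
    share of the total investment (the proportional split described
    above), and compute the ROI on that attributed slice."""
    foundation_investment = foundation_share * total_investment
    foundation_return = foundation_share * total_return
    # ROI of the attributed slice = attributed return / attributed investment
    roi = foundation_return / foundation_investment
    return foundation_investment, foundation_return, roi

# Hypothetical figures: $1M total investment, $3M total return,
# with the data foundation making up 60% of the investment.
inv, ret, roi = partition_roi(3_000_000, 1_000_000, 0.6)
print(inv, ret, roi)  # → 600000.0 1800000.0 3.0
```

Note that proportional attribution gives every component the same ROI as the package as a whole, which is exactly the point: neither the data foundation nor the analytics earns its return on its own.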

Then, find additional analytics-based applications that you can run on the clean data, increase the ROI exponentially (while decreasing the cost of the data foundation in the overall equation), and watch the value of the total solution package soar!

Procurement 2024 or Procurement’s Greatest Hits? McKinsey’s on the money, but … Part 4

… in some cases this is money you should have been on a decade ago!

Let’s backtrack. As we noted in Part 1, McKinsey ended Q1 by publishing a piece on Procurement 2024: The next ten CPO actions to meet today’s toughest challenges which had some great advice, but in some cases these were actions that your Procurement organization should have been taking five, if not ten years ago. And, if your organization was doing so in these cases, should be moving on to true next actions the article didn’t even address.

So, as you probably guessed, we’re in the midst of discussing each one, giving credit where credit is due (they are pretty good at strategy after all), indicating where they missed a bit, and telling you what to do next if you are already doing the actions you should have been doing years ago. And, just like we did with THE PROPHET’s predictions, we’ll grade them. In this fourth and final instalment, we’ll tackle the last two actions, which they group under the heading of:

OPERATING MODEL OF THE FUTURE

9. Digitize end-to-end procurement processes. A-

This is yet another action that you should have been working on since the first Procurement platform hit the market over 25 years ago, but an action that you likely couldn’t have completed until recently, when the introduction of orchestration platforms made the interconnection of all the systems used by Procurement affordable for the average company, even when the company needed connectivity with systems in Finance, Logistics, Risk, and Supply Chain to access the information Procurement needs to adequately do its job. (Before these platforms, you needed to be a large enterprise with a huge IT budget to afford the integration work required to attempt this.) Plus, the “AI” tools you need to digitize the paper documents still used by old-school suppliers, classify data, process contracts to identify potential issues, etc. used to be too pricey; now that seemingly every vendor has them, they are affordable too.

10. Build new capabilities for the buyer of the future. A+

Prepare the organization for Procurement’s future by investing in new abilities for advanced market research, integrated technology, and talent development. Equip procurement professionals with deep insights and tools to understand and address supply market dynamics, risks, economics, and ESG. Given how supply chains have been in constant flux since the start of COVID, with no end in sight as a result of geo-political conflict, natural events (like the drought affecting the Panama Canal) and disasters, supply shortages, rampant cost increases, and so on, the organization is going to need every capability available today, and a few not invented yet, to survive tomorrow. Start researching, testing, and developing now … before it’s too late. (Just leave Gen-AI out of the picture!)

So, putting it all together, the grades were B, B, A, B+, A-, A, B+, A+, A-, A+. Not a bad report card.

Procurement 2024 or Procurement’s Greatest Hits? McKinsey’s on the money, but … Part 3

… in some cases this is money you should have been on a decade ago!

Let’s backtrack. As we noted in Part 1, McKinsey ended Q1 by publishing a piece on Procurement 2024: The next ten CPO actions to meet today’s toughest challenges which had some great advice, but in some cases these were actions that your Procurement organization should have been taking five, if not ten years ago. And, if your organization was doing so in these cases, should be moving on to true next actions the article didn’t even address.

So, as you probably guessed, we’re in the midst of discussing each one, giving credit where credit is due (they are pretty good at strategy after all), indicating where they missed a bit, and telling you what to do next if you are already doing the actions you should have been doing years ago. And, just like we did with THE PROPHET’s predictions, we’ll grade them. In this third instalment, we’ll tackle the next three actions, which they group under the heading of:

INTEGRATED MARGIN MANAGEMENT

Coordinate Response for Integrated Margin Management. B+

This is something that should have been done since your Procurement department started strategic sourcing — without a good estimate of the supplier’s cost of goods sold (COGS), there’s no way to know if the bid, or negotiated price, is good or not, or how much margin you are getting taken for (especially if there is price collusion among all the suppliers you invited to the event to keep bids in a certain range, no matter what).

However, in order to truly understand the COGS, you need to monitor commodity and raw material costs in near real-time; understand the labour, energy, and water requirements in production and the local market costs where the supplier’s production facilities are; and monitor the local transportation costs (and any surges due to fuel increases, truck or container shortages, re-routings due to route closures or geopolitical situations, etc.). That’s a much taller order than you had to worry about last decade to maintain a good grip on your supplier’s current COGS. You’ll even have to work closely with engineering, supply chain, and logistics to make sure you’re getting it right.

Redefine Portfolio and Product Design. A+

While the latter is something R&D/Engineering should be doing every few years, it’s now critically important that Procurement get involved and guide engineering with respect to which products are, or are in danger of, becoming too expensive due to material scarcity, a limited supply base that can manufacture the products, and changing consumer perception and desires regarding what they want from a certain product. As the article says, it’s becoming critical to help R&D/Engineering look for ways to reduce these dependencies, expedite qualification, and increase resilience by pointing out key products to address, key concerns, potential alternatives that are now available to the organization, and so on.

Procurement also needs to aggressively work with Marketing and Sales to shift demand away from products that need to be sunset due to end of life, cost, supply, or sustainability considerations, as well as shift demand to products that are more profitable, sustainable, or secure from a supply availability perspective. It can no longer be about what Sales thinks it can sell, but what Sales needs to sell for the organization as a whole to be successful.

Bring Back the Interns!

Even the offshore interns!

And since, like Meat Loaf,

I know that I will never be politically correct
And I don’t give a damn about my lack of etiquette

I’m going to come out and say I long for the days when AI meant “Another Indian”. (In the 2000s, the politically incorrect joke when a vendor said they had AI, especially in spend classification, was that the AI stood for “Another Indian” in the backroom manually doing all of the classifications the “AI” didn’t do and redoing all the classifications the “AI” got wrong over the weekend when the vendor, who took your spend database on Friday, promised to have it by Monday).

The solution providers of that time may have been selling you a healthy dose of silicon snake oil, but at least the spend cube they provided was mostly right and reasonably consistent (compared to one produced with Gen-AI). (The interns may not have known the first thing about your business and classified brake shoes under apparel, but they did it consistently, and it was a relatively easy fix to remap everything on the next nightly refresh.)

At the end of the day, the doctor would rather have one competent real intern than an army of bots where you don’t know which will produce a right answer, which will produce a wrong answer, and which will produce an answer so dangerous that, if executed and acted on, it could financially bankrupt the company or effectively destroy it with the brand damage it would cause.

After all, nothing could stop me from giving that competent, intelligent intern tested playbooks, similar case studies, and real software tools that use proper methodologies and time-tested algorithms guaranteed to give a good answer (even if not necessarily the absolute best answer), plus access to internal experts who can help if the intern gets stuck. Maybe I only get a 60% or 70% solution at best, but that’s significantly better than a 20% solution, infinitely better than a 0% solution, and unmeasurably better than a solution that bankrupts the business. Especially if I limit the tasks the intern is given to those that don’t have more than a moderate impact on the business (and then use that intern to free up the more senior resources for the tasks that deserve their attention).

As for all the claims that the “insane development pace” of (Gen)-AI will soon give us an army of bots where each bot is better than an intern, given that the most recent instantiation of Gen-AI released to the market, on which $200 MILLION was spent for development and training, is telling us to eat one ROCK a day (digest that! I sure can’t!), I’d say the wall has been hit, been hit hard, and until we have a real advancement in understanding intelligence and in modelling intelligence, you can forget any further GENeric improvements. (Improvements in specific applications, especially based on more traditional machine learning, sure, but this GEN-AI cr@p, nope.)

When it comes to AI, it’s not just a matter of more compute power. That was clear to those of us who really understood AI a couple of decades ago. AI isn’t new. Researchers were discussing it in the (19)50’s; ’56 saw the creation of Logic Theorist, which was arguably the birth of Automated Reasoning; ’59 saw the founding of the MIT AI lab by McCarthy and Minsky; and ’63, in addition to seeing the publication of “Computers and Thought”, saw the announcement of “A Pattern Recognition Program That Generates, Evaluates, and Adjusts Its Own Operators”, which was arguably the first AI program (as AI needs to adjust its parameters to “learn”).

That was over SIXTY (60) years ago, and we still haven’t made any significant advances towards “AI”.

Remember that we were told in the ’70s that AI would reshape computing. Then we were told in the ’80s that the new fifth generation computer systems they were building would give us massively parallel computing, advances in logic, and lay the foundation for true AI systems. It never happened. Then, when the cloud materialized in the ’00s, we saw a resurgence in distributed neural nets and were told AI would save the day. Guess what? It didn’t. Now we’re being told the same bullshit all over again, but the reality is that we’re no closer now than we were in the ’60s. First of all, while computing is 10,000 times more powerful than it was six decades ago (as these large models have 10,000 cores), at the end of the day, a pond snail has more active neurons (than these models have cores), and more neuronal connections, in its brain. Secondly, we still don’t really understand how the brain works, so these models still don’t have any intelligence (and the pond snail is infinitely more intelligent). (So even when we reach the point when these systems are one million times bigger than they are today, which could happen this century, we still won’t have intelligence.)

So bring back the interns, especially the ones in India. With five times the population of the US, statistically speaking, India has five times the number of smart people, and your chances of success are looking pretty good compared to using an application that tells you to eat rocks.