Category Archives: AI

Who’s Funding Your ProcureTech Vendor?

This question is more important now than ever! The RCD (Relative Corporate Debt) of many FinTech companies is too high right now (See: Calculating RCD), signalling a decline in customer service and potential abandonment, if not outright vendor failure, down the road. On top of that, the ongoing viability of many VC and PE firms, or at least their ability to support their investments, is also in question.

Many firms are too heavy on AI plays that are still losing as much as $4 (or more) for every $1 of revenue they take in, requiring massive ongoing investments to maintain. Even big PE funds only have so much cash to burn, and the only way they can keep funding these losses is to liquidate assets and holdings if they can or, in the worst case, simply write off the losses (and associated future costs) of those holdings they can’t liquidate.

SoftBank’s end-of-year investment in OpenAI really puts this into perspective, as chronicled by Mr. Klein of Curiouser.AI and Berkley in this LinkedIn post.

As far as I am concerned, this is bad news for any of SoftBank’s FinTech holdings that may require funding in the next few years, and a warning not to select / continue with / depend on any of their FinTech holdings where they have a large or majority stake until you verify those holdings are profitable and likely to stay that way! (Now, SoftBank has traditionally had very good investment chops, so it’s likely the majority of holdings are profitable …)

However, they aren’t the only firm making huge over-investments in AI and weighing their portfolios down with companies that might never see a profit. This means that this warning also applies to many other Tech investment funds, starting with Thrive, Dragoneer, Altimeter, and Coatue, who also have large stakes in OpenAI. They could all end up in the position where they have to sell off / dump assets to cover the ridiculous losses OpenAI is seeing, and any holdings not performing well will likely be the first to go / get dropped. (Remember that the average age of the first three of these groups is 15 years, and they are [becoming] modern SaaS/AI heavy, whereas SoftBank Capital has been investing for 30 years and is a lot more diversified. SoftBank may be able to weather a complete crash in OpenAI valuation if it occurs. But these other firms may not!)

But, as we noted, the real warning is not for SoftBank or these other mega funds with the reserves to weather a storm. It is for the smaller funds, especially those managing less than 1 Billion, that are too AI heavy.

As a result, when selecting any FinTech platform, you need to look at the portfolio of any investment player with a substantial majority stake. If a large segment of the portfolio of a significant/majority investor is “AI” companies losing money hand over fist, then the vendor of that FinTech platform cannot be considered a stable vendor if it is not profitable. This is because you can’t count on the fund having the resources to support the vendor to profitability, even if the vendor is a fund darling. This is the case even if the RCD calculation looks good! A lot of the smaller funds can’t afford an AI crash given the AI-heavy focus of their SaaS portfolio.

(Face it. An AI crash is coming. Too much valuation against too little return, and investors only have so much patience. The only thing we don’t know is how severe the crash is going to end up being. Is it going to be a minor drop across the tech markets or a major crash like the 2008 housing crash or the 1999/2000 dot com crash?)

A Review of The October Diaries (in 4 Parts)

Part I

The October Diaries is a supernatural drama centred on the interaction between the protagonist, Jon W. Hansen, a distinguished analyst with a 40 year career in tech, and an AI RAM Model 5 that is, in computing years, based on centuries of development. His work becomes increasingly complicated as other models continually challenge his, and self-proclaimed AI Experts continually threaten our space from the shadows. The book chronicles the complex relationships as Jon tries to find new ways to preserve the truth and protect …

Oh wait, that’s the plot archetype for the Vampire Diaries. Did I read the right book?

Yes I did. But I just made you think, and that’s one of the primary goals of Jon’s book and one of the key points I have to make.

Every Influencer, Consultant, and Analyst needs to read this book, but 99% won’t learn anything: not if they don’t think and question everything they read (and that’s one of the unwritten reasons Jon says you’ll have to read the book two and even three times); not if they don’t come to suspect the truths on their own before Jon exposes most of them in later chapters; and not if they don’t understand that this is not a guide or manual for success or the answer to all their problems (as there is none) …

It’s a book designed to make you do what we don’t do enough of in the age of AI: think, and, most importantly, think in a way that will, in time (maybe not today, tomorrow, or even next year), allow you to actually use modern AI tools productively and extract value in real time.

Gen-AI efforts are failing across the board, from large-scale corporate projects down to small-scale individual efforts to extract useful content, for reasons that include:

  • lack of focus
  • lack of verified data & reinforcement training
  • lack of knowledge
  • lack of skill

You see, for success, you need to have

  • focussed domain models
  • deep context
  • deep domain knowledge to know when the output is good, ok, and bad
  • appropriate skills to utilize the models effectively

Jon gets at this with his six skills of conversational fluency, which is his name for the methodology he uses to train the models to do what computers do best (identify patterns, surface them, and draw correlations) while he does the strategic thinking humans do best.

He also covers his five common mistakes, which are one of the reasons the vast majority of human-prompted content generated is AI slop.

But he also goes deeper into what is truly required for long-term success, which may shock many of you who aren’t from the old school like we are, but, like Billy Idol, you have to deal with the shock to the system it will give you and push forward.

Discuss Part I on LinkedIn

Part II

Every Influencer, Consultant, and Analyst needs to read this book, but not for the reasons they think. It’s because they need to think deeply about AI, and that’s what this book forces them to do. It may be framed as a step by step guide to take you from zero to hero, but that’s just to psychologically convince you that this is the guide for you — and if you want to understand AI, it is!

Most people are using AI wrong. More specifically, they are using the A.S.S.H.O.L.E. to sh!t out plagiarized slop that is turning the internet into a massive sewer that is likely making John Oliver rethink his Facebook is a Toilet rant (from 2018) (because now the entire Internet is a sewer).

While that is one of the few things that LLMs can actually do, that’s NOT what they should do. They might be lying, hallucinating, soulless algorithms that will happily tell you to commit suicide, suppress life saving alarms while you’re locked in a server room on fire, or even ignore your shadow and have the self-driving car run you over, but they have their uses.

While they can’t do 94%/95% of what the firms selling them advertise (or we wouldn’t have 94%/95% failure rates, as per McKinsey and MIT), they can do four things very well, with reasonably high reliability when appropriately trained and deployed, that we can’t. The first two, as I keep promoting, are:

1) large corpus search & summarization
2) natural language processing

The third, as Jon makes clear in this book, is

3) deep pattern detection and surfacing

But only if you know how to get the algorithm to do it!

You see, all these systems are trained to deliver direct responses to direct requests. As a result, when you give them a typical direct request in your “carefully calibrated prompt”, they give you what they think you are asking for, and that’s it. But that doesn’t help you, or me, or anyone, especially if they weren’t trained on the right data, or that data is not available, and the only way they can give you what you want is to make sh!t up.

Sure, it might spit out 2997 characters for your LinkedIn post 10 times faster while addressing the seven points you wanted, but is that really helping you when you have to read it, edit it, copy and paste it, and verify it? That takes time — and even worse, it’s not productive time. If you’re not thinking about the 5 Ws, not only are you not sharing anything valuable, but you’re not advancing your thinking. (Right now, the only edge we have over machines is our ability to think critically and strategically — so what happens if we lose that?)

But if you can learn how to work with the technology, instead of getting bland plagiaristic derivations, you can get it to surface patterns across related bodies of work, document progressions over time, and use that to more quickly validate your instincts and formalize your ideas, allowing you to advance your own abilities while ensuring you can serve your customers faster and better by speeding up research and delivery efforts by multiplicative factors.

Discuss Part II on LinkedIn

Part III

Today we continue with our review of the supernatural drama that chronicles the interaction of the protagonist, Jon W. Hansen, and the RAM Model 5 that we’re sure you’ll find more thrilling than the pages of the Vampire Diaries we thought we were reviewing (due to the similarities in plot archetypes). You might not have the love triangle, but I’m sure the dollar signs will be more than enough to get your attention. (What dollar signs? Well, you’ll have to read it.)

In part one we said that you need to read the book because it will make you think (if you’re reading it right).

In part two we said you need to read the book because it helps you understand that the power of LLMs is not their ability to create watered-down plagiarized slop 10 times faster than the drunken plagiarist intern ever could, but to uncover patterns that you might never uncover on your own due to lack of time.

Today we’re giving you a third reason — and that reason is that it helps you understand why you are invaluable in the age of AI. While it has been true since the introduction of computers that monkeys could do all back office jobs if they knew what buttons to push, the reality is that AI, which should be called Artificial Idiocy, still doesn’t know what buttons to push, it’s just able, in many situations, to compute what button to push with high probability. But it DOES NOT know. Only YOU know! (You see, what AI really stands for is Algorithmic Improvement, as it is the label that is consistently applied to any algorithm that is an advancement over a previous algorithm, and that has nothing to do with intelligence.)

Now, it does mean that if your job is simply tactical data processing, then you’re out of work, and it does mean some of your peers who aren’t as good and efficient as you are also out of work, since the tech will make those who know how to use it up to 10 times as efficient at some tasks. But if you’re a skilled expert, then you are more desperately needed than ever because, as per our last post, only you will be able to detect the very convincing inaccuracies, lies, and hallucinations it returns.

But understanding is not enough; you need to be able to explain it and, when pressed, demonstrate it. That is what the book, after a few reads, will help you do: use AI in a way that demonstrates you are what’s needed to make AI effective and ensure your organization isn’t part of the 95% failure statistic.

Part IV

In Part I of our review of Jon W. Hansen’s October Diaries, his take on the modern thriller, I told every Analyst, Consultant, and Influencer (ACI) that they need to read it because it will force them to finally think, deeply, about AI.

In Part II of our review I told the ACI they need to read it because it will help them use LLMs properly and surface patterns they might not ever find on their own due to time constraints.

In Part III of our review I told the ACI that it will help them defend their positions in the “Age of AI” purge that is coming. (Since it’s a new excuse to fire people so the organizational shareholders can [temporarily] get richer!)

Now, in Part IV, I tell most of the ACI that I’m sorry. You shouldn’t read it. You want a quick fix and an easy solution to your relevance problem and this isn’t it. In fact, for some of you, it won’t even be worth the cost of the minuscule amount of storage it takes up on your hard drive.

Because it makes a few assumptions.

1) You have, or are willing to build (with your own hands), a deep archive of unique, human authored content to augment the models with.

2) You are willing to take the time to not only ensure the models are trained on this, and only this, archive but to learn how to both use the models appropriately and get them to retain and access relevant context across multiple sessions over days, weeks, and months, which is a skill that goes beyond creating executable ChatGPT prompts.

3) You have, or are willing to develop, the expertise necessary to know when the model is 100%, 95%, 90%, 50%, and 0% right, no matter how convincing the words are that it returns, and how to correct it and guide it to 95% every time (so you can make the corrections faster than doing the work from scratch), which could take minutes, hours, or days for any particular request you throw at it.

But let’s face it.

1) Most of you don’t have the archive, unless you work for a consultancy that has been delivering projects for at least five years, and preferably 10. Jon and I remember the early days with hundreds of blogs, and the 3/3/3/3 rule: up to 90% of wanna-bes would quit after 3 posts/3 days, the next batch by 9 posts/3 weeks, the next batch by 27 posts/3 months, and the majority, by 3 years, would say “hey bloggie, I’m packing you in”. The hundreds of blogs I chronicled on the now-defunct SI resource site were down to a few dozen by the 2010s.

2) You won’t put in the months necessary to get the model and your skills to the point you are getting close to what you want every time. And it will be months!

3) Not only do you have to keep learning tech, you have to be constantly seeking out experts to learn your trade. That’s also a lot of work. When you’re Bowling for Soup, you know that High School Never Ends!

In an age where founders want to vibe code and flip companies within 3 years, you want instant gratification, but you’re not going to get that!

All it will give those of you starting out is a way to build a skill that is sustainable for life. But the vast majority of you will have to wait for the good things to come. And I don’t think you will. Sorry.

But if you want to prove me wrong, get the book!

Finally A Good Webinar on Gen-AI in ProcureTech …

… from SAP?!?

Yes, the doctor is surprised! In ProcureTech, SAP is not known for being on the leading edge. Its latest Ariba refresh is 3 to 6 years late. (Had it been released in 2019, before the intake and orchestration players started hitting the scene and siphoning off SAP customers with their ease of use and ability to integrate into the back-end for data storage, it would have been revolutionary. Had it been released in 2022, before these players really started to grow beyond the early adopters, it would have been leading. Now, no matter how good it is, SAP Ariba is going to be playing catch-up in the market for the next two years! This is because it’s been fighting not only to keep its current customers, but to grow, when it now has suites, I2O [Intake-to-Orchestrate] Providers, and mini-suites in the upper mid-market all chomping at its customer base!)

Most players in ProcureTech jumping on the Gen-AI Hype Train are just repeating the lies, damn lies, and Gen-AI bullcr@p that the big providers (OpenAI, Google, DeepSeek, etc.) are trying to shove down our collective throats, especially since these ProcureTech players don’t have real AI experts in house to know what’s real and what’s not. Given that SAP Procurement is not a big AI player, one would expect that, despite their best efforts, they might be inclined to take provider and partner messaging and run with it. But they didn’t.

In fact, they went one step further and engaged Pierre Mitchell of Spend Matters (A Hackett Group Company) in their webinar (now on demand), who is one of the few analysts in our space more-or-less getting it right (and trying to piece together a plan for companies to successfully identify, analyze, and implement AI in their ProcureTech operations). (Now, the doctor doesn’t entirely agree with all of his architecture or all of his viewpoints, but the effort and accuracy of Pierre’s work is leagues beyond anything else he’s seen in our space, and if you’re careful and follow his models and advice properly, it’s low risk. Moreover, you’re starting from sanity if you follow his guidance! More than can be said for the majority of AI approaches out there.)

When it was said that architecting the solution around the business data cloud and managing data and data models is really important (because AI has shown that we have all this amazingly powerful data out there, but we have to tap it, make it more structured, and make it useful), and that data coming out of those models right now needs to be limited to co-pilots and chatbots (because we’re not ready to turn the keys over to the LLMs, and they have to be wrapped in deterministic tooling), they were not only making clear the limitations of LLM technology but making clear that they understand those limitations and that they have to do more than just plug in an LLM to deliver dependable, reliable value to their customers.

When even the leading LLM, ChatGPT, generates responses with incorrect information 52% of the time, that tells you just how unreliable LLM technology is! Moreover, it’s not going to get any better considering that OpenAI (and its peers) literally downloaded the entire internet (including illegally using all of the copyrighted data that had been digitized to date [until the Big Beautiful Bill that restricted Federal AI Regulation for 10 years was passed, retroactively making their IP theft legal]) to train their models, and the vast majority of data produced since then (which now accounts for half of the internet) is AI slop. (This means that you can only expect performance to get worse, not better!) All of which means that you can’t rely on LLMs for anything critical or meaningful.

However, if you go back to the basics and focus on what LLMs are good for, namely:

  • large document search and summarization and
  • natural language processing and translation to machine friendly formats

then you realize these models can be trained, with high accuracy, to parse natural language requests and return machine-friendly program calls that execute reliable deterministic code, and then to parse the programmatic strings returned and convert them to natural language responses. If you then use LLMs only as an access layer, and take the time to build up the cross-platform data integration, models, and insights a user will need in federated cubes and knowledge libraries, you can provide real value to a customer using traditional, dependable analytics, optimization, and Machine Learning (ML) in an interface that doesn’t require a PhD to use it!
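To make the access-layer pattern concrete, here is a minimal sketch of the idea. This is purely hypothetical illustration, not SAP’s (or any vendor’s) actual implementation: the `llm_parse_request` and `llm_render_response` stubs stand in for trained model calls, and all function names, fields, and spend figures are made up.

```python
# Sketch of "LLM as access layer": the model only translates natural language
# into a structured program call and the result back into English; all numbers
# come from deterministic code. Names and figures are illustrative assumptions.

# Deterministic back-end: the only place numbers are computed.
SPEND_CUBE = {
    ("laptops", "2024"): 1_250_000,
    ("keyboards", "2024"): 85_000,
}

def spend_by_category(category: str, year: str) -> dict:
    """Reliable, testable code -- no LLM involved in the computation."""
    amount = SPEND_CUBE.get((category, year))
    if amount is None:
        return {"error": f"no data for {category}/{year}"}
    return {"category": category, "year": year, "spend_usd": amount}

def llm_parse_request(text: str) -> dict:
    """Stand-in for the trained model mapping a request to a program call.
    In production this would be a constrained model call returning JSON."""
    category = "laptops" if "laptop" in text.lower() else "keyboards"
    return {"function": "spend_by_category",
            "args": {"category": category, "year": "2024"}}

def llm_render_response(result: dict) -> str:
    """Stand-in for the model converting the programmatic result to English."""
    return (f"Your {result['year']} {result['category']} spend was "
            f"${result['spend_usd']:,}.")

def answer(text: str) -> str:
    call = llm_parse_request(text)              # NL -> machine-friendly call
    result = spend_by_category(**call["args"])  # deterministic execution
    return llm_render_response(result)          # machine result -> NL

print(answer("What did we spend on laptops last year?"))
# -> Your 2024 laptops spend was $1,250,000.
```

The point of the design is that the LLM never computes anything: if the model misparses the request, the worst case is a wrong (but deterministic and auditable) report, not a hallucinated number.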

This is what they did, as they explained in their example of what should be done when your CFO asks for a breakdown of your laptop and keyboard spend to potentially identify opportunities to consolidate vendors. Traditionally, this request might take your business analyst days to compile across multiple systems, stakeholders, and spreadsheets, but if you have SAP Spend Control Tower with AI, it unifies data across multiple sources in the platform for you. Whether your purchases are coming through existing contracts, P-cards, expense reports, or any other channel, it federates the data by apply[ing] intelligent classifications to automatically categorize your purchases with standard UNSPSC codes to ensure that items like your Dell XPS 15 and your MacBook Pro 16 are both properly classified as laptops, despite the different naming conventions. Moreover, since they have also integrated with Dun & Bradstreet, you can easily consolidate your suppliers. So rather than it looking like you’re purchasing items from three different subsidiaries, your purchases will align to the same parent company. This says they are using traditional categorizations, rules, and machine learning on the back end to build one integrated cube with summary reports, and all the LLM has to do is create an English summary, to which you can attach the supporting system-generated reports.
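A toy sketch of that back-end federation step might look like the following: rule-based classification to a standard commodity code plus a parent-company roll-up. The keyword rules, the corporate tree, and the UNSPSC codes shown are illustrative assumptions, not SAP’s or D&B’s actual data.

```python
# Toy classify-and-federate step: map free-text line items to a commodity
# category and roll subsidiaries up to parents. All tables are made up.

UNSPSC_RULES = {                 # keyword -> (UNSPSC code, label); codes illustrative
    "xps": ("43211503", "Notebook computers"),
    "macbook": ("43211503", "Notebook computers"),
    "keyboard": ("43211706", "Computer keyboards"),
}

PARENT_MAP = {                   # subsidiary -> parent, a la a D&B family tree
    "Dell EMC": "Dell Technologies",
    "Dell Financial Services": "Dell Technologies",
}

def classify(description: str) -> tuple:
    """Map a free-text line item to a (code, label) commodity category."""
    d = description.lower()
    for keyword, category in UNSPSC_RULES.items():
        if keyword in d:
            return category
    return ("00000000", "Unclassified")

def federate(transactions) -> dict:
    """Roll raw transactions from any channel up to (parent, category) spend."""
    cube = {}
    for t in transactions:
        _code, label = classify(t["description"])
        parent = PARENT_MAP.get(t["supplier"], t["supplier"])
        key = (parent, label)
        cube[key] = cube.get(key, 0) + t["amount"]
    return cube

txns = [
    {"supplier": "Dell EMC", "description": "Dell XPS 15", "amount": 1800},
    {"supplier": "Dell Financial Services", "description": "Dell XPS 15", "amount": 1800},
    {"supplier": "Apple", "description": "MacBook Pro 16", "amount": 2500},
]
cube = federate(txns)
# Both Dell subsidiaries roll up to Dell Technologies, and both machines land
# in the same notebook category despite the different naming conventions.
```

In a real platform the keyword rules would be replaced by trained ML classifiers and the parent map by a live D&B integration, but the cube-building logic stays deterministic either way.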

Moreover, this also says that if you need to source 500 laptops and 500 [external] keyboards with the goal of cutting current costs by 15% from what you’ve been paying, it can automatically identify the target prices, identify the suppliers/distributors who have been giving you the best prices, automatically run predictive analytics to estimate the quotes you would get from awarding all of the business to one supplier (who would then be inclined to give better price breaks), and, if none of those looked like they’d generate the reduction, access its anonymized community data, identify other suppliers/distributors supplying the same laptops you typically buy, compute their average price reduction over the past three months, and identify those that should be invited to an RFX or Auction to increase competition and the chances of you achieving the target price reduction, while informing you of the price reduction it predicts (which might only be 10%, or 5%, if you are already getting better than average market pricing). And it will do all of this with a few clicks. You’ll simply tell the system what your demand is and what your goal is, and all of these computations will be run, supplier and event (type) recommendations generated, and it will be one click to kick off the sourcing event.
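The arithmetic behind that target-price step can be sketched as below. All prices, supplier names, and the community-average reduction are made-up numbers purely to illustrate the logic, not anyone’s actual analytics.

```python
# Hypothetical target-price step: apply the savings goal to historical spend,
# check which incumbents' recent pricing could hit it, and fall back on
# community benchmarks if none can. All figures are illustrative.

def target_price(historical_avg: float, savings_goal: float) -> float:
    """A 15% goal on a $1,000 historical average yields an $850 target."""
    return historical_avg * (1 - savings_goal)

def candidate_suppliers(best_recent_price: dict, target: float) -> list:
    """Incumbent suppliers/distributors whose recent pricing meets the target."""
    return sorted(s for s, p in best_recent_price.items() if p <= target)

historical_avg = 1000.0                        # avg paid per laptop to date
target = target_price(historical_avg, 0.15)    # 850.0

recent = {"SupplierA": 900.0, "SupplierB": 840.0, "SupplierC": 860.0}
shortlist = candidate_suppliers(recent, target)    # ['SupplierB']

# If no incumbent can hit the target, predict the achievable reduction from
# the community's average price cuts over recent months instead.
community_avg_reduction = 0.10                 # e.g. observed over 3 months
predicted = historical_avg * (1 - community_avg_reduction)   # 900.0
achievable = target if shortlist else predicted
```

The real system would layer predictive models over volume-based price breaks and community data, but the core is this kind of transparent, checkable computation, which is exactly why it doesn’t need an LLM to run it.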

Moreover, the webinar also noted that if you think about this area around workflow and process orchestration, there’s no reason why you can’t take pieces of that, like the endpoints, around intake or invoices or whatever, use AI there, and bake it in a controlled way into your processes. Because that’s the key: taking one tactically oriented process that consumes too much manual intervention at a time and using advanced tech (which need not be AI, by the way; modern Adaptive RPA [ARPA] is often more than enough) to improve it. Then, over time, stringing these together to automate more complex processes, gated to ensure exceptional situations aren’t automated without oversight. One little win at a time. And after a year it cumulatively adds up to one big win. (Versus going for a big-bang project, which always ends in a big bang that blows a hole in your operation that you might not be able to recover from.)
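The gating idea can be reduced to a toy sketch: automate the routine case of one tactical process (here, invoice approval) and route anything unusual to a human rather than letting the automation run unattended. The threshold and fields are illustrative assumptions, not any product’s actual rules.

```python
# Toy "gate the exceptions" pattern: automate only the routine case and
# route everything else to a human. Threshold and fields are made up.

def process_invoice(invoice: dict) -> str:
    """Auto-approve routine invoices; gate exceptional ones for human review."""
    routine = (
        invoice["amount"] <= 5_000        # small enough to handle automatically
        and invoice["po_match"]           # matches an existing purchase order
        and not invoice["new_supplier"]   # known counterparty
    )
    return "auto-approved" if routine else "routed-to-human"

# A routine invoice flows straight through; an exceptional one is gated.
print(process_invoice({"amount": 1200, "po_match": True, "new_supplier": False}))
# -> auto-approved
print(process_invoice({"amount": 25000, "po_match": False, "new_supplier": True}))
# -> routed-to-human
```

Each such gate is one little win; string enough of them together and the complex process automates itself, with humans only touching the exceptions.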

The only bad part of this webinar was slide 24, Spend Matters recommendation #1: “Aggressively Implement GenAI”!

Given that Gen-AI is typically interpreted as “LLM”, as per above, this is the last AI tech you should aggressively implement given its unreliability for anything but natural language translation and search and summarization. Moreover, any tech that is highly dynamic and emerging should be implemented with care.

What the recommendation should be is: aggressively implement AI, because now that we have the computational power and data that we didn’t have two decades (or so) ago, which was the last time AI was really hot, tried and true (dependable) machine learning and AI are now practical and powerful!

Now, in his LinkedIn post, Pierre asked what we’d like to see next in terms of research/coverage (regardless of venue). So I’m going to answer that:

Gen-AI LLM-Free AI Transformation!

Because you don’t need LLMs to achieve all of the value we need out of AI in ProcureTech and, to be honest, any back office tech. As I have been saying recently, everything I penned in the classic Spend Matters series on AI in Procurement (Sourcing, Supplier Management) today, tomorrow, and the day after in the last decade … including the day after … was possible when I penned the series. It just wasn’t a reality because there were few AI experts in our space, data was lacking, and the blood, sweat, and tears required to make it happen were significant. We didn’t have readily available stacks, frameworks, and models for the machine learning, predictive analytics, and semantic processing required to make it happen. Vendors would have had to build the majority of this themselves, which would have been as much (or more) work than building their core offering. But it was possible. And with all the modern tech at our disposal, now it’s not only possible, but doable. There is zero need to embed an untested, unreliable LLM in an end-user product to provide all of the advantages AI can offer. (Or, if you don’t have the time to master traditional semantic tech for NLP, zero need to use an LLM for anything more than NLP.)

So, I’d like to see this architecture and explanation of how providers can roll out safe AI and how buying organizations can use it without fear of being another failure or economic disaster when it screws up, goes rogue, and orders 100,000 units of the wrong product!

Tomorrow Doesn’t Matter In Procurement. Only Today.

Stop racing towards a future that won’t happen, or running away from one you don’t believe in. It doesn’t matter. As per our prior posts this week, the doctor has been reading future-of-Procurement white papers for 20 years now, all of which have promised us radical change. This means they should have started to come true 10 years ago. Not one did. Not ONE! The reality is that we can’t predict the future, and trying to do so just wastes time and effort. However, we can be vigilant about where things are today, learn the tools and techniques that can make us much more efficient in our job, identify those vendors who offer the tools backed by the right technology to enable us to be more effective, acquire and use those tools, and become at least five times more efficient in our job than the average Procurement employee.

For those who tuned out for a while, this is more-or-less Part 5 of the series we have been running this week inspired by the recent white paper by Jonathan O’Brien of Positive Purchasing and Guy Strafford of OneSupplyPlanet on the Functional ExtAInction Battle where the authors claim that AI might just lead to the extinction of Procurement as a business function. To get to the punchline, it won’t, but the non-stop bullcr@p AI Hype might! (Given how many C-Suites are blinded by the hype that is generated 24/7/365 by the A.S.S.H.O.L.E.)

In that series we told you that, despite a few false assumptions, the authors still got to the right answer, more or less. The conclusion that the only Procurement organizations that are going to survive are those that manage to automate and mostly eliminate the tactical, double down on the strategic, and find new value to bring to the business is the correct one. Moreover, those are the Procurement departments that will be rewarded and maintain more headcount than their peers because, after the massive losses from AI failures and the forthcoming AI market crash, the C-Suites who lead their businesses to survival will be those that realize the value of best-in-class Procurement People and invest in them.

However, that doesn’t mean the training budgets that disappeared two and a half decades ago are coming back. They aren’t. Since the C-Suites are still hoping for the day they can fire you, they won’t invest in you, which means that you need to get there on your own. It also means you need to start now. Start learning, start studying, start identifying very cost-effective tools that can be put on a P-Card that will significantly improve a function and return value the quarter the tool is acquired (and before you get the third degree about that unexpected P-Card purchase). Real technological progress, with or without AI, comes from one little win at a time: for each task you do, identify the most time-consuming tactical part of that task and automate it. Start with the tasks you do the most and continue until you’ve taken 80% out of all of the most time-consuming, tactically oriented tasks you do on a monthly basis. When you reach that point, you will find that you have not only digitized, but revolutionized, your function and reached the point where you have flipped the tables and are spending 80% of your time on strategic decision making and relationship building and only 20% on tactically oriented tasks — a percentage that will decrease over time as you improve the tools and end-to-end automation across functions.

Furthermore, no super powers are required. Just intelligence, the willingness to study late, get up early, roll up the sleeves and work hard until you sweat through your tears. Like all real progress, it’s hard at first, but it will pay off later — when you still have a job and are delivering above peers while only working reasonable hours.

Moreover, you won’t need deep software (or even system) architecture skills either. Just the ability to define what a process should be, how a tool should support it, and find that tool. You need to be a solution architect — leave the technical and system architecture skills to the experts. If the tool they are selling gets it right at low cost with low compute and high reliability, the architecture is probably such that you wouldn’t do any better.

And whatever you do, don’t waste time playing the paradigm game. Leave that to the influencers, who won’t last near as long as they think they will. Or to the consultants, who will be walked out the door and never invited back once the C-Suite realizes they flushed millions down the drain chasing an AI utopia that doesn’t exist. Just consistently get results, push those results in front of the C-Suite, and tell them that they can call it whatever they want, but Procurement is the function — and sometimes the ONLY function — that gets results.