Finally A Good Webinar on Gen-AI in ProcureTech …

… from SAP?!?

Yes, the doctor is surprised! In ProcureTech, SAP is not known for being on the leading edge. Its latest Ariba refresh is 3 to 6 years late. (Had it been released in 2019, before the intake and orchestration players started hitting the scene and siphoning off SAP customers with their ease of use and ability to integrate into the back-end for data storage, it would have been revolutionary. Had it been released in 2022, before these players really started to grow beyond the early adopters, it would have been leading. Now, no matter how good it is, SAP Ariba is going to be playing catch-up in the market for the next two years! This is because it’s been fighting not only to keep its current customers, but to grow, when it now has suites, I2O [Intake-to-Orchestrate] providers, and mini-suites in the upper mid-market all chomping away at its customer base!)

Most players in ProcureTech jumping on the Gen-AI Hype Train are just repeating the lies, damn lies, and Gen-AI bullcr@p that the big providers (OpenAI, Google, DeepSeek, etc.) are trying to shove down our collective throats, especially since these ProcureTech players don’t have real AI experts in-house to know what’s real and what’s not. Given that SAP Procurement is not a big AI player, one would expect that, despite their best efforts, they might be inclined to take provider and partner messaging and run with it. But they didn’t.

In fact, they went one step further and engaged Pierre Mitchell of Spend Matters (A Hackett Group Company) for their webinar (now on demand); he is one of the few analysts in our space more-or-less getting it right (and trying to piece together a plan for companies to successfully identify, analyze, and implement AI in their ProcureTech operations). (Now, the doctor doesn’t entirely agree with all of his architecture or all of his viewpoints, but the effort and accuracy of Pierre’s work are leagues beyond anything else he’s seen in our space, and his approach, if you’re careful and follow his models and advice properly, is low risk. Moreover, you’re starting from sanity if you follow his guidance, which is more than can be said for the majority of AI approaches out there.)

When they said that architecting the solution around the business data cloud, and managing the data and data models, is really important (because AI has shown that we have all this amazingly powerful data out there, but we have to tap it, make it more structured, and make it useful); that, given the quality of the data coming out of those models right now, their use needs to be limited to co-pilots and chatbots because we’re not ready to turn the keys over to the LLMs; and that the LLMs have to be wrapped in deterministic tooling, they were not only making the limitations of LLM technology clear, but making it clear that they understand those limitations and that they have to do more than just plug in an LLM to deliver dependable, reliable value to their customers.
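To make that concrete, here’s a minimal sketch (in Python, and definitely not SAP’s actual implementation) of what wrapping an LLM in deterministic tooling can look like: the LLM is only allowed to propose a structured action, and deterministic code validates that proposal against a fixed schema and allow-list before anything executes. The `call_llm` stub, the action list, and the field names are all illustrative assumptions.

```python
import json

# Minimal sketch, assuming a hypothetical LLM endpoint: the model may only
# propose one of these pre-defined actions, with typed parameters.
ALLOWED_ACTIONS = {
    "spend_summary": {"category": str, "period_months": int},
    "supplier_lookup": {"supplier_name": str},
}

def call_llm(prompt: str) -> str:
    """Hypothetical stand-in for your model endpoint; returns a canned proposal here."""
    return '{"action": "spend_summary", "params": {"category": "laptops", "period_months": 12}}'

def propose_and_validate(user_request: str) -> dict:
    """Ask the LLM for a structured proposal, then gate it deterministically."""
    raw = call_llm(
        f"Translate this request into one JSON action from {list(ALLOWED_ACTIONS)}: {user_request}"
    )
    proposal = json.loads(raw)              # anything that isn't valid JSON is rejected here
    action = proposal.get("action")
    if action not in ALLOWED_ACTIONS:       # deterministic gate: unknown action -> refuse
        raise ValueError(f"Action {action!r} is not permitted")
    for field, field_type in ALLOWED_ACTIONS[action].items():
        if not isinstance(proposal.get("params", {}).get(field), field_type):
            raise ValueError(f"Missing or mistyped parameter: {field}")
    return proposal                         # only now is it handed to deterministic code

print(propose_and_validate("How much did we spend on laptops over the last year?"))
```

The point of the gate is that the LLM never gets the keys: it can only suggest one of the actions you have already decided are safe, and anything malformed is refused rather than executed.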

When even the leading LLM, ChatGPT, generates responses with incorrect information 52% of the time, that tells you just how unreliable LLM technology is! Moreover, it’s not going to get any better, considering that OpenAI (and its peers) literally downloaded the entire internet (including illegally using all of the copyrighted data that had been digitized to date [until the Big Beautiful Bill that restricted Federal AI Regulation for 10 years was passed, retroactively making their IP theft legal]) to train their models, and the vast majority of data produced since then (which now accounts for half of the internet) is AI slop. (This means that you can only expect performance to get worse, not better!) In short, you can’t rely on LLMs for anything critical or meaningful.

However, if you go back to the basics and focus on what LLMs are good for, namely:

  • large document search and summarization and
  • natural language processing and translation to machine-friendly formats

then you realize these models can be trained with high accuracy to parse natural language requests and return machine-friendly program calls that execute reliable, deterministic code, and then to parse the programmatic strings returned and convert them into natural language responses. If you then use LLMs only as an access layer, and take the time to build up the cross-platform data integration, models, and insights a user will need in federated cubes and knowledge libraries, you can provide real value to a customer using traditional, dependable analytics, optimization, and Machine Learning (ML) in an interface that doesn’t require a PhD to use.
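For illustration, here’s a hedged little sketch of that “LLM as an access layer only” pattern: deterministic code queries a pre-built spend cube and computes the answer, and the natural language layer (a plain template here, though an LLM could phrase the same numbers more fluently) only renders it. The function names, fields, and figures are made up for the sketch.

```python
from dataclasses import dataclass

@dataclass
class SpendResult:
    category: str
    total: float
    supplier_count: int

def query_spend_cube(category: str, period_months: int) -> SpendResult:
    """Deterministic analytics against a pre-built federated spend cube (stubbed).
    In a real system this would be a SQL/OLAP query; hard-coded here for the sketch."""
    return SpendResult(category=category, total=1_250_000.00, supplier_count=7)

def render_answer(result: SpendResult) -> str:
    """Turn the deterministic result into English. A template guarantees the numbers
    in the response are exactly the ones the cube returned."""
    return (f"Over the selected period you spent ${result.total:,.2f} on "
            f"{result.category} across {result.supplier_count} suppliers.")

if __name__ == "__main__":
    print(render_answer(query_spend_cube(category="laptops", period_months=12)))
```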

This is what they did, as they explained in their example of what should happen when your CFO asks for a breakdown of your laptop and keyboard spend to potentially identify opportunities to consolidate vendors. Traditionally, this request might take your business analyst days to compile across multiple systems, stakeholders, and spreadsheets. But if you have SAP Spend Control Tower with AI, it unifies the data across multiple sources in the platform for you. Whether your purchases are coming through existing contracts, p-cards, expense reports, or any other channel, it federates the data by applying intelligent classifications to automatically categorize your purchases with standard UNSPSC codes, ensuring that items like your Dell XPS 15 and your MacBook Pro 16 are both properly classified as laptops despite the different naming conventions. Moreover, since they have also integrated with Dun & Bradstreet, you can easily consolidate your suppliers; rather than it looking like you’re purchasing items from three different subsidiaries, your purchases will roll up to the same parent company. This says they are using traditional categorizations, rules, and machine learning on the back end to build one integrated cube with summary reports, and all the LLM has to do is create an English summary, to which the supporting system-generated reports can be attached.
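As a rough illustration of the kind of back-end work that implies (deterministic classification plus supplier-parent rollup feeding one aggregated cube), consider the sketch below. The UNSPSC code, the parent-company map, and the record layout are placeholders, not SAP’s or D&B’s actual data structures, and a real implementation would use a trained classifier and a corporate hierarchy service rather than lookup tables.

```python
from collections import defaultdict

UNSPSC_LAPTOP = "43211503"        # illustrative UNSPSC code for notebook computers

CATEGORY_RULES = {                # in practice: a trained classifier plus rules, not a dict
    "dell xps 15": UNSPSC_LAPTOP,
    "macbook pro 16": UNSPSC_LAPTOP,
}

PARENT_COMPANY = {                # in practice: a D&B-style corporate hierarchy lookup
    "Dell EMEA Ltd": "Dell Technologies",
    "Dell Marketing LP": "Dell Technologies",
}

# Records federated from contracts, p-cards, expense reports, and other channels.
purchases = [
    {"item": "Dell XPS 15", "supplier": "Dell EMEA Ltd", "amount": 1800.0},
    {"item": "MacBook Pro 16", "supplier": "Apple Inc", "amount": 2400.0},
    {"item": "Dell XPS 15", "supplier": "Dell Marketing LP", "amount": 1750.0},
]

cube = defaultdict(float)
for p in purchases:
    category = CATEGORY_RULES.get(p["item"].lower(), "UNCLASSIFIED")
    parent = PARENT_COMPANY.get(p["supplier"], p["supplier"])
    cube[(category, parent)] += p["amount"]   # one consolidated spend cube entry

for (category, parent), total in sorted(cube.items()):
    print(f"{category:>12}  {parent:<20} ${total:,.2f}")
```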

Moreover, this also says that if you need to source 500 laptops and 500 [external] keyboards, with the goal of cutting current costs by 15% from what you’ve been paying, it can automatically identify the target prices, identify the suppliers/distributors who have been giving you the best prices, and automatically run predictive analytics to estimate the quotes you would get from awarding all of the business to one supplier (who would then be inclined to give better price breaks). If none of those looked like they’d generate the reduction, it can access its anonymized community data, identify other suppliers/distributors supplying the same laptops you typically buy, compute their average price reduction over the past three months, and identify those that should be invited to an RFX or auction to increase competition and the chances of you achieving the target price reduction, while informing you of the price reduction it actually predicts (which might only be 10%, or 5%, if you are already getting better than average market pricing). And it will do all of this with a few clicks. You’ll simply tell the system what your demand is and what your goal is; all of these computations will be run, supplier and event (type) recommendations will be generated, and it will be one click to kick off the sourcing event.
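To show that none of this needs an LLM to do the heavy lifting, here’s a hedged sketch of the kind of deterministic math that could sit behind such a recommendation: compute the target price from the savings goal, estimate single-award consolidation savings from price history, and, if that falls short, recommend an RFX with community suppliers while reporting the savings actually predicted. The numbers, the 5% consolidation-discount assumption, and the function itself are illustrative, not SAP’s model.

```python
from statistics import mean

def recommend_sourcing_event(demand_units: int,
                             savings_target: float,              # e.g. 0.15 for 15%
                             incumbent_prices: dict[str, list[float]],
                             consolidation_discount: float = 0.05,
                             community_best_price: float | None = None) -> dict:
    # demand_units would feed volume-tier pricing in a fuller model; unused in this sketch.
    current_avg = mean(p for prices in incumbent_prices.values() for p in prices)
    target_price = current_avg * (1 - savings_target)

    # Best incumbent, then an assumed volume discount for awarding all the business to them.
    best_supplier, best_price = min(
        ((s, mean(ps)) for s, ps in incumbent_prices.items()), key=lambda x: x[1])
    consolidated_price = best_price * (1 - consolidation_discount)

    if consolidated_price <= target_price:
        return {"action": "single_award", "supplier": best_supplier,
                "expected_price": round(consolidated_price, 2)}

    # Target not reachable with incumbents: open an RFX/auction to community suppliers
    # and report the savings actually predicted rather than the hoped-for target.
    expected_price = min(consolidated_price, community_best_price or consolidated_price)
    return {"action": "run_rfx", "invite_community_suppliers": True,
            "expected_savings": round(1 - expected_price / current_avg, 3)}

print(recommend_sourcing_event(
    demand_units=500, savings_target=0.15,
    incumbent_prices={"Acme IT": [1800.0, 1750.0], "TechSource": [1900.0]},
    community_best_price=1650.0))
```

With the sample numbers above it recommends an RFX and predicts roughly 9% savings rather than the requested 15%, which is exactly the kind of expectation-setting described in the paragraph.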

Moreover, the webinar made the point that, if you think about this area around workflow and process orchestration, there’s no reason why you can’t take pieces of it, like the endpoints around intake or invoices or whatever, use AI there, and bake it into your processes in a controlled way. Because that’s the key: taking one tactically oriented process that consumes too much manual intervention at a time, and using advanced tech (which need not be AI, by the way; modern Adaptive RPA [ARPA] is often more than enough) to improve it. Then, over time, stringing these together to automate more complex processes, where you can gate them to ensure exceptional situations aren’t automated without oversight. One little win at a time. And after a year it cumulatively adds up to one big win. (Versus going for a big-bang project, which always ends in a big bang that blows a hole in your operation that you might not be able to recover from.)
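And here’s what “bake it in a controlled way” can look like at one of those endpoints, say invoice processing: a deterministic gate that auto-approves only the clean, high-confidence cases and routes every exception to a human. The fields and thresholds below are illustrative assumptions, not any product’s defaults.

```python
from dataclasses import dataclass

@dataclass
class InvoiceMatch:
    invoice_id: str
    po_number: str | None         # None if no purchase order could be matched
    amount_delta_pct: float       # % difference between invoice and PO/receipt amount
    extraction_confidence: float  # confidence of the (AI or ARPA) data extraction step

def route_invoice(match: InvoiceMatch,
                  max_delta_pct: float = 2.0,
                  min_confidence: float = 0.95) -> str:
    """Deterministic gate: automate the clean cases, escalate every exception."""
    if match.po_number is None:
        return "human_review"                    # exception: nothing to match against
    if match.extraction_confidence < min_confidence:
        return "human_review"                    # exception: extraction not trusted
    if abs(match.amount_delta_pct) > max_delta_pct:
        return "human_review"                    # exception: price/quantity mismatch
    return "auto_approve"                        # the boring majority gets automated

print(route_invoice(InvoiceMatch("INV-104", "PO-778", 0.4, 0.99)))   # auto_approve
print(route_invoice(InvoiceMatch("INV-105", None, 0.0, 0.99)))       # human_review
```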

The only bad part of this webinar was slide 24, Spend Matters recommendation #1: “Aggressively Implement GenAI”!

Given that Gen-AI is typically interpreted as “LLM”, then, as per the above, this is the last AI tech you should aggressively implement, since it is unreliable for anything but natural language translation, search, and summarization. Moreover, any tech that is highly dynamic and emerging should be implemented with care.

What the recommendation should be is to aggressively implement AI, because with the computational power and data that we didn’t have two decades (or so) ago (the last time AI was really hot), tried-and-true (dependable) machine learning and AI is now practical and powerful!

Now, in his LinkedIn post, Pierre asked what we’d like to see next in terms of research/coverage (regardless of venue). So I’m going to answer that:

Gen-AI/LLM-Free AI Transformation!

Because you don’t need LLMs to achieve all of the value we need out of AI in ProcureTech and, to be honest, any back-office tech. As I have been saying recently, everything I penned in the classic Spend Matters series on AI in Procurement (Sourcing, Supplier Management) today, tomorrow, and the day after, written last decade … including the day after … was possible when I penned the series. It just wasn’t a reality because there were few AI experts in our space, data was lacking, and the blood, sweat, and tears required to make it happen were significant. We didn’t have readily available stacks, frameworks, and models for the machine learning, predictive analytics, and semantic processing required to make it happen. Vendors would have had to build the majority of this themselves, which would have been as much (or more) work than building their core offering. But it was possible. And with all the modern tech at our disposal, it’s now not only possible, but doable. There is zero need to embed an untested, unreliable LLM in an end-user product to provide all of the advantages AI can offer. (Or, if you don’t have the time to master traditional semantic tech for NLP, zero need to use an LLM for anything more than NLP.)

So, I’d like to see this architecture, and an explanation of how providers can roll out safe AI and how buying organizations can use it without fear of becoming another failure or economic disaster when it screws up, goes rogue, and orders 100,000 units of the wrong product!