Open Gen-AI technology may be about as reliable as a career politician managing your Nigerian bank account, but somehow it’s won the PR war (since there is no longer any requirement to speak the truth or state actual facts in sales and marketing in most “first” world countries [where they believe Alternative Math is a real thing … and that’s why they can’t balance their budgets, FYI]), as every Big X is pushing Open Gen-AI as the greatest revolution in technology since the abacus. the doctor shouldn’t be surprised, given that most of the turkeys on their rafters can’t even do basic math* (yet profess to deeply understand this technology) and thus believe the hype (and downplay the serious risks, which we summarized in this article, where we didn’t even mention the quality of the results when you unexpectedly get a result that doesn’t exhibit any of the six major issues).
The Power of Real Spend Analysis
If you have a real Spend Analysis tool, like Spendata (The Spend Analysis Power Tool), simple data exploration will find you a 10% or more savings opportunity in just a few days (well, maybe a few weeks, but that’s still weeks, not months). It’s one of only two technologies that have been demonstrated, when properly deployed and used, to identify returns of 10% or more, year after year after year, since the mid 2000s (when the technology wasn’t nearly as good as it is today), and it can be used by any Procurement or Finance Analyst who has a basic understanding of their data.
When you have a tool that will let you analyze data around any dimension of interest — supplier, category, product — restrict it to any subset of interest — timeframe, geographic location, off-contract spend — and roll up, compare against, and drill down by variance — the opportunities you will find will be considerable. Even in the best sourced top spend categories, you’ll usually find 2% to 3%; in the mid-spend, likely 5% or more; in the tail, likely 15% or more … and that’s before you identify unexpected opportunities by division (that isn’t adhering to the new contracts), geography (where a new local supplier can slash transportation costs), product line (where subtle shifts in pricing — and yes, real spend analysis can also handle sales and pricing data — lead to unexpected sales increases and greater savings when you bump your orders to the next discount level), and even in warranty costs (when you identify that a certain supplier location is continually delivering low quality goods compared to its peers).
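To make that slice-and-drill workflow concrete, here is a minimal sketch in plain Python/pandas; the file name, column names, and two-year window are invented for illustration (a real spend analysis tool does all of this interactively, without code):

```python
import pandas as pd

# Hypothetical spend cube: one row per invoice line, with assumed columns
# supplier, category, region, contract_id, date, amount
spend = pd.read_csv("spend.csv")

# Roll up spend by any dimensions of interest, largest first
cube = (spend.groupby(["category", "supplier"])["amount"]
             .sum()
             .sort_values(ascending=False))

# Restrict to a subset of interest: off-contract spend in one region
off_contract = spend[(spend["region"] == "EMEA") &
                     (spend["contract_id"].isna())]

# Drill down by variance: year-over-year change per supplier
spend["year"] = pd.to_datetime(spend["date"]).dt.year
yoy = spend.pivot_table(index="supplier", columns="year",
                        values="amount", aggfunc="sum", fill_value=0)
yoy["variance"] = yoy.iloc[:, -1] - yoy.iloc[:, -2]
print(yoy.sort_values("variance", ascending=False).head(10))
```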
And that’s just the Procurement spend … it can also handle the supply chain spend, logistics spend, warranty spend, utility and HR spend — and while you can’t control the HR spend, you can get a handle on your average cost by position by location, and possibly restructure your hubs at expansion time around the locations where resources are lower cost! Savings, savings, savings … you’ll find them ’round the clock … savings, savings, savings … analytics rocks!
The Power of Strategic Sourcing Decision Optimization
Decision optimization has been around in the Procurement space for almost 25 years, but it still has less than 10% penetration! This is utterly abysmal. It’s not only the only other technology that has been generating returns of 10% or more, in good times and bad, for any leading organization that consistently uses it, but the only technology that the doctor has seen consistently generate 20% to 30% savings opportunities on large multi-national complex categories that just can’t be solved with an RFQ and a spreadsheet, no matter how hard you try. (But a Big X will still claim they can do it with the old college try if you pay their top analyst’s salary for a few months … and at 5K a day, there goes three times any savings they identify.)
Examples where the doctor has repeatedly seen stellar results include:
- national service provider contract optimization across national, regional, and local providers, where rates, expected utilization, and all-in costs for remote resources are all considered. With just an RFX solution, the usual approach is to go to all the relevant Big X Bodyshops and get their rate cards by role by location, both base rate (with expenses picked up by the org) and all-in rate; calculate the expected local overhead rate by location; then, for each Big X – role – location combination, determine whether the Big X all-in rate or the Big X base rate plus their overhead is cheaper and select that as the final bid for analysis; then mark the lowest bid for each role-location, identify the three top providers, and distribute the award between those three “top” providers in the lowest cost fashion. In big companies using a lot of contract labour, this leaves millions on the table because 1) sometimes the cheapest three will actually be the providers with middle-of-the-road bids across the board and 2) for some areas/roles, regional, and definitely local, providers will often be cheaper — but since the complexity is beyond manageable by hand, this isn’t done, even though the doctor has seen multiple real-world events generate 30% to 40% savings, since optimization can handle hundreds of suppliers and tens of thousands of bids and find the perfect mix, even while limiting the number of global providers and the number of providers who can service a location (see the sketch after this list)
- global mailer / catalog production — paper won’t go away, and when you have to balance inks, papers, printing, distribution, and mailing, it’s not always local production, or one country in a region, that minimizes costs; it’s a very complex sourcing AND logistics distribution problem that optimizes costs … and the real-world model gets dizzying fast unless you use optimization, which will find 10% or more in savings beyond your current best efforts
- build-to-order assembly — don’t just leave that to the contract manufacturer when you can simultaneously analyze the entire BoM and supply chain, a model that can easily dwarf the above two if you have 50 or more items, as savings will just appear when you do so
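As for the “math” behind an award optimization like the first example, here is a heavily simplified sketch as a mixed-integer program using the open-source PuLP library; the providers, lanes, rates, demand figures, and the two-provider cap are all invented for illustration, and a real model would layer on many more constraints:

```python
import pulp

# Toy bid data: all-in hourly rate per (provider, role_location) lane.
# A real event has hundreds of providers and tens of thousands of bids.
bids = {
    ("GlobalCo",   "dev_NYC"): 120, ("GlobalCo",   "dev_AUS"): 140,
    ("RegionalCo", "dev_NYC"): 110, ("RegionalCo", "dev_AUS"): 150,
    ("LocalCo",    "dev_AUS"): 105,
}
demand = {"dev_NYC": 10000, "dev_AUS": 4000}  # hours needed per lane
max_providers = 2                             # cap on awarded providers

prob = pulp.LpProblem("award", pulp.LpMinimize)

# x[p, l]: hours awarded to provider p on lane l
x = {(p, l): pulp.LpVariable(f"x_{p}_{l}", lowBound=0) for (p, l) in bids}
# y[p]: 1 if provider p receives any award at all
providers = {p for (p, l) in bids}
y = {p: pulp.LpVariable(f"y_{p}", cat="Binary") for p in providers}

# Objective: minimize the total cost of the award
prob += pulp.lpSum(bids[k] * x[k] for k in bids)

# Cover every lane's demand exactly
for lane, hours in demand.items():
    prob += pulp.lpSum(x[p, l] for (p, l) in bids if l == lane) == hours

# A provider can only receive hours if it is awarded, and at most
# max_providers may be awarded anything at all
total = sum(demand.values())
for p in providers:
    prob += pulp.lpSum(x[p2, l] for (p2, l) in bids if p2 == p) <= total * y[p]
prob += pulp.lpSum(y.values()) <= max_providers

prob.solve(pulp.PULP_CBC_CMD(msg=False))
for (p, l), var in x.items():
    if var.value():
        print(f"{p} gets {var.value():.0f}h on {l} at {bids[p, l]}/h")
```

With real bid volumes, the solver weighs every split of the award at once, which is exactly what the manual mark-the-lowest-bid process cannot do.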
… and yet, because it’s “math”, it doesn’t get used, even though you don’t have to do the math — the platform does!
The Power of Curve-Fitting Trend Analysis
Dozens (and dozens) of “AI” models have been developed over the past few years to provide you with “predictive” forecasts, insights, and analytics, but guess what? Not a SINGLE model has outdone classical curve-fitting trend analysis — and NOT a single model ever will. (This is because all these fancy-schmancy black box solutions do is attempt to identify the record/transaction “fingerprint” that contains the most relevant data and then attempt to identify the “curve” or “line” to fit it to, all at once, which means their upper bound is a classical model that uses the right data and fits to the right curve from the beginning, without wasting an entire power plant’s worth of energy powering data centers as the algorithm repeatedly guesses random fingerprints and models until one seems to work well.)
And the reality is that these standard techniques (which have been refined since the 60s and 70s), which now run blindingly fast on large data sets thanks to today’s computing power, can achieve 95% to 98% accuracy in some domains, with no misfires. A 95% accurate forecast on inventory, sales, etc. is pretty damn good and minimizes the buffer stock, and lead time, you need. Detailed, fine-tuned correlation analysis can accurately predict the impact of sales and industry events. And so on.
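For the skeptics, classical here really means classical. A minimal sketch, with a made-up monthly demand series and a standard trend-plus-seasonality model fit by scipy’s curve_fit:

```python
import numpy as np
from scipy.optimize import curve_fit

# Hypothetical 36 months of demand: trend + annual seasonality + noise
rng = np.random.default_rng(0)
t = np.arange(36)
demand = 1000 + 12 * t + 80 * np.sin(2 * np.pi * t / 12) \
         + rng.normal(0, 20, 36)

# Classical model: linear trend plus one seasonal harmonic
def model(t, base, slope, amp, phase):
    return base + slope * t + amp * np.sin(2 * np.pi * t / 12 + phase)

params, _ = curve_fit(model, t, demand, p0=[1000, 10, 50, 0])

# Forecast the next 12 months straight off the fitted curve
future = np.arange(36, 48)
print(np.round(model(future, *params)))
```

No data center required; the fit runs in milliseconds on a laptop.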
Going one step further, there is a host of clustering techniques that can identify emergent trends in outlier behaviour, as well as pockets of customers or demand. But chances are you aren’t using any of these techniques either.
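If you want to see what that looks like in practice, here is a minimal sketch using DBSCAN from scikit-learn on an invented per-customer feature matrix; dense pockets of similar records get cluster labels, and everything else is flagged as an outlier (label -1):

```python
import numpy as np
from sklearn.cluster import DBSCAN
from sklearn.preprocessing import StandardScaler

# Hypothetical per-customer features: order size, frequency, return rate
X = np.array([[500, 12, 0.02], [520, 11, 0.03], [480, 13, 0.01],
              [2500, 2, 0.01], [2600, 3, 0.02],
              [510, 12, 0.45]])   # last row: abnormal return rate

# Scale features so no single dimension dominates the distance metric
X_scaled = StandardScaler().fit_transform(X)
labels = DBSCAN(eps=0.9, min_samples=2).fit_predict(X_scaled)

print(labels)  # labels >= 0 are pockets of similar behaviour; -1 = outlier
```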
So given that most of you haven’t adopted any of this technology that has proven to be reliable, effective, and extremely valuable, why on earth would you want to adopt an unproven technology that hallucinates daily, might tell off your sensitive employees with hate speech, and might even leak your data? It makes ZERO sense!
While we admit that someday semi-private LLMs will be an appropriate solution for certain areas of your business where a large amount of textual analysis is required on a regular basis, even these are still iffy today and can’t always be trusted. And the doctor doesn’t care how slick that chatbot is, because if you have to spend days learning how to expertly craft a prompt just to get a single result, you might as well just learn to code and use a classic open source Neural Net library — you’ll get better, more reliable results, faster.
Keep an eye on the tech if you like, but nothing stops you from using the tech that works. Let your peers be the test pilots. You really don’t want to be in the cockpit when it crashes.
* And if you don’t understand why a deep understanding of university-level mathematics, preferably at the graduate level, is important, then you shouldn’t be touching the turkey who touches the Gen-AI solution with a 10-foot pole!