Category Archives: Decision Optimization

2025 Is Just Another Year … But Is It All Doom and Gloom? Part 2 (Real Tech!)

As per our first instalment, it all depends on your point of view and whether you are willing to look beyond the hype, buckle down, and get the real job done.

For instance, just the following five technologies will eliminate 95% or more of your tactical sourcing, procurement, and supplier monitoring work — and all you have to do is find them, properly implement them, and use them. Let’s talk about them briefly.

Real DIY Analytics

The ability to analyze the data you want, when you want, how you want, enriched and augmented using the auxiliary data you want … and not in predefined dashboards or hidden “AI Agents” which may, or may not, do the analysis you want (and need) … cannot be overestimated! Real value comes from ad-hoc analysis and from investigating hunches, abnormalities, and trend changes when you discover them; not days, weeks, or months later when the “cube” has been refreshed, and it might be too late to correct a problem or capture an opportunity!

Remember, this is not 2005, this is 2025, and there are at least half a dozen great DIY (spend) analysis solutions that will do most of what you want, for a price tag that is a fraction of what you might expect; if you are okay with full DIY, some of these start at a price you can put on a P-Card. For example, Spendata Classic (which can handle data sets up to 5 Million rows) can be obtained for $699 a year, while Enterprise, which can handle data sets up to 15 Million records and comes with unlimited use for 5 users (and view licenses for more), plus some consulting and setup, starts at an amount that will surprise you. (You can still put it on a P-Card if you pay monthly.) And there is literally nothing you can’t do in it if you’re willing to apply a little elbow grease. It truly is The Power Tool for the Power Analyst.

(Strategic Sourcing Decision / Supply Chain Network) Optimization

Yes, it’s math. But you know what? Math works! And when you use deterministic math, it’s 100% accurate, every time! It’s also one of only two technologies in S2P+ that has been proven (by multiple analyst firms) to repeatedly identify 10%+ savings year-over-year (though, since this was pre-COVID and pre- the 47th, we need to amend this finding to adjust for inflation and tariffs). And as an FYI, the other technology was NOT AI. (It was proper DIY spend analysis. Only Human Intelligence can intuit where to look for previously unidentified opportunities; the best AI can do is follow a script and run standard analyses. Furthermore, the thing about spend analysis is that an analysis that identifies an opportunity only helps you ONCE — once you capture the opportunity, the analysis is useless. You need to do a new, and different, one.)

Rule-Based Automation

When you think about most tasks across Source-to-Pay, most of them are just the execution of simple, easily defined processes — most of which don’t require much (if any) intelligence and, thus, don’t need AI (and shouldn’t use an unpredictable AI agent when you can encode a process that gets it right, guaranteed, every single time). (Plus, the way you want to source, buy, pay, track, manage, etc. is probably a little bit different than your peers’, and who knows how the AI Agent would do it for you. You certainly don’t!)

With rule-based automation, you can easily execute an entire sourcing event in the background, all the way to award if you like. It can run auctions; it can run multi-round RFPs with detailed feedback (it’s all calculations, response comparisons, and decisions on what data you want to share and how blinded you want it); it can run analyses and optimizations; and it can calculate recommended award decisions subject to the constraints and goals that matter to you and present them to you for acceptance — or, if it’s a simple winner-take-all or top-2 situation, create the award automatically, send it out, get supplier acceptance, assemble the contract, and send it for e-Signature. You don’t need Agentic/Gen-AI, just tech we’ve had for over a decade!
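
To make this concrete, here is a minimal sketch (in Python, with hypothetical bids and an illustrative 2% split rule; not any vendor’s actual logic) of how an award rule can be encoded so it produces the same, correct, answer every single time:

```python
# A minimal sketch of rule-based award automation. The bid data and the
# 2% split threshold are hypothetical, purely for illustration.

def recommend_award(bids, split_threshold=0.02):
    """Encoded rule: winner-take-all unless the runner-up is within
    split_threshold of the best price, in which case split 70/30."""
    ranked = sorted(bids, key=lambda b: b["unit_price"])  # deterministic ranking
    best, runner_up = ranked[0], ranked[1]
    gap = (runner_up["unit_price"] - best["unit_price"]) / best["unit_price"]
    if gap > split_threshold:
        return [(best["supplier"], 1.0)]  # rule: clear winner takes all
    # rule: prices are close, so split the award to preserve competition
    return [(best["supplier"], 0.7), (runner_up["supplier"], 0.3)]

bids = [
    {"supplier": "Acme",    "unit_price": 9.80},
    {"supplier": "Globex",  "unit_price": 9.95},
    {"supplier": "Initech", "unit_price": 11.20},
]
print(recommend_award(bids))  # gap is ~1.5% < 2%, so the top 2 split 70/30
```

The point is predictability: the same inputs always yield the same award, which is exactly what an unpredictable agent cannot guarantee.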

Machine Learning

Now, when it comes to Enterprise Master Data Management and Administration (E-MDMA) and Invoice Processing, it can be quite a lot of work to keep up with the mapping, cleansing, and enrichment rules, and you don’t want to have to manually define all the new rules every time a new data element appears or a new invoice format arrives, especially if the system can auto-detect/“guess” correctly 90% of the time through rule re-use and generalization. With machine learning, the system can keep track of your corrections, mathematically extract models, and adjust its rules to handle the new mapping automatically, as well as improve its suggestion logic when it doesn’t know what to do — increasing the chance that you just have to “accept” a new rule vs. defining it from scratch. (Unlike Gen-AI, which just tries to find similar patterns somewhere to present you with something that may or may not have any correlation to your business, or even reality!) And we’ve had non-(pure-)Neural-Network machine learning that works great with enough data for decades! Predictive analytics was making huge progress late last decade, before this Gen-AI BS took over, and could have helped Procurement departments automate 90%+ of what they wanted to automate with just a bit more development and effort by the leading vendors — it just required a bit more time, money, and focus. (Gen-AI has set us back a decade!)
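
For illustration, a minimal sketch of this correction-driven learning loop, assuming scikit-learn and a hypothetical set of historical mappings (no vendor implements it exactly this way):

```python
# A minimal sketch of learning mapping rules from analyst corrections.
# All descriptions, categories, and the 0.6 threshold are hypothetical.
import numpy as np
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import SGDClassifier
from sklearn.pipeline import make_pipeline

# Historical mappings: free-text line descriptions -> spend categories.
descriptions = ["laptop docking station", "toner cartridge black",
                "contract labour - developer", "printer paper a4"]
categories   = ["IT Hardware", "Office Supplies", "Contingent Labour",
                "Office Supplies"]

model = make_pipeline(TfidfVectorizer(), SGDClassifier(loss="log_loss"))
model.fit(descriptions, categories)

def suggest(description, threshold=0.6):
    """Auto-map when confident; otherwise surface a suggestion to accept
    or correct (and the correction becomes new training data)."""
    probs = model.predict_proba([description])[0]
    best = int(np.argmax(probs))
    label, confidence = model.classes_[best], probs[best]
    return (label, "auto") if confidence >= threshold else (label, "suggest")

print(suggest("toner cartridge cyan"))
# Refitting on accepted corrections means the same pattern is handled
# automatically the next time it appears.
```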

Analytics Backed Augmented Intelligence

We don’t need machines to make decisions for us (especially when they can’t think, or even reason), we need machines to do calculations for us that help us make the right decision quickly and effectively. We need the machine to automatically identify and retrieve all of the relevant data, do all the relevant situational and market analysis, do all the predictive trend analysis, identify all of the typical responses with respect to the situation, predict the likely success of each, and present us with a set of ordered recommendations, complete with the calculations and supporting analysis, so we can pick one or realize that the machine didn’t/couldn’t know about a recent event or a human factor and that none of the responses are right (and that only we could craft one, with full information on the situation). The machine may not think, but the thunking it can do far exceeds our computational ability (billions of computations a second, all flawless), and that’s EXACTLY what we should be using the machine for.
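
As a toy illustration of the pattern (all figures, including the stockout-cost assumption, are hypothetical), the machine can rank candidate responses by expected loss and show its work, so the human decides with full information:

```python
# A minimal sketch of analytics-backed augmented intelligence: score the
# candidate responses, present ordered recommendations WITH the supporting
# calculations, and leave the decision to the human. Data is hypothetical.

candidate_actions = [
    {"action": "Expedite from alternate supplier", "cost": 120_000, "p_success": 0.90},
    {"action": "Air-freight existing order",       "cost": 180_000, "p_success": 0.97},
    {"action": "Draw down safety stock",           "cost":  40_000, "p_success": 0.60},
]
STOCKOUT_COST = 500_000  # assumed cost of the disruption if the action fails

for a in candidate_actions:
    # expected loss = action cost + probability-weighted cost of failure
    a["expected_loss"] = a["cost"] + (1 - a["p_success"]) * STOCKOUT_COST

ranked = sorted(candidate_actions, key=lambda a: a["expected_loss"])
for rank, a in enumerate(ranked, 1):
    print(f'{rank}. {a["action"]}: expected loss ${a["expected_loss"]:,.0f} '
          f'(cost ${a["cost"]:,}, success {a["p_success"]:.0%})')
# The analyst sees the numbers behind each ranking and can override when
# they know about a recent event or human factor the machine does not.
```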

If we give up on this Artificial Intelligence BS (even if the current models are right, machines need to be 100 Million times more powerful to even “mimic” human intelligence, and that’s not happening any time soon) and instead just give the machines all the (boring) grunt work, leaving us free to do what they can’t (strategy and relationships), we can be at least 10 times as productive as we are now and deliver on the promises Gen-AI / Agentic AI / AGI never will, and do so at a small fraction of the cost. And oh, we have that tech today … we just need to deploy and integrate it properly!

And this is just the beginning of what you can do when you look beyond the hype and use your Human Intelligence [HI!] to cut through all the BS.

You Don’t Need Gen-AI to Revolutionize Procurement and Supply Chain Management — Classic Analytics, Optimization, and Machine Learning that You Have Been Ignoring for Two Decades Will Do Just Fine!

This originally posted on March 22 (2024). It is being reposted because we need solutions, because Gartner (who co-created the hype cycle) published a study which found that Gen-AI/technology implementations fail 85% of the time, and because it’s clear we have abandoned the foundations — which work wonders in the hands of properly applied Human Intelligence (HI!). Gen-AI, like all technologies, has its place, and it’s not wherever the Vendor of the Week pushes it, but where it belongs. Please remember that.

Open Gen-AI technology may be about as reliable as a career politician managing your Nigerian bank account, but somehow it’s won the PR war (since there is no longer any requirement to speak the truth or state actual facts in sales and marketing in most “first” world countries [where they believe Alternative Math is a real thing … and that’s why they can’t balance their budgets, FYI]), as every Big X, Mid-Sized Consultancy, and the majority of software vendors are pushing Open Gen-AI as the greatest revolution in technology since the abacus. the doctor shouldn’t be surprised, given that most of the turkeys on their rafters can’t even do basic math* (but yet profess to deeply understand this technology) and thus believe the hype (and downplay the serious risks, which we summarized in this article, where we didn’t even mention the quality of the results when you unexpectedly get a result that doesn’t exhibit any of the six major issues).

The Power of Real Spend Analysis

If you have a real Spend Analysis tool, like Spendata (The Spend Analysis Power Tool), simple data exploration will find you a 10% or more savings opportunity in just a few days (well, maybe a few weeks, but that’s still just a matter of days). It’s one of only two technologies that has been demonstrated, when properly deployed and used, to identify returns of 10% or more, year after year after year, since the mid 2000s (when the technology wasn’t nearly as good as it is today), and it can be used by any Procurement or Finance Analyst that has a basic understanding of their data.

When you have a tool that will let you analyze data around any dimension of interest — supplier, category, product — restrict it to any subset of interest — timeframe, geographic location, off-contract spend — and roll-up, compare against, and drill down by variance — the opportunities you will find will be considerable. Even in the best sourced top spend categories, you’ll usually find 2% to 3%, in the mid-spend likely 5% or more, in the tail, likely 15% or more … and that’s before you identify unexpected opportunities by division (who aren’t adhering to the new contracts), geography (where a new local supplier can slash transportation costs), product line (where subtle shifts in pricing — and yes, real spend analysis can also handle sales and pricing data — lead to unexpected sales increases and greater savings when you bump your orders to the next discount level), and even in warranty costs (when you identify that a certain supplier location is continually delivering low quality goods compared to its peers).
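
For a flavour of the roll-up / drill-down pattern, here is a minimal DIY sketch in Python with pandas and hypothetical spend data (a real spend analysis tool does this interactively, at scale, with live re-mapping):

```python
# A minimal sketch of the roll-up / drill-down-by-variance pattern:
# slice by any dimension, compare periods, and rank the variances to
# surface opportunities. All spend figures are hypothetical.
import pandas as pd

spend = pd.DataFrame({
    "supplier": ["Acme", "Acme", "Globex", "Globex", "Initech", "Initech"],
    "category": ["MRO", "MRO", "Logistics", "Logistics", "IT", "IT"],
    "year":     [2024, 2025, 2024, 2025, 2024, 2025],
    "amount":   [1.20e6, 1.55e6, 0.80e6, 0.78e6, 0.40e6, 0.71e6],
})

# Roll up by category and year, then compute year-over-year variance.
rollup = spend.pivot_table(index="category", columns="year",
                           values="amount", aggfunc="sum")
rollup["variance_pct"] = (rollup[2025] - rollup[2024]) / rollup[2024] * 100
print(rollup.sort_values("variance_pct", ascending=False))
# IT jumped ~78% and MRO ~29%: those are the categories to drill into
# next, by supplier, geography, or transaction.
```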

And that’s just the Procurement spend … it can also handle the supply chain spend, logistics spend, warranty spend, utility and HR spend — and while you can’t control the HR spend, you can get a handle on your average cost by position by location and possibly restructure your hubs during expansion time to where resources are lower cost! Savings, savings, savings … you’ll find them ’round the clock … savings, savings, savings … analytics rocks!

The Power of Strategic Sourcing Decision Optimization

Decision optimization has been around in the Procurement space for almost 25 years, but it still has less than 10% penetration! This is utterly abysmal. It’s not only the only other technology that has been generating returns of 10% or more, in good times and bad, for any leading organization that consistently uses it, but also the only technology the doctor has seen that has consistently generated 20% to 30% savings opportunities on large, multi-national, complex categories that just can’t be solved with an RFQ and a spreadsheet, no matter how hard you try. (Though, if you want to pay them, an expert consultant will still claim they can with the old college try, provided you pay their top analyst’s salary for a few months … and at, say, 5K a day, there goes three times any savings they identify.)

Examples where the doctor has repeatedly seen stellar results include:

  • national service provider contract optimization across national, regional, and local providers, where rates, expected utilization, and all-in costs for remote resources are considered. With just an RFX solution, the usual approach is to:
    • go to all the relevant Big X and Mid-Sized Bodyshops and get their rate cards by role by location, with base rate (expenses picked up by the org) and all-in rate;
    • calculate the expected local overhead rate by location;
    • for each provider / role / location, determine whether the all-in rate or the base rate plus the local overhead rate is cheaper, and select that as the final bid for analysis;
    • mark the lowest bid for each role-location and determine the three top providers;
    • distribute the award between those three “top” providers in the lowest cost fashion.
    In big companies using a lot of contract labour, this leaves millions on the table because 1) sometimes the cheapest three will actually be the providers with the middle-of-the-road bids across the board and 2) for some areas/roles, regional, and definitely local, providers will often be cheaper — but since the complexity is beyond manageable, this isn’t done, even though the doctor has seen multiple real-world events generate 30% to 40% savings, since optimization can handle hundreds of suppliers and tens of thousands of bids and find the perfect mix (even while limiting the number of global providers and the number of providers who can service a location)
  • global mailer / catalog production — paper won’t go away, and when you have to balance inks, papers, printing, distribution, and mailing, it’s not always local production, or one country in a region, that minimizes costs; it’s a very complex joint sourcing AND logistics distribution problem … and the real-world model gets dizzying fast unless you use optimization, which will find 10% or more in savings beyond your current best efforts
  • build-to-order assembly — don’t just leave that to the contract manufacturer when you can simultaneously analyze the entire BoM and supply chain; such a model can easily dwarf the above two if you have 50 or more items, and savings will just appear when you do so

… and yet, because it’s “math”, it doesn’t get used, even though you don’t have to do the math — the platform does!
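
For the curious, here is a minimal sketch of the kind of model that underlies SSDO, using the open-source PuLP MILP library and hypothetical bids; real models layer on tiered discounts, qualitative scores, and many more constraints:

```python
# A minimal SSDO-style sketch: minimize total cost subject to capacity,
# a 40% single-supplier cap (risk mitigation), and a limit on the number
# of awarded suppliers. All bids and capacities are hypothetical.
import pulp

demand = 100_000
bids     = {"Acme": 9.80,   "Globex": 9.95,   "Initech": 10.40,  "LocalCo": 9.60}
capacity = {"Acme": 60_000, "Globex": 80_000, "Initech": 100_000, "LocalCo": 30_000}

prob = pulp.LpProblem("award", pulp.LpMinimize)
qty = pulp.LpVariable.dicts("qty", bids, lowBound=0)   # units awarded
use = pulp.LpVariable.dicts("use", bids, cat="Binary") # supplier awarded at all?

prob += pulp.lpSum(bids[s] * qty[s] for s in bids)     # objective: total cost
prob += pulp.lpSum(qty[s] for s in bids) == demand     # meet demand
for s in bids:
    prob += qty[s] <= capacity[s] * use[s]             # capacity / limit
    prob += qty[s] <= 0.40 * demand                    # risk: max 40% each
prob += pulp.lpSum(use[s] for s in bids) <= 3          # at most 3 suppliers

prob.solve(pulp.PULP_CBC_CMD(msg=False))
for s in bids:
    if qty[s].value() > 0:
        print(s, int(qty[s].value()))
print("total cost:", pulp.value(prob.objective))
```

The solver returns the provably lowest-cost feasible split (here: LocalCo to capacity, then Acme and Globex up to the 40% cap), which is exactly the deterministic, complete math the bullet points above depend on.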

Curve Fitting Trend Analysis

Dozens (and dozens) of “AI” models have been developed over the past few years to provide you with “predictive” forecasts, insights, and analytics, but guess what? Not a SINGLE model has outdone classical curve-fitting trend analysis — and NOT a single model ever will. (This is because all these fancy-schmancy black-box solutions do is attempt to identify the record/transaction “fingerprint” that contains the most relevant data and then attempt to identify the “curve” or “line” to fit it to, all at once, which means the upper bound is a classical model that uses the right data and fits to the right curve from the beginning, without wasting a power plant’s worth of energy running entire data centers as the algorithm repeatedly guesses random fingerprints and models until one seems to work well.)

And the reality is that these standard techniques (which have been refined since the 60s and 70s), which now run blindingly fast on large data sets thanks to today’s computing, can achieve 95% to 98% accuracy in some domains, with no misfires. A 95% accurate forecast on inventory, sales, etc. is pretty damn good and minimizes the buffer stock, and lead time, you need. Detailed, fine-tuned correlation analysis can accurately predict the impact of sales and industry events. And so on.
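
For example, a minimal sketch of classical trend fitting with numpy on hypothetical monthly demand (swapping in exponential, logistic, or seasonal curves via scipy.optimize.curve_fit is a small change):

```python
# A minimal sketch of classical curve-fitting trend analysis: fit a
# least-squares line, forecast the next period, and measure in-sample
# accuracy. The demand series is synthetic, purely for illustration.
import numpy as np

months = np.arange(24)
demand = 1000 + 12.5 * months + np.random.default_rng(42).normal(0, 25, 24)

slope, intercept = np.polyfit(months, demand, 1)  # classical least squares
fitted = intercept + slope * months

# Mean absolute percentage error of the fit.
mape = np.mean(np.abs((demand - fitted) / demand)) * 100
print(f"trend: {slope:.1f} units/month, in-sample MAPE: {mape:.1f}%")
print("next-month forecast:", round(intercept + slope * 24))
```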

Going one step further, there exists a host of clustering techniques that can identify emergent trends in outlier behaviour as well as pockets of customers or demand. And so on. But chances are you aren’t using any of these techniques.
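
A minimal sketch of the idea, using scikit-learn’s DBSCAN on hypothetical invoice features, where the points the algorithm cannot assign to any cluster are exactly the outliers worth investigating:

```python
# A minimal clustering sketch: find pockets of similar behaviour and
# flag outliers. Features and distributions are hypothetical.
import numpy as np
from sklearn.cluster import DBSCAN
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(7)
# Features per invoice: [unit_price, days_to_pay]; two normal pockets
# plus a couple of anomalous points.
normal_a = rng.normal([10.0, 30.0], [0.5, 3.0], (50, 2))
normal_b = rng.normal([14.0, 60.0], [0.5, 3.0], (50, 2))
outliers = np.array([[25.0, 10.0], [9.5, 120.0]])
X = StandardScaler().fit_transform(np.vstack([normal_a, normal_b, outliers]))

labels = DBSCAN(eps=0.5, min_samples=5).fit_predict(X)
print("clusters found:", set(labels) - {-1})
print("outlier rows:", np.where(labels == -1)[0])  # -1 = noise in DBSCAN
```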

So given that most of you haven’t adopted any of this technology that has proven to be reliable, effective, and extremely valuable, why on earth would you want to adopt an unproven technology that hallucinates daily, might tell off your sensitive employees with hate speech, and could even leak your data? It makes ZERO sense!

While we admit that someday semi-private LLMs will be an appropriate solution for certain areas of your business where a large amount of textual analysis is required on a regular basis, even these are still iffy today and can’t always be trusted. And the doctor doesn’t care how slick that chatbot is, because if you have to spend days learning how to expertly craft a prompt just to get a single result, you might as well just learn to code and use a classic open-source Neural Net library — you’ll get better, more reliable results, faster.

Keep an eye on the tech if you like, but nothing stops you from using the tech that works. Let your peers be the test pilots. You really don’t want to be in the cockpit when it crashes.

* And if you don’t understand why a deep understanding of university-level mathematics, preferably at the graduate level, is important, then you shouldn’t be touching the turkey who touches the Gen-AI solution with a 10-foot pole!

Optimization Still Saves Double Digits — Why Aren’t You Using It?

Sourcing Innovation has been publishing for eighteen (18) years (over which it has published over 6,000 articles — inspired by the GruntMaster), with the first article published on June 15, 2006 and regular coverage since, including a push for all events to use sourcing optimization in Supercalifragilisticexpialidocious.

The reason is simple. It’s one of only two technologies that has been proven to identify savings in excess of 10% for almost 20 years (the other being spend analysis). The International Business Times recently reminded us of the power of this solution when it published an article on how Procurement Expert Sylvia Zhou Reduces Operational Costs by 13% Through Strategic Supply Chain Optimisation.

When we read how Zhou’s shift in sourcing strategies and supplier relations management allowed for a drastic reduction in operational costs by 13%, it reminded us of how decision optimization is not restricted just to sourcing and logistics, where it has traditionally been used, but saves across the supply chain, as discussed in our recent post on comprehensive optimization.

According to Zhou, she and her team assessed their entire supply network, identifying bottlenecks and inefficiencies. By partnering with suppliers aligned with their operational goals and technological capabilities, they could streamline processes and cut costs. This approach worked so well that, post-optimisation, her company reported a 33% increase in profits, attributed mainly to the reduced cost of goods sold and improved operational efficiencies.

And the best way to identify logistics efficiencies, product-based savings, and opportunities for operational efficiency is optimization. Sometimes there’s no better way to identify significant savings. So, go forth and optimize!

Questions to Ask Your Optimization Vendor

This is an update of a post that originally ran way back in 2007. Yes, two, double-o seven. Seventeen years ago. It is being updated because

  1. it needs a re-posting
    (as very few of you will find it that deep in the archives)
  2. most of the vendors originally mentioned are gone

However, if you read, and remember, the original, you’ll realize that, like the article where the doctor goes mental on optimization myths (which was recently shared on LinkedIn), it doesn’t need much updating, and what was written seventeen years ago is still valid to this day. (When you write to inform vs. to create meaningless buzz, it really does stand the test of time.) Let’s begin.

Not all optimization vendors are equal … and, more importantly, not all vendors that claim to have strategic sourcing decision optimization (SSDO) actually have it (since the underlying algorithms and models need to meet a stringent set of requirements to be true SSDO), with some systems, to this day, barely qualifying as decision support. Thus, since the need for optimization is as desperate as it has ever been (with costs again skyrocketing, risks rising rapidly, carbon control being critical, and supply assurance necessary for sustained operations), it’s time to make sure you know how to qualify a potential provider. This means you need to not only understand the basics of what SSDO does (see the archives), but also how to distinguish between the relative strengths and weaknesses of the different offerings, as well as how much strength you really need.

You need to buy optimization at the strength, and usability level, that you need — especially if the vendor is pricing it according to its power, or computational requirement. And while there is no such thing as too much, the reality is that a 95% solution is often more than enough, as the entire point is understanding the optimal solution against each dimension (cost, risk, carbon), the cost of compromise between the trade-offs, and the cost of going with a preferred, versus calculated, vendor award, and doing this for EVERY sourcing event. Once you factor in enough discounts and constraints, it’s almost impossible to calculate the best award in a spreadsheet, and the insight into what you could be spending versus what you are, how low your risks could be versus what they are, and how much you could alter your carbon footprint versus what your footprint is today, is invaluable. Even if you never select a recommended solution, the key is understanding how good your (preferred) award actually is.

Before we get to the (starting) question list, it should be pointed out that it’s almost impossible to cover every question, as many of the questions you should be asking depend on the answers you receive to your first few questions, but the question list below is a good starting point.

1. Does the product meet the four criteria for strategic sourcing decision optimization?

  • Sound & Complete Mathematical Foundations : such as MILP solvers based on simplex, branch-and-bound, and interior point algorithms, as many simulation, heuristic, and “AI” algorithms DO NOT guarantee analysis of every possible solution (sub)space given enough time and, thus, are not “complete” in mathematical terms (and if they incorporate Gen-AI, they aren’t even “sound”, in that they may not even compute an award that satisfies the constraints!)
  • True Cost Modelling : that supports tiered bids, discounts, and fixed cost components — the model must be capable of supporting all of the bid types being collected, as well as the cost breakdowns
  • Sophisticated Constraint Analysis : at a minimum, the model must be able to reasonably support generic and flexible constraints in each of the following four categories
    • Capacity / Limit: allowing an award of 200K units to a supplier who can only supply 100K units does not make for a valid model
    • Basic Allocation: you should be able to specify that a supplier receives a certain amount of the business, and that business is split between two or more suppliers in feasible percentage ranges
    • Risk Mitigation: you should be able to force the award across multiple suppliers, geographies, lanes, etc. to mitigate those risks, without naming the specific suppliers, geographies, or lanes, to take advantage of the full power of decision optimization
    • Qualitative: A good model considers quality, defect rates, waste, on-time delivery, etc., and must support qualitative factors and minimum and average scores across the award
  • What-If? Capability : The strength of decision optimization lies in what-if analysis. Keep reading.

2. Does it support the creation of multiple what-if scenarios per event?

Furthermore, does it simplify the creation of these scenarios? The true power of decision optimization does not lie in the model solution, but in the ability to create different models that represent different eventualities (as this will allow you to home in on a robust and realistic solution), to create different models off of a base model plus or minus one or more constraints (as this will help you figure out how much a business rule or network design constraint costs you), and to create models under different pricing scenarios (to find out what would happen if preferred suppliers decreased prices or increased supply availability).
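
To illustrate the second capability, a minimal sketch (again PuLP, with hypothetical data) that prices a business rule by solving the base scenario and a what-if scenario with the rule relaxed:

```python
# A minimal what-if sketch: the cost of a business rule is the delta
# between the base scenario and the same model with the rule relaxed.
# Bids, capacities, and the 40% cap are hypothetical.
import pulp

def solve(max_share):
    demand = 100_000
    bids     = {"Acme": 9.80,   "Globex": 9.95,   "LocalCo": 9.60}
    capacity = {"Acme": 80_000, "Globex": 80_000, "LocalCo": 50_000}
    prob = pulp.LpProblem("scenario", pulp.LpMinimize)
    qty = pulp.LpVariable.dicts("qty", bids, lowBound=0)
    prob += pulp.lpSum(bids[s] * qty[s] for s in bids)  # total cost
    prob += pulp.lpSum(qty[s] for s in bids) == demand  # meet demand
    for s in bids:
        prob += qty[s] <= min(capacity[s], max_share * demand)
    prob.solve(pulp.PULP_CBC_CMD(msg=False))
    return pulp.value(prob.objective)

base   = solve(max_share=0.40)  # rule: no supplier gets more than 40%
whatif = solve(max_share=1.00)  # same model, rule relaxed
print(f"the 40% cap costs ${base - whatif:,.0f} on this award")
```

A good SSDO tool lets you do exactly this (base model plus or minus a constraint) with a few clicks instead of a rebuild.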

3. How fast is it for different average model sizes?

And can performance be tweaked? Optimization takes what it takes. That being said, if one solution takes an average of 1 hour for an average scenario, and another solution takes 10 minutes, all things being equal, if you have compressed sourcing cycles, the 10-minute solution might be better. Emphasis on “might”. This is only true if the faster solution is of the same quality: some models, and some solvers, sacrifice quality and accuracy for speed. The best solution will let you trade off “tolerance” and accuracy for speed. Sometimes it’s easy to get within 1% or 2% in a few minutes, even though that last 1% or 2% could take hours. On a model with low total savings potential, getting within 1% may be enough. And when trying to home in on the right what-if scenario, it’s nice to get within 1% quickly, analyze half-a-dozen scenarios, settle on your preferred scenario, and then allow the right scenario to run to completion over lunch (or, if it’s a huge model, overnight). Thus, tweaking ability is very important.
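
As a concrete example of such tweaking, PuLP’s bundled CBC solver exposes a relative gap tolerance and a time limit (parameter names as in recent PuLP releases; other solvers have equivalents):

```python
# A minimal sketch of trading accuracy for speed: gapRel stops once the
# solution is provably within 1% of optimal; timeLimit caps runtime for
# quick what-if passes. Reserve the exact run for the final scenario.
import pulp

fast  = pulp.PULP_CBC_CMD(gapRel=0.01, timeLimit=120, msg=False)  # within 1%, <= 2 min
exact = pulp.PULP_CBC_CMD(msg=False)                              # run to optimality

# prob.solve(fast)   # screen half-a-dozen scenarios quickly
# prob.solve(exact)  # then let the preferred scenario run to completion
```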

4. Is it “true” real-time or “near” real-time?

Thanks to significant advances in processor and hardware performance, as well as off-the-shelf optimizer technology (like IBM ILOG’s CPLEX), it’s now possible to rapidly re-build and re-solve even very large models using off-the-shelf modeling languages in seconds. This allows e-auction tools (which keep the model relatively moderate in comparison, and presolve with seed bids such as current prices, market prices, and last quotes) to incorporate decision optimization in real-time by simply updating a few parameters and re-solving the model on every (few) parameter update(s) (depending on model size) on a high-powered multi-core server with an appropriately configured and optimized solver (which can spin off copies and have each processor work on a different subspace). However, if the approach the product takes is to rebuild and resolve the model on every update, that’s not real-time, that’s near real-time, and the slowdown could be significant for large models. (To clarify further, real-time optimization requires the ability to merge model construction and model solution in such a way that a new bid can be introduced as a parameter change that does not require the optimizer to rebuild the sparse model matrix and start the solution process over from scratch.)

5. Can you describe two or three scenarios you have encountered where you could not model the situation exactly?

And, more importantly, how did you work around the issue, and how accurate was the final result? The real world is messy compared to models, which are clean; only so much data is available, and math can only model as much as the minds who created the model could conceive. As a result, no optimization model can handle every real-world scenario 100% accurately. If a vendor representative says it can, he’s either lying through his teeth or not competent enough to be selling the product. (Note that “I’ll have our optimization expert get back to you on that” is a good answer from an average sales representative.) This is about the only way to get a decent idea of how appropriate the tool is for you. If the scenarios were complex and the constraints based on business rules you hardly ever, or never, use, then the solution is probably okay for you. If the scenarios were simple and the constraints based on business rules you use all the time, it’s probably not the tool for you.

6. Would you be willing to demo your solution to, and answer questions from, our consultant who understands both our needs and decision optimization technology?

Let’s face it: just like the right decision optimization tool can deliver huge savings multiples on your investment (10X or more), the wrong tool will simply represent a six (or seven) figure cost that yields little return. If you can’t tell the difference (and there’s no shame in admitting you can’t if you’ve never used this type of technology before), then you should bring in a consultant who can, to help you select the right technology and ensure you are appropriately trained on it, until you are self-sufficient and saving an average of 10% or more per project put through the tool.

7. Can we do a pilot project at-cost (or gain-share) before committing to a long term license?

If you like what you hear, but are still unsure, or are having problems getting the budget approved, a pilot is often the way to go! (Note that I did not use the word “free”!) If you’re not willing to sign a license, then, given the sophistication of this technology and the amount of effort the provider is going to have to allocate to support you through the pilot and ensure you are successful, you need to be willing to pay for services at a rate that is sufficient to cover the provider’s cost for the pilot, especially considering that many of the companies that offer affordable optimization solutions are only able to do so because they keep their costs and overheads down.

The Sourcing Innovation Source-to-Pay+ Mega Map!

Now slightly less useless than every other logo map that clogs your feeds!

1. Every vendor verified to still be operating as of 4 days ago!
Compare that to the maps that often have vendors / solutions that haven’t been in business / operating as a standalone entity in months on the day of release! (Or “best-of” lists that sometimes have vendors that haven’t existed in 4 years! the doctor has seen both — this year!)

2. Every vendor logo is clickable!
the doctor doesn’t know about you, but he finds it incredibly useless when all you get is a strange symbol with no explanation or a font so small that you would need an electron microscope to read it. So, to fix that, every logo is clickable so you can go to the site and at least figure out who the vendor is.

3. Every vendor is mapped to the closest standard category/categories!
Furthermore, every category has the standard definitions used by Sourcing Innovation and Spend Matters!
the doctor can’t make sense of random categories like “specialists” or “collaborative” or “innovative”, despises it when maps follow this new-age analyst/consultancy award trend and give you labels you just can’t use, and gets red in the face when two very distinct categories (like e-Sourcing and Marketplaces, or Expenses and AP) are merged into one. Now, the doctor will also readily admit that this means that not all vendors in a category are necessarily comparable on an apples-to-apples basis, but that was never the case anyway, as most solutions in a category break down into subcategories and, for example, in Supplier Management (SXM) alone, you have a CORNED QUIP mash of solutions that could be focused on just a small subset of the (at least) ten different (primary) capabilities. (See the link on the sidebar that takes you to a post that indexes 90+ Supplier Management vendors across 10 key capabilities.)

Securely Download the PDF! (or use HTTP) [HTML]
(5.3M; Note that the Free Adobe Reader might choke on it; Preview on Mac or a Pro PDF application on Windows will work just fine)