Monthly Archives: May 2025

Financial Business Risk Prioritizes Supply Chain Vulnerabilities …

… but it does not identify those vulnerabilities, although it can tell you where to start looking. So while an article in the SCMR last year provided a good overview of how to evaluate, and quantify, supplier risk, the title was misleading when it said they were calculating business risk to identify supply chain vulnerabilities.

The article, which described the authors' approach to improving the evaluation of risk impact on a business, culminated in four main findings. The approach, which looked at the total financial impact a supplier failure would have, yielded two findings that we've known for over a decade, ever since Resilinc pioneered the assessment of the financial risk associated with a supplier failure (based on mapping where all of a supplier's parts are used and which of those are single source):

  • procurement spend with a supplier is NOT correlated with the financial risk of a supplier
  • part standardization can increase business risk impact

As well as two insights that are rather new:

  • procurement spend is not correlated with the revenue of the company (the Resilinc model could have shown this, but they did not focus on this or collect those metrics last time SI was made aware of their methodology)
  • true high-risk impact suppliers represent a substantially smaller share of spend than an organization might think; in the authors' study, they represented only 28% of total spend (whereas most companies will highlight the high spend suppliers as high risk and identify the suppliers that represent almost three quarters of spend, or 73% in this study)

The reason for this is that they linked all of the organization's data sources that contained information related to the BoM for each SKU, the revenue for each SKU, and the suppliers for each BoM. By creating a network of connections between components, products, and suppliers, and identifying single source parts, the link between the criticality of a supplier and the revenue became clear. Consider the supplier who supplies that custom control chip for the fuel injection management, cruise control, or even the monitoring of the tire pressure. If it were to fail, the absence of a single $10 custom control chip could bring down a multi-million-dollar production line, and close an entire production plant, as the semiconductor shortage did to many plants during COVID. Given that these chips were being put into $10,000 to $100,000 cars, these suppliers would never have blipped on a spend-based risk assessment. And this is just one example.
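The network approach described above can be sketched as a small graph computation. This is an illustrative mock-up, not the authors' or Resilinc's actual model; all suppliers, parts, SKUs, and figures below are invented for the example.

```python
from collections import defaultdict

# Hypothetical data: BoM links (SKU -> parts), sourcing (part -> suppliers),
# and annual revenue per SKU. None of this is the study's real data.
bom = {
    "sedan_x": ["tpms_chip", "steel_frame", "wiring_harness"],
    "suv_y":   ["tpms_chip", "steel_frame"],
}
part_suppliers = {
    "tpms_chip":      ["ChipCo"],            # single-sourced custom part
    "steel_frame":    ["SteelA", "SteelB"],  # multi-sourced commodity
    "wiring_harness": ["HarnessA", "HarnessB"],
}
sku_revenue = {"sedan_x": 50_000_000, "suv_y": 80_000_000}

def revenue_at_risk(bom, part_suppliers, sku_revenue):
    """Revenue exposed to each supplier through single-sourced parts."""
    exposure = defaultdict(float)
    for sku, parts in bom.items():
        for part in parts:
            suppliers = part_suppliers[part]
            if len(suppliers) == 1:  # no alternative source: full SKU revenue at risk
                exposure[suppliers[0]] += sku_revenue[sku]
    return dict(exposure)

print(revenue_at_risk(bom, part_suppliers, sku_revenue))
```

Even in this toy version, ChipCo — a tiny-spend chip supplier — carries $130M of revenue exposure, while the high-spend, multi-sourced steel suppliers carry none, which is exactly the spend-vs-criticality disconnect the study found.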

But it is an example that demonstrates the blind spots companies have with respect to small and specialized suppliers that aren't in the top 80% of spend yet supply sole-sourced and/or custom parts or products. This means that when doing a risk assessment, it's not just risky suppliers or risky supply chains that need to be assessed, it's any supplier that supplies something that isn't easily replaced by another source should something happen to the current supplier. The risk could be low that they will fail, and lower still that you couldn't quickly modify a design to use an alternative, but you don't know until you assess. And that assessment must be revenue and criticality based, not spend based. Spending $100M with a steel supplier to acquire the raw material for a frame assembly makes that supplier strategic, but doesn't make using it super risky when all its competitors offer the same grades of steel. But if you need a custom chip for that car, power transformer, etc., and you currently have only one supplier to supply it, then that supplier, no matter how stable and how low-risk its profile looks, is a risk even if it gets only one hundredth of the spend. And you need to determine if it has any vulnerabilities and, if so, monitor them so you won't be surprised by a sudden failure.

The Lack of Adoption of Analytics is NOT Complicated!

According to THE PROPHET, the reason that we’ve never seen a breakout $100M+ pure-play (spend) analytics vendor is it’s complicated. (Source: LinkedIn)

But the reality is that it’s really not.

First of all, approximately one third of all multi-nationals are headquartered in the US. In other words, one third of global enterprise is based out of the US, where the strategic decisions are made. Let’s say that again, one third!

Secondly, and this is the real explanation, in our age of participation trophies and only focusing on the positive (when there really isn't any), no one is willing to state the truth, and that is: most of the employees responsible for strategic [spend] analysis are just too math-stupid.

Analytics, at its core, requires good mathematics skills and, with traditional analytics applications, good computer skills.

However, the US, where many multi-nationals are based, consistently ranks in the lower part of the OECD international rankings: it currently sits 34th in PISA [out of 79 scored countries], and in the OECD's adult skills survey it posts an average numeracy score of 249, below the overall OECD average of 263, with over 1/3 of its adult population at level 1 or below. This means they can't even do basic arithmetic and problem solving [or calculate a tip FFS, but that does explain why they believed their administration when it lied and said other countries pay the tariffs]. And that's the average business employee in the US, since anyone at level 2 on the OECD scale can likely fake it in a STEM career in the US.

As for THE PROPHET‘s reasons as to why Spend Analysis has consistently underperformed the hype:

  • While 3/4 of solutions have always been reporting in drag, I've been highlighting at least a dozen Best of Breed solutions consistently for the past decade. They have existed for the past 20 years, you just had to look (and understand what to look for, but this site did a great job of helping you with that!)
  • Yes, scale came at the cost of dumbing down the UX (for the US market in particular)!
  • Unfortunately there is no faster way to die as a Spend Analysis vendor than to get scooped up by a (mega) suite or a Big X Consultancy.
  • Actually, the analytics and optimization are not powerful or complex enough in most solutions. Again, the problem is that the vendor didn't add incremental levels of simplification (i.e. dumbing down) so each user could take advantage of it at their mathematical (in)competency level.

But the real reason, as hinted above, is that employees resisted these advanced spend analytics solutions because they knew they didn’t have the mathematical skills to use them. (Which the US Education System should be blamed for [and why it should be fixed, not dismantled], not the employees, unless those employees went to University and chose not to take math courses to try and make up for the failings of the public education system they were subjected to.)

As for THE PROPHET‘s signals that the times they are a changin’:

  1. Good + Cheap = Dangerous
    Faster? Check! Cheaper? Check! Smarter? Well … Ask Woody!
  2. Analytics is Merging with Execution
    This is key for adoption of analytics — do it when you need it and apply the findings right away.
  3. Intake, Orchestration and Agentic Tech
    I guess I have to say it again!
    When what we really need is a Revenge of the Nerds! (If the USA even has any left!)

However, the real reason that we may finally be entering a new era in analytics is the following:

4. Most companies are trying to stave off bankruptcy as a result of US trade, market, etc. decisions that have already bankrupted many SMEs, and they now realize that analytics is a key part of the solution. You can't optimize spend you don't understand, or understand the impact of a sudden 145% increase in tariffs if you don't understand how much you are sourcing from the country in question.
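That last point is just an aggregation over spend records once you have country-of-origin data. A minimal sketch — all suppliers, country splits, and figures are invented for illustration:

```python
# Hypothetical spend records: (supplier, country_of_origin, annual_spend)
spend = [
    ("ChipCo",   "CN", 1_200_000),
    ("SteelA",   "US", 40_000_000),
    ("HarnessA", "CN", 3_500_000),
    ("MoldCo",   "MX", 2_000_000),
]

def tariff_exposure(spend, country, tariff_rate):
    """Extra annual cost if a tariff of `tariff_rate` hits goods from `country`."""
    exposed = sum(amount for _, c, amount in spend if c == country)
    return exposed, exposed * tariff_rate

# A sudden 145% tariff on goods from China
exposed, extra_cost = tariff_exposure(spend, "CN", 1.45)
print(f"${exposed:,} sourced from CN -> ${extra_cost:,.0f} in new tariff cost")
```

Trivial math, but you can't do it at all if your spend data doesn't capture country of origin in the first place — which is the point.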

CLM is Dead! Long Live CLM!

Last month THE PROPHET ran a RIP post over on LinkedIn where he heralded the demise of CLM.

Which is coming fast and furious for CLM 1.0 and CLM 2.0 because, as we’ve said before, most current CLM solutions are nothing more than a glorified document repository / barebones CMS with a bit of linguistic rebranding, a few customized meta-data fields, maybe a bit of versioning support, and if you’re super lucky, some integrated e-Signature support.

As for THE PROPHET's suggestions, most of them won't happen.

CLM absorbed into I2O?

Considering most I2O (Intake to Orchestrate) players still haven’t absorbed a fleshed out working Source to Pay model … not likely.

CLM goes vertical?

The whole point of CLM is horizontal — to get a grip on all of your contracts, not just a subset of them!

Agentic Solutions?

I like my contracts the way I like my maps: ACCURATE!

The best “AI” can do is enhance the productivity you get from a (very) small legal team … it CAN NOT replace it!

GPOs?

Standard terms around pricing DO NOT satisfy geographic requirements, which vary with levels of regulation, compliance, etc.

Clause based?

Ask Icertis and especially Exari how well that worked out for them …

Every other suggestion

Maybe … but all of this is trickier than THE PROPHET lets on!

The reason that CLM doesn’t work, as we noted above, is that the majority of “CLM” solutions on the market are NOT CLM at all. They are glorified repositories with some authoring and e-Signature support … not at all what an organization needs.

An organization needs “lifecycle” management. That’s a heck of a lot more than just drafting, redlining, signing, and sticking it in a repository. That’s because contract “lifecycle” management really starts when the contract is signed (whereas most platforms seem to think it ends when the contract is signed).

It’s about automatically extracting the obligations, indexing them, assigning them, tracking them, and making sure they get done.

It’s about extracting the milestones and deliverables, as well as those obligations, and wrapping them in a project plan, assigning the resources, assigning the supervisory chain, tracking them, making sure they get done, and making sure all requirements are met.

It’s about extracting the SKUs, price tables, and rate cards, and pushing them into the Procurement systems to allow those systems to perform the right m-way matches and make sure nothing is paid out that wasn’t agreed to. It’s about pulling in the paid invoices for tracking purposes and allowing the contract/relationship managers to track total contract fulfillment.
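The rate-card half of that m-way match is conceptually simple. Here is a minimal, hypothetical sketch — the SKUs, prices, and statuses are invented, and a real system would add tolerances, currencies, quantity tiers, and effective dates:

```python
# Hypothetical contracted rate card extracted from the signed contract.
rate_card = {"SKU-1001": 12.50, "SKU-2002": 7.25}

def check_invoice_line(sku, unit_price, qty, tolerance=0.0):
    """Flag invoice lines whose price isn't what was agreed in the contract."""
    contracted = rate_card.get(sku)
    if contracted is None:
        return f"REJECT: {sku} not on contract"
    if abs(unit_price - contracted) > tolerance:
        return (f"HOLD: {sku} billed at {unit_price:.2f}, "
                f"contracted at {contracted:.2f}")
    return f"OK: {sku} x {qty} @ {unit_price:.2f}"

print(check_invoice_line("SKU-1001", 12.50, 100))  # matches the contract
print(check_invoice_line("SKU-1001", 14.00, 100))  # overbilled vs. rate card
print(check_invoice_line("SKU-9999", 5.00, 10))    # off-contract SKU
```

The point is that none of this is possible unless the CLM actually extracts the rate card and pushes it to the Procurement system — which the glorified repositories never do.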

It’s about ensuring that the right parties are notified when a contract is coming up for renewal and have all the information necessary to decide on termination, renegotiation, or an evergreen renewal.

And about a whole lot more where VALUE is concerned. Just check out what The Maverick has to say over on Spend Matters.

Accept It! You ARE Selecting Obsolete Tech.

But that’s not necessarily a bad thing.

In a recent LinkedIn article, Joel said that digital procurement is like a pie eating contest, and while we’re not sure we agree, he made one valid point:

The system you select is already heading toward obsolescence the moment you go live.

But it’s worse than that!

1) It’s heading toward obsolescence from the minute the implementation starts … you have no idea how much technical debt is in the systems you are being sold today from the build fast, scale faster, fix it later mentality infused by VCs and most PE firms!

2) It was probably obsolete when you selected it, especially if you chose a vendor who has been leading the same Gartner and Forrester maps for 10 years with no significant changes to their product or platform!

3) Even worse, chances are that the process you digitized makes you outdated anyway and keeps you that way — digitization is the best time for identifying not how things work, but how they should work to maximize efficiency and minimize risk (and that’s not, as we continually point out, jumping on the Gen-AI / Agentic AI bandwagon and being blinded by the hype).

4) Moreover, you really shouldn’t need different channels (i.e. completely different apps) to source, just different workflows and interfaces, but since most providers don’t do more than one category (among indirect, direct, services, capex projects, etc.), you likely need MORE apps. Further, few suites have more than one or two modules that are truly best of breed (despite their claims), so if you don’t plan for the constant upgrades and bolt-ons … well … you won’t be ready when you have to select and implement one quickly, and then you’ll have even more obsolescence than you planned for.

That doesn’t mean that you should give up on modern tech because it’s all obsolete, because it’s not, and the good vendors recognize this and continually update their tech to minimize the obsolescence. It does mean that you need to be very careful when selecting your tech to find a solution that has minimal technical debt, is beyond where you are today with respect to the processes it supports, and is being continually enhanced by the vendor. If the vendor offers a truly best of breed solution, is beyond where you are today, and has a track record of keeping up with best practices, and best tech, it’s likely a good vendor.

Especially if the tech today is considerably enhanced against the tech it had two to three years ago (which you should be able to determine by looking up old demo videos, articles, independent reviews, etc.).

However, if you can’t tell any difference between the (mega) suite tech being pushed at you today and what the (mega) suites were advertising five years ago, then you should probably stay away. Far, Far Away.

Why Are You Still Buying That Fancy New Piece of Software That

  • Could Get You Sued?
  • Increases The Chance You Will Be Hacked!
  • Could Result in a 100 Million Processing Error?
  • Could Shut Down Your Organization’s Systems for Days!
  • Helps Your Employees Commit Fraud?

If someone told you this when evaluating a piece of software, and asked if you wanted to buy it, I’m sure the vast majority of you would say HELL NO!

In which case I want you to please tell me, why are you all still riding the AI Hype Train, Buying, and Using Gen-AI everywhere?

It has already resulted in lawsuits and losses!
The Air Canada lawsuit over the Gen-AI chatbot is just one notable well publicized example.

AI systems are increasingly AI coded, and AI-generated code has a much greater security risk
because the models are trained on repositories that contain large amounts of untested, unverified, and high-risk code — so they generate code so full of security holes it’s a hacker’s dream! (See this great piece in the ACM on The Drunken Plagiarists.)

AI systems negotiate on the data they have
and with a single decimal point error you could be paying 10X what you need to. Not to mention, they don’t always translate right. Remember, the experimental AI that DOGE used claimed an $8 Billion savings on an $8 Million contract!

Bad data generated by an AI system and fed into a legacy system with poor data validity checks can shut it down.
Plus, Gen-AI can also push out bad updates faster than any human can, and you can easily have your own CrowdStrike situation!

Now it’s being used by employees to generate fake receipts
that look so real that, if the employee does a few seconds of research (to get the restaurant info, current menu prices, tax code, etc.), you can’t distinguish the generated image from the real thing. And, before you say “Ramp solves this”, well, it only does if the employee is lazy (which, let’s face it, is human nature, so you’ll catch about 90% of it). But what happens when a user strips the metadata, which, FYI, can be as easy as taking a picture of the picture … oops! (And if you’re a hacker, running it through a metadata stripper/replacement routine is even easier, as you’re just hotkeying a background task.)
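To see how thin that metadata signal is, here is a minimal sketch of the kind of check a receipt-fraud filter can do with nothing but the image bytes: scan a JPEG for an EXIF APP1 segment. A stripped or re-photographed image will often lack one, but that’s a weak signal, not a fraud detector — and it’s exactly what a picture-of-a-picture defeats.

```python
def has_exif(jpeg_bytes: bytes) -> bool:
    """Scan a JPEG's marker segments for an APP1 (0xFFE1) EXIF block.

    A missing EXIF block is a (weak) red flag for stripped or generated
    images -- the signal defeated by photographing a photo.
    """
    if jpeg_bytes[:2] != b"\xff\xd8":        # must start with the SOI marker
        return False
    i = 2
    while i + 4 <= len(jpeg_bytes):
        if jpeg_bytes[i] != 0xFF:            # lost marker sync; give up
            break
        marker = jpeg_bytes[i + 1]
        if marker == 0xE1:                   # APP1: check for the EXIF header
            return jpeg_bytes[i + 4 : i + 10] == b"Exif\x00\x00"
        length = int.from_bytes(jpeg_bytes[i + 2 : i + 4], "big")
        i += 2 + length                      # skip to the next segment
    return False

# Synthetic examples: one JPEG with an EXIF APP1 segment, one without.
with_exif = b"\xff\xd8\xff\xe1\x00\x10Exif\x00\x00" + b"\x00" * 8 + b"\xff\xd9"
stripped  = b"\xff\xd8\xff\xdb\x00\x04\x00\x00\xff\xd9"
print(has_exif(with_exif), has_exif(stripped))
```

Which is the point: the “check” is a dozen lines, and so is the routine that strips or rewrites the metadata.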

AI is good. Gen-AI has its [limited] uses. But unrestricted and unhinged mass adoption of untested, unverified AI for inappropriate uses is bad. So why do you keep doing it?

Especially since it’s now proven it’s worse for you than some illegal drugs! (Source)