Advanced Supplier Management TOMORROW — No Gen-AI Needed!

Back in late 2018 and early 2019, before the GENizah Artificial Idiocy craze began, the doctor did a sequence of AI Series (totalling 22 articles) on Spend Matters on AI in X Today, Tomorrow, and The Day After Tomorrow for Procurement, Sourcing, Sourcing Optimization, Supplier Discovery, and Supplier Management. All of which was implemented, about to be implemented, or capable of being implemented, and most definitely not doable with Gen-AI.

To make it abundantly clear that you don’t need Gen-AI for any advanced back-office (fin)tech, and that, in fact, you should never even consider it for advanced tech in these categories (because it cannot reason, cannot guarantee consistency, and confidence in the quality of its outputs can’t even be measured), we’re going to talk about the advanced features enabled by Assisted and Augmented Intelligence that are (or soon will be) in development and that you will see in leading best of breed platforms over the next few years.

Unlike prior series, we’re identifying the sound ML/AI technologies that are, or can be, used to implement the advanced capabilities that are currently emerging, and will soon be found, in Source to Pay technologies that are truly AI-enhanced. (Which, FYI, may not match one-to-one with what the doctor chronicled five years ago because, like time, tech marches on.)

Today we continue with AI-Enhanced Supplier Management that is in development “today” (and that, when the first series was penned five years ago, was expected to be in development by now) and will soon be a staple in best of breed platforms. (This article sort of corresponds with AI in Supplier Management The Day After Tomorrow, which was published in May 2019 on Spend Matters.)

TOMORROW

Supplier Future State Predictions

Supplier management platforms of today can integrate market intelligence with community intelligence, internal data, and external data sources, and give you great insight into a supplier’s current state from a holistic perspective.

Along each dimension, future states can be predicted based on trends. But single trends don’t tell the whole story. Now that we have decades of data on a huge number of companies available on the internet across financial, sustainability, workforce, production, and other dimensions, which can be analyzed over time and cross-correlated, we can do more, and know more.

Based on this correlated data, machine learning can be used to build functions, by industry and company size, that can predict future state with high confidence, provided a sufficient number of sufficiently accurate data points exist for the company in question. Now that these platforms can monitor enough internal, community, and market data and pull in a plethora of data feeds, they can accurately compute metrics with high confidence along a host of dimensions. This, in turn, allows them to compute the metrics needed to predict future state, as long as the vendor’s platform has enough historical data on enough companies to define trends and train predictor functions using machine learning.
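To make the mechanics concrete, here is a minimal sketch of the trend side of this, assuming a simple least-squares fit as a stand-in for the segment-specific ML predictor functions described above. The on-time delivery metric, data, and minimum-points threshold are all illustrative assumptions:

```python
# Minimal sketch: fit a least-squares linear trend to a supplier metric
# (here, a hypothetical quarterly on-time delivery rate) and project a
# future state, with a crude confidence flag based on how many
# observations are available. A stand-in for the segment-specific ML
# predictor functions described above.

def fit_trend(history):
    """Ordinary least squares on (period, value) pairs; returns (slope, intercept)."""
    n = len(history)
    ts = list(range(n))
    mean_t = sum(ts) / n
    mean_v = sum(history) / n
    cov = sum((t - mean_t) * (v - mean_v) for t, v in zip(ts, history))
    var = sum((t - mean_t) ** 2 for t in ts)
    slope = cov / var if var else 0.0
    return slope, mean_v - slope * mean_t

def predict_future_state(history, periods_ahead, min_points=8):
    """Project the metric forward; flag low confidence on sparse data."""
    slope, intercept = fit_trend(history)
    t_future = len(history) - 1 + periods_ahead
    prediction = slope * t_future + intercept
    confidence = "high" if len(history) >= min_points else "low"
    return prediction, confidence

# Example: a supplier whose on-time rate is eroding ~1 point per quarter
otd = [0.97, 0.96, 0.95, 0.94, 0.93, 0.92, 0.91, 0.90]
pred, conf = predict_future_state(otd, periods_ahead=4)  # ~0.86 a year out
```

In a real platform, the predictor would be trained per industry and company-size segment on many companies’ histories, and the confidence flag would be a proper statistical measure rather than a simple point count.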

Not only can you enter a relationship based on a current risk profile, but also on a likely future risk profile based on what the company could look like at the end of the desired contract term. If you want a five-year relationship, taking advantage of a great deal created by a temporary blip in supplier or market performance may not be a good idea if suppliers in this situation have historically gone into a downward spiral after accepting a big contract they ultimately weren’t prepared to deliver on.

Category Based Supplier Rebalancing

We could actually do this today, as a few vendors are now offering this capability, but it’s not yet part of supplier management platforms, and the newly emergent offerings are often limited to a few categories. But tomorrow’s platforms will continually analyze your categories holistically (along the most relevant dimensions, which could include cost, supply assurance, environmental friendliness, etc.) to determine if the supply mix you are currently using is the best one, let you know if there could be a better one, and suggest changes to orders (as long as it doesn’t jeopardize contracts where that jeopardy could come with a financial or legal penalty).

It’s just a matter of re-running an optimization model on, say, a monthly basis with updated data on price, supply assurance, and environmental friendliness (using the appropriate data for each, such as market quotes, current supplier risk, carbon per unit, etc.), and comparing the optimal result to the current allocation plan. If it’s within tolerance, stay on track; if it’s slightly out of tolerance, notify a human to conduct and review a thorough analysis to see if something might need to change; if it’s way out of tolerance, recommend a change along with the data that supports the change.
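The compare-and-route step above can be sketched in a few lines. In this hypothetical, the optimization model has already produced a fresh optimum; the function only measures drift from the current plan and decides what to do about it. The 5% and 15% tolerance bands are illustrative assumptions, not recommendations:

```python
# Minimal sketch of the monthly re-check described above: compare the
# freshly optimized allocation to the current plan and route the result
# based on how far out of tolerance it has drifted.

def rebalancing_action(current, optimal, review_tol=0.05, change_tol=0.15):
    """current/optimal: {supplier: share of award, summing to ~1.0}.
    Returns 'stay', 'human_review', or 'recommend_change' based on the
    largest absolute shift any supplier's share would undergo."""
    suppliers = set(current) | set(optimal)
    max_shift = max(abs(current.get(s, 0.0) - optimal.get(s, 0.0))
                    for s in suppliers)
    if max_shift <= review_tol:
        return "stay"
    if max_shift <= change_tol:
        return "human_review"
    return "recommend_change"

current_plan = {"A": 0.60, "B": 0.40}
fresh_optimum = {"A": 0.35, "B": 0.45, "C": 0.20}
action = rebalancing_action(current_plan, fresh_optimum)
# → "recommend_change", since supplier A's share would shift by 25 points
```

A production system would also weigh switching costs and contractual penalties before recommending any change, per the caveat above.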

Supply Base Rebalancing

Once you have a platform that is continually reanalyzing categories and supplier-based assignment, you can start looking across the supply base and identify suppliers which are hardly used (and an overall drain on your company when you consider the costs of maintaining a relationship, and even of maintaining the supplier profile) and suppliers that are potentially overused (and pose a risk to your business simply based on the level of supply, as even the biggest company can stumble, fall, and crash to the ground on a single unexpected event, such as the unexpected installation of a spreadsheet-driven Master of Business Annihilation as CEO who has no clue what the business does or how to run it effectively and, thus, causes a major stumble, as summarized in Jason Premo’s article).

And, more importantly, identify new suppliers who have been performing great with slowly increasing product / service loads and should be awarded more of the business over older suppliers that are becoming less innovative and more risky to the operation at large. Now, this will just be from a supply perspective, and not a supply chain perspective (as these programs focus on suppliers and not logistics or warehousing or overall global supply issues), but this will be very valuable information for Sourcing and New Product Development who want to always find the best suppliers for a new product or service requirement.

Real-Time Order Rebalancing

Since tomorrow’s platforms will be able to recommend category rebalancing across suppliers, they will also be able to quickly recommend real-time order rebalancing strategies if a primary supplier is predicted to be late in a delivery (or a human indicates an ETA for a shipment has been delayed by 60 days). This is because they will be integrated with current contracts, e-procurement systems, and have a bevy of data on projected availability and real historical performance. Thus, it will be relatively simple to recommend the best alternatives by simply re-running the machine learning and optimization models with the problematic supplier taken out of the picture.
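As a deliberately simplified stand-in for re-running the full machine learning and optimization models, the sketch below refills an at-risk order from the remaining qualified suppliers, cheapest first, within capacity, with the problematic supplier excluded. Supplier names, unit costs, and capacities are illustrative assumptions:

```python
# Minimal sketch of the real-time fallback described above: when a primary
# supplier is predicted (or reported) to be late, re-run the award with
# that supplier excluded and fill the order greedily from the rest.

def reallocate(order_qty, suppliers, excluded):
    """suppliers: {name: (unit_cost, available_capacity)}.
    Returns (plan, unmet) where plan maps supplier -> units awarded and
    unmet > 0 means demand that must be escalated to a human."""
    plan, remaining = {}, order_qty
    candidates = sorted((cost, name, cap) for name, (cost, cap)
                        in suppliers.items() if name != excluded)
    for cost, name, cap in candidates:  # cheapest first
        if remaining <= 0:
            break
        take = min(cap, remaining)
        plan[name] = take
        remaining -= take
    return plan, remaining

suppliers = {"Primary": (9.50, 1000), "Alt1": (10.25, 600), "Alt2": (11.00, 500)}
plan, shortfall = reallocate(800, suppliers, excluded="Primary")
# → Alt1 takes 600 units, Alt2 takes 200, no shortfall
```

The greedy fill is the simplification; the real platform would re-solve the full model with risk, logistics, and contract constraints intact.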

Carbon-Based Rebalancing

Similarly, with the rise of carbon calculators and third-party public sources on average carbon production per plant, and even per unit of a product, it will be relatively easy for these supplier management platforms to build up carbon profiles per supplier, the amount of that carbon the company is responsible for, how those profiles compare to other profiles, and what the primary reasons for the differentiation are.

The company can then focus on suppliers using, or moving to, more environmentally friendly production methods, optimize logistics networks, and proactively rebalance awards among supplier plants to make sure the plants producing a product are the ones closest to where the product will be shipped and consumed. It’s simply a carbon focussed model vs. a price focussed one.
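The profile-building step can be sketched as a simple attribution calculation: the buyer’s share of each plant’s carbon is the units it purchases times that plant’s per-unit intensity. All supplier names, plants, and kg CO2e figures below are illustrative assumptions:

```python
# Minimal sketch of the carbon profiling described above: attribute each
# supplier's plant-level carbon to the buying organization in proportion
# to the units purchased, yielding a per-supplier footprint to compare on.

def carbon_profile(awards, plant_intensity):
    """awards: {(supplier, plant): units purchased};
    plant_intensity: {(supplier, plant): kg CO2e per unit}.
    Returns {supplier: total kg CO2e attributable to our awards}."""
    profile = {}
    for (supplier, plant), units in awards.items():
        kg = units * plant_intensity[(supplier, plant)]
        profile[supplier] = profile.get(supplier, 0.0) + kg
    return profile

awards = {("Acme", "Ohio"): 1000, ("Acme", "Gdansk"): 500, ("Baird", "Lyon"): 1200}
intensity = {("Acme", "Ohio"): 2.4, ("Acme", "Gdansk"): 1.1, ("Baird", "Lyon"): 1.8}
profile = carbon_profile(awards, intensity)
# → Acme: 2950 kg attributable, Baird: 2160 kg attributable
```

Swapping this objective into the allocation model in place of price is exactly the “carbon focussed model vs. a price focussed one” trade described above.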

SUMMARY

Now, we realize some of these descriptions are dense, but that’s because our primary goal is to demonstrate that one can use the more advanced ML technologies that already exist, harmonized with market and corporate data, to create even smarter Supplier Management applications than most people (and last generation suites) realize, without any need (or use) for Gen-AI. More importantly, the organization will be able to rely on these applications to reduce time, tactical data processing, spend, and risk while increasing overall organizational and supplier performance 100% of the time, as the platform will never take an action or make a recommendation that doesn’t conform to the parameters and restrictions placed upon it. It just requires smart vendors who hire very smart people who use their human intelligence (HI!) to full potential to create brilliant Supplier Management applications that buyers can rely on with confidence no matter what category or organization size, always knowing that the application will know when a human has to be involved, and why!

Have You Brought Your Supply Chain Planning Out of the Middle Ages?

Back in the 1930s, the dark ages of computing began, starting with the Telex messaging network in 1933. Beginning as an R&D project in 1926, it became an operational teleprinter service, operated by the German Reichspost (under the Third Reich — remember we said “dark ages”). With a speed of 50 baud, or about 66 words per minute, it was initially used for the distribution of military messages, but eventually became a world-wide network of both official and commercial text messaging that survived into the 2000s in some countries. A few years later, in 1937, Bell Labs’ George Stibitz built the “Model K” adder, the first proof of concept for the application of Boolean logic to computer design. Two years later, the Bell Labs CNC (Complex Number Calculator) was completed. In 1941, the Z3, using 2,300 relays, was constructed; it could perform floating point binary arithmetic with a 22-bit word length and execute aerodynamic calculations. Then, in 1942, the ABC (Atanasoff-Berry Computer) was completed, and was seen by John Mauchly, who went on to co-invent the ENIAC, the first general purpose computer, completed in 1945.

Three years later, in 1948, Frederic Williams, Tom Kilburn, and Geoff Toothill developed the Small-Scale Experimental Machine (SSEM), the first digital, electronic, stored-program computer to run a computer program, consisting of a mere 17 instructions! A year later came the modem, which allowed computers to communicate through ordinary phone lines. Originally developed for transmitting radar signals, it was adapted for computer use four years later, in 1953. That same year, 1949, also saw the EDSAC, the first practical stored-program computer to provide a regular computing service.

A year later, in 1950, we saw the introduction of magnetic drum storage, which could store 1 million bits, a previously unimagined amount of data (and, if you believe the legend usually attributed to Gates, more than anyone would ever need), though nothing by today’s standards. Then, in 1951, the US Census Bureau got the UNIVAC 1, and the end of the dark ages was in sight. In 1952, only two years after the magnetic drum, IBM introduced a high-speed magnetic tape, which could store 2 million digits per tape! In 1953, Grimsdale and Webb built a 48-bit prototype transistorized computer that used 92 transistors and 550 diodes. Later that same year, MIT created magnetic core memory. Almost everything was in place for the invention of a computer that didn’t take a whole room. In 1956, MIT researchers began experimenting with direct keyboard input to computers (which, up to then, could only be programmed using punch cards or paper tape). A prototype of a minicomputer, the LGP-30, was created at Caltech that same year. A year later, in 1957, FORTRAN, one of the first third-generation programming languages, was developed. Early magnetic disk drives were invented in 1959. And 1960 saw the introduction of the DEC PDP-1, one of the first general purpose minicomputers. A decade later saw the first IBM computer to use semiconductor memory. And one year later, in 1971, we saw one of the first memory chips, the Intel 1103, and the first microprocessor, the Intel 4004.

Two years later, NPL and Cyclades started experimenting with internetworking through the European Informatics Network (EIN), and Xerox PARC began linking Ethernets with other networks using its PUP protocol. The Micral, based on the Intel 8008 microprocessor and one of the earliest non-kit personal computers, was released. The next year, in 1974, the Xerox PARC Alto was released, and the end of the dark ages was in sight. In 1976, we saw the Apple I, and in 1981 we saw the first IBM PC, and the middle ages began as computing was now within reach of the masses.

By 1981, when the middle ages began, we already had GAIN Systems (1971), SAP (1972), Oracle (1977), and Dassault Systemes (1981), four (4) of the top fourteen (14) supply chain planning companies according to Gartner in their 2024 Supply Chain Planning Magic Quadrant (Challengers, Leaders, and Dassault Systemes). In the 1980s we saw the formation of Kinaxis (1984), Blue Yonder (1985), and OMP (1985). Then, in the 1990s, we saw Arkieva (1993), Logility (1996), and John Galt Solutions (1996). That means ten (10) of the top fourteen (14) supply chain planning solution companies were founded before the middle ages ended in 1999 (and the age of enlightenment began).

Tim Berners-Lee invented the World Wide Web in 1989, the first browser appeared in 1990, the first cable internet service appeared in 1995, Google appeared in 1998, and Salesforce, considered to be one of the first SaaS solutions built from scratch, launched in 1999. At the same time, we reached an early majority of internet users in North America, ending the middle ages and starting the age of enlightenment, as global connectivity was now available to the average person (at least in a first world country).

Only e2Open (2000), RELEX Solutions (2005), Anaplan (2006), and o9 Solutions (2009) were founded in the age of enlightenment (but not the modern age). In the age of enlightenment, we left behind on-premise and early single client-server applications and began to build SaaS applications using a modern SaaS MVC architecture, where requests came in, got directed to the machine with the software, which computed answers and sent them back. This allowed for rather fault-tolerant software, since, if hardware failed, the instance could be moved, and, if an instance failed, it could just be redeployed with backup data. It was true enlightenment. However, not all companies adopted multi-tenant SaaS from day one; only a few providers did in the early days. (So even if your SCP company began in the age of enlightenment, it may not be built on a modern multi-tenant cloud-native true SaaS architecture.) This was largely because there were no real frameworks to build and deploy such solutions on (and Salesforce literally had to build its own).

However, in 2008, Google launched its cloud, and in 2010, one year after the last of the top 14 supply chain planning solutions was launched, Microsoft launched Azure. At that point, the age of enlightenment came to an end and the modern age began, as there were now multiple cloud-based infrastructures available to support cloud-native, true multi-tenant SaaS applications (no DC operational knowledge required), making it easy for any true SaaS provider to develop these solutions from the ground up.

In other words, not one Supply Chain Planning Solution recognized as a top supply chain planning solution by Gartner was founded in the modern age. (Moreover, if you look at the niche players, only one of the six was founded in the age of enlightenment, the rest are also from the middle ages.)

So why is this important?

  • If the SCP platform core was architected back in the day of client-server, and the provider did not rearchitect it for true multi-tenancy, then even if the vendor wrapped this core in a VM (Virtual Machine), put it in a Docker container, and put it in the cloud, it’s still a client-server application at the core. This means it has all the limits of client-server applications: one client (tenant) instance per server, and no scalability beyond how many cores and how much memory the server can support.
  • If the platform core was architected such that each module, which runs in its own VM, requires a complete copy of the data to function, that’s a lot of data replication required to run the platform, especially if it has 12 separate modules. This can greatly exacerbate the storage requirements, and thus the cost.
  • But that’s not the big problem. The big problem is that models constructed on a traditional client-server architecture were designed to run only one scenario at a time, and only do so if a complete copy of the data is available. So if you want to run multiple models, multiple scenarios for a model, or both, you need multiple copies of the module, each with their own data set for each model scenario you want to run. This not only exacerbates data requirements, but compute requirements as well. (This is why many providers limit how many models you can have and scenarios you can run as their cloud compute costs skyrocket due to the inefficiency in design and data storage requirements.)

    And while there is no such thing as a truly optimal supply chain plan, since you never know all the variables in advance, there are near optimal fault-tolerant plans that, with enough scenarios, can be identified (by building up a picture of what happens at different demand levels, supply levels, transportation times, etc.) and you can select the one that balances cost savings, quality, expected delivery time, and risk at levels you are comfortable with.

That’s the crux of it. If you can’t run enough scenarios across enough models to build up a picture of what happens across different possibilities, you can’t come up with a plan that can withstand typical perturbations, and definitely can’t come up with a plan that can be rapidly put into place to deal with a major demand fluctuation, supply fluctuation, or an unexpected supply chain event.
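A minimal sketch of that scenario sweep, assuming a toy single-number “plan” (a capacity choice) evaluated against randomly perturbed demand; real models have far more variables, but the point stands that every scenario can share one data set instead of requiring its own module copy. The costs, shortfall penalty, and demand distribution are illustrative assumptions:

```python
# Minimal sketch of the scenario-sweep argument above: evaluate candidate
# plans against many perturbed demand scenarios and keep the one with the
# best worst-case cost. Cheap when scenarios share one data set; ruinous
# when each scenario needs its own copy of the module and data.
import random

def plan_cost(capacity, demand, unit_cost=10.0, shortfall_penalty=50.0):
    """Cost of serving one demand realization with a given capacity plan."""
    met = min(capacity, demand)
    return met * unit_cost + max(0, demand - capacity) * shortfall_penalty

def most_robust_plan(capacities, base_demand, n_scenarios=500, seed=42):
    """Pick the capacity plan minimizing worst-case cost over the sweep."""
    rng = random.Random(seed)  # seeded for reproducibility
    scenarios = [base_demand * rng.uniform(0.7, 1.3) for _ in range(n_scenarios)]
    # One shared scenario set, evaluated against every candidate plan.
    return min(capacities,
               key=lambda cap: max(plan_cost(cap, d) for d in scenarios))

best = most_robust_plan([800, 1000, 1200], base_demand=1000)
```

With a steep shortfall penalty the sweep favours the largest capacity here; with a milder penalty, or a cost on idle capacity, it would balance the other way, which is exactly the cost/risk trade-off described above.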

So if you want to create a supply chain plan that can enable supply chain success, make sure you’ve brought your supply chain planning out of the middle ages (through the age of enlightenment) and into the modern age. And we mean you. If you chose a vendor a decade ago and are resisting migration to a newer solution, including one offered by the vendor, because you spent years, and millions, tightly integrating it with your ERP solution, then you’re likely running on a single-tenant SaaS architecture at best, and a nicely packaged client-server architecture otherwise. You need to upgrade … and you should do it now! (We won’t advise you here, as we don’t know all of the vendors in the SCP quadrant well enough, but we know some, including those that have recently acquired newer, age of enlightenment and even modern age solutions, and know that some still have old tech on old stacks that they are maintaining because of install base. Don’t be the company stalling progress for your own good!)

We Want to Be a Smart Company — What Else Can We Do! Part 2

In Part 1, you admitted that you read the dumb company: avoid the fork in the road and dead company walking: avoiding the graveyard articles (links in Part 1), took them to heart, admitted you’re making some mistakes, and acknowledged that you’re not doing some key functions as well as you could. Most importantly, you know you need to do more to avoid becoming a casualty of the next mass corporate extinction that’s coming. And you asked us to tell you what else you could do to avoid becoming a dead company walking (or, even worse, a zombie company*). Yesterday we gave you our first five suggestions. Today we give you our next five.

06) Remember Websites are MORE than Static Web Pages

Your website should be a dynamic and interactive website that quickly guides visitors to the educational and informative content they want, with point-based and constructable demos, targeted education and thought leadership, and easy-to-find contact us options for information requests and specific live demos from thought leaders and solution professionals, not sales people. (Qualify the lead, then pass it on to sales.)

It should not, like the majority of websites today, be an overload of hogwash messaging and buzzwords, fancy animated graphics that don’t actually show the solution in use, or a constant barrage of questions (along the lines of “do you have trouble with …”) with the uniform “contact us for answers” directive. It definitely should not contain nonstandard terminology for modules, functions, and processes. (And definitely don’t mislead and say you’re an e-Procurement tool if you’re an e-Sourcing tool; if you don’t know the difference, that just means you didn’t do your homework!) Nor should it contain confusing or non-existent information on target industries and market size (as we all know there is no one-size-fits-all solution, and pretending that your company has one is just obnoxious), or an utter lack of information on pricing tiers and benefits. (Maybe you can’t give an exact price because you offer SSDO or advanced analytics that requires a lot of pay-per-use cloud processing, but you can still give a base license fee or range. If you’re a 1M+ annual solution, you don’t want companies that can’t, or won’t, pay more than 100K reaching out. The market should understand you get what you pay for and that a 100K solution won’t, or at least shouldn’t, have all the features of a 1M solution, but also that, if they are a smaller company, they shouldn’t need all the advanced features of the 1M solution either.)

07) Tap Your Talent for Top Tens

Sometimes the talent you overlook (because you think they are just a developer, pre-sales solution advisor, etc.) has the best ideas (and sometimes they don’t, and that’s why you use your leadership to filter out the best ideas).

If you have a problem, or just want to look for opportunities for improvement, ask your people first. Now, they won’t identify or come up with everything (as they have a limited view from a single function and may not have the decades of experience that is sometimes required to come up with something that is both “obvious” and revolutionary), but why should you pay a consultant to help you with improvements you can identify and make in house? You want the consultant focussed on the big win improvements you don’t see (and not easily sidetracked with the dozens of things you can do better).

So, ask all of your employees to come up with, anonymously if it helps, the

  • ten best ways to save money,
  • ten best investments across the business,
  • ten best ways to improve productivity,
  • ten SaaS apps you can do without,
  • ten functions that would totally change customer productivity in your core offering,
  • ten functions that could be removed from the roadmap because they are actually low value,
  • etc.

And while you will get a lot of pyrite, you will get some gold nuggets. And if you’re knowledgeable enough, you’ll be able to separate the gold nuggets out. (And if not, you’ve jump-started your expert advisor with some unique insights into your business and your team, and that will improve their productivity.)

08) Always Pause for Innovation

Regardless of how you interpret what we tell you in #10, if an opportunity for innovation presents itself, always pause to evaluate it and see if it is a true opportunity, fits in with the plan, and would make the product, and the plan, better. If it would make the plan better, and it wouldn’t slow progress down more than a small amount, work it into the plan. If it would make the plan better, but would slow progress down a moderately significant amount, put it on the roadmap to be considered in the next plan update (as new ideas might emerge that make it less of an impact by then). Moreover, when you stumble upon it, the right innovation will improve the product, the plan, and even the timeline.

09) Sign in Blood

Once you have a plan, sign your name to it in blood. The only thing worse than not having a proper plan is abandoning a good plan part way through (because you get too anxious or lose faith) … if, after investing a lot of time and effort, you abandon a plan part way through, you might as well just shut the doors now instead of retreating into the castle to starve as you wait out the siege. Greatness takes time, effort, and sometimes sacrifice.

10) Drive Decisions Like You Just Heisted the Antwerp Diamonds

Once you have a direction, don’t stop. Don’t pull over. Keep going until you successfully escape the EU, sorry, until you escape mediocrity and unprofitability. (And definitely don’t panic along the way. If you got out clean, and have 24 hours to make your escape, use every last hour, because once you cross the border, you’re off scot-free.)

Once you have it all figured out and committed to, you have to be Hagar behind the wheel and drive, drive, drive. Slowing down will lead to stopping. Stopping to abandonment, and then, instead of improvement and success, it will be failure and the beginning of the end. As per 09, you have to see the plan through, and this will only happen if you never stop — you have to keep going as long as there is a drop of gas in the tank.

* yes, zombie companies exist in our space too; and, as the entertainment industry would have you believe, since we’re not medical doctors working in morgues with a constant fresh supply of brains, it is a fate worse than corporate death!

Dear Enterprise Software Vendor: Should You Fire Your PR and Marketing?

Note the Sourcing Innovation Editorial Disclaimers and note this is a very opinionated rant!  Your mileage will vary!  (And not about any firm in particular, as a few non-isolated incidents opened up a whole new line of questioning.)

In response to a post by eCornell (which is/was here), THE REVELATOR wrote this comment (which is/was here), which is repeated here in its entirety in case it gets deleted, since anytime we tried to have a serious conversation around sales, marketing, public relations, and/or Gen-AI with Big X firms and/or (mid-sized) consultancies and analyst firms, they have quickly deleted our comments, and sometimes their entire posts, rather than enter into a real conversation on the subject (and we have now developed an implicit distrust of any corporate account and keep copies of everything):

NOTE: The following post was inspired by a comment by Paul Rogers

Despite feeling like someone walking the hallowed halls of Cornell University wearing a “Yeah, Harvard University” t-shirt, sometimes you have to say things that need to be said – which is the purpose of sharing this article.

Ask ChatGPT the following two questions:

  • What is the role of the Public Relations professional?
  • What is the role of the Marketing professional?

Do you see any mention of end client or customer success as a priority? Whose best interests are PR and marketing professionals focused on? What does the answer to these questions tell you?

Corporate communication has always been about putting a positive spin on business and the brand. It reminds me of the 1986 Richard Gere movie Power – if not a great movie, it is certainly interesting and engaging. Denzel Washington’s role as public relations expert Arnold Billings is worth the price of admission alone.

Unfortunately, beyond the company they represent, are PR and marketing people doing more harm than good?

Thoughts?

To which the doctor responded (which is/was here)

Well, SI, which has repeatedly told companies in our space to fire their PR firms going back to 2008 (see Blogger Relations), firmly believes that PR firms are doing more harm than good because

  1. you are NOT selling enterprise software to consumers and
  2. it’s not “image”, it’s “solution”!

As for marketing, corporate marketing can be good if it exists to educate and explain, but when was the last time that happened on a regular basis in our space? Over a decade ago … now it’s all AI-this, orchestrate-that, and whatever the bullcr@p of the day is. It’s all buzz, no honey. All show, no substance. All confusion, no clarity. (It’s bad enough that Trump has brought back the Land of Confusion with his populist politics that have taken the first world by storm; we don’t need it in our workplace!)

So, right now, I’d say at least 6/7, if not 9/10, marketers are doing more harm than good and should be fired with their PR brethren.

There are over 666 companies in our space, and way too many peddling any type of solution you can think of. While we need at least 3-5 in each industry group – market size – geo region – module focus combination you can think of for competition, we don’t need 30+. Most are not going to survive, especially when most of them don’t have solid solutions built from years of experience that solve real customer problems (as opposed to just offering some shiny new tech that looks good but doesn’t solve the majority of pain points in real organizations).

This means that companies need to focus less on marketing and selling and more on:

  • market research, especially listening to what the real pain points are of the customers they want to sell to (and they need to focus in on a customer group here, you can’t be everything to everyone in our space and any company that thinks it can is the first company you should walk away from)
  • solution (not product) development — not shiny new tech, tried-and-true tech that works
  • market education, explaining what they do, how they do it, and why it solves real pain points after building a solution that solves the pain points they identified in their research

Which means, especially if money is tight, they should forget the marketers and instead focus on hiring researchers and educators. People are getting tired of the 80%+ tech project failure rates. They’d welcome some real insight and real focus on real solutions. If only the market would wake up and realize this!

Advanced Supplier Management TODAY — No Gen-AI Needed!

Back in late 2018 and early 2019, before the GENizah Artificial Idiocy craze began, the doctor did a sequence of AI Series (totalling 22 articles) on Spend Matters on AI in X Today, Tomorrow, and The Day After Tomorrow for Procurement, Sourcing, Sourcing Optimization, Supplier Discovery, and Supplier Management. All of which was implemented, about to be implemented, or capable of being implemented, and most definitely not doable with Gen-AI.

To make it abundantly clear that you don’t need Gen-AI for any advanced enterprise back-office (fin)tech, and that, in fact, you should never even consider it for advanced tech in these categories (because it cannot reason, cannot guarantee consistency, and confidence in the quality of its outputs can’t even be measured), we’re going to talk about all the advanced features enabled by Assisted and Augmented Intelligence that were (about to be) in development five years ago and are now (or should be) available in leading best of breed systems. And we’re continuing with Supplier Management.

Unlike prior series, we’re identifying the sound ML/AI technologies that are, or can be, used to implement the advanced capabilities that are currently found, or will soon be found, in Source to Pay technologies that are truly AI-enhanced. (Which, FYI, may not match one-to-one with what the doctor chronicled five years ago because, like time, tech marches on.)

Today we continue with AI-Enhanced Supplier Management that was in development “yesterday,” when we wrote our first series five years ago, but is now available in mature best of breed platforms for your Procurement success. (This article sort of corresponds with AI in Supplier Management Tomorrow Part I and Part II, which were published in May 2019 on Spend Matters.)

TODAY

Auto Profile Updates with Smart Information Selection

In our last article, we noted that in first, and many second, generation Supplier Management solutions, a supplier was always forced to create a profile from scratch, filling out a bevy of pre-defined form fields — even if they had all of that data in a well-formed (metadata-rich) XML or CSV file. That’s why yesterday’s Supplier Management solutions contained functionality to auto-complete profiles wherever this data was easily available in standard formats.

But the biggest problem remained — supplier profile maintenance. A supplier profile is only accurate the second a supplier hits confirm/complete. Then, their main contact changed. They changed their mailing address. They moved HQ. They offered a new product. They dropped an old one. And so on. And, of course, they never maintained their profile, and you never verified it until you went to call, mail, or order and that person wasn’t there, the mail got returned, or the order was rejected (because the supplier no longer made the product). Then, you went to the website, found the new main line, called, navigated to the right person, got the right info, and maybe remembered to update the system.

So, as errors were discovered, some critical ones would be corrected, but most would remain unchanged or unnoticed, and over the years errors — including information on critical insurance, regulatory approvals, and other key business requirements that put the organization at high risk if not verified — continued to pile up. After a few years, the record became more wrong than right. Not good.

So today’s solutions make use of the fact that information typically gets updated somewhere, even if not in the application. They monitor the supplier’s website for changes in contact information, invoices for address and product information, state and country registries for business information, and so on and when changes are detected, automatically update the supplier profile if the changes can be independently verified (through a third party authority, to prevent hacks or fraud from changing the system) or present the new data for approval to the relationship manager. All this takes is simple website and data source monitoring, scraping, reg-ex based pattern matching, and automated workflows. For complex information, a bit of semantic processing. Nothing beyond classical, proven, tried-and-true AI is needed.

Market Based Supplier Intelligence

Today’s supplier management platforms can integrate with multiple marketplaces, communities, partners, GPOs, and specialized compliance, sustainability, and risk data platforms, use rule-based transformations to harmonize all the data, and use built-in algorithms to extract intelligence at a market level.
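The “rule-based transformations to harmonize all the data” can be as simple as declarative field maps plus unit-conversion rules. The feed names, field names, and `harmonize` helper below are invented for illustration; the structure, not the specifics, is the point.

```python
# Hypothetical sketch: harmonize supplier records from two feeds that use
# different field names and units into one canonical schema via
# declarative, rule-based transformations (no ML needed for this step).
FIELD_MAPS = {
    "marketplace_a": {"vendor_name": "name", "otd_pct": "on_time_delivery"},
    "risk_feed_b":   {"supplier":    "name", "on_time": "on_time_delivery"},
}

def harmonize(source: str, record: dict) -> dict:
    rules = FIELD_MAPS[source]
    out = {canonical: record[raw] for raw, canonical in rules.items() if raw in record}
    # Example unit rule: feed B reports on-time delivery as a 0-1 fraction,
    # while the canonical schema uses a 0-100 percentage.
    if source == "risk_feed_b" and "on_time_delivery" in out:
        out["on_time_delivery"] = round(out["on_time_delivery"] * 100, 1)
    return out

a = harmonize("marketplace_a", {"vendor_name": "Acme Corp", "otd_pct": 96.5})
b = harmonize("risk_feed_b", {"supplier": "Acme Corp", "on_time": 0.942})
print(a)  # {'name': 'Acme Corp', 'on_time_delivery': 96.5}
print(b)  # {'name': 'Acme Corp', 'on_time_delivery': 94.2}
```

Once every feed lands in the same schema, the market-level analytics described next become straightforward aggregation.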

Your company data gives you one view into a supplier; your vendor-based community, which is usually limited to similar companies in your industry that the vendor was able to sell to, gives you another view; but the market gives you yet another. Mathematically, one data point doesn’t tell you anything. If only nine other customers use the vendor and share their data through community intelligence, that gives you 10 data points, which gives you some data on the supplier’s performance and their performance for you relative to others, but 10 data points is not statistically significant. But if 30, 50, or 100 data points can be collected from the market, that gives you deep insight with real statistical significance.

On top of the data, and a few powerful cores (few, not a few thousand), all these platforms need is basic statistical calculations, trend analysis, classical machine learning, semantic processing, and sentiment analysis … all of which have been market ready for over a decade.
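The sample-size argument above is just the standard-error arithmetic from introductory statistics: the uncertainty around an estimated mean shrinks with the square root of the number of data points. A small illustrative sketch, with an assumed spread of 8 percentage points in on-time-delivery scores:

```python
import math

# Illustrative sketch: the half-width of an approximate 95% confidence
# interval for a supplier's mean on-time-delivery score narrows as more
# market data points are pooled (z = 1.96 for 95% under normality).
def ci_half_width(stdev: float, n: int, z: float = 1.96) -> float:
    """Half-width of an approximate 95% CI for the mean of n samples."""
    return z * stdev / math.sqrt(n)

stdev = 8.0  # assumed spread of scores, in percentage points
for n in (1, 10, 100):
    print(n, round(ci_half_width(stdev, n), 2))
# → 1 15.68
# → 10 4.96
# → 100 1.57
```

With one data point you can’t say anything (±15.7 points), with 10 you have a rough idea (±5 points), and with 100 market data points the estimate is tight enough (±1.6 points) to act on — exactly the progression the paragraph describes.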

Real Time Relationship Monitoring

Relationships are more than just performing to a contract. They are about building a working arrangement that is beneficial to both parties. One where both are willing to admit problems, collaboratively explore potential solutions, and work together to achieve them. One where, when there are no problems, both are willing to find ways to improve.

As a result, relationship monitoring is more than just supplier performance monitoring. Especially since the relationship can be bad even when the performance is (still) (surprisingly) good, and the relationship can be (reported as) good when the performance is bad.

However, if you turn the semantic and sentiment analysis that was typically run on market data and public comments toward internal communications, you can start to build up a picture of the overall viewpoint and sentiment on the relationship from both sides, what successes or issues are contributing to that, and whether the situation is improving or deteriorating over time (by trending the number of spikes in communication with sentiment that is overly positive or negative). It’s not foolproof, as both sides could adopt strict, formal communication no matter what, but people are human: they tend to get hotheaded and lose their tempers (and let the words fly) when they are really upset, or get jubilant (and let the praise fly) when they are really happy. So while minor changes in relationship sentiment might not be caught (within tolerance), major changes will be. Moreover, you’re not going to get rigid, controlled, strict, formal communication until threats of a lawsuit fly, and by then it’s too late!
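The “trending the number of spikes” step can be sketched very simply once each message has a sentiment score. The scores below are stubbed in for illustration; in practice they would come from a classical sentiment model, and the threshold would be tuned per organization.

```python
# Hedged sketch: trend the count of emotionally "hot" messages per period.
# Sentiment scores in [-1, 1] would come from a classical sentiment model;
# here they are hard-coded for illustration.
def hot_message_counts(scored_messages, threshold=0.7):
    """Count messages per period whose sentiment magnitude exceeds a threshold."""
    counts = {}
    for period, score in scored_messages:
        if abs(score) >= threshold:
            counts[period] = counts.get(period, 0) + 1
    return counts

msgs = [
    ("2024-01", 0.1), ("2024-01", -0.8),                       # one spike in January
    ("2024-02", -0.9), ("2024-02", -0.75), ("2024-02", 0.2),   # two in February
]
counts = hot_message_counts(msgs)
deteriorating = counts.get("2024-02", 0) > counts.get("2024-01", 0)
print(counts, deteriorating)  # {'2024-01': 1, '2024-02': 2} True
```

A rising count of negative spikes flags a deteriorating relationship for the relationship manager to investigate; no generative model is involved anywhere in the pipeline.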

Automated Resolution Plan Creation, Monitoring, and Adjustment

Not only can supplier management platforms automatically detect issues (by rapid increases or decreases in trends or metrics), they can also correlate them to included resolution plan templates, automatically instantiate them and customize them to the issue in question, walk the supplier relationship manager through the resolution process, monitor progress, and automatically adjust the plan, and timeline, as needed as new information, good or bad, comes in.

Each default template can be correlated to a particular metric-, trend-, or sentiment-driven situation, so selecting it is just a lookup. Instantiation is just filling in the blanks with the appropriate category, product, service, and metric information through regex matching and search-and-replace. Robotic Process Automation (RPA) walks both sides through the process. More RPA alerts either side when something is updated or not completed on time. And adjustments can be made to timelines based on average timelines of similar projects and current trends at each milestone.
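The lookup-plus-search-and-replace step really is that simple. A minimal sketch, with an invented template key, placeholder names, and context values:

```python
import re

# Minimal sketch of resolution-plan instantiation: a default template keyed
# to a metric-driven situation is filled in via regex matching and
# search-and-replace. All names and wording here are illustrative.
TEMPLATES = {
    "on_time_delivery_drop": (
        "Schedule review with {supplier} re {product}: "
        "{metric} fell to {value}; agree corrective milestones."
    ),
}

def instantiate(situation: str, context: dict) -> str:
    template = TEMPLATES[situation]  # selection is just a lookup
    # Replace each {placeholder} with its context value via regex.
    return re.sub(r"\{(\w+)\}", lambda m: str(context[m.group(1)]), template)

plan = instantiate("on_time_delivery_drop", {
    "supplier": "Acme Corp", "product": "widgets",
    "metric": "on-time delivery", "value": "82%",
})
print(plan)
# → Schedule review with Acme Corp re widgets: on-time delivery fell to 82%; agree corrective milestones.
```

Everything downstream of instantiation (walking the parties through the plan, alerting on missed milestones) is plain workflow automation layered on top.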

Automated Risk Mitigation Strategy Identification

It’s one thing to detect risk, which is pretty easy along many dimensions when you have a lot of data at your disposal, and relatively straightforward to predict the likelihood of some risk events, but it’s a lot harder to determine which mitigation strategies should be employed when it looks like a risk is going to materialize.

But that doesn’t mean it can’t be done, or isn’t being done by the best platforms. Just like a platform can come equipped with issue resolution plan templates, it can also come equipped with standard risk mitigation strategies, which are essentially action plans to be automatically customized with the specific category, product/service, logistics, and supply line details. This is just pattern matching and semantic contextual awareness.

When all of this is combined with (near) real-time monitoring across data sources that continually look for relevant news, changes in metrics / prices / trends, etc., it seems like magic (although it isn’t). The platform detects risks, finds the most appropriate mitigations, and presents them to the relationship manager. And all it uses is math, traditional machine learning, and traditional semantic/sentiment analysis. And, of course, a lot of up-front human intelligence (HI!) in the creation of the solution.

Automatic Real-Time Resource Re-Alignment

Corrective action plans and risk mitigation plans have something very important in common — people. People who create them, approve them, execute them, and monitor them. This requires resources to be constantly assigned, monitored, replaced as soon as they are unavailable or needed on more pressing assignments, and reassigned as the issue is resolved or the mitigation complete.

It will often be difficult for a project manager, or even a resource manager, to decide when to pull the organization’s best problem solver off a critical corrective action project to address a less critical risk mitigation project (or vice versa), especially when the manager can’t think of anyone else who could handle the less critical project effectively, even though another moderately experienced problem solver could step into the critical one. But the software will be able to compute when that should happen, provided the organization defines the rules for when it should happen based on hard metrics.

For example, if you define assignments to correlate resources to the projects with the highest cost (should the issue persist or the risk materialize), define the cost of an issue as its expected impact if unsolved and the cost of a risk as its expected impact if unaddressed (using a fixed cost, or a formula if those 10,000 processors don’t arrive and you have 10,000 vehicles you can’t complete), and associate a seniority with each resource, it’s simply rank-ordered matching.

If there aren’t enough resources for all problems, you can apply simple optimization to maximize the impact of your most senior resources. And, again, there is no Gen-AI needed!
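The rank-ordered matching described above can be sketched in a few lines: sort projects by expected cost, sort resources by seniority, and pair them off. The project names, costs, and seniority scores are invented for illustration; a real system would pull them from the metrics the organization defined.

```python
# Illustrative sketch of rank-ordered matching: assign the most senior
# problem solvers to the projects with the highest expected cost.
def assign(projects, resources):
    """projects: list of (name, expected_cost); resources: list of (name, seniority)."""
    by_cost = sorted(projects, key=lambda p: p[1], reverse=True)
    by_seniority = sorted(resources, key=lambda r: r[1], reverse=True)
    # Pair them off in rank order; extra projects go unstaffed until
    # a resource frees up (or an optimizer re-balances the portfolio).
    return {p[0]: r[0] for p, r in zip(by_cost, by_seniority)}

projects = [("risk_mitigation_A", 250_000), ("corrective_action_B", 1_200_000)]
resources = [("junior_solver", 2), ("top_solver", 9)]
print(assign(projects, resources))
# → {'corrective_action_B': 'top_solver', 'risk_mitigation_A': 'junior_solver'}
```

When resources are scarce, this greedy pairing is exactly where a simple optimization model takes over, maximizing the expected-cost coverage of the most senior people — still classical operations research, no Gen-AI required.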

SUMMARY

Now, we realize some of these descriptions, like yesterday’s, are also quite brief, but again, that’s because this is not entirely new tech: the beginnings have been around for years, these capabilities have been in development for a few years and were discussed as “the future of” Procurement tech before Gen-AI hit the scene, and all of them are pretty straightforward to understand. Moreover, if you want to dive deeper, the baseline requirements for most of these capabilities were described in depth in the doctor’s May 2019 articles on Spend Matters. The primary purpose of this article, as with the last, is to explain how more sophisticated versions of traditional ML methodologies can be implemented in unison with human intelligence (HI!) to create smarter Supplier Management applications that buyers can rely on with confidence.