The Complete AI in X (No Gen-AI) Series, 2018/2019 and 2024!
Monthly Archives: November 2024
You Should Never Build Your Own ProcureTech Solution! Ever!
Integrate your own custom suite to suit your processes, maybe, but never build from scratch. (And we should not have to be talking about this again after just publishing on the subject two weeks ago, but too many conversations are indicating that we still need to shout this loud and clear!)
For some reason, this comes up every decade, usually after a hype cycle has peaked, marketers have switched from focussing on solutions to sound bites from a suite of providers who have released products that don’t meet customer needs, the implementation failure rate has edged back up to the 80%+ range, and customers have gotten absolutely positively fed up with the whole situation.
Customers, fed up with the valueless hype, marketing sound bites, high failure rate, and utter lack of solutions from the vendors targeting them on a daily basis, start to think that the right solution is to build their own.
Sourcing Innovation tackled this subject in depth back in 2015 when it wrote a 4-part series on why you should NOT build your own e-Sourcing solution, followed by an explanation of why you should not build your own Contract Management and e-Procurement platform. (links here)
That’s why we are both repeating and elaborating on last Friday’s Rant on why A Company Should Never Build Its Own Enterprise Software Systems.
Not only do we have the situation where:
- the company is not an expert in building software products
- the company is not an expert in best practices across all of its processes
- by the time a custom solution is developed, it’s out of date
- it’s not about the product, it’s about the process you should be working toward and, most importantly,
- it’s about the data that drives the process!
But we also have the situation where, as highlighted in THE REVELATOR’s article:
1. Developing your own is NOT being an early adopter! (Which is what many companies considering build-your-own think they are.)
Early adopter means someone who adopts leading edge technology from a third party, not someone trying to fast track their digitization effort with custom built tech. This is just high risk with little chance of reward for all the reasons mentioned in all of our prior articles.
2. They think Gen-AI will fix their data problem and allow them to develop their own!
If you’ve read anything on Gen-AI on this blog, you know that’s the last thing it will do. For Gen-AI to have any chance of working at all, it needs a huge amount of good, clean data. Otherwise, it’s garbage in, hazardous waste out. No technology has ever needed such large amounts of near-perfect data to have even an abysmal chance of working, and the fact that the marketing madness has convinced many CPOs that Gen-AI can fix a data problem is downright terrifying!
3. They obviously think that the initial quote will be close to the final cost.
Nowhere are cost overruns more extreme than in custom development by a non-software organization that contracts a Big X with poor specifications that look easy, and that, due to lack of manpower, sends The C-Team (if you are lucky) because it’s just another instance of system X (when it’s not).
To be honest, in this situation, if the cost ends up being only 3X to get something usable (but still not what you wanted), given the high technology failure rates, that would be amazing.
We know it’s hard to find appropriate solutions given all the noise out there, and the overabundance of vendors that all look the same, sound the same, and go all in on the same useless Gen-AI (as it just takes one glance at the Mega Map to figure that out), but that doesn’t mean there aren’t vendors out there appropriate for you. Vendors that put solutions, not tech, first, that built affordable tech that works (and didn’t take too much money from investors who then insisted on quadrupling the price), and that will work in an ecosystem with other vendors to solve your problems.
You just have to look hard. Real hard. Probably harder than you’ve ever had to look before. (Expect to eliminate 6 out of every 7 vendors you look at for short list consideration and probably go through 20 to find 3.) But trust us, when you find the right vendor, it will be worth it. The solution will work, will configure to your liking, will be extremely usable for the problems your team faces every day, and will be one where the provider will grow with you for the decade to come.
Good things come to those who wait to find the right vendor. (Even if they have to crawl through multiple pig sties to do so.)
Advanced Supplier Management TOMORROW — No Gen-AI Needed!
Back in late 2018 and early 2019, before the GENizah Artificial Idiocy craze began, the doctor did a sequence of AI Series (totalling 22 articles) on Spend Matters on AI in X Today, Tomorrow, and The Day After Tomorrow for Procurement, Sourcing, Sourcing Optimization, Supplier Discovery, and Supplier Management. All of which was implemented, about to be implemented, or capable of being implemented, and most definitely not doable with Gen-AI.
To make it abundantly clear that you don’t need Gen-AI for any advanced back-office (fin)tech, and that, in fact, you should never even consider it for advanced tech in these categories (because it cannot reason, cannot guarantee consistency, and confidence in the quality of its outputs can’t even be measured), we’re going to talk about all the advanced features enabled by Assisted and Augmented Intelligence that are (or soon will be) in development (now) and that you will see in leading best of breed platforms over the next few years.
Unlike prior series, we’re identifying the sound ML/AI technologies that are, or can be, used to implement the advanced capabilities that are currently emerging, and will soon be found, in Source to Pay technologies that are truly AI-enhanced. (Which, FYI, may not match one-to-one with what the doctor chronicled five years ago because, like time, tech marches on.)
Today we continue with AI-Enhanced Supplier Management that is in development “today” (and that was expected, when the first series was penned five years ago, to be in development by now) and will soon be a staple in best of breed platforms. (This article sort of corresponds with AI in Supplier Management The Day After Tomorrow that was published in May 2019 on Spend Matters.)
TOMORROW
Supplier Future State Predictions
Supplier management platforms of today can integrate market intelligence with community intelligence, internal data, and external data sources and give you a great insight into a supplier’s current state from a holistic perspective.
Along each dimension, future states can be predicted based on trends. But single trends don’t tell the whole story. Now that we have decades of data on a huge number of companies available on the internet across financial, sustainability, workforce, production, and other dimensions which can be analyzed over time and cross-correlated, we can do more, and know more.
Based on this correlated data, machine learning can be used to build functions, by industry and company size, that predict future state with high confidence, provided there are enough sufficiently accurate data points for the company in question. Now that these platforms can monitor enough internal, community, and market data and pull in a plethora of data feeds, they can accurately compute metrics with high confidence along a host of dimensions. This in turn allows them to compute the metrics needed to predict future state, provided the vendor’s platform has enough historical data on enough companies to define trends and train predictor functions using machine learning.
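As a minimal sketch of such a predictor function, assuming a supplier metric is available as a simple per-period time series, one could fit a least-squares trend and derive a crude confidence from the residual spread (real platforms would use far richer, cross-correlated models; all names here are illustrative):

```python
import numpy as np

def predict_future_state(history, periods_ahead):
    """Fit a linear trend to one supplier metric and project it forward.

    Returns (prediction, confidence), where confidence decays as the
    historical data strays from the fitted trend -- a crude stand-in
    for the 'sufficiently accurate data points' requirement.
    """
    t = np.arange(len(history), dtype=float)
    y = np.asarray(history, dtype=float)
    slope, intercept = np.polyfit(t, y, 1)      # least-squares trend line
    residuals = y - (slope * t + intercept)
    prediction = slope * (len(history) - 1 + periods_ahead) + intercept
    confidence = 1.0 / (1.0 + residuals.std())  # ~1.0 for a clean trend
    return prediction, confidence
```

A platform would train one such function per metric, industry, and company-size band, and only trust the output when both the data volume and the resulting confidence are high enough.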
Not only can you enter a relationship based on a current risk profile, but also on a likely future risk profile based on what the company could look like at the end of the desired contract term. If you want a five year relationship, taking advantage of that great deal due to a temporary blip in supplier or market performance may not be a good idea if suppliers historically in this situation typically went into a downward spiral after accepting a big contract they ultimately weren’t prepared to deliver on.
Category Based Supplier Rebalancing
We could actually do this today, as a few vendors are now offering this capability, but it’s not yet part of supplier management platforms and the newly emergent offerings are often limited to a few categories today. But tomorrow’s platforms will continually analyze your categories holistically (along the most relevant dimensions, which could include cost, supply assurance, environmental friendliness, etc.) to determine if the supply mix you are currently using is the best one, let you know if there could be a better one, and suggest changes to orders (as long as it doesn’t jeopardize contracts where that jeopardy could come with a financial or legal penalty).
It’s just a matter of re-running an optimization model on, say, a monthly basis with updated data on price, supply assurance, and environmental friendliness (using the appropriate data for each, such as market quotes, current supplier risk, carbon per unit, etc.), and comparing the optimal result to the current allocation plan. If it’s within tolerance, stay on track; if it’s slightly out of tolerance, notify a human to conduct a thorough analysis and see if something needs to change; if it’s way out of tolerance, recommend a change along with the data that supports it.
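That monthly check might look like the following toy sketch, where a greedy fill-the-best-supplier-first pass stands in for a real optimization model, and the supplier names, blended scores, and tolerance bands are all hypothetical:

```python
def rebalance_check(suppliers, demand, current_alloc,
                    review_tol=0.05, change_tol=0.15):
    """Re-run a (toy) category allocation and classify the current plan.

    suppliers: {name: {"score": blended unit score (price/risk/carbon),
                       "capacity": units available}}
    Returns "on-track", "human-review", or "recommend-change".
    """
    # Greedy stand-in for the optimizer: fill from best-scoring suppliers.
    optimal, remaining = {}, demand
    for name in sorted(suppliers, key=lambda n: suppliers[n]["score"]):
        take = min(remaining, suppliers[name]["capacity"])
        if take:
            optimal[name] = take
            remaining -= take

    def plan_cost(alloc):
        return sum(units * suppliers[name]["score"]
                   for name, units in alloc.items())

    gap = (plan_cost(current_alloc) - plan_cost(optimal)) / plan_cost(optimal)
    if gap <= review_tol:
        return "on-track"         # within tolerance, stay the course
    if gap <= change_tol:
        return "human-review"     # slightly out, flag for a human analysis
    return "recommend-change"     # way out, recommend with supporting data
```

The same comparison generalizes to any objective the blended score encodes, which is why the carbon-focussed variant later in this article is the same mechanic with different weights.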
Supply Base Rebalancing
Once you have a platform that is continually reanalyzing categories and supplier-based assignment, you can start looking across the supply base and identify suppliers which are hardly used (and an overall drain on your company when you consider the costs of maintaining a relationship and even maintaining the supplier profile) and suppliers that are potentially overused (and pose a risk to your business simply based on the level of supply, as even the biggest company can stumble, fall, and crash to the ground on a single unexpected event, such as the unexpected installation of a spreadsheet-driven Master of Business Annihilation as CEO who has no clue what the business does or how to run it effectively and, thus, causes a major stumble, as summarized in Jason Premo’s article).
And, more importantly, identify new suppliers who have been performing great with slowly increasing product / service loads and should be awarded more of the business over older suppliers that are becoming less innovative and more risky to the operation at large. Now, this will just be from a supply perspective, and not a supply chain perspective (as these programs focus on suppliers and not logistics or warehousing or overall global supply issues), but this will be very valuable information for Sourcing and New Product Development who want to always find the best suppliers for a new product or service requirement.
Real-Time Order Rebalancing
Since tomorrow’s platforms will be able to recommend category rebalancing across suppliers, they will also be able to quickly recommend real-time order rebalancing strategies if a primary supplier is predicted to be late in a delivery (or a human indicates an ETA for a shipment has been delayed by 60 days). This is because they will be integrated with current contracts, e-procurement systems, and have a bevy of data on projected availability and real historical performance. Thus, it will be relatively simple to recommend the best alternatives by simply re-running the machine learning and optimization models with the problematic supplier taken out of the picture.
Carbon-Based Rebalancing
Similarly, with the rise of carbon calculators and third-party public sources on average carbon production per plant, and even per unit of a product, it will be relatively easy for these supplier management platforms to build up carbon profiles per supplier, the amount of that carbon the company is responsible for, how those profiles compare to other profiles, and what the primary reasons for the differentiation are.
The company can then focus on suppliers using, or moving to, more environmentally friendly production methods, optimize logistics networks, and proactively rebalance awards among supplier plants to make sure the plants producing a product are the ones closest to where the product will be shipped and consumed. It’s simply a carbon focussed model vs. a price focussed one.
SUMMARY
Now, we realize some of these descriptions are dense, but that’s because our primary goal is to demonstrate that one can use the more advanced ML technologies that already exist, harmonized with market and corporate data, to create even smarter Supplier Management applications than most people (and last generation suites) realize, without any need (or use) for Gen-AI. More importantly, the organization will be able to rely on these applications to reduce time, tactical data processing, spend, and risk while increasing overall organizational and supplier performance 100% of the time, as the platform will never take an action or make a recommendation that doesn’t conform to the parameters and restrictions placed upon it. It just requires smart vendors who hire very smart people who use their human intelligence (HI!) to full potential to create brilliant Supplier Management applications that buyers can rely on with confidence no matter what category or organization size, always knowing that the application will know when a human has to be involved, and why!
Have You Brought Your Supply Chain Planning Out of the Middle Ages?
Back in the 1930s, the dark ages of computing began, starting with the Telex messaging network in 1933. Beginning as an R&D project in 1926, it became an operational teleprinter service, operated by the German Reichspost (under the Third Reich — remember we said “dark ages”). With a speed of 50 baud, or about 66 words per minute, it was initially used for the distribution of military messages, but eventually became a world-wide network of both official and commercial text messaging that survived into the 2000s in some countries. A few years later, Bell Labs’ George Stibitz built the “Model K” adder in 1937 that was the first proof of concept for the application of Boolean Logic to computer design. Two years later, the Bell Labs CNC (Complex Number Calculator) was completed. In 1941, the Z3, using 2,300 relays, was constructed and could perform floating point binary arithmetic with a 22 bit word length and execute aerodynamic calculations. Then, in 1942, the ABC (Atanasoff-Berry Computer) was completed, seen by John Mauchly, who went on to co-invent the ENIAC, the first general purpose computer, completed in 1945.
Three years later, in 1948, Frederic Williams, Tom Kilburn, and Geoff Toothill developed the Small-Scale Experimental Machine (SSEM), which was the first digital, electronic, stored-program computer to run a computer program, consisting of a mere 17 instructions! A year later, in 1949, we saw the modem, which allowed computers to communicate through ordinary phone lines. Originally developed for transmitting radar signals, the modem was adapted for computer use four years later, in 1953. 1949 also saw the EDSAC, the first practical stored-program computer to provide a regular computing service.
A year later, in 1950, we saw the introduction of magnetic drum storage, which could store 1 million bits, which was a previously unimagined amount of data (and twice what Gates once said anyone would ever need), though nothing by today’s standards. Then, in 1951, the US Census Bureau got the Univac 1 and the end of the dark ages was in sight. Then, in 1952, only two years after the magnetic drum, IBM introduced a high speed magnetic tape, which could store 2 million digits per tape! In 1953, Grimsdale and Webb built a 48-bit prototype transistorized computer that used 92 transistors and 550 diodes. Later that same year, MIT created magnetic core memory. Almost everything was in place for the invention of a computer that didn’t take a whole room. In 1956, MIT researchers began experimenting with direct keyboard input to computers (which up to now could only be programmed using punch cards or paper tape). A prototype of a mini computer, the LGP-30, was created at Caltech this same year. A year later, FORTRAN, one of the first third generation computing languages, was developed in 1957. Early magnetic disk drives were invented in 1959. And 1960 saw the introduction of the DEC PDP-1, one of the first general purpose minicomputers. A decade later saw the first IBM computer to use semiconductor memory. And one year later, in 1971, we saw one of the first memory chips, the Intel 1103, and the first microprocessor, the Intel 4004.
Two years later, NPL and Cyclades started experimenting with internetworking with the European Informatics Network (EIN), and Xerox PARC began linking Ethernets with other networks using its PUP protocol. And the Micral, based on the Intel 8008 microprocessor, one of the earliest non-kit personal computers, was released. That same year, in 1973, the Xerox PARC Alto was released and the end of the dark ages was in sight. In 1976, we saw the Apple I, and in 1981 we saw the first IBM PC, and the middle ages began as computing was now within reach of the masses.
By 1981, before the middle ages began, we already had GAIN Systems (1971), SAP (1972), Oracle (1977), and Dassault Systemes (1981), four (4) of the top fourteen (14) supply chain planning companies according to Gartner in their 2024 Supply Chain Planning Magic Quadrant (Challengers, Leaders, and Dassault Systemes). In the 1980s we saw the formation of Kinaxis (1984), Blue Yonder (1985), and OMP (1985). Then in the 1990s, we saw Arkieva (1993), Logility (1996), and John Galt Solutions (1996). That means ten (10) of the top fourteen (14) supply chain planning solution companies were founded before the middle ages ended in 1999 (and the age of enlightenment began).
Tim Berners-Lee invented the World Wide Web in 1989, the first browser appeared in 1990, the first cable internet service appeared in 1995, Google appeared in 1998, and Salesforce, considered to be one of the first SaaS solutions built from scratch launched in 1999. At the same time, we reached an early majority of internet users in North America, ending the middle ages and starting the age of enlightenment, as global connectivity was now available to the average person (at least in a first world country).
Only e2Open (2000), RELEX Solutions (2005), Anaplan (2006), and o9 Solutions (2009) were founded in the age of enlightenment (but not the modern age). In the age of enlightenment, we left behind on-premise and early single client-server applications and began to build SaaS applications using a modern SaaS MVC architecture where requests came in, got directed to the machine with the software, which computed answers and sent them back. This allowed for rather fault-tolerant software since, if hardware failed, the instance could be moved, and if an instance failed, it could just be redeployed with backup data. It was true enlightenment. However, not all companies adopted multi-tenant SaaS from day one; only a few providers did in the early days. (So even if your SCP company began in the age of enlightenment, it may not be built on a modern multi-tenant cloud-native true SaaS architecture.) This was largely because there were no real frameworks to build and deploy such solutions on (and Salesforce literally had to build their own).
However, in 2008, Google launched its Cloud and in 2010, one year after the last of the top 14 supply chain applications was launched, when Microsoft launched Azure, the age of enlightenment came to an end and the modern age began as there were now multiple cloud-based infrastructures available to support cloud-native true multi-tenant SaaS applications (no DC operational knowledge required), making it easy for any true SaaS provider to develop these solutions from the ground up.
In other words, not one Supply Chain Planning Solution recognized as a top supply chain planning solution by Gartner was founded in the modern age. (Moreover, if you look at the niche players, only one of the six was founded in the age of enlightenment, the rest are also from the middle ages.)
So why is this important?
- If the SCP platform core was architected back in the day of client server, and the provider did not rearchitect it for true multi-tenant, even if the vendor wrapped this core in a VM (Virtual Machine), put it in a Docker container, and put it in the cloud, it’s still a client-server application at the core. This means it has all the limits of client server applications. One client per server. No scalability (beyond how many cores and how much memory the server can support).
- If the platform core was architected such that each module, which runs in its own VM, requires a complete copy of the data to function, that’s a lot of data replication required to run the platform, especially if it has 12 separate modules. This can greatly exacerbate the storage requirements, and thus the cost.
- But that’s not the big problem. The big problem is that models constructed on a traditional client-server architecture were designed to run only one scenario at a time, and only do so if a complete copy of the data is available. So if you want to run multiple models, multiple scenarios for a model, or both, you need multiple copies of the module, each with their own data set for each model scenario you want to run. This not only exacerbates data requirements, but compute requirements as well. (This is why many providers limit how many models you can have and scenarios you can run as their cloud compute costs skyrocket due to the inefficiency in design and data storage requirements.)
And while there is no such thing as a truly optimal supply chain plan, since you never know all the variables in advance, there are near optimal fault-tolerant plans that, with enough scenarios, can be identified (by building up a picture of what happens at different demand levels, supply levels, transportation times, etc.) and you can select the one that balances cost savings, quality, expected delivery time, and risk at levels you are comfortable with.
That’s the crux of it. If you can’t run enough scenarios across enough models to build up a picture of what happens across different possibilities, you can’t come up with a plan that can withstand typical perturbations, and definitely can’t come up with a plan that can be rapidly put into place to deal with a major demand fluctuation, supply fluctuation, or an unexpected supply chain event.
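That scenario sweep can be sketched as a simple Monte Carlo loop; the perturbation ranges below are illustrative assumptions, and a real planner would draw from fitted demand and lead-time distributions rather than uniform ones:

```python
import random

def scenario_sweep(plan_cost_fn, n_scenarios=500, seed=42):
    """Evaluate one candidate plan under many random perturbations.

    plan_cost_fn(demand_factor, delay_days) -> cost of the plan under
    that scenario. Returns (mean cost, worst-case cost) so plans can
    be compared on both expected performance and robustness.
    """
    rng = random.Random(seed)  # fixed seed: repeatable comparisons
    costs = []
    for _ in range(n_scenarios):
        demand_factor = rng.uniform(0.8, 1.2)  # +/-20% demand swing
        delay_days = rng.randint(0, 30)        # up to a month of delay
        costs.append(plan_cost_fn(demand_factor, delay_days))
    return sum(costs) / len(costs), max(costs)
```

Running this across several candidate plans and picking the one whose worst case stays acceptable is exactly the balancing of cost savings, quality, delivery time, and risk described above, and it is precisely this many-models, many-scenarios workload that a one-scenario-at-a-time client-server core cannot afford to run.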
So if you want to create a supply chain plan that can enable supply chain success, make sure you’ve brought your supply chain planning out of the middle ages (through the age of enlightenment) and into the modern age. And we mean you. If you chose a vendor a decade ago and are resisting migration to a newer solution, including one offered by the vendor, because you spent years, and millions, tightly integrating it with your ERP solution, then you’re likely running on a single tenant SaaS architecture at best, and a nicely packaged client-server architecture otherwise. You need to upgrade … and you should do it now! (We won’t advise you here as we don’t know all of the vendors in the SCP quadrant well enough, but we know some, including those that have recently acquired newer, age of enlightenment and even modern age solutions, and know that some still have old tech on old stacks that they are maintaining because of their install base. Don’t be the company stalling progress for your own good!)
We Want to Be a Smart Company — What Else Can We Do! Part 2
In part 1, you admitted that you read the dumb company: avoid the fork in the road and dead company walking: avoiding the graveyard articles (links in part 1), took them to heart, and admitted you’re making some mistakes and that you’re not doing some key functions as well as you could. Most importantly, you know you need to do more to avoid becoming a casualty of the next mass corporate extinction that’s coming. And you asked us to tell you what else you could do to avoid becoming a dead company walking (or, even worse, a zombie company*). And yesterday we gave you our first five suggestions. Today we give you our next five.
06) Remember Websites are MORE than Static Web Pages
Your website should be a dynamic and interactive website that quickly guides visitors to the educational and informative content they want, with point-based and constructable demos, targeted education and thought leadership, and easy to find contact us options for information requests and specific live demos from thought leaders and solution professionals, not sales people. (Qualify the lead, then pass it on to sales.)
It should not, like the majority of websites today, be an overload of hogwash messaging and buzzwords, fancy animated graphics that don’t actually show the solution in use, or a constant barrage of questions (along the lines of “do you have trouble with …”) with the uniform “contact us for answers” directive. It definitely should not contain nonstandard terminology for modules, functions, and processes. (And definitely don’t mislead and say you’re an e-Procurement tool if you’re an e-Sourcing tool, and if you don’t know the difference, that just means you didn’t do your homework!) Nor should it contain confusing or non-existent information on target industries and market size (as we all know there is no one size fits all solution, and pretending that your company has one is just obnoxious), or an utter lack of information on pricing tiers and benefits. (Maybe you can’t give an exact price because you offer SSDO or advanced analytics that requires a lot of pay-per-use cloud processing, but you can still give a base license fee or range. If you’re a 1M+ annual solution, you don’t want companies that can’t, or won’t, pay more than 100K reaching out. The market should understand you get what you pay for and that a 100K solution won’t, or at least shouldn’t, have all the features of a 1M solution, but also that, if they are a smaller company, they shouldn’t need all the advanced features of the 1M solution either.)
07) Tap Your Talent for Top Tens
Sometimes the talent you overlook (because you think they are just a developer, pre-sales solution advisor, etc.) has the best ideas (and sometimes they don’t, and that’s why you use your leadership to filter out the best ideas).
If you have a problem, or just want to look for opportunities for improvement, ask your people first. Now, they won’t identify or come up with everything (as they have a limited view from a single function and may not have the decades of experience that is sometimes required to come up with something that is both “obvious” and revolutionary), but why should you pay a consultant to help you with improvements you can identify and make in house? You want the consultant focussed on the big win improvements you don’t see (and not easily sidetracked with the dozens of things you can do better).
So, ask all of your employees to come up with, anonymously if it helps, the
- ten best ways to save money,
- ten best investments across the business,
- ten best ways to improve productivity,
- ten SaaS apps you can do without,
- ten functions that would totally change customer productivity in your core offering,
- ten functions that could be removed from the roadmap because they are actually low value,
- etc.
And while you will get a lot of pyrite, you will get some gold nuggets. And if you’re knowledgeable enough, you’ll be able to separate the gold nuggets out. (And if not, you’ve jump-started your expert advisor with some unique insights into your business and your team, and that will improve their productivity.)
08) Always Pause for Innovation
Regardless of how you interpret what we tell you in #10, if an opportunity for innovation presents itself, always pause to evaluate it and see if it is a true opportunity, fits in with the plan, and would make the product, and the plan, better. If it would make the plan better, and it wouldn’t slow progress down more than a small amount, work it into the plan. If it would make the plan better, but would slow progress down a moderately significant amount, put it on the roadmap to be considered in the next plan update (as new ideas might emerge that make it less of an impact by then). Moreover, when you stumble upon it, the right innovation will improve the product, the plan, and even the timeline.
09) Sign in Blood
Once you have a plan, sign your name to it in blood. The only thing worse than not having a proper plan is abandoning a good plan part way through (because you get too anxious or lose faith) … if, after investing a lot of time and effort, you abandon a plan part way through, you might as well just shut the doors now instead of retreating into the castle to starve as you wait out the siege. Greatness takes time, effort, and sometimes sacrifice.
10) Drive Decisions Like You Just Heisted the Antwerp Diamonds
Once you have a direction, don’t stop. Don’t pull over. Keep going until you successfully escape the EU, sorry, until you escape mediocrity and unprofitability. (And definitely don’t panic along the way. If you got out clean, and have 24 hours to make your escape, use every last hour, because once you cross the border, you’re off scot-free.)
Once you have it all figured out and committed to, you have to be Hagar behind the wheel and drive, drive, drive. Slowing down will lead to stopping. Stopping to abandonment, and then, instead of improvement and success, it will be failure and the beginning of the end. As per 09, you have to see the plan through, and this will only happen if you never stop — you have to keep going as long as there is a drop of gas in the tank.
* yes, zombie companies exist in our space too; and, as the entertainment industry would have you believe, since we’re not medical doctors working in morgues with a constant fresh supply of brains, it is a fate worse than corporate death!
