For Proper Direct Sourcing, Different Organizational Thinking is Required

In our last post we noted that standard sourcing solutions don’t work for direct and referred you to our seven-part series with Bob Ferrari of Supply Chain Matters at these links:

And we noted the reason: direct sourcing doesn’t work in isolation from the supply chain. Fortunately, direct sourcing and supply chain planning can work together, but only if we

Think Different

This is the only way we are going to achieve business and operational planning alignment from source to supply. Right now, the various strategic, tactical, and operational decision-making processes take too much time, because the most up-to-date line-of-business, functional, and operational data must be gathered, assimilated, or transcribed into spreadsheets and antiquated tools before it can support forecasting, sourcing, supply chain, and logistics systems.

This is primarily because each of these processes not only covers a different timeframe but is typically conducted using a different business process. Long-term strategic planning is typically conducted using IBP methodologies, mid-term tactical planning using S&OP methodologies, and short-term planning using exception planning, materials replenishment planning, logistics re-routing, and the like.

Each plan requires information from the connecting layers in order to make a decision. IBP requires knowledge of what S&OP can do, and the best historical results from S&OP, to come up with the plans most fit for execution. S&OP requires knowledge of the overriding IBP goals as well as the operational systems used for day-to-day procurement, inventory replenishment and management, logistics and trade management, and production. Since most of these systems don’t talk to each other, it’s a lot of manual data collection, processing, and pushing up and down the levels and the chain.

These processes need to be connected in integrated planning loops that span:

  • Plan, Source and Procure
  • Plan and Analyze
  • Plan and Produce
  • Execute and Fulfill

Moreover, these planning process frameworks need to be enabled by more effective data management, data harmonization, and analytics that enable these loops to be constantly executed and re-executed as needed, ensuring each level of planning and each step of the process has the data it needs to suggest the right answer for the human to make the right decision.

Finally, this will only happen if organizational employees think different and adopt new processes, frameworks, data models, and strategies for integrated planning from source to supply. For some insights into how this might happen, see part three of our joint series on how today’s Organizational Thinking is Wrong.

Standard Sourcing Solutions Don’t Work For Direct

the doctor recently teamed up with the Supply Chain Master Bob Ferrari over on Supply Chain Matters to bring you an initial seven-part series on why Standard Sourcing Technology Solutions Don’t Work for Direct, which you can find at these links:

If you read Parts I and II in detail, which you most definitely should because we’re only going to summarize a few highlights here, you’ll see we detail some of the big reasons they don’t work, beyond the fact that most were designed for indirect and can’t even do the basics of direct sourcing. The reasons we put forward included:

  • Direct Material Sourcing is Hard
    • substitution (like satisfaction) is not guaranteed
    • substitution is always conditional when available
    • demand is not easily aggregated
    • delivery time guarantees are often significantly more important
  • Sourcing Platforms Don’t Do Direct Well (as most were designed for indirect)
  • Most Sourcing Platforms Don’t Support Bill of Materials
  • Most Sourcing Platforms Don’t Support Optimization

Then we dove into why direct solutions don’t work either:

  • It’s Not Just Landed Cost, It’s Total Cost of Acquisition
  • It’s Not Just Cost, It’s Supply Assurance
  • It’s Not Just Supply Network Assurance, It’s Timing

That’s just the baseline sourcing side of the equation. We still haven’t talked about the supply assurance side:

  • They Aren’t Designed for Multi-Stage NPD/NPI Sourcing and Quality Assessments
  • They Aren’t Designed to Capture Network Performance and Carrier Risk
  • They Aren’t Designed to Capture and Assess External Risks

That last point is key. If you’re not considering the geopolitics of where you are sourcing from and where you are sourcing to, and how those might change in the near future, you could be in for quite a shock, as many of you in the USA found out this year. If you had been paying attention to the election, noted how much a certain Tech Bro donated to a certain campaign, and compared that number to past campaign contributions, you would have known the election, which appeared neck and neck, was being bought and paid for, which party was going to win, and who was going to be President.

If you had done your research, analyzed everything he said publicly in the decades leading up to his first campaign for political office, looked at what he actually did in his first term, and read Project 2025, you would have known something was coming on the trade front, especially where certain countries were concerned. And you would have known that what was coming was not going to be good for your business if you were sourcing from China.

But it’s not just the “to” destination you have to worry about, especially if the only thing increasing is cost. It’s also the “from” destination, which could be cut off entirely by a new regime that imposes sanctions or embargoes, or could undergo a rapid economic decline due to bad government decisions, external third-party sanctions and embargoes, or global shifts in trade. A great discussion of this can be found in Koray Köse’s recent LinkedIn post on Poland’s Economy: Resilient Amid Political Storms and how it faces a test under its new President, and how, should it fail that test, supply chain leaders need to be prepared. It’s the perfect example of why supply chain considerations need to be pulled back to sourcing, because there’s no way an average sourcing professional today would consider any of this when evaluating suppliers for a direct sourcing project.

When Someone Says “Real AI”, Ask For Details!

We shouldn’t have to remind you, but since too many people are falling for, and buying into, the hype and selecting tech that does not, and cannot ever, work, we are going to remind you yet again.

Computers do NOT think!

To think is to direct one’s mind … where one is an intelligent being, not a dumb box. Computers thunk … they compute using algorithms (which are hopefully advanced and encapsulate expert guidance and knowledge, but that is far from guaranteed).

Computers do NOT learn.

Appropriately selected and implemented probabilistic / statistical / machine learning algorithms will improve their performance over time as more data becomes available, but they do not learn. To learn is to acquire knowledge (or skill), and, by definition, knowledge can only be acquired by an intelligent being.

Computer Programs Can Adapt …

but there’s no guarantee the adaptation is going to improve their performance under your definition, or even maintain it. Their performance could actually decrease over time.

What is critically important is that there are two primary types of algorithms that can be used to create an AI application:

Deterministic and Probabilistic

A deterministic algorithm is one that, by definition, given a particular input will, no matter what, always produce the same output, with the underlying machine always passing through the same sequence of states. As long as you don’t screw up the input, or the retrieval of the output, (and, of course, the hardware doesn’t fail), it is 100% reliable.

A probabilistic algorithm, in comparison, is an algorithm that incorporates randomness or unpredictability into its execution, and may or may not produce the same output given successive iterations of the same input. Nor is there any guarantee that the algorithm will produce a correct, or even an acceptable, output a given percentage of the time. Well designed, these algorithms may allow for consistently faster computation, better identification of edge cases, or even a lower chance of error, on average, for a certain class of inputs (but with the caveat that other classes of inputs may suffer a higher error rate).
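To make the distinction concrete, here is a minimal Python sketch (our own illustration, not any particular vendor's system): a deterministic total of a list of prices is identical on every run, while a sampling-based estimate of the same quantity can differ from run to run.

```python
import random

def deterministic_total(prices):
    """Deterministic: the same input always yields the same output."""
    return sum(prices)

def probabilistic_estimate(prices, samples=3, seed=None):
    """Probabilistic: estimates the total by sampling a few entries,
    so repeated unseeded runs on the same input can return different answers."""
    rng = random.Random(seed)
    picks = [rng.choice(prices) for _ in range(samples)]
    return len(prices) * sum(picks) / samples

prices = [10.0, 12.0, 8.0, 30.0]
# The deterministic answer is always 60.0, run after run.
print(deterministic_total(prices))
# Two unseeded runs of the probabilistic estimator may disagree:
print(probabilistic_estimate(prices), probabilistic_estimate(prices))
```

Only by fixing the seed (i.e., removing the randomness) does the probabilistic version become repeatable, which is exactly why such algorithms need bounds and oversight before being trusted with autonomous decisions.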

Deterministic algorithms can be relied on to execute certain tasks and functions autonomously with no oversight and no worry. Probabilistic cannot. In other words, you cannot assign a probabilistic algorithm a task for autonomous computation unless you can live with the worst possible outcome of the algorithm getting it wrong. And this is what Gen-AI, and most of today’s “AI” tech, is based on.

This is the critical problem with today’s AI tech and AI hype. A probabilistic system can, by definition, use any method it likes to determine a probability, and that method may or may not be appropriate, since a model is only valid if it accurately captures the underlying “population” dynamics. In some of these situations, neither the company nor the provider of the system will have enough historical data (market situation and outcome) to even attempt a reasonable prediction, and there definitely won’t be enough data to know the accuracy. Standard measures of model accuracy (like the Brier Score) tend to require a lot of data, especially if you need to accurately identify rare events, which could require 1,000 or more “data points” (each of which, in a typical market scenario, would require enough data to identify the market condition and then the unexpected change).
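For readers unfamiliar with it, the Brier Score mentioned above is simple to compute. The sketch below (a hypothetical illustration with made-up numbers) also shows why rare events are so problematic: a "model" that never predicts the rare event at all can still post a near-perfect score.

```python
def brier_score(forecasts, outcomes):
    """Mean squared error between predicted probabilities (0..1) and
    actual outcomes (0 or 1); 0.0 is perfect, 1.0 is maximally wrong."""
    return sum((f - o) ** 2 for f, o in zip(forecasts, outcomes)) / len(forecasts)

# An 80% forecast for an event that does happen scores (0.8 - 1)^2 = 0.04.
single = brier_score([0.8], [1])

# Rare events: 1 supply disruption in 1,000 periods. A "model" that always
# forecasts 0% probability still scores an excellent-looking 1/1000 = 0.001,
# despite never seeing the disruption coming. Hence the need for ~1,000+
# data points (and more careful metrics) before trusting such a model.
outcomes = [0] * 999 + [1]
never_predicts = brier_score([0.0] * 1000, outcomes)
print(single, never_predicts)
```

The takeaway: a good-looking aggregate score on sparse data says almost nothing about performance on the rare, high-impact events that matter most in sourcing and supply chains.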

(And this is exacerbated by the reality that, for many of these situations, one could likely employ more traditional “statistical techniques” like trend analysis, clustering, classical machine learning, etc. to solve much of the problem at hand.)
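As a trivial example of one such "classic" technique, a least-squares trend line needs no GPU farm and behaves completely predictably; the demand figures below are hypothetical.

```python
def trend_slope(series):
    """Ordinary least-squares slope of an evenly spaced time series:
    slope = sum((x - x_mean)(y - y_mean)) / sum((x - x_mean)^2)."""
    n = len(series)
    x_mean = (n - 1) / 2
    y_mean = sum(series) / n
    num = sum((x - x_mean) * (y - y_mean) for x, y in enumerate(series))
    den = sum((x - x_mean) ** 2 for x in range(n))
    return num / den

demand = [100, 104, 109, 113, 118, 121]  # hypothetical monthly demand
slope = trend_slope(demand)
print(f"demand is trending up by about {slope:.2f} units/month")
```

Unlike a probabilistic black box, this yields the same answer every time, and its failure modes (e.g., non-linear demand) are well understood after a century of use.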

It’s important to remember that the Gen-AI LLMs which power most of the new (fake) agentic tech are all probabilistic (and designed in such a way that hallucinations are a core function that CANNOT be eliminated), and much of it is complete and utter garbage for what it was designed for, and even worse for tasks it wasn’t designed for (like math and complex analyses). (Every day we see a new example of complete and utter failure of this tech, often due to hallucinations. For example, you can’t even get a list of real books out of it, as per a recent contribution to the Chicago Sun-Times, which published a Summer Reading List of 15 books of which only 5 actually exist. And then there are numerous examples of lazy lawyers getting raked over the coals by judges for using ChatGPT to do their homework and quoting fake cases!)

While we do need to augment purely deterministic tech with more adaptive tech that uses the best “statistical techniques” to more quickly adapt to situations, we need to spell out the techniques and restrict ourselves to what is now “classic machine learning” where the algorithms have been well researched and stress tested over decades (not modern Gen-AI powered agentic tech that has worse odds than your local casino). At least then we’ll have confidence and can enforce bounds on what the solution can actually do (to limit any potential damage).

Especially now that we finally have the computing power we need to effectively use tried-and-true “classic” ML/AI techniques that require large data stores and huge processing power for highly accurate predictions. The reality is that even though this tech has existed for at least 25 years, the computing power required made it totally impractical for all but the most critical situations. Twenty-five years ago, a large Strategic Sourcing Decision Optimization (SSDO) model would run all weekend. Today you can solve it in a few seconds on a large rack server (with 64 cores, gigabytes of cache, and high-speed access to terabytes of storage). The fact that we finally have (near) real-time capability means that this tech is not only finally usable in all situations, but finally effective.

[And if vendors actually hired real computer scientists, applied mathematicians, and engineers and built more of this tech, instead of script kiddies cobbling together LLMs they don’t understand, we would be a decade ahead of where we are today.]

A Very Brief History of “Safe” American Inventions and Products

More specifically, a brief history of inventions and products developed, or (primarily) adopted, in the USA as perfectly “Safe” for public use when they were anything but! From the late 1800s to the present day.

Asbestos: large scale mining began in the late 1800s when manufacturers and builders decided it was a great thermal and electrical insulator whose adverse effects on human health were not widely recognized and acknowledged until the (late) 1970s; even today exposure is still the #1 cause of work-related deaths in the world (with up to 15K dying annually in the US due to asbestos-related disease)

Aspirin: as per our previous post, invented in 1897, available over the counter in 1915, it was heavily promoted as the cure all in the 1920s through the 1940s and might have cost us over a hundred thousand lives due to overprescription during the Spanish Flu pandemic alone

Cocaine: from the late 1880s through the early 1910s, your physicians were big fans of the Victorian wonder drug (as per this Lloyd Manufacturing Ad archived on the NIH site), as it was the first effective local anesthetic the western world knew about (and was endorsed by the Surgeon-General of the US Army in 1886), although the real popularity was with the public, with an estimated 200,000 cocaine addicts in the US by 1902; still, it was 1914 before it was restricted to prescription use, 1922 before tight regulations were put in place, and likely the late 1940s before prescription and dispensation finally came to an end; moreover, it was generally viewed as harmless and non-addictive until crack emerged in 1985 (even though the number of cocaine-related deaths in the US climbed to 2 per 1,000 in 1981)

DDT: (this is particularly relevant to Gen-Z, who are fully on board the Gen-AI hype train) developed in the 1940s as the first modern synthetic insecticide; Gen-Z’s grandparents and great-grandparents used to run through the DDT clouds that were sprayed in the streets of US cities and towns from the 1940s through the 1960s, as the first health risks were not widely reported until roughly 1962, when Rachel Carson published Silent Spring, and it wasn’t until 1972 that the US banned it for adverse effects on human health (as well as the environment); to this day, we’re still not sure how many deaths it has contributed to, although the UN estimates 200K people globally still die from toxic exposure to pesticides, of which DDT was the first and the precursor to many newer derivations (Source)

PFAS, incl. PTFE (Teflon): developed by DuPont in 1938 (with the business later spun off into Chemours), it found use as a lubricant and non-stick coating for pans, and was produced using PFOA (C8), which we now know is carcinogenic, and should have known much sooner, but there was a massive PFAS cover-up; it was only classified as such in 2013 even though we should have known by the late 1990s, and these chemicals still aren’t banned (even though legislation was proposed last year to phase them out over the next decade); because of the cover-ups and lack of studies until recent times, we still don’t know how deadly this was, and is, but estimates are that PFAS likely killed 600K annually between 1999 and 2015 and 120K annually after that in the USA (Source) … WOW!

Tobacco: in the 1950s, cigarettes were advertised as good for you, with Doctor (Camel Advertisement) and Dentist (Viceroy Advertisement) recommendations in the ads! Despite the fact that health risks had been known since 1950 (when the first epidemiological study showing an association between smoking and lung cancer was published by Wynder and Graham), minors in the USA could still buy cigarettes until 2009 … even though Tobacco likely killed over 100 Million people globally in the 1900s (Source)

etc.

We could go on, but the point is this: like most cultures, the USA is not good at picking winning technology that is safe for everyday use, or at least safe enough under appropriately designated usage conditions.

There’s a reason that most countries have harsh regulations on the introduction of new consumer products and technologies that US lobbyists and CEOs scream about, and that’s because more mature countries (which have been around longer than a mere 249 years) understand that no matter how safe something seems, every advancement comes at a cost, every invention comes with a risk, and every convenience comes at a price — and until we know what we are paying, when we need to pay it, and how much we are going to pay, we shouldn’t rush in head first with blinders on.

And while we might still get it wrong, the reality is that we’re more likely to get it right if we take our time and properly evaluate a new technology or advancement first, and even if we get it partially wrong, as in the case of Aspirin, at least the gain should outweigh the cost. For example, even though it can be argued Aspirin was rushed to market, when used in proper doses, the side effects for the vast majority of the population are typically much less than the anti-inflammatory benefits as, for decades, there was no substitute. Even if it gave a person stomach irritation or minor ulcers, if it was life-saving, then that was a reasonable cost at the time.

However, in the cases of DDT, PFAS, and Tobacco, there was no excuse for the lack of research, and, in some cases, the prolonged cover up of research that indicated that maybe the products were not safe but, in fact, very deadly, and since they brought no significant life saving benefits (Malaria wasn’t a big concern in the USA; people were cooking with butter, lard, and oils for centuries; and, in small quantities, both alcohol and cannabis were known to not only be safer, but even medicinal in the right quantities), there was no need to rush them to market.

The simple fact of the matter is that no tech, be it chemical/medicinal, (electro-)mechanical, or computational, can be presumed safe without adequate testing over time, and that’s why we need regulations and proper application of the scientific method. A lack of apparent side effects doesn’t mean there are none. That’s why we have the scientific method and mathematical proofs (for confidence and statistical certainty), which is something today’s generation doesn’t appear to know a thing about (especially if they just did a couple of years of college programming). They’ve probably never been in a real lab [or played with uranium like their grandparents, because it was legal in the USA to sell home chemistry kits with uranium samples to children in the 1950s, and these kits included the Gilbert U-238 Atomic Energy Lab], and more than likely don’t know the rule of thumb that you should generally add the acid to the base (and not vice versa because, otherwise, this could happen) and that you should definitely add the acid to whatever liquid [typically water] you are diluting it with.

Regulations exist for a reason, and that reason is to keep us safe. The Hippocratic Oath should not be restricted to doctors, and the Obligation of the Order should not be restricted to engineers. Every individual in every organization bringing a product to market should be bound by the same, and regulations should exist to make sure that all organizations take reasonable care in the development and testing of every product brought to market, real or virtual. (This doesn’t mean that every product needs to be inspected, but that regulations and standards exist for organizations to follow, and those caught not following the regulations should be subject to fines that would ensure that not just the company, but the C-Suite personally, is bankrupted if the company is found to have ignored the regulations.)

While Gen Z might like the Wild Wild West (which the USA never grew out of) as much as Gen X who created the dot com boom, we need to remember that the dot com boom ended in the dot com bust in 2000, and that if this new generation continues to latch on to AI like Boomers would latch on to blankies and teddies, it just means they are doomed to repeat the mistakes of their grandparents (and will bring about a tech market crash that makes the dot com bust look like a blip). You’re supposed to learn from history, NOT repeat it!

Got a Headache? Don’t Take an Aspirin or Query a LLM!

Yesterday we provided you with a brief history of Aspirin, the first turn-of-the-century miracle drug that was both society’s salvation and sorrow, though the latter wouldn’t be known for more than half a century. As we discussed, it was hailed as a miracle and life-saving drug that could be used for everything from the common cold to global pandemics. And it worked, for a price. That price, when it needed to be paid, was usually one of many, many side effects. These were often minor and insignificant compared to the perceived benefit the drug was bringing, except when they weren’t: when they inflamed ulcers and/or increased gastrointestinal bleeding and created a life-threatening situation, caused hyperventilation in a pneumonia patient, or induced a pulmonary edema and killed the patient. While the death rate even at the height of over-prescription was likely only 3%, and less than a tenth of that today, it’s still not good.

The reason for this, as we elaborated in our last post, is because, like many of the breakthrough technologies that came before, it was not only rolled out before the side effects, and more importantly, the long term effects, were well understood, but before even the proper use for the desired primary effects were well understood (as evidenced by the fact that the best physicians were routinely prescribing two to four times the maximum safe dosage during the Spanish Flu Pandemic almost 20 years after first availability). While there were benefits, there were consequences, some of them severe, and others deadly.

Medicine is as much a technology as a new mode of transportation (boat, automobile, airplane, etc.), a new piece of manufacturing equipment, a new computing device, or a new piece of software.

Now you see the point. Every breakthrough tech cycle is the same. Whether it is medicine, farm machinery, the airplane, or modern software technology — and this includes AI and definitely includes LLMs like ChatGPT.

As Aspirin proves, even if the first test seems to be successful, there’s always more beneath the surface. Especially when the population numbers in the billions and every individual could react differently. Or, in the case of an LLM, billions of people who have thousands of queries, the large majority of which have never been tested, and all of which could generate unknown results.

Moreover, there have not been significant large-scale independently funded academic studies that we can use to understand the true strengths and weaknesses, truths and hallucinations, and appropriate utilization of the technology. As Mr. Klein has pointed out in a recent LinkedIn post that asked who funded that study, over 80% of AI industry “studies” are funded by undisclosed sources, and most of them, like most industry studies these days (see Mr. Hembitski’s latest post) don’t contain good data on demographics, sample size, test material, or potential bias.

That would be the first step to getting a grip on this technology. The next step would be to create reasonable measures that we could use to appropriately define technology categories and domains, for which we could then identify tests and measures that would give us a level of confidence for a given population of inputs or usage. Consider a traditional (X)NN (Neural Network), which has a fixed set of outputs and is designed to process inputs from a known population: we have developed methodologies to determine the accuracy of such models with high confidence through testing and random sampling, using sufficiently sized data sets and appropriate statistical models. Furthermore, mathematicians have proved the accuracy of those models for a given population, so we know that if appropriate tests have demonstrated 90% accuracy for a population with 98% confidence, the model is 90% accurate with 98% confidence when used properly.
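The kind of statistical guarantee described above can be sketched with a textbook normal-approximation confidence interval on held-out accuracy (our illustration: the 98% figure corresponds to z ≈ 2.33, and all sample counts below are hypothetical).

```python
import math

def accuracy_ci(correct, total, z=2.33):
    """Normal-approximation confidence interval for a model's true accuracy,
    estimated from `correct` right answers on `total` held-out test samples.
    z ~= 2.33 gives a roughly 98% two-sided interval."""
    p = correct / total
    half = z * math.sqrt(p * (1 - p) / total)
    return p - half, p + half

# 900/1,000 correct on a properly sampled test set: a tight interval.
lo, hi = accuracy_ci(900, 1000)
print(f"estimated accuracy 0.900, ~98% CI ({lo:.3f}, {hi:.3f})")

# With only 50 test samples, the same 90% accuracy is far less certain:
lo_small, hi_small = accuracy_ci(45, 50)
print(f"estimated accuracy 0.900, ~98% CI ({lo_small:.3f}, {hi_small:.3f})")
```

Note how the interval balloons as the test set shrinks; this is precisely the guarantee that is impossible to state for an LLM, whose input population is effectively unbounded and unsampled.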

We have no such guarantees for LLMs, nor any proof that they are reliable. “It worked fine for me” is NOT proof. Vendors quoting nebulous client success stories (without client names or real data) is not proof. Moreover, the fact they raised millions of dollars to bring this technology to market is definitely not proof. (All a raise proves is that the C-Suite sales team is very charismatic and convincing and great at selling a story. Nothing more. In fact, fund raising would be more honest if securities law allowed fund raising via poker and takeover protection via gunfighting, as imagined in the season two episode of Sliders “The Good, the Bad, and the Wealthy”. At least then the shenanigans would be out in the open.)

The closest thing out there to a good industry study on LLMs and LRMs is likely Apple’s newest study, as summarized in The Guardian, where they find that “standard AI models outperformed LRMs in low-complexity tasks while both types of model suffered ‘complete collapse’ with high-complexity tasks”.

The study also found that as LRMs neared performance collapse they began “reducing their reasoning effort”, and that if the problem was complex enough, the models failed even when provided with an algorithm that would solve the problem.

Still, we have to question this study, or more precisely, the release of this study (especially given the timing). Did Apple do it out of genuine academic interest to get to the bottom of the technology claims, or to cast doubt on rivals who claim Apple is behind in the AI race (focusing only on the negatives of the technology to show that the competition doesn’t have what it claims to have, and thus that Apple is not actually behind)?

The point is, we don’t understand this technology, and that fact should scream louder in your head every day. Look at all the bad stuff we’ve discovered so far, and it’s likely we’re not even close to being done yet:

Yes, there is potential in the new technology, as there is with all discovery, but until we fully understand what that potential is, how to use it safely, and, most importantly, how to prevent harm, we should approach it with extreme caution, and we should most definitely not let it tell us how to run our business or our lives; otherwise, like an Aspirin overdose, it might just kill us. (And remember, Aspirin was studied for 18 years before it was made available without a prescription, and deadly side effects and prescribed overdoses still happened. In comparison, today’s LLMs and LRMs haven’t been formally studied at all, and the providers of this technology want you to run your business, and your life, off of them in next-generation agentic systems. Think about that! And when the migraine comes, remember, don’t take Aspirin!)