Category Archives: rants

Dear Analyst Firms: Please stop mangling maps, inventing award categories, and evaluating what you don’t understand!


If there’s a place you got to go
I’m the one you need to know
I’m the map
I’m the map, I’m the map
If there’s a place you got to get
I can get you there, I bet
I’m the map
I’m the map, I’m the map

… but if there’s a tool you want to score
I’m the one you must ignore
I’m the map
I’m the map, I’m the map
If there’s a tool you got to get
I’ll lead you astray, I bet
I’m the map
I’m the map, I’m the map

It’s map time! (It’s always map time!) The 2×2 onslaught isn’t over yet (and may never be)! Prepare to be continually overwhelmed with cool graphics, big company names and logos, and no information you can actually use (as is). Why? Because when you map 6+ criteria or dimensions of information down to a single dimension, and 12+ dimensions of information down to a 2×2 grid, it’s meaningless. All you know is which vendor had a total score, on two sets of 6+ criteria, that was in the top percentile. But you don’t know if that’s because they are good across the board on those 6 criteria; or top score on 3 of those dimensions (in the analyst’s opinion) and average score on the other 3; or top score on 3 of those criteria, average score on 2, and below average score on the last criterion, which happens to be the core technology criterion that also happens to be the most important one to you!
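To make the information loss concrete, here’s a minimal sketch (the criteria, vendors, and scores are all hypothetical) of how two very different vendors collapse to the exact same position the moment their criterion scores are averaged into a single axis:

```python
# Six hypothetical criteria, scored 1-10, collapsed to one axis by averaging.

criteria = ["core tech", "features", "integration",
            "viability", "service", "innovation"]

vendor_a = [5, 5, 5, 5, 5, 5]   # dead average on every single criterion
vendor_b = [9, 9, 9, 1, 1, 1]   # top-tier tech, bottom-tier everything else

axis_a = sum(vendor_a) / len(vendor_a)
axis_b = sum(vendor_b) / len(vendor_b)

print(axis_a, axis_b, axis_a == axis_b)   # 5.0 5.0 True: identical map position
```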

It might not be so bad if all the criteria were different aspects of a single criterion category — such as core architecture, product features, and integration under technology; or product innovation, service delivery, and operational efficiency under innovation. But instead you have a mish-mash of scores on the seven different dimensions of product capability, market viability, sales execution / funnel success rate, marketing execution / visibility, market responsiveness, corporate operations, and overall customer experience squished into a single execution dimension in one of the big name maps, and a mish-mash of product specific capabilities, related application offering, integrations, globalization, technology, and customer references squished into an offering dimension in another big name map. It’s crazy! And useless.

And it’s also mind-boggling when you consider the significant effort some of these firms put into their research, the detailed reports they produce, and the great work that often results otherwise. (You may not agree with the analysts’ opinion of what a good strategy is, what true innovation is, what the appropriate product features are, or the scoring scales; but as long as all of the vendors are scored consistently, it’s still valuable insight that you could use in differentiating vendors to find the ones that might be the most right for your organization and your challenges IF all these scores weren’t mangled into one meaningless score you can’t use.)

So, dear analyst firms, please stop! You don’t need to do this. You can provide much more value by not creating these 2×2 mangled maps and instead doing one of the following:

  • use a graphing technique that was made for comparing multiple dimensions visually, like a spider graph
  • score fewer dimensions and then do multiple 2×2s on the different dimension pairs
    (after all, when customers want to buy a solution, do those customers really care about how good a vendor’s marketing is or how successful the salesperson is? heck no! they care only about how good the product is, how well the vendor can serve them, how stable the vendor is, and maybe about how innovative the vendor is if they are forward thinking and want longevity)
  • create bar, or similar, charts on the different dimensions and then give customers a tool to build their own weightings meaningful to them (a minimal sketch of this idea follows this list)
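For instance, here’s a minimal sketch of that last option (vendor names, dimensions, scores, and weights are all hypothetical): publish the per-dimension scores and let each buyer rank vendors with their own weights.

```python
# A hypothetical "build your own weightings" tool: the analyst publishes the
# per-dimension scores; the buyer supplies the weights that matter to them.

scores = {
    "Vendor A": {"product": 9, "service": 5, "stability": 6, "innovation": 8},
    "Vendor B": {"product": 6, "service": 9, "stability": 9, "innovation": 4},
}

def rank(scores, weights):
    """Rank vendors by the buyer's own weighted score, not the analyst's."""
    total = sum(weights.values())
    weighted = {
        vendor: sum(dims[d] * w for d, w in weights.items()) / total
        for vendor, dims in scores.items()
    }
    return sorted(weighted.items(), key=lambda kv: kv[1], reverse=True)

# A product-first buyer and a service-first buyer get different leaders
# from the very same scores:
print(rank(scores, {"product": 5, "service": 1, "stability": 2, "innovation": 2}))
print(rank(scores, {"product": 1, "service": 5, "stability": 3, "innovation": 1}))
```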

It’s bad enough these map-creation analyst firms are eliminating vendors from their maps based on criteria that range from somewhat to completely arbitrary, which can include, but not be limited to:

  • an arbitrary minimum on overall revenue in the prior year on software alone
  • an arbitrary client minimum
  • an arbitrary minimum on the average number of users per client
  • an arbitrary minimum on customer size for a % of the customer base
  • an arbitrary minimum on license fees (for the majority of the customer base)
  • an arbitrary list of core “features” that are absolute
  • an arbitrary exclusion of any solution deemed too narrow / industry-focussed
  • some other arbitrary requirement merely to minimize the number of vendors that need to be included … which might actually eliminate the vendor with the best or most innovative product or service! (Which entirely misses the point, doesn’t it?)

Given all of this, these firms could at least produce maps meaningful to the average buyer, maps from which those buyers could extract useful information as is!

“Two by two they’re still coming down
… the satellite circus never leaves town …”

Holy smoke holy smoke,
plenty bad mappers for the doctor to stoke
Feed ’em in feet first, this is no joke
This is thirsty work, making holy smoke, yeah
Holy smoke
Smells good

The only thing as annoying as these meaningless 2×2s is other analyst firms inventing award categories, such as “insight”, “innovation”, “customer-centric”, and/or “growth”, just to create attention for themselves. These award categories are totally meaningless and useless to end customers, who have no clue what they mean or what they are evaluating (especially when the categories often mix vendors with completely different solutions). While we can be sure that every vendor wants to be seen as “insightful”, “innovative”, “customer-focussed”, and “growing”, that doesn’t tell the customer whether the vendor offers a product or a service, or whether that product is e-Sourcing, e-Procurement, Risk Monitoring, or a simple carbon calculator. And if that’s the only category the vendor is listed in, well, that’s just useless.

I want to run, I want to hide
I wanna tear down the walls that keep them outside
I wanna reach out and set the flame
Where the sheets have no name, ha, ha, ha

I wanna see insight on the page
And see confusion disappear without a trace
I wanna take shelter, I can’t ascertain
Where the sheets have no name, ha, ha, ha

As a postscript, the doctor isn’t annoyed by all of the 2×2 maps (just the majority). Although they aren’t perfect, he finds that the Spend Matters Solution Maps, which, in full disclosure, he did co-create (and with which he no longer has any association), are still useful as they are still focussed entirely on two dimensions: product (& underlying technology) evaluation and customer score. (As of V3, released Fall 2021, not due for update until [at least] Fall 2023.) The product evaluation is against an extremely well defined set of criteria, where each criterion has a scoring scale that at least defines fledgling through industry standard capability (and usually above standard as well), and the customer evaluation is done entirely by the customer completing surveys with no analyst interaction whatsoever (as any survey done by an analyst introduces bias based on the way the analyst asks the question and the tone the analyst uses).

The Solution Maps are two, and only two, dimensions that can be consistently scored by any analyst who scores on the product side and consistently scored against perceived value on the customer side. Are they perfect? Of course not! The product side contains some services questions, which are soft and more open to interpretation (though they were less than 5% of the questions); the customer side can be very subjective based upon cultural norms for that customer, the customer’s stage in the relationship (new vs. longer term), and the service level the customer subscribed to (and, thus, if there are only a few customer scores, one really bad or really good, out-of-range, score can really affect the average); and the weightings for the maps are still analyst interpretation of what criteria are most important for each market size. But it’s one relatively pure dimension mapped against another relatively pure dimension, consistently scored, and consistently weighted. And that’s still considerably more useful than any other map currently is.

Plus, at least when the doctor was involved, there was only ONE requirement for participation: have a standalone solution you are willing to openly demo (without an NDA) and sign a form committing to participation regardless of where you end up falling on the map (which is all mathematically, and not subjectively, computed). So while you can’t say the top vendor is for you, you can say any vendor who makes the map likely has the core tech you need (as they need to be at least industry average) and likely enough customer service to get you going on it. You can produce a short list of comparable vendors that produce a solution of the type you are looking for, of various sizes (not just the biggest vendors), and know that the solutions are reasonably comparable. This allows you to focus on the other value drivers relevant to your organization in the RFP. And if the other maps gave you just as granular insight into service, innovation, and any other dimension relevant to you, think how useful they could be!

The Procurement People-Process-Technology Pain Cycle …

Recently on LinkedIn, someone asked the trick question of which came first: process or technology. The answer, of course, was people since, when Procurement, the world’s second oldest profession, started, it was just a buyer haggling with the seller for their wares. And this is how it was for a long (long) time (and in some societies was as far as “procurement” progressed), until shortly after a culture advanced to the point where people could form private businesses that were entities unto themselves. Once these entities started to grow, and multiple people were needed to do the same job, they realized they needed rules of operation to function, and these became the foundations for processes.

But when business buying began, there was typically no technology beyond the chair the employee sat in, the table they used to support the paper they wrote their processes and records on (and the drawers they stored the paper in), the quill and ink they used to write with, and the container that held the ink. And in many civilizations, it was like this for hundreds of (and sometimes over a thousand) years. The first real technological revolution that affected the back office was the telephone: invented in 1876, with the first exchange coming online in 1878, it took almost 30 years for the number of telephones to top 1,000,000 (600K after 24 years, 2.2 million after 29 years). [And it took 59 years before the first transatlantic call took place.] The next invention to have a real impact on the back office was the modern fax machine and the ability to send accurate document copies over the telephone. Even though the history of the fax machine dates back to an 1843 patent, the modern fax machine, which used LDX [Long Distance Xerography], was invented in 1964, with the first commercial product that could transmit a letter-sized document appearing on the market in 1966. Usage and availability were limited at first (as the receiver needed to have a fax machine compatible with the sender’s), but with the 1980 ITU G3 Facsimile standard, fax quickly became as common as the telephone. But neither of these inventions is what we consider modern technology.

When we talk about “technology” in modern procurement, or modern business in general, we are usually talking about software or software-enabled technology. This only became commonplace in the largest of businesses about 50 years ago, and in most businesses about 30 years ago (since most businesses could only afford PCs, and even though PCs were invented in the 1970s, it was the 80s before they were generally available commercially, and the 90s before most smaller businesses could afford them [for the average employee]). One also has to remember that the first general purpose automatic digital computer built by IBM (in conjunction with Harvard) only appeared in 1944, and that IBM’s first fully electronic data processing system didn’t appear until 1952. As a result, back office technology really only began in the fifties, and was only affordable by the largest of corporations. (Furthermore, even though the first MRPs were developed in the 1950s, the first general commercial MRP release wasn’t until 1964, and it took over a decade until the number of installations topped 1,000. [And MRP came before ERP.]) In other words, technology beyond the telephone [and fax] did not really exist in the business back office until the MRP. And it wasn’t common until the introduction, and adoption, of the spreadsheet. The first spreadsheet was VisiCalc, on the Apple II, in 1979. This was followed by SuperCalc and Microsoft’s Multiplan on the CP/M platform in 1982, and then by Lotus 1-2-3 in 1983, which really brought spreadsheets to the masses (Excel followed in 1985 for the Mac and 1987 for Windows 2.x). (And 36 years later, Excel is still every buyer’s favourite application. Think about this the next time you proclaim the rapid advance in modern technology for the back office.)

In other words, we know the order in which people, process, and technology came into play in Procurement, and the order in which we need to address, and solve, any problems to be effective. However, what we may not fully realize, and definitely don’t want to admit, is the degree to which this cycle causes us pain as it loops back in on itself like the Ouroboros we referenced in our recent piece on how reporting is not analysis (and neither are spreadsheets, databases, OLAP solutions, or “Business Intelligence” solutions), as every piece of technology we introduce to implement a process that is supposed to help us as people introduces a new set of problems for us to solve.

Let’s take the vicious cycle created by incomplete, or inappropriate, applications for analysis, which we summarized as follows:

Tool | Issue | Resolution | Loss of Function
Spreadsheet | Data limit; lack of controls/auditability | Database | No dependency maintenance; no hope of building responsive models
Database | Performance on transactional data (even with expert optimization) | OLAP Database | Data changes are offline only & tedious; what-if analysis is non-viable
OLAP Database | Interfaces, like SQL, are inadequate | BI Application | Schema freezes to support existing dashboards; database read only
BI Application | Read-only data and limited interface functionality | Spreadsheets | Loss of friendly user interfaces and data controls/auditability

This necessitated a repeat of the PPT cycle to solve the pain introduced by the tool:

Technology | Pain | People | Process
Spreadsheet | Data limitations | Figure out how to break the problem down, do multiple analyses, and summarize them | Define the process to do this within the limitations of existing technology
Database | Performance issues | Define a lesser analysis that will be “sufficient” and then figure out a sequence of steps that can be performed efficiently in the technology | Codify each of those steps that the database was supposed to do
OLAP | Stale data | Define a minimal set of updates that will satisfy the current analysis | Create a process to do those updates and then re-run the exact same analysis that led to the identification of stale data
BI | Inability to change underlying rollups / packaged views | Define a minimal set of additional rollups / views to address the current insight needs, as mandated by the C-suite | Create a process to take the system offline, encode them, put the system back online, and then do the necessary analysis

In other words, while every piece of technology you implement should solve a set of problems you currently have, it will fail to address others, introduce more, and sometimes bring to light problems you never knew you had. Although technology was supposed to end the pain cycle, the reality is that all it has ever done is set it anew.

So does that mean we should abandon technology? Not in the least. We wouldn’t survive in the modern business world anymore without it. What it means is that a technology offering is only a solution if it

  1. solves one or more of the most significant problems we are having now
  2. without introducing problems that are as significant as the problems we are solving

In other words, technology should be approached like optimization (which, in our world is typically strategic sourcing decision optimization or network optimization). Just like each potential solution returned by a proper mathematical optimization engine should provide a result better than the previous, each successive technology implementation or upgrade should improve the overall business scenario by both solving the worst problems and minimizing the overall severity of the problems not yet addressed by technology.

This is why it’s really important to understand what your most significant business problems are, and what processes would best solve them, before looking for a technology solution as that will help you narrow in on the right type of solution and then the right capabilities to look for when trying to select the best particular implementation of that type of technology for you.

Stop Sanctifying Savings, Recognizing ROIs without Research, or Seeking Solutions Solely in Software

If you’ve been reading the doctor for any length of time, you’re probably a bit confused about the third part of this title — as the doctor is one of the biggest proponents of sustainable software solutions that cover the extended Source-to-Pay process and enable next generation Sourcing and Procurement. However, just because he believes you should have an appropriate software solution for every stage of the source-to-pay process, that does not mean he believes all of the solutions are in software alone. Some are in systems, which are composed of talent, technology, and transformation(al processes).

Sometimes the solution to a challenge isn’t (just) a better system, it’s a better process. Take invoice overpayments, common in large organizations due to over-billings, duplicate billings, and even fraudulent billings. The current “solution” is to use a recovery firm who will take 1/3 of what they “recover” for you as their fee, but they won’t recover everything (since anything off contract is hopeless, as is any fraud that slipped through: the perpetrators are long gone, and even if the authorities find them, by the time you get a judgement in court, they’ve spent, laundered, or transferred the money to somewhere you can’t touch it). This is not much of a solution, because if only 50% of the overspend is addressable, and you lose 1/3 of that in fees, you’re only recovering 1/3 of your overspend. Ouch!
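Here’s that arithmetic made explicit (the overpayment figure is purely illustrative):

```python
# The recovery-firm math from above (the 1M figure is hypothetical):
overpaid      = 1_000_000          # total invoice overpayments
addressable   = overpaid * 0.50    # only ~50% is recoverable at all
recovery_fee  = addressable / 3    # the recovery firm keeps 1/3 as its fee
recovered_net = addressable - recovery_fee

print(recovered_net / overpaid)    # 0.333...: you keep just 1/3 of the overspend
```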

The solution here is better process enabled by technology. When an invoice comes in, the system auto-processes it and auto-matches it to a purchase order and a goods receipt. If there is no PO, and it’s not a pre-defined monthly billing, it’s marked as no-pay until the buyer manually verifies that a) the invoice is for goods that were ordered and b) all of the units / services are at the agreed-upon prices or rates. And even then it’s held for payment until the goods are marked as received or the services delivered. If there is a PO, it must match all of the yet-unmatched units (if multiple shipments, and thus invoices, are made against the PO) and each unit must be billed at the approved (contracted) rate. If not, it’s flipped back to the supplier for correction. If the supplier won’t correct, possibly because the order was expedited at managerial insistence and the supplier agreed only if a premium could be charged, then it needs managerial approval before a payment can be issued, and if that is not given, it needs to enter a dispute process. In other words, no invoice is paid until matched, verified correct, and, when necessary, granted managerial approval, and the entire invoice management function is governed by a well thought out, defined, and detailed process (with flow-charts that govern process flows) that ensures every invoice is processed correctly in every situation (based upon whether or not the goods and/or services are under contract, PO, cyclic billing agreement, ordered from a catalog, requisitioned at a defined rate scale, bought in an e-auction, etc.). In other words, the process comes first, and the technology enables it.
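A minimal sketch of that disposition logic (the data model, field names, and states are hypothetical simplifications; a real implementation would also cover contracts, cyclic billings, catalogs, rate cards, and the rest of the flow-chart):

```python
# A hypothetical, simplified invoice disposition: no invoice is paid until
# matched, verified correct, and, when necessary, granted managerial approval.

from dataclasses import dataclass
from typing import Optional

@dataclass
class Invoice:
    po_number: Optional[str]   # None if the invoice references no PO
    unit_price: float
    units: int
    goods_received: bool

def disposition(inv: Invoice, po_price: Optional[float],
                po_units_unmatched: int, manager_approved: bool) -> str:
    """Return the next state for an incoming invoice."""
    if inv.po_number is None:
        return "NO-PAY: no PO; hold for manual buyer verification"
    if po_price is None:
        return "NO-PAY: referenced PO not found; hold for review"
    if inv.units > po_units_unmatched:
        return "RETURN: billed units exceed unmatched PO units"
    if inv.unit_price != po_price:
        # e.g. an expedite premium the supplier won't remove
        return "PAY" if manager_approved else "DISPUTE: price mismatch"
    if not inv.goods_received:
        return "HOLD: await goods receipt / service delivery"
    return "PAY"

print(disposition(Invoice("PO-123", 10.0, 50, True), 10.0, 50, False))   # PAY
print(disposition(Invoice("PO-123", 12.0, 50, True), 10.0, 50, False))   # DISPUTE
```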

This means that while the software should enable as much of the process as possible, you shouldn’t look to the software, or even the vendor, to define the process for you. The vendor should have best practices, and should provide you with sufficient configuration options to make the software work for the process you need, but you need to understand what you need before you select a solution. Some solutions on the market will be really rigid, and others will expect you to configure them to your needs. In other words, software can provide you with what you need to complete the solution, but software alone is not a complete solution; you need the right process and the right people using it. So don’t look to a software provider as the solution, look to a provider to provide software that will provide the software part of the solution.

And, more importantly, don’t accept the promised ROI without doing your own research. Most providers will promise you an ROI of 5X to 15X in an effort to convince you that NOT buying their solution would be the stupidest thing ever, as every day you’re not using their solution you’re flushing money down the toilet. And if the ROI of a solution is that high, you should definitely have a solution, but the solution that gives you that ROI might not be the one that promises it. Remember, ROI is realized return / total solution cost, and depending on how well you were doing before buying the solution, the current market conditions, your industry, and the ancillary costs of the solution (implementation, integration, training, etc.), the ROI for your organization could be drastically different than the average ROI for the provider’s average customer. For example, while the vendor’s average customer might see an ROI of 5, you might only see an ROI of 2.5, and at a multiple of less than 3, it’s likely not the solution for your organization. (Unless it’s the only solution and you need a software solution, but it’s rare that there’s only one software solution that would work.)

If you’re given an ROI, ask for the calculation the vendor uses and how you would calculate it for your own organization and do it yourself. Add padding into the price, and when you have an expected range of savings and/or cost avoidance, err on the side of caution (the lower end). That’s the number you use when considering the value, not the vendor’s number. Every situation is different, and you need to understand how different your situation is from their average customer.
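For example, here’s a minimal sketch of running the numbers yourself (every figure below is hypothetical):

```python
# Do the ROI math with YOUR padded costs and YOUR conservative savings estimate.

license_fee    = 250_000      # annual license
implementation = 150_000      # one-time costs, amortized over 3 years below
integration    = 90_000
training       = 60_000
padding        = 0.15         # pad the cost side, never the savings side

annual_cost = (license_fee
               + (implementation + integration + training) / 3) * (1 + padding)

savings_low, savings_high = 400_000, 900_000   # your researched range
expected_savings = savings_low                  # err on the side of caution

print(f"ROI: {expected_savings / annual_cost:.2f}x")   # ~0.99x on these figures
```

Note how a promised 5X can evaporate to roughly break-even once padded, all-in costs and the conservative end of your own savings range replace the vendor’s averages.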

However, the most important thing to understand is that you need to stop sanctifying savings and believing that the savings numbers provided by a vendor are a result of their solution. Or that you will achieve anything similar. Remember, “savings”, which is usually just “unnecessary cost avoidance”, is a function of how much the organization is spending across its addressable categories, how much overspend there is across those categories, and how much of that was able to be captured; and this is dependent on organizational size (annual revenue), industry, and spend profile. If your organizational spend is considerably smaller, your addressable spend is less than industry average (long term locked-in contracts, etc.), or your overspend in high volume / dollar categories is less than industry average (either because you had good negotiators, or you cut the contracts at the most opportune time), then your expected “savings” will be considerably less than the average customer savings they are presenting to you. In other words, like ROI, the advertised number may not be what you get. Specifically, your savings might not be anywhere close to their advertised number.

But that’s not the most important reason you need to stop sanctifying savings — the most important reason you need to stop sanctifying savings is that there is absolutely no correlation between the savings numbers and their software. Let’s repeat that. There is absolutely no correlation between the savings numbers and their software. Why? The same reason you should not seek solutions solely in software.

The reality is that, depending on the situation at hand, sometimes most of the “savings” or “cost avoidance” results from a better process alone and has nothing to do with the software solution whatsoever. Also, sometimes the solution that is needed is simply a workflow that enforces a process, an RFX solution that collects comparable information, an e-procurement solution that supports contracted rate catalogs and rate cards, etc. These standard solutions are offered by dozens of vendors, and if the majority of the “savings” or “cost avoidance” comes from a baseline solution, it literally doesn’t matter which vendor’s solution you use! Literally. So if the vendor with the significant savings number is asking 1M annually in license fees and a smaller vendor offers a solution with all the necessary baseline functionality for 120K annually, you could get the same savings for roughly 1/8th of the cost (which would significantly impact the ROI).

In other words, when you are given a savings number, you have to do your research and figure out

  • what percentage of those savings results solely from the fact that the implemented solution enforces a proper process
  • what percentage of those savings results solely from the baseline functionality that is available in at least 3 to 5 other solutions (at a lower annual license cost)
  • … and what percentage of those savings results from advanced features found only in that solution

For example, if only 5% of the savings results from advanced functionality and your estimated annual savings from addressable spend for the first 3 years is only 10M per year, are you really willing to spend 8X as much in license fees for an incremental savings of 500K? The answer here should be a resounding NO as that incremental savings is less than the incremental solution cost! But if you’re a multibillion dollar corporate, that could save 50M a year for three years, with a 10% incremental savings from the advanced functionality, then you would be saving an extra 5M per year at an incremental cost of 800K (which is a 6X ROI) AND have advanced functionality that could be applied to all categories that might squeeze out an extra percent here and there.
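Here’s that incremental-value test as arithmetic you can rerun with your own numbers (figures adapted from the scenario above):

```python
# Compare the INCREMENTAL savings from advanced functionality against the
# INCREMENTAL license cost, for a small and a large buyer.

baseline_fee, premium_fee = 120_000, 1_000_000   # annual license fees
incremental_cost = premium_fee - baseline_fee    # 880K extra per year

# Small buyer: 10M/year savings, only 5% attributable to advanced features.
small_incremental_savings = 10_000_000 * 0.05    # 500K < 880K  -> resounding NO
# Large buyer: 50M/year savings, 10% attributable to advanced features.
large_incremental_savings = 50_000_000 * 0.10    # 5M  >> 880K  -> worth it

print(small_incremental_savings / incremental_cost)  # ~0.57x: value-destroying
print(large_incremental_savings / incremental_cost)  # ~5.7x: clearly justified
```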

In other words, what solution you should buy depends on which solution you expect will give YOU the greatest ROI based upon YOUR calculation, not the vendor’s customer averages (or outrageous quotes from multinationals who spend 10X what you do). Furthermore, don’t misread the title: you do need software to enable your Sourcing, Procurement, and Supply Chain, but the software is not the total solution, which requires the right process driven by the right people. So don’t expect the vendor to solve all your problems, just the software portion (which you should only buy after identifying what you need; and the vendor you choose should be the one with the greatest expected ROI for your organization, as calculated by you).

Reporting is Not Analysis — And Neither Are Spreadsheets, Databases, OLAP Solutions, or “Business Intelligence” Solutions

… and one of the best explanations the doctor has ever read on this topic (which he has been writing about for over two decades) was just published over on the Spendata blog on Closing the Analysis Gap. Written by the original old grey beard himself (who arguably built what was the first standalone spend analysis application back in 2000 and then redefined what spend analysis was not once, but twice, in two subsequent start-ups that built two entirely new analytics applications that took a completely different, more in-depth approach), it’s one of the first articles to explain why every general purpose solution you’re currently using to try and do analysis doesn’t actually do true analysis, and why you need a purpose-built analysis solution if you really want to find results, and, in our world, do some Spend Rappin’.

We’re not going to repeat the linked article in its entirety, so we’ll pause for you to go read it …


… we said, go to the linked article and read it … we’ll wait …


READ IT! Then come back. Here’s the linked article again …


Thank you for reading it. Now we’ll continue.

As summarized by the article, we have the following issues:

Tool | Issue | Resolution | Loss of Function
Spreadsheet | Data limit; lack of controls/auditability | Database | No dependency maintenance; no hope of building responsive models
Database | Performance on transactional data (even with expert optimization) | OLAP Database | Data changes are offline only & tedious; what-if analysis is non-viable
OLAP Database | Interfaces, like SQL, are inadequate | BI Application | Schema freezes to support existing dashboards; database read only
BI Application | Read-only data and limited interface functionality | Spreadsheets | Loss of friendly user interfaces and data controls/auditability

In other words, the cycle of development from stone-age spreadsheets to modern BI tools, which was supposed to take us from simple calculation capability to true mathematical analysis in the space age using the full breadth of mathematical techniques at our disposal (both built-in and through linkages to external libraries), has instead taken us back to the beginning to begin the cycle anew, while trying to devour itself like an Ouroboros.


[Image: the Ouroboros devouring its own tail. Source: Wikipedia]

Why did this happen? The usual reasons. Partly because some of the developers couldn’t see a resolution to the issues when they were first developing these solutions, or at least a resolution that could be implemented in a reasonable timeframe; partly (and sometimes mostly) because vendors were trying to rush a solution to market (to take your money); and partly (and sometimes largely) because the marketers keep hammering the message that what they have is the only solution you need, until all the analysts, authors, and columnists repeat the same message to the point they believe it. (Even though the users keep pounding their heads against the keyboard when given a complex analysis assignment they just can’t do … without handing it off to the development team to write custom code, or cutting corners, or making assumptions, or whatever.) [This could be an entire rant on its own about how the rush to MVP and marketing mania sometimes causes more ruin than salvation, but considering volumes still have to be written on the dangers of dunce AI, we’ll have to let this one go.]

The good news is that we now have a solution you can use to do real analysis, and this is much more important than you think. The reality is that if you can’t get to the root cause of why a number is what it is, it’s not analysis. It’s just a report. And I don’t care if you can drill down to the raw transactions that the analysis was derived from; that’s not the root cause, that’s just supporting data.

For example, “profit went down because warranty costs increased 5%” is not helpful. Why did warranty costs go up? Just being able to trace down to the transactions where you see 60% of that increase is associated with products produced by Substitional Supplier is not enough (and in most modern analysis/BI tools, that’s all you can do). Why? Because that’s not analysis.

Warranty costs increasing 5% is the inevitable result of something that happened. But what happened? If all you have is payables data, you need to dive into the warranty claim records to see what happened. That means you need to pull in the claim records, and then pull out the products and original customer order numbers and look for any commonalities or trends in that data. Maybe after pulling all this data in you see that, of the 20 products you are offering (where each would account for 5% of the claims if all things were equal), there are 2 products that account for 50% of the claims. Now you have a root cause of the warranty spend increase, but not yet a root cause of what happened, or how to do anything about it.

To figure that out, you need to pull in the customer order records and the original purchase order records and link the product sent to the customer with a particular purchase order. When you do this, and find out that 80% of those claims relate to products purchased on the last six monthly purchase orders, you know the products that are the problem. You also know that something happened six months or so ago that caused those products to be more defective.

Let’s say both of these products are web-enabled remote switch control boxes that your manufacturing clients use to remotely turn on-and-off various parts of their power and control systems (for lighting, security monitoring, etc.) and you also have access, in the PLM system, to the design, bill of materials (BOM), and tier 2 suppliers and know a change takes 30 to 60 days to take effect. So you query the tier 1 BOM from 6, 7, 8, and 9 months ago and discover that 8 months ago the tier 2 supplier for the logic board changed (and nothing else) for both of these units. Now you are close to the root cause and know it is associated with the switch in component and/or supplier.

At this point you’re not sure if the logic board is defective, the tier 1 supplier is not integrating it properly, or the specs aren’t up to snuff, but as you have figured out this was the only change, you know you are close to the root cause. Now you can dive in deep to figure out the exact issue, and work with the engineering team to see if it can be addressed.

You continue with your analysis of all available data across the systems, and after diving in, you see that, despite the contract requiring that any changes be signed off by the local engineering team only after they do their own independent analysis to verify the product meets the specs and all quality requirements, the engineering team, which signed off on the specs, did not sign off on the quality tests, which were never submitted. You can then place a hold on all future orders for the product, get on the phone with the tier 1 supplier and insist they expedite 10 units of the logic board by air freight for quality testing, and get on the phone with engineering to make sure they independently test the logic boards as soon as they arrive.

Then, when the product, which is designed for 12V power inputs, arrives, and the engineers do their stress tests and discover that the logic board, which was spec’ed to be able to handle voltage spikes to 15V (because some clients power backup systems off of battery backups that run off of chained automotive batteries), actually burns out at 14V, you have your root cause. You can then force the tier 1 supplier to go back to the original board from the original supplier, or find a new board from the current supplier that meets the spec … and solve the problem. [And while it’s true you can’t assume that all of the failure increases were due to the logic board without examining each and every unit of each and every claim, in this situation, statistically, most of the increase in failures will be due to this (as it was the only change).]

In other words, true analysis means being able to drill into raw data, bring in any and all associated data, do analysis and summaries of that data, drill in, bring in related data, and repeat until you find something you can tie to a real world event that led to something that had a material impact on the metrics that are relevant to your business. Anything less is NOT analysis.
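To make that concrete, here’s a minimal pandas sketch of the drill-down above (the tables and column names are hypothetical toy stand-ins for the real systems): each step pulls in new data, re-aggregates, and narrows the candidate root cause.

```python
import pandas as pd

# Toy stand-ins for the payables, order, and PO systems.
claims = pd.DataFrame({
    "claim_id": [1, 2, 3, 4],
    "product":  ["switch-A", "switch-A", "switch-B", "gadget-C"],
    "order_id": [101, 102, 103, 104],
    "cost":     [900, 850, 800, 50],
})
orders = pd.DataFrame({
    "order_id":  [101, 102, 103, 104],
    "po_number": ["PO-7", "PO-8", "PO-8", "PO-2"],
})
pos = pd.DataFrame({
    "po_number": ["PO-2", "PO-7", "PO-8"],
    "po_month":  ["2022-01", "2022-11", "2022-12"],
})

# Step 1: which products drive the claim cost? (e.g. 2 of 20 products = 50%)
by_product = claims.groupby("product")["cost"].sum().sort_values(ascending=False)
suspects = by_product.head(2).index

# Step 2: join the suspect claims back to the POs they were fulfilled from.
linked = (claims[claims["product"].isin(suspects)]
          .merge(orders, on="order_id")
          .merge(pos, on="po_number"))

# Step 3: when did the failures start? A step change in recent months points
# at a BOM or supplier change to chase down in the PLM system.
print(linked.groupby("po_month")["cost"].sum().sort_index())
```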

“Generative AI” or “ChatGPT Automation” is Not the Solution to your Source-to-Pay or Supply Chain Situation! Don’t Be Fooled. Be Insulted!

If you’ve been following along, you probably know that what pushed the doctor over the edge and forced him back to the keyboard sooner than he expected was all of the Artificial Indirection, Artificial Idiocy & Automated Incompetence that has been multiplying faster than Fibonacci’s rabbits in vendor press releases, marketing advertisements, capability claims, and even core product features on the vendor websites.

Generative AI and ChatGPT top the list of Artificial Indirection because these are algorithms that may, or may not, be useful with respect to anything the buyer will be using the solution for. Why?

Generative AI is simply a fancy term for using (deep) neural networks to identify patterns and structures within data and generate new, and supposedly original, content by pseudo-randomly producing content that is mathematically, or statistically, a close “match” to the input content. To be more precise, in the classic architecture (the generative adversarial network), there are two (deep) neural networks at play: one that is configured to output content that is believed to be similar to the input content, and a second network that is configured to simply determine the degree of similarity to the input content. And, depending on the application, there may be a post-processor algorithm that takes the output and tweaks it as minimally as possible to make sure it conforms to certain rules, as well as a pre-processor that formats or fingerprints the input for feeding into the generator network.
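For the technically curious, here’s a minimal sketch of that two-network setup: a toy generative adversarial network in PyTorch that learns to generate samples matching a simple 1-D “training corpus” (hyperparameters arbitrary):

```python
import torch
import torch.nn as nn

G = nn.Sequential(nn.Linear(8, 16), nn.ReLU(), nn.Linear(16, 1))  # generator
D = nn.Sequential(nn.Linear(1, 16), nn.ReLU(), nn.Linear(16, 1),
                  nn.Sigmoid())                                    # discriminator

opt_g = torch.optim.Adam(G.parameters(), lr=1e-3)
opt_d = torch.optim.Adam(D.parameters(), lr=1e-3)
bce = nn.BCELoss()

for step in range(2000):
    real = torch.randn(64, 1) * 0.5 + 3.0   # "training corpus": N(3, 0.5)
    fake = G(torch.randn(64, 8))            # generated content, from noise

    # Network 2 (discriminator): learn to score real ~1, generated ~0.
    d_loss = (bce(D(real), torch.ones(64, 1))
              + bce(D(fake.detach()), torch.zeros(64, 1)))
    opt_d.zero_grad(); d_loss.backward(); opt_d.step()

    # Network 1 (generator): learn to produce output the discriminator
    # can no longer tell apart from the input content.
    g_loss = bce(D(fake), torch.ones(64, 1))
    opt_g.zero_grad(); g_loss.backward(); opt_g.step()

print(G(torch.randn(1000, 8)).mean().item())  # drifts toward ~3.0, the "corpus"
```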

In other words, you feed it a set of musical compositions in a well-defined, preferably narrow, genre and the software will discern general melodies, harmonies, rhythms, beats, timbres, tempos, and transitions and then it will generate a composition using those melodies, harmonies, rhythms, beats, timbres, tempos, transitions and pseudo-randomization that, theoretically, could have been composed by someone who composes that type of music.

Or, you feed it a set of stories in a genre that follow the same 12-stage heroic story arc, and it will generate a similar story (given a wider database of names, places, objects, and worlds). And, if you take it into our realm, you feed it a set of contracts similar to the one you want for the category you just awarded and it will generate a usable contract for you. It Might Happen. Yaah. And monkeys might fly out of my butt!

ChatGPT is a very large multi-modal model, built with deep learning, that accepts image and text as inputs and produces outputs expected to be in line with what the top 10% of experts would produce in the categories it is trained for. Deep learning is just another word for a multi-level neural network with massive interconnection between the nodes in connecting layers. (In other words, a traditional neural network may only have 3 levels for processing, with nodes only connected to 2 or 3 nearest neighbours on the next level, while a deep learning network will have connections to more near neighbours and at least one more level [for initial feature extraction] than a traditional neural network that would have been used in the past.)
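To illustrate the structural difference (a sketch in PyTorch; the layer counts and sizes are arbitrary):

```python
import torch.nn as nn

shallow = nn.Sequential(           # ~the traditional 3-level network
    nn.Linear(32, 16), nn.ReLU(),
    nn.Linear(16, 4),
)

deep = nn.Sequential(              # extra, densely connected levels
    nn.Linear(32, 64), nn.ReLU(),  # added level for initial feature extraction
    nn.Linear(64, 64), nn.ReLU(),
    nn.Linear(64, 16), nn.ReLU(),
    nn.Linear(16, 4),
)

count = lambda m: sum(p.numel() for p in m.parameters())
print(count(shallow), count(deep))  # 596 vs 7380: connections grow fast with depth
```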

How large? Large enough to support approximately 100 trillion parameters (if the rumours are to be believed). Large enough to be incomprehensible in size. But not in capability, no matter how good its advocates proclaim it to be. Yes, it can theoretically support as many parameters as the human brain has synapses, but it’s still computing its answers using very simplistic algorithms and learned probabilities, neither of which may be right (in addition to a lack of understanding as to whether or not the inputs we are providing are the right ones). And yes, its language comprehension is better, as the new models realize that what comes after a keyword can be as important as, or more important than, what came before (as not all grammars, slang, or tones are equal), but the probability of even a ridiculously large algorithm correctly interpreting meaning (without the tone, inflection, look, and other non-verbal cues that signal when someone is being sarcastic, witty, or argumentative, for example) is still considerably less than a human’s.

It’s supposed to be able to provide you an answer to any query for which an answer can be provided, but can it? Well, if it interprets your question properly and the answer exists, or a close enough answer exists and enough rules for altering that answer to the answer you need exist, then yes. Otherwise, no. And yes, over time, it can get better and better … until it screws up entirely. And when you don’t know the answer to begin with, how will you know which 5 times in a hundred it’s wrong, and which one of those 5 times it’s so wrong that, if you act on it, you are putting yourself, or your organization, in great jeopardy?

And it’s now being touted as the natural language assistant that can not only answer all your questions on organizational operations and performance but even give you guidance on future planning. I’d have to say … a sphincter says what?

Now, I’m not saying that, properly applied, these Augmented Intelligence tools aren’t useful. They are. And I’m not saying they can’t greatly increase your efficiency. They can. Or that appropriately selected ML/PA techniques can’t improve your automation. They most certainly can.

What I am saying is that these are NOT the magic beans the marketers say they are, NOT the giant beanstalk gateway to the sky castle, and definitely NOT the goose that lays the golden egg!

And, to be honest, the emphasis on this pablum, probabilistic, and purposeless third party tech is not only foolish (because a vendor should be selling their solid, specialty-built solution for your supply chain situation) but insulting. By putting this first and foremost in their marketing, they’re not only saying they are not smart enough to design a good solution using expert understanding of the problem and an appropriate technological solution, but also that they think you are stupid enough to fall for their marketing and buy their solution anyway!

Versus just using the tech where it fits, and making sure it’s ONLY used where it fits. For example, Zivio uses #ChatGPT to draft a statement of work only after gathering all the required information and similar Statements of Work to feed into #ChatGPT, and then it makes the user review, and edit as necessary, knowing that while the #ChatGPT solution can generate something close when given enough information and enough to work with, every project is different, an algorithm never has all the data, and what is produced will therefore never be perfect. (Sometimes it will be close enough that you can circulate it as a draft, or even post it for a general purpose support role, but not for any need that is highly specific, which is usually the type of need an organization goes to market for.)

Another example would be using #ChatGPT as your Natural Language Interface to provide answers on performance, projects, past behaviour, best practices, expert suggestions, etc. instead of having the users go through 4+ levels of menus, designing complex reports/views and multiple filters, etc. … but building in logic to detect when a user is asking a question on data, versus asking for a prediction on data, versus asking for a decision instead of making one themselves … and NOT providing an answer to the last one, or at least not a direct answer. For example, “How many units of our xTab did we sell last year?” is a question on data the platform should serve up quickly. “How many units do we forecast to sell in the next 12 months?” is a question on prediction the platform should be able to derive an answer for using all the data available and the most appropriate forecasting model for the category, product, and current market conditions. “How many units should I order?” is asking the tool to make a decision for the human, so either the tool should detect it is being asked to make a decision it doesn’t have the intelligence or perfect information to make and respond with “I’m not programmed to make business decisions”, or it should return an answer along the lines of “the current forecast for the next quarter’s demand for xTab for which we will need stock is 200K units, typical delivery times are 78 days, and based on this, the practice is to order one quarter’s units at a time”. The buyer may not question the software and may blindly place the order, but the buyer still has to make the decision to do that.
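A minimal sketch of that routing logic (the keyword heuristics below are hypothetical placeholders; a production assistant would use a proper intent classifier):

```python
def classify_query(query: str) -> str:
    """Route a question to data lookup, forecasting, or a polite refusal."""
    q = query.lower()
    if any(kw in q for kw in ("should i", "should we", "what do i order")):
        return "decision"    # answer with data + practice, never the decision
    if any(kw in q for kw in ("forecast", "next quarter", "next 12 months",
                              "will we sell")):
        return "prediction"  # route to the appropriate forecasting model
    return "data"            # serve straight from the data warehouse

for q in ("How many units of our xTab did we sell last year?",
          "How many units do we forecast to sell in the next 12 months?",
          "How many units should I order?"):
    print(f"{classify_query(q):10s} <- {q}")
```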

And no third party AI is going to blindly come up with the best recommendation, as it would have to know the category specifics, what forecasting algorithms are generally used (and why), the typical delivery times, the organization’s preferred inventory levels and safety stock, and the best practices the organization should be employing.

AI is simply a tool that provides you with a possible (and often probable, but never certain) answer when you haven’t yet figured out a better one, and no AI model will ever beat the best human designed algorithm on the best data set for that algorithm.

At the end of the day, all these AI algorithms are doing is learning a) how to classify the data and then b) what the best model is to use on that data. This is why the best forecasting algorithms are still the classical ones developed 50 years ago, as all the best techniques do is get better and better at selecting the data for those algorithms and tuning the parameters of the classical model, and why a well designed, deterministic algorithm by an intelligent human can always beat an ill designed one by an AI. (Although, with the sheer power of today’s machines, we may soon reach the point where we reverse engineer what the AI did to create that best algorithm, versus spending years of research going down the wrong paths, when massive, dumb computation can do all that grunt work for us and get us close to the right answer faster.)
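To illustrate (a sketch using statsmodels on a synthetic demand series): the forecast below still comes from classical Holt-Winters exponential smoothing; the only “learning” is the automated fitting of its parameters.

```python
import numpy as np
from statsmodels.tsa.holtwinters import ExponentialSmoothing

rng = np.random.default_rng(42)
t = np.arange(48)                       # 4 years of monthly demand
demand = 200 + 2 * t + 30 * np.sin(2 * np.pi * t / 12) + rng.normal(0, 8, 48)

# The decades-old classical model; fit() searches for the best smoothing
# parameters, which is all most "AI" forecasters are really doing better.
model = ExponentialSmoothing(demand, trend="add",
                             seasonal="add", seasonal_periods=12)
fit = model.fit(optimized=True)
print(fit.forecast(12))                 # next year's monthly demand forecast
```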