Category Archives: rants

With Suites, What you are Sold Vs. What You Get Vs. What You Need are Three VERY Different Things!

A while back, Dan Gianfreda published a piece on LinkedIn on how what you need is not what you are sold when you buy a shiny, “all-in-one” procurement platform that is 10X bigger than what you will actually use (on a multi-year contract with a massive implementation that takes months longer than promised and ensures you don’t have the majority of the functionality you need until the contract is almost up), and he was right. But it missed the full picture. The reality is that not only are you sold 10 times more than you will use, but what you will use doesn’t cover what you need, and, with a poor selection, might only be one tenth of what you actually need!

In other words, you need to see the full picture.

As outlined in the response post, just because a suite has a module, there’s no guarantee that module is anywhere close to what the organization really needs, especially when the capabilities can vary greatly (and the definitions even more so). Sourcing can be a simple RFX or a multi-staged integrated RFX/Auction platform with embedded strategic sourcing decision optimization. We still see canned reporting modules sold as “modern spend analysis” when they are anything but. And most AI claims are pure BS (or an indication that you should probably run for the hills if that’s the only selling point).

Even if the suite theoretically has the core/must-have functionality the organization needs, that’s only meaningful if that functionality is implemented in a way that supports the organizational processes and policies. If approval chains are required, tamper-proof audit logs need to be in place, validated process steps are needed for public sector compliance, and so on, and the suite has none of those, it doesn’t matter how user friendly, integrated, or “powerful” it is because the organization will NOT be able to use it.

Moreover, the core functionality differs by organizational type, and since most platforms only do one of indirect, direct, services, capex projects, or tail spend well, selecting the wrong suite will render it totally useless for the majority of sourcing/procurement projects, which will add insult to the injury of the huge cash outlay you agreed to (for an ROI that will never, ever, materialize).

Moreover, as previously indicated, you can NEVER assume that all (or sometimes, even any) of the solution providers will:

  • ask the right questions to understand the challenges
  • do the right due diligence to ensure their solution will solve those challenges
  • be honest about their capabilities (or, outside of the dev team, even understand those capabilities)

because, chances are, as I have indicated many times, everyone in the ecosystem exists to make money off of YOU, but not necessarily to help you. (Especially when too many vendors took too much money and are now under extreme pressures to fulfill ridiculous growth requirements in just a few years or risk massive layoffs, being folded into a bigger player, or getting dropped from the portfolio entirely before going bankrupt.) There’s no time to do it right, just to sell, sell, sell. (Which is why we keep advocating employing an independent consultant to help you with selection, project planning, and project assurance — since their remuneration depends on helping you, not someone else.)

So remember this before you start looking at big suites, as there is a good chance you’ll be paying 10 times what you should be (based on what you are using) while, in the best case, still only getting 25% of what you actually need. (And there’s nothing wrong with building your own Best-of-Breed ecosystem, even if you need to add an orchestration player to that mix, if that is what maximizes the return on every dollar spent.)

(Supplier) Diversity is Dead!

Editor’s Note: This is an extended version of a comment that was made in response to an inquiry by THE REVELATOR on LinkedIn about the progression of supplier diversity.

The simple fact of the matter is thus: diversity threatens fascists who want authoritarian dictatorships. This means that as long as far right wing agenda politicians keep getting elected in first world countries (which has been happening more often than not over the last decade), not only is DEI (Diversity, Equity, and Inclusion) not going anywhere, it is going to be rolled back, and rolled back faster than most policies that came before it, especially in countries which equated diversity progress with measurable outcomes.

The sad reality of the situation is that as soon as the board/chief/president of an organization or governmental department concluded that you were not diverse if you did not have x% of whatever minority the board/chief/president thought you should have x% of by time y, and started equating diversity success with measurable outcomes, “equal opportunity” was replaced with the “minority designated role”. And instead of being a further step in the right direction, it was often a step backwards. Under equal opportunity, if two candidates were roughly equal for a role, the role would go to the minority candidate. And that’s a good thing. However, under a “minority designated role”, non-minorities are banned from consideration, and this is not a good thing if there are no qualified minority candidates available for the role. A senior role that should demand a full University degree (Bachelor’s or higher), a decade of experience, and one or more certifications may end up going to someone who has just a two-year associate’s degree, only 3 years of work experience (barely relevant to the role), and no certifications, as that is the most qualified person who applied.

What many firms fail to take into account when considering diversity mandates is the number of qualified candidates in the minority who are actually in the vicinity of, actually interested in, and willing to take on, the position. For example, if you were to demand that half of your coding team be women, good luck with that when only 25% of STEM graduates in North America are female. (So if you did get 50%, a lot of other companies wouldn’t get any female hires.) Or if you demand that 1/5th of your workforce be Hispanic, to mirror the US population distribution, but it’s an in-office job in a major city in an expensive neighbourhood where 95% of the local population is white, good luck with that. You might meet your quota, but you know that the vast majority of the candidates are not going to be qualified for the role.

And DEI didn’t stop there at some organizations and institutions in North America. As soon as people figured out that a DEI program or a particular minority designation could be used to exclude people of certain religion(s) they didn’t like, it went from a tool of inclusion to a tool of subversive discrimination. (So much for equity and inclusion!) Then came the backlash; the labelling of anything even remotely related to DEI, equal opportunity, or humanity as woke; and a full-on assault by the fascists and authoritarians.

More specifically, in countries where they have enough power in the government, the authoritarians are dismantling any and all programs they have control over, barring any third party organizations with such policies from doing business with their government, and doing whatever they can to overturn all DEI and Equal Opportunity legislation they can, as far back as they can.

Moreover, given that these far right wing parties are being well funded by donations from the tech bros who spend more time meddling in global politics than running their own ventures, there are not many options for progression of ANY diversity on the global stage.

Agentic AI is a Fraud — And Arguments to the Contrary are BS!

We’ve been over this many times here and on LinkedIn. And we are going to go over many of the reasons yet again, because it seems we have to. But first, we’re going to back up and give you a fundamental answer as to why Agentic AI cannot exist with today’s technology.

The dictionary definition of an agent is generally either “a person who acts on behalf of another” or “a person or group that takes an active role or produces a specific effect”. It’s about “doing, performing, or managing” for the express purpose of a “desired outcome”.

Classically, an agent is a person; a person is an individual human; and a human, unlike a thing, is intelligent, which today’s technology is not. That should be Q.E.D.

However, some “modern” or “progressive” dictionaries (and there are at least 26 publishing houses publishing dictionaries as per Publishers Global) will allow an “agent” to be a “thing”, as the term has been used in technical diagrams and patents. So if you define AI as a thing (which is being nice), then this refutation is not enough, and we have to consider the rest of the definition.

As for “doing, performing, or managing”, like any automated system, it does, unfortunately, do something. However, unlike past “automations”, which took fixed inputs and produced predictable outputs, today’s Gen-AI takes an input and produces an unpredictable output that depends on how it was trained, parameterized, implemented, interacted with (down to the specific wording and format of the prompt, as these systems are designed to continuously “learn” [which is a huge misnomer, as they do not learn, but simply evolve their state]), fed the data, etc. As a result, two systems fed the same inputs at any point in time are not guaranteed to give the same, or even similar, output, and are NOT guaranteed to produce the “desired outcome”, as even one single “bit” of difference between data sets, configuration, and historical usage can completely change an outcome, just like one parameter can completely change an optimal solution in a strategic sourcing decision optimization model. While this can be argued to be unlikely (and it is for run-of-the-mill scenarios), it’s not infeasible; in fact, it has a statistically significant chance of happening the further an input is from the norm, and even 1/100 is significant if you plan on pushing [tens or hundreds of] thousands of processing tasks through the agent. Just like one wrong cost (off by a factor of 100 due to a decimal error) can completely change the recommended award in strategic sourcing decision optimization, one wrong input, in the right situation, can completely change the expected outcome, because these are essentially super-sized multi-layer deep learning neural networks (with recurrent, embedding, attention, feed-forward, and maybe even feed-back layers) based on non-linear and/or statistical activation functions whose state evolves with every interaction and, thus, every input.
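To make the sensitivity argument concrete, here is a toy sketch (the two-output “network” and its hand-picked weights are invented for illustration and are nowhere near a real LLM) showing how one sharp nonlinearity turns a roughly 2% difference in a single input into a completely different output:

```python
import math

def tiny_net(x):
    """A fixed 2-input toy 'network' with one tanh hidden layer.
    Weights are hand-picked for illustration; nothing is trained."""
    h1 = math.tanh(10.0 * x[0] - 5.0)   # sharp nonlinearity around x[0] = 0.5
    h2 = math.tanh(x[1])
    out_a = h1 + 0.1 * h2               # score for outcome "A"
    out_b = -h1 + 0.1 * h2              # score for outcome "B"
    return "A" if out_a > out_b else "B"

# A ~2% change in one input flips the outcome entirely:
print(tiny_net([0.51, 1.0]))   # one side of the decision boundary
print(tiny_net([0.49, 1.0]))   # the other side
```

Real models have billions of such nonlinear units, so the number of input regions where a tiny perturbation flips the outcome grows accordingly.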

As for the arguments that “they often work as well as people, so why not”, that’s equivalent to the argument that “dynamite often works as well as excavators, so why not” or “loop antennas work just fine for direction bearing on aircraft, so why not”. In the second argument, the “why not?” is that even the best expert can’t always predict the full extent, direction, or result of the explosion (which, FYI, could also create a shockwave that could damage or take down nearby structures). In the third argument, ask Amelia Earhart how reliable they are if the short range is too short. Oh wait, you can’t! (And that’s why modern planes have magnetic compasses and GPS in addition to radio-based navigation.)

As for the arguments that the issues aren’t unique to AI and show up in other systems or people, while that is fundamentally true, there is no other technology ever invented that collectively has all of the issues we have so far identified in Gen-AI … and I’m sure we aren’t done yet!

[ Moreover, when you think about it logically, it makes no sense that we are so intent on pursuing this technology after having spent almost a century designing and building computing systems to be more accurate and reliable than we could ever be (with stress-tested design techniques, hardware, and generations of error-correcting coding to get to the point where a computer can be expected to do trillions of calculations without fail, whereas an average human, who can do an average of one simple calculation a second, might not get through 10 without an error) while continually enhancing their capacity to the point that the largest supercomputer is now one quintillion times more powerful at math than we are! ]

As per a previous post, for starters, this technology:

1. Lies. It hallucinates every second, and often makes up fake summaries based on fake articles written by fake people with fake bios that it generates in response to your query, because it’s trained to satisfy (even though the X-Files made it clear in Rm9sbG93ZXJz how bad an idea this was back in 2018, four months before GPT-1 was released). And that’s the tip of the iceberg of the issues.

2. Doesn’t Do Math. These models don’t do math as they are built to combine the most statistically relevant inputs, not to do standard arithmetic. Plus, they aren’t even guaranteed to properly recognize an equation, especially when words are used. So it will miscalculate, and sometimes even misread numbers and shift decimal points. (Just ask DOGE, even though they won’t admit it.)

3. Opens Doors For Hackers. Over 70% of AI code has been found to be riddled with KNOWN security holes. (So imagine how many more holes it’s introducing!) It doesn’t generate code better than an average developer, and sometimes generates code that’s even worse than a drunken plagiarist intern who can at least identify a good code example to base the generated code on.

4. Puts Your Entire Network at Risk. If you’re using it to automate code updates, cross-platform integration and orchestration, or system tasks, all it has to do is generate one wrong command and it could lock up and shut down your entire network, no CrowdStrike required!

5. Helps Your Employees Commit Fraud. They can generate receipts that look 100% real, especially if they do their own math (and ensure all the subtotals and totals add-up and the prices are actual menu prices), look up the restaurant name and tax codes, and have a real receipt example to go off of. (And as for the claims by Ramp and other T&E firms that they can detect fake receipts generated by Chat-GPT, good luck with that, because all the user has to do is strip the embedded metadata/fingerprint by taking an image of the image or running it through a utility that strips the metadata/fingerprint or converts the image format to one that loses the metadata.)
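To illustrate why detection is so hard, here is a minimal sketch (the function name, tax rate, and receipt figures are all invented) of the kind of arithmetic sanity check a T&E system might run, which a generated receipt that does its own math correctly will sail right through:

```python
def passes_arithmetic_check(line_items, tax_rate, total):
    """Flag a receipt only if its numbers don't add up."""
    subtotal = sum(line_items)
    expected = round(subtotal * (1 + tax_rate), 2)
    return abs(expected - total) < 0.01

# A fully fabricated receipt whose numbers were generated to add up
# passes the only check that doesn't depend on strippable metadata:
fake_items = [18.50, 32.00, 9.25]
print(passes_arithmetic_check(fake_items, 0.13, 67.52))   # True
```

Once the arithmetic is internally consistent and the metadata fingerprint is stripped, there is nothing left for the detector to latch onto.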

Can you name another technology that comes with all of these severe negatives (with more being discovered regularly, including addiction and a decrease in cognition as a result of using it)? (We can’t! Not even close!)

[ And going back to the “people do all of this too” argument, it’s true we collectively do, but the vast majority of humans are not narcissistic sociopathic psychopathic robber barons with delusions of grandeur and no moral code or ethics. (Most criminals and con artists have a code or a line they won’t cross. To date, Gen-AI has proven such a concept is beyond it.) ]

And for those of you who believe Gen-AI is emergent, that claim has been refuted multiple times. There is no emergence in these models whatsoever. They were, are, and will always be dumb as a doorknob while being much less reliable. (At least a doorknob, when turned sufficiently, will always open a door.) If you want to believe, go find religion (and keep your religion out of technology) or at least restrain yourself to the paranormal. When it comes to Gen-AI, THERE IS NOTHING THERE.

As for “it doesn’t have to be smart to be useful”, that is one of the most useless statements ever — because it’s context-free and could be 100% true or 100% false or anything in between, all depending on the context you are referring to. By definition, technology does not have to be smart to be useful. Every piece of technology we use today is, by definition, dumb because it all lacks intelligence. (And some of it is so useful we’d never live without it — like control systems for energy grids and water works, air traffic, and modern communications.)

However, all of the dumb technology that we have developed that is useful is ONLY useful with a precise context that defines the problem to be solved, the inputs it will expect, and the outputs that can be generated. An online order system is useless in a nuclear power plant control station, for example. (And while you think that is far-fetched, many of the areas vendors are claiming you can apply Gen-AI to are even more far-fetched.)

Even in the situations where the task you want ultimately boils down to the digestion, search, summarization, and/or generation of outputs based on a very, very large corpus of data, which is essentially what LLMs were designed to do, they are still only useful as a starting point, suggestion generator, or guide. They are not perfect, and are totally capable of misunderstanding the question, being too broad in their interpretation of one or more inputs, being incorrect in their interpretation of the prompt for the desired output, generating fake data despite requests not to, excluding key documents or data from a result, or including irrelevant or wild-goose documents or data in the result.

In the best case, an LLM query will be about equivalent to a traditional Google search (that weights web pages based on key words, context, link weight, freshness, etc. and returns only real links in its results) of potentially relevant data. In the worst case, it’s a mix of real relevant, real irrelevant, and a lot of made up results, and you have no clue which is which until you check every one. (Which means it is ultimately less useful than a drunken plagiarist intern who will only refer to and copy existing references when sent on a research project, and all you will have to do is filter out the irrelevant from the relevant, as such an intern will be too lazy to make anything up completely.)

Since Gen-AI LLMs are what are powering all of the Agentic AI claims, it should now be quite clear as to why Agentic AI is a fraud!

Now, this isn’t saying Gen-AI is bad (it will have some solid use cases where it is dependable with further research; as with all generations of AI tech before it, it just needs more time), nor is it saying that we won’t have smarter software agents in the future that can do considerably more than they can today; it’s just saying that we aren’t anywhere close to being there yet and that those agents won’t be based on Gen-AI as it exists today.

So for now, as Sourcing Innovation has always advocated, switch your focus to Augmented Intelligence and systems that make your humans significantly more productive than they are today. The right systems that automatically

  1. collect, summarize, and perform standard analysis on all of the data available;
  2. make easily modified suggestions based on those analyses, similar situations, and typical organizational processes (that can then be easily accepted); and then
  3. invoke traditional rules-based automations based on those accepted scenarios and invoke the right processes to implement the defined workflow

can, depending on the task, make a human three (3), five (5), and even ten (10) times more productive than they are today, allowing for a team of two (2) to do the work that used to require a team of six (6), ten (10), or even twenty (20). For example, consider the best AI-based invoice processing applications that can automatically process standard form invoices, break out and classify all the data, auto-match to (purchase) orders and receipts, auto-grab missing index/supplier data, auto-accept and process if everything matches within tolerance, auto-reject (and send back with reason and correction needed) in the case of a significant mismatch, and automate a dispute resolution process. When these, with the right configuration and training, get most organizations to 95% touch-less processing (with zero significant errors), that’s a 10X improvement on invoice processing.
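As a sketch of what the rules-based core of such touch-less processing looks like (the function, thresholds, and routing labels are invented for illustration, not any particular vendor’s logic):

```python
def route_invoice(invoice_total, po_total, tolerance=0.02, reject_at=0.10):
    """Three-way-match routing: auto-accept within tolerance, auto-reject a
    significant mismatch with a reason, and leave the grey zone to a human."""
    if po_total <= 0:
        return ("review", "no matching purchase order amount")
    gap = abs(invoice_total - po_total) / po_total
    if gap <= tolerance:
        return ("auto-accept", f"within {tolerance:.0%} tolerance")
    if gap >= reject_at:
        return ("auto-reject", f"{gap:.0%} mismatch vs. PO; correction needed")
    return ("review", f"{gap:.0%} mismatch; human decision required")

print(route_invoice(101.00, 100.00))   # auto-accept
print(route_invoice(125.00, 100.00))   # auto-reject
print(route_invoice(105.00, 100.00))   # review
```

The AI earns its keep in extracting and matching the data; the accept/reject decision itself stays deterministic and auditable.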

While not all tasks/functions will achieve the high efficiency of invoice processing, depending on the time required to collect and analyze data before strategic decisions can be made, most functions can easily see a 3X improvement with the right Augmented Intelligence technology. With the right tech (especially with supercomputing capabilities in modern cloud data centres), we have finally reached the age where you can truly do more with less … and all you need to do is NOT get Blinded By The Hype.

And for those interested, this latest rant was inspired by one of THE REVELATOR’s Triple-Play Thursday LinkedIn posts and the comments thereon (which have some deeper explanations if you want to dig in even deeper).

How AI Enhances 10 Common Procurement Challenges Part II

A recent CIO article drew my ire because it claimed that AI Overcomes 10 Common Procurement Challenges as it oversimplified the problems and overstated the benefits of AI. Let’s finish them one-by-one.

Legacy Systems Complicate the Adoption of New Technology: The article claims AI streamlines integration by assessing system compatibility, automating migration, and reducing downtime. While two out of three ain’t bad, it ain’t good when the critical requirement of assessing system compatibility cannot be met by AI — since simple text matching isn’t helpful if the interface of a legacy system isn’t specified in a standard format (as otherwise it’s essentially field-name matching, which is no different from human guesswork). The reality is that humans still have to define/verify the mappings before the AI can take over.

Letting AI do the mappings is fraught with errors. And it’s even worse when you let it automatically connect systems, pull and push data, replicate incorrectly mapped and bad data across systems, and “fix” data that was actually correct, overwriting the good data in one system with the “bad” data from another just because the bad data appeared to be more recent. Because it’s automated, AI can propagate and exacerbate errors at an unprecedented rate and, in a matter of seconds, make a mess that can take months to repair.
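A minimal sketch of why name-similarity mapping is guesswork (the field names are invented, and `difflib` stands in for whatever similarity measure a tool might use): with no confidence floor, the matcher confidently returns a mapping for every field, including DUE_DT, which has no valid target at all.

```python
import difflib

legacy = ["VEND_NO", "INV_DT", "DUE_DT", "AMT_GROSS"]
modern = ["vendor_number", "invoice_date", "gross_amount", "net_amount"]

def naive_map(field):
    """Map a legacy field to the most similar modern name, no questions asked."""
    norm = lambda s: s.lower().replace("_", "")
    scored = [(difflib.SequenceMatcher(None, norm(field), norm(m)).ratio(), m)
              for m in modern]
    return max(scored)[1]   # always returns SOMETHING, however bad the match

for field in legacy:
    # DUE_DT gets confidently mapped despite having no valid target:
    print(field, "->", naive_map(field))
```

A human would flag DUE_DT as unmappable; the matcher just picks the least-bad string and moves on, which is exactly how bad data starts replicating.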

Managing Supplier Risks is a Growing Concern: AI can continuously monitor supplier performance, predict risks, and ensure compliance. This is one situation where they were almost perfectly correct, but, when they say vendor evaluation can be time intensive and imply that AI can speed it up, they overlook the fact that evaluations still have to be done by humans and tech can’t speed that up.

Moreover, if you think you can augment your data with third party data to speed up the evaluations, you’re just fooling yourself: third party data is often stale, unverified, or aggregated beyond usefulness. You just make bad decisions faster.

Manual Procurement Processes Drain Resources: AI can definitely automate repetitive tasks, reduce human error, increase efficiency, and free your team to focus on strategic initiatives, but only for tasks that are well defined, typically free from exception, and capable of being processed by standard rules. However, this can’t be done until the repetitive tasks are identified, processing rules defined, standard exceptions identified, and additional rules defined. Only then can the AI automate enough to be useful.

Moreover, using a next-gen LLM with chain-of-compute to try to break the requirements of a task down into subtasks, execute those subtasks automatically, and automate a process without any human intervention is just as likely to go wrong as it is to go right.

Demand Forecasting is Often Inaccurate: AI can improve demand forecasting, but only if you have the right data — it’s not a magic box, just a black box that you need to understand.

It’s not just demand trends based on utilization / point of sale data, it’s also market conditions, which can sharply change a demand curve overnight … the traditional curve fitting / machine learning that most “AI” is based on cannot detect a change in market conditions or a political situation that can cause a rapid change in demand.
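A toy illustration with made-up monthly demand figures: an ordinary least-squares trend fitted to a year of stable history happily extrapolates straight through an overnight market shock.

```python
# Twelve months of stable, slowly growing demand (invented figures):
history = [100, 102, 101, 104, 103, 106, 105, 108, 107, 110, 109, 112]

# Ordinary least-squares trend line, fit by hand:
n = len(history)
x_mean = (n - 1) / 2
y_mean = sum(history) / n
slope = (sum((x - x_mean) * (y - y_mean) for x, y in enumerate(history))
         / sum((x - x_mean) ** 2 for x in range(n)))
intercept = y_mean - slope * x_mean

forecast_month_13 = intercept + slope * 12
actual_month_13 = 65   # market conditions change overnight; demand drops ~40%

print(round(forecast_month_13, 1))
print(round(forecast_month_13 - actual_month_13, 1))   # forecast overshoots badly
```

Nothing in the fitted slope and intercept can see the shock coming, because nothing in the utilization history encodes the market condition that caused it.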

Procurement Remains Transactional Rather Than Strategic: AI DOES NOT transform procurement into a strategic function that optimizes spend, improves supplier collaboration, and aligns purchasing decisions with your business! Only people-powered Human Intelligence (HI!) can do that. Remember — transforming Procurement requires defining a strategy, defining appropriate processes, identifying the right people to transform it, and then, and only then, identifying the right technologies.

Assuming that you can slap in AI and transform a tactical function into a strategic one is worse than a pipe dream, it’s a recipe for disaster. Running fast and hard doesn’t get you any closer to the finish line if it’s not in the right direction. For more details, see the dozens of posts about AI in the archives.

Again, we’re not saying that AI is bad. Technology is neither good nor bad. But, like any technology, it has to be ready for prime time, correctly identified, correctly implemented, and correctly used — and that requires a lot of Human Intelligence (HI!) and planning, and the right processes put in place. Shoving it in and expecting a miracle is dangerous. And this is yet another article that implies you can just shove it in and get results. And you can’t. Especially if it’s the wrong technology, which can enhance your problem instead of shrinking it. That’s the problem. This article, like many others, doesn’t tell you about the dangers and downfalls and what you have to do to avoid them.

How AI Enhances 10 Common Procurement Challenges Part I

A recent CIO article drew my ire because it claimed that AI Overcomes 10 Common Procurement Challenges as it oversimplified the problems and overstated the benefits of AI. Let’s take them one-by-one.

Procurement Takes Too Long, Slowing Innovation: According to the article, AI-driven platforms can generate RFPs, accelerate sourcing, automate approvals, and reduce cycle times … which is mostly true. Properly applied, AI can accelerate sourcing, reduce cycle times, and automate approvals … but not all approvals. As for RFP generation, that’s very limited — LLMs can generate RFPs with a simple prompt, but not necessarily a good RFP. The best RFPs are designed by humans (and then automation, which may or may not use AI, can pull in data from supporting documents as needed), and as for acceleration, it depends on the project — it can’t speed up supplier qualification where humans need to inspect the products and verify the requirements.

Moreover, a rush to AI can make things worse, and not better. Letting AI generate an RFP that misses a key requirement in terms of required certifications, performance criteria, production capacity, etc. can entirely invalidate an RFP process and lead to months of wasted effort if no human realizes that this key requirement was missed until an award is offered and a request for the certification, capacity, etc. is delivered and a “sorry, we don’t have / can’t do that” is returned.

Legal and Budget Complexities Create Bottlenecks: Budget tracking systems and rules-based automation allow for instantaneous budgetary approvals. Contract negotiation software can automate redlining, compliance checks, etc., but cannot handle a complex negotiation for a complex project where each side has a lot of requirements and multiple parties to satisfy. AI speeds up the technical drudgery, but not the human interaction.
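A sketch of the kind of rule that makes budgetary approval instantaneous (the thresholds and figures are invented; a real policy would layer in cost centers, delegation-of-authority levels, etc.):

```python
def budget_approval(amount, remaining_budget, auto_limit=10_000):
    """Instant, deterministic budgetary decision; no AI required."""
    if amount > remaining_budget:
        return "rejected: exceeds remaining budget"
    if amount <= auto_limit:
        return "approved"   # instantaneous, no human in the loop
    return "escalated: above auto-approval limit"

print(budget_approval(2_500, 40_000))    # approved
print(budget_approval(50_000, 40_000))   # rejected: exceeds remaining budget
print(budget_approval(15_000, 40_000))   # escalated: above auto-approval limit
```

Note that this is plain rules-based automation, which is exactly why it can be instantaneous and auditable in a way a statistical model cannot.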

Moreover, if you turn over negotiations to software, you have no idea what the end result will be. If you let it negotiate based on market data, and the cost data is off, you could be committing to a bad deal. If you let it predict timeframes based on how it expects prices to rise/stay high, but it’s off by two years, it could lock you into a three year deal when you only need a one year deal. And so on.

CIOs Need to Upskill Their Teams in AI and Cybersecurity: Just because “AI” can simplify processes with guided intelligence, that doesn’t mean the team is upskilled in the process. The reality is, there is no incentive for users to learn anything if they think the system will guide them in everything they need to do.

Thus, if you over-invest in AI, especially the kind that guides users in every task they have to do, works quite well on the basic tasks they have to do daily, and doesn’t screw up the first half dozen or so moderately complex tasks, the user will believe the system is almost flawless, start to trust it implicitly, stop questioning it as time goes on, start believing there is no need to learn anything else because the system knows it, and, over time, stop thinking. And then, instead of performance improving, it will decline … and that decline might be accompanied by a major financial loss if a bad contract is signed or a major risk ignored.

Data Inaccuracy Leads to Poor Procurement Decisions: While it’s true that over three quarters of organizations struggle with unreliable data, AI doesn’t magically fix the problem. It can help with cleansing, validation, and procurement trend analysis, but ask any spend analysis vendor who has tried to apply an LLM to unclassified vendors about the classification accuracy (which tends to top out around 70%) — good data still requires manual cleansing and classification, even where the system reports good confidence. It can definitely help, but it doesn’t take the onus off of the human.

In other words, if you believe that you can plug in a magic AI black box and that it will fix your data, you are gravely mistaken. Sure, it will tell you that it has cleansed, classified, and validated all of your data, but if it’s only 70% accurate, it has only made matters worse if you trust the data 100% and don’t know which 30% is inaccurate. When you base your decisions on data, and the data is bad, you are bound to make a bad decision. The question is, how bad? You don’t know. And that’s a big problem!
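A toy illustration with invented transactions of what 70% accuracy does to the numbers you actually report: three of ten transactions silently misclassified is enough to understate one category by a third and nearly triple another, and nothing in the output tells you which figures to distrust.

```python
amounts     = [5000, 4000, 6000, 1000, 1200, 900, 1100, 2000, 2500, 3000]
true_labels = ["IT", "IT", "IT", "MRO", "MRO", "MRO", "MRO",
               "Travel", "Travel", "Travel"]
ai_labels   = ["IT", "IT", "MRO", "MRO", "MRO", "MRO", "IT",
               "Travel", "Travel", "MRO"]   # 7 of 10 correct = "70% accurate"

def totals(labels):
    """Spend by category under a given classification."""
    out = {}
    for label, amount in zip(labels, amounts):
        out[label] = out.get(label, 0) + amount
    return out

print(totals(true_labels))   # what the spend really is
print(totals(ai_labels))     # what the "cleansed" cube reports
```

Any savings opportunity or supplier rationalization decision built on the second set of totals is built on sand, and the dashboard looks equally authoritative either way.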

B2B Software Selection is Increasingly Complex: Despite the claims, AI-powered vendor analysis doesn’t really help that much — see Pierre Mitchell’s crazy conversations with DeepSeek-R1. Note how it not only recommends inappropriate vendors, but also recommends vendors that don’t even exist anymore … it can help you discover potential vendors, but you still need human reviews and deep pricing intelligence (from expert SaaS optimizers).

Trusting AI to select your software is worse than trusting an analyst firm map! And we know all of the problems those maps contain. (First of all, they only mention the same 10 to 20 vendors year after year, ignoring the dozens of other vendors that might be more appropriate for you.) AI cannot understand your needs, cannot truly map needs to requirements, cannot truly map requirements to features, and cannot truly assess how relevant a solution is, and definitely can’t assess how well a provider’s culture will match yours.

Come back Thursday for Part II!