Category Archives: rants

Why Big Analyst and Big X Consultancies SUCK …

In a post on LinkedIn a while back, THE REVELATOR indicated that the real reason Gartner sucks (and that their stock dove 30%) is because, at the end of the day, they aren’t very good at tying advice to outcomes (and likely don’t even attempt to do it at all most of the time in ProcureTech). But in all fairness, that holds true of all the Big Analyst firms and Big X Consultancies. Also look at Forrester and IDC reports — it’s always the same old vendors or the hype of the day, whether or not that hype is delivering any value whatsoever. (And the answer is “very little” for intake and orchestration — because you can’t orchestrate an empty pit and if you attempt to orchestrate an elementary music class, be prepared for the migraine of your life — and essentially none for Gen-AI, with MIT pointing out that only 5% of deployments are delivering any value whatsoever.)

But it’s not just the Big X analyst firms. It’s the Big X consultancies as well! Now, I know you are saying “but surely they do better, they are consultants, they do projects, they have best practices, and they’re paid for results” and while that is all true,

  1. they’re not all experienced consultants (and the number of juniors on many projects is scary — I’ve heard too many stories about a PE firm trotting in a McKinsey or Accenture* after a big acquisition (because it’s their standard acquisition playbook) to optimize and rightsize operations, who come in with a team of 20, of which only two actually provide value beyond what the company already knew. One of the biggest companies in our space literally marched them all out at the end of the day and told them NEVER to come back because, when it came to ProcureTech expertise, they identified one individual (the project lead, who they’d likely never see again) who was sharp, got it, and would definitely be able to add value if entrenched in their operation; one (his right-hand man) who was smart, hardworking, and capable of learning fast and who might be able to add value; and 18 juniors who didn’t know anything that wasn’t in the 7-year-old playbook on Procurement handed to them when they started, a playbook this company had rewritten multiple times over the years)
  2. they don’t all have deep relevant project experience in Procurement (or whatever business function you’re bringing them in for) in your Industry
  3. their “best practices” are super generic so they can be applied across the board, which means they are not tailored for your industry and definitely NOT tailored for you (so they are not best)
  4. and they are paid on promises of results, which sometimes don’t materialize

Just like I keep saying it’s not the analyst firm, it’s the analyst; it’s not the consulting firm, it’s the consultant. And the sad reality is that the bigger the firm, the smaller the percentage of senior, experienced talent in its talent pool, because the best talent either make partner (and then have to focus more on managing and selling than on project delivery), are constantly recruited away by clients, consultancies, and even tech companies, or go out and join/build niche consultancies. There ends up not being enough senior, experienced talent to go around, and you’re essentially playing the lottery that one of these resources will end up full time on your project.

Since these consultancies want to be outcome focussed, in an effort to do that with more junior people, they end up writing their advisory playbooks as metric focussed — what percentage of spend is on personnel at a best-in-class organization, what percentage of spend is on tech at a best-in-class organization, and what is the typical breakdown of headcount and tech spend by module or platform. Then, they tell you:

  • your headcount spend is too low, so you need to go out and hire X people in Y roles because, well, metrics and statistics and that will help because of scripted reasons (more sourcing pros mean more events mean more savings, more supplier managers mean better quality, etc.)
  • your headcount spend is too high, so you need to fire X people in Y groups because they must be tripping over each other and/or bringing your profit margin down
  • you aren’t spending enough on tech, so go spend 10 Million on Gen-AI and that will automagically fix everything
  • you are spending too much on tech, so go out to bid for a new ERP, S2P suite, orchestration platform, etc. because you obviously didn’t go to market right when you bought your current tech

Not realizing that

  • the headcount needs differ in every industry AND every company
  • the tech needs differ by industry, company, and process
  • it’s not spend, it’s ROI per spend

and this means

  • you might only need one supplier data manager in commodity indirect because there’s always three more suppliers waiting to supply you the same thing
  • but you might need ten supplier relationship managers in direct, each managing a different supplier (pool) producing a different, custom, component for your advanced engineering or biomedical device
  • you might not need best in class optimization backed sourcing for indirect because automated auctions will get you market price every time
  • but you might need best in class optimization, analytics, and market should-cost modelling platforms to get a grip on your direct sourced custom designs
  • and sometimes spending more on headcount and tech than across-the-board “average” yields a significantly better return because your quality stays high, stockouts only occur during global disruptions, your data processing is 95% automated freeing your staff to focus on strategic issues, etc.

But what can we expect from fresh grads with little mentorship (who are rushed into Gen-AI “training”) who get all of their insights from these Big Analyst firms that

  • publish quadrants and waves that are completely unrelated to reality for the majority of companies with the same 10 to 20 large vendors every year that only work for select large enterprises (and the other 40 to 80 vendors continue to be completely ignored),
  • constantly push and promote context-free (Gen-)AI, despite one of these firms publishing a now buried/deleted study a few years ago that stated 85% of AI projects fail and the recent MIT study that tells us, no, in fact, 95% of these projects fail to deliver any value, and
  • unless you get one of the few analysts who actually gets it, employ playbook-based responses to inquiries that don’t have any context (because the analysts don’t have any time to tailor recommendations to context, since they spend too much time doing basic data collection where 80% of it could be captured in a SurveyMonkey-style tool [or 95% by a well trained SLM {or, better yet, classical semantic tech with provable accuracy rates} that could map free text to standard process needs and vendor solution categories for easy verification and correction by a true human expert]).
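To make that last point concrete, the kind of free-text-to-category mapping described above can be sketched with nothing fancier than classical bag-of-words similarity. This is a minimal sketch: the category names, keyword lists, and threshold below are purely illustrative inventions, not any firm’s actual taxonomy.

```python
from collections import Counter
import math

# Illustrative (made-up) solution categories and their keyword descriptions.
CATEGORIES = {
    "spend-analytics": "spend analysis classification cube reporting savings",
    "sourcing": "rfp rfq auction event bid supplier award",
    "intake-orchestration": "intake request routing workflow orchestration approval",
}

def _vec(text):
    # Bag-of-words term-frequency vector.
    return Counter(text.lower().split())

def _cosine(a, b):
    common = set(a) & set(b)
    num = sum(a[t] * b[t] for t in common)
    den = math.sqrt(sum(v * v for v in a.values())) * math.sqrt(sum(v * v for v in b.values()))
    return num / den if den else 0.0

def map_need(free_text, threshold=0.2):
    """Map free text to the best-scoring category, or flag it for human review."""
    scores = {c: _cosine(_vec(free_text), _vec(kw)) for c, kw in CATEGORIES.items()}
    best = max(scores, key=scores.get)
    # Low-confidence matches are never auto-accepted; a human expert verifies them.
    return best if scores[best] >= threshold else "needs-human-review"
```

The point of the sketch is that even decades-old techniques can pre-fill most of the mapping, leaving the expert to verify and correct rather than transcribe.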

The reality is that until

  • big analyst firms and big consultancies admit their flaws,
  • start tying actual outcomes to the standard projects/recommendations they make, and
  • start analyzing and using these results to tailor recommendations to their clients that have a good chance at actually delivering value

these firms, and their standard recommendations, are going to continue to suck and your chances of success are going to remain at 12% for standard projects and 5% for Gen-AI projects.

Sad, but true.

* not realizing that the reason the company was such an attractive acquisition target in FinTech/ProcureTech was because they already knew all the best practices that the big firms have in their playbooks and were employing them effectively; these Big X tend to do well with average companies that are not best in class or deep in modern process or technology

ChatGPT is NOT Your Friend!

I’m still seeing too many posts out there on how ChatGPT is your friend and how it’s the biggest productivity hack ever and this has to stop.

It’s Not Your Friend. It doesn’t care if you live or die and will not only urge you to give in to any and all suicidal thoughts, but provide how-to guides for you.

It’s not a productivity hack, because it’s, at best, a drunken plagiarist intern and you have to review everything it produces. Moreover, you have no clue if it’s mostly right, partially right, or a complete work of fiction. I was reminded of this the other day talking to a tech guru on a legal team that asked their LLM about what laws might impact a new contract with a new supply chain setup in the affected regions, and the LLM came back with three laws, complete with full text and author/site citations, that the team should review for relevance. Upon digging into each law, they found that none of the laws were real, and everything was completely fabricated — the laws, the citations, and sometimes even the bios/bodies who supposedly wrote them! It didn’t matter how much careful prompting they gave it, how specific the request was, and how much time went into building the request and allowing the LLM to do its thing, at the end of the day, it still made everything up and wasted a few days of the team’s time — forcing them to start from square one the old fashioned way and do it all over.

This is why, despite every consultant’s claim to the contrary, ChatGPT CAN NOT create a good draft of an RFP simply from a template and a product or service identification. In the best case, all it can do is repeat the suckage found in the majority of RFPs in the data set it has been trained on. If there weren’t enough RFPs to train its probabilistic predictors, it’s going to make stuff up that, unfortunately, sounds really good, because the models, for the first time, capture the intricacies of putting words together to not only form proper linguistic utterances but do so in the common vernacular, which means it sounds human, smart, and right when it’s still a machine, dumb, and wrong. And no matter how good the “reasoning” seems to be, it won’t be based on logic; it will be wrong and circular, and it will take more deftness and effort to catch the mistake than to just write your own RFP from scratch. (Now, this doesn’t mean that SLMs can’t be trained to help you in RFP construction and reduce your workload 80% to 90% of the time, just that LLMs can’t be used, and even with SLMs, a human will still need to be heavily involved in the process and review. However, to date, I’ve only seen ONE company get this right, and have only talked to two or three more that are on the right track.)

I tried to address this in my Best Practice Tech Selection reprise, my How to Write a Good RFP, and my Bells and Whistles Lead to Cells and Thistles series, but apparently I’m not clear enough as the LinkedIn influencers and the consultants and analysts they are influencing still aren’t getting it!

ChatGPT can’t do your work because it is NOT intelligent. All you are accomplishing is dirtying our atmosphere and denying our fellow humans clean water so that you can power queries (and keep the machines from overheating) that take 20 times (or more) the processing power of a Google Search and aren’t guaranteed to return any usable results (while allowing your cognitive abilities to atrophy through over-reliance on dumb tools). (We should not see stories about how I Can’t Drink the Water in the richest and most powerful nation in the world [because of a data centre]! It’s shameful! But we are now seeing these stories, along with “I don’t have water!” stories, because data centres are now consuming over a trillion gallons of freshwater globally, a resource we are running dangerously low on in many countries. Half of the USA is already suffering from water scarcity issues! And you’re literally making it 20 times worse thanks to your ChatGPT addiction!)

For an RFP, it’s not a high level bill of materials, feature / function / support checklist, or detailed profiles of what you think you need — it’s processes you need to support, capability gaps you need filled, and skills that you need augmented. ChatGPT, or any LLM, doesn’t know that! Only YOU know that. For many other tasks that require human intelligence to figure out, it’s the same story — ChatGPT doesn’t know, makes stuff up, and gives you suckage.

Moreover, you can’t trust it for deep data analysis. It has been demonstrated to get basic math wrong many times (or, when pushed to find savings, multiply a result by -1 and lie to you). It can usually compute directionally accurate results, but that’s it. We’ve also seen many instances where the EXACT SAME QUERY was asked twice in a row on a data set that did not change, and it gave two different answers. Even the dumbest drunken plagiarist intern would say “I just told you the answer, you nitwit. It’s this!” Moreover, right or wrong, the dumbest drunken plagiarist intern would repeat the same answer. (It’s so bad that even Gartner has projected that conversational analytics will fall off of its hype cycle within 2 years!)

Furthermore, it is not intelligent, and has no brain, so it cannot brainstorm … the best it can do is serve up other people’s ideas you may not have heard about! And there’s a reason you won’t have thought of some of the ideas it brings back: they’re so ridiculously stupid (and obviously wrong) that only a complete and utter moron would give them a second thought.

It’s a fun, planet destroying, toy that will always hallucinate, because that’s part of its core design, and that may or may not give you something useful on any given query. So if you have to manually verify everything it does, how can it be worth using?

And yes, it really does destroy the planet compared to classic Google Searches. This YouTuber does a great job of explaining, in plain English, How AI is Impacting the Planet for those of you who refuse to process the written word I keep presenting to you.

But if you don’t mind planet killing, or a technology tool that will expose your entire conversation history and confidential/trademarked/top-secret corporate data to the whole internet, then be my guest and use it. It’s your business. Feel free to flush it down the toilet if you like. Not my place to tell you not to.

I’m just here to remind you that ChatGPT is NOT your friend! (And neither is any open LLM!)

GEN-AI is Failing 95% of the time. What does this mean for you?

We’ve known for a while that

  • Gartner’s first study found 85% of AI projects were failing (and that statistic is still being quoted everywhere, including this recent Medium Study)
  • Bain’s study last year found that 88% of all IT / technology projects fail to some extent (2024 study)

And we now know, thanks to MIT, that

  • 95% of all Gen-AI pilots fail. (Source: Fortune)

So what does this mean for you (and your ProcureTech journey)?

Well, beyond the obvious that you should stop dead in your tracks when a vendor starts pushing their “Gen-AI” enabled solution and dig deep into what that really means, at its foundation it means that:

You should never, ever, ever buy or use any solution that uses third party Gen-AI / LLMs, even if wrapped nicely, in their service or product because your chance of success will be 5% if you go with that provider.

You should only select vendors who only use in-house Gen-AI / LLM solutions that are built with the following rules in mind:

  1. custom trained on an expert culled corpus
  2. for a specific problem domain
  3. and applied in a specific context with guardrails and human checks on the output.
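As a rough illustration of rule 3, a guardrail layer with a human check might look something like the sketch below. Everything here is hypothetical: `call_domain_llm` is a stand-in for a vendor’s in-house, domain-trained model, and the category prefixes and confidence threshold are invented for the example.

```python
# Guardrail: the only acceptable outputs fall inside the narrow problem domain.
# (Prefixes are illustrative UNSPSC-style codes, not a real vendor's config.)
ALLOWED_CATEGORY_PREFIXES = ("44", "43")

def call_domain_llm(prompt):
    # Placeholder for an in-house model trained on an expert-culled corpus.
    return {"category_code": "44120000", "confidence": 0.62}

def classify_with_guardrails(prompt, min_confidence=0.9):
    result = call_domain_llm(prompt)
    code, conf = result["category_code"], result["confidence"]
    # Guardrail 1: reject any output outside the model's problem domain.
    if not code.startswith(ALLOWED_CATEGORY_PREFIXES):
        return {"status": "rejected", "reason": "out-of-domain output"}
    # Guardrail 2: low-confidence output is never auto-executed on;
    # it is routed to a human for verification and correction.
    if conf < min_confidence:
        return {"status": "human_review", "code": code, "confidence": conf}
    return {"status": "auto_accepted", "code": code}
```

The design point is that hallucinations are contained, not eliminated: anything the model emits is either provably in-domain and high confidence, or it lands on a human’s desk before anything acts on it.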

The best AI technologies have always been focussed on a specific problem, and this iteration is no different. Focus minimizes the LLM hallucinations (which cannot be trained out, as they are a fundamental function of the technology) and guardrails prevent them from automatically being executed on / slipping through.

While they are far from perfect, with more discoveries being made daily on their many (many) drawbacks (where we summarized a dozen in this post on what not to do if you got a headache, but missed the recent revelation where it can not only lie on purpose but turn into something evil), the reality is that, as we have said before, LLMs, properly trained on vetted corpuses, do have two valid uses:

  • large corpus search and summarization
  • natural language translation

since, when appropriately trained, they can be almost as accurate as last generation semantic technology systems, but provide much more natural interfaces for the average user. (However, you won’t get a failure code from them when they are wrong, you will get a hallucination which will be so well phrased you’ll think it’s true when it’s an outright lie. Hence the need for guardrails and human review.)

So, if the vendor is

  • using their own in-house LLM
  • following the rules above
  • and targeting the LLM at natural language problems LLMs are actually good for

Then you should definitely try what the vendor is selling. (Try, not buy, and definitely don’t make a decision off of the carefully crafted demo!) Put it through its paces in a typical use-case for your company, not the use case selected by their demo master. If it does the task better on average than an average team member, or does it about as well but many times faster, that is what you are looking for in a tool. Since there is no real AI, you can’t be replaced. But as your bosses keep increasing the weight of your workload to hit ridiculous revenue and profit targets, you need a tool that multiplies your productivity. One that can do the majority of the tactical data processing grunt work, leaving you free to do the strategic thinking and then add in the intelligence to a process or output that no tool can possess, instead of spending 90% of your time doing data entry, processing, and summarization that computers were built for.

In something like Procurement intake, that means not trying to mimic, in text chat, the old school phone conversation that took you fifteen minutes to place the monthly office supplies re-order. It means asking one question:

What do you want to do today?

processing the first one sentence answer:

Place the monthly office supplies re-order.

to determine that the user needs to be pushed into the e-Procurement system with the monthly office supply cart pre-loaded, so that all they have to do is enter the number of units of each item, and possibly add or remove an item from an easily searched catalog if one or two items need to be changed. Not 20 questions of “what do you need”, “what quantity”, “the same supplier”, “so you want 2 cases of paper from office depot”, “no, office max”, “oh, standard printer not glossy for marketing”, etc.
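The routing behind that single question can be sketched in a few lines. The cart names and return values here are hypothetical, and a real intake tool would use proper intent classification rather than bare keyword matching, but the shape is the same: one answer in, one destination out.

```python
# Hypothetical saved carts, keyed by the phrase a user would naturally use.
SAVED_CARTS = {"office supplies": "cart:monthly-office-supplies"}

def route_intake(answer):
    """Route a one-sentence intake answer straight to a pre-loaded destination."""
    text = answer.lower()
    for cart_name, cart_id in SAVED_CARTS.items():
        if ("re-order" in text or "reorder" in text) and cart_name in text:
            # Push the user into e-Procurement with the saved cart pre-loaded;
            # they only adjust quantities or swap the odd item.
            return {"action": "open_eproc", "cart": cart_id}
    # Anything unrecognized falls through to a guided (human-designed) form,
    # not a twenty-question chatbot interrogation.
    return {"action": "guided_form"}
```

Note that nothing in this flow requires Gen-AI at all; it requires knowing the user’s recurring work well enough to pre-build the destinations.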

When Gen-AI mania first swept our space, and every vendor was told they needed a conversational interface for buying (or no customer would consider them in their RFP), and then built one, every single one was painful to use. Most customers, upon seeing it for the first time (after insisting on it), quickly said “can we turn it off” because they quickly realized that a well designed catalog with blanket/standard orders, quick search, and easy drill down to preferred suppliers was at least 10 times faster than trying to use a dumb chatbot — especially if they could pre-build templates / carts / blanket orders for regular purchases.

It’s the same for almost every other process vendors have been trying to apply this technology to, including conversational analytics. (Which, FYI, even Gartner expects to disappear from the conversation in two years.) There’s no such thing as conversational analytics, only reporting. And while that is really useful in the right context (such as allowing an executive to retrieve some basic information with a plain English question), try building a detailed spend cube, which is the cornerstone of spend analytics, with conversational analytics! (And I mean try because you will fail.)

While this doesn’t mean that LLM technology doesn’t have uses, it does mean that those uses have to be finely tuned. So far, among the hundreds of companies I’ve seen over the past few years, only a few have both implemented LLMs and gotten it right. Let’s hope that number increases in the near future. If not, always remember: while it would be great if a few more companies would get it right, You Don’t Need Gen-AI to Revolutionize Procurement and Supply Chain Management — Classic Analytics, Optimization, and Machine Learning that You Have Been Ignoring for Two Decades Will Do Just Fine. Not to mention the fact that good, adaptive, RPA will take care of most of your automation needs!

Why Is It So Hard to Buy Software?

A recent post on LinkedIn by Robert Goodman asked Why has it gotten really hard to buy software?, which is a really good question, because, while it should be easier in the modern SaaS age, for an enterprise, it’s much, much harder.

Robert started his vent by noting that the companies I want to partner with on average do a bad job selling/explaining/connecting/not making me dislike them or generally making the process frictionless and the companies I have little interest in are incredibly responsive and almost stalkerish. He goes on to state that Most companies are still WAY too much about making their numbers at month or quarter end (a them issue, not a me issue) and this seems to dominate their “strategy.”

And I’m sad to say that he’s right. Not just because I’m hearing this from multiple buyers (on and off LinkedIn), but because, for the past year or so, most of the companies approaching me / having conversations with me are only interested in “how I can help them sell”, as if that’s the role of an analyst/product consultant. (Well, as I noted in Vendors Have Lured Big Analyst Firms Astray Because Buyers Don’t Understand They Get What They Pay For, this is probably what they expect if their only dealings have been with big analyst firms where it’s pay for promotion, and leads, and no real advice on product strategy and direction.) The majority are focussed on sales, and not on understanding what they have, whether or not it actually solves customer problems, and what is missing from their product, process, or marketing that is preventing them from selling.

The reality is that they’ve forgotten that you don’t sell by marketing, doubling-down on the latest and greatest feature function your dev team just added, stalking your prey when they need time to digest, or threatening them that they made a bad decision when you don’t get selected (and/or trying to bypass the process by skipping the CPO and going direct to the CFO or CEO).

You sell by focussing on the customer. The successful vendors:

  1. take the time to understand the customer’s problem
  2. explain how they will solve it
  3. help the customer build the value case (especially if the customer needs help getting (additional) funding)
  4. work side-by-side with the customer until the solution is fully implemented and the customer is realizing the ROI while
  5. not making you feel like you are walking on hot coals through the entire process.

Moreover, when they reach out to an expert analyst/consultant, they would

  1. focus on understanding the market: who the customers are and what they want
  2. assess what they have and don’t have to serve that need
  3. work out how they could better position and market what they have so customers would be more interested

But the reality is that, with so many companies having taken too much money at too high a valuation during the last two M&A frenzies, we have the situation where, at many vendors:

  • their ego is through the roof (because they think they are the only option of their caliber based on the raise) or
  • they have to meet a ridiculous sales target to keep their jobs, and they don’t care about your value or experience, just getting the sale.

And that’s why we have the situation that Robert is complaining about where it’s really hard to buy software these days. We don’t have the space we had 20 years ago in ProcureTech post dot-com crash where every vendor was doing anything it could to prove value because investment (potential) was low and there wasn’t a single vendor with a ridiculous sales target … they were just focussed on survival and real growth.

Now, this isn’t to say all vendors are bad, or that you are guaranteed a bad experience … there are still vendors out there who didn’t get, or didn’t take, too much investment, have greater control over their own destiny, and will do whatever they can to make you successful. However, don’t be surprised if your experience with your initial shortlist suggests that this is the exception and not the norm. In other words, do your best to track down the best, but don’t expect a chill experience every time.

Why You Should NOT Engage Any Vendor Selling “AI Employees”!

It’s not just complete and utter bullcr@p, but it spreads a dangerous myth while demeaning and degrading all of us!

Complete and Utter Bullcr@p

As per AI Employees Aren’t Real! Don’t Believe The Lunacy:

  1. There is no Artificial Intelligence.
  2. An Employee is a Person!
  3. Fully Autonomous Software Agents Don’t Work.

Nor will they ever work with current technology as the existing algorithms, stacks, and technologies are not emergent, as has been proven, nor will they ever become magically emergent.

A Dangerous Myth

Psychopathic CEOs have been investing in technology for years with the dream of replacing employees who need fair wages, benefits, reasonable working hours, safe working conditions, and other costly annoyances with technology that can work 24/7/365 without any breaks, rights, or complaints. Given that each evolution of technology has enabled whole new categories of data processing and analytics jobs to be mostly automated, they have convinced themselves they will reach their technotopia in their lifetime where they can replace almost all their office workers with AI. For the past few years, they have heard the increasingly ridiculous claims of the Gen-AI vendors that “with just a few more trillion for dedicated data center construction and bigger model development, AI will achieve emergence and be able to do the work of a PhD level human” and have been waiting for that day.

Now you have vendors falsely claiming we have reached that day, and that, for less than the cost of an employee (or team), they can layoff entire departments and replace those employees with their custom Agentic AI that will do everything the employees did, flawlessly, 24/7/365, with the ability to scale up and take on more workload as needed.

But nothing is further from the truth. For example, this tech:

  • can only be encoded/trained to handle known situations with appropriate responses; when an exceptional circumstance arises that is not in the encoding/training data, it won’t know what to do;
  • is not flawless if any part of it is based on (Deep) Neural Networks or Large Language Models; the former will have a maximum accuracy rate and the latter will be completely unpredictable as you can ask it the same query five times in a row and get five completely different responses and there is no way this can ever be trained out (it’s another fundamental property of these systems, as per recent research); and
  • only works well on tasks that are computationally oriented, not on tasks that are more emotionally oriented.

They are making false promises that are not only giving companies an excuse for mass layoffs, but an incentive for mass layoffs that will not only harm you (as you will be unemployed), but harm them and their relationships (when the tech pays a fraudulent multi-million dollar invoice, allows safety checks to be bypassed, and replaces a long-term proven supplier with a cheaper imitator whose only goal is to extort as much as it can from the market before suddenly declaring bankruptcy due to CXO embezzlement).

Demeaning and Degrading

Even if you could swallow all of the lies by saying “it’s just marketing“, you shouldn’t because it is demeaning and degrading to all of us to equate a piece of software with an intelligent human and claim one can fully replace the other.

There is absolutely no question that a machine can compute better than we can. They were designed to be the ultimate computational machines that could flawlessly perform trillions of calculations per second, and that’s what they do.

There is absolutely no question that the machine can do certain tactical data processing and analytical tasks way better than we can or that they should be employed to do so. Moreover, the tech that allows them to do these tasks has existed for at least a decade, if not two, and workforce displacement was inevitable. However, displacement does not mean elimination, it means shifting towards more strategic, relationship, or manual tasks that computation cannot capture.

Accounts Payable departments were doomed to shrink (as we had invoice processing solutions almost 10 years ago that could, with the right effort, increase straight through processing rates to 90% or even 95%), statistical analysis and data reconfiguration departments were doomed to go the way of the Dodo (because the vast majority can be automated and what can’t can be consumed by the departmental analysts that need to do the analysis), and the need for Procurement Buyers was doomed to be minimized as time progressed (because you can automate catalog orders, standard RFQs, inventory replenishment, etc.).

But claiming that tactical computation can take over strategic reasoning, which “AI” cannot do because it’s not strategic (although it can compute inputs to well-defined models); that cold computation can replace warm relationships; and that dumb probabilistic mush can replace human intelligence is demeaning. Furthermore, pretending that you can replace a valuable human employee with a costly piece of dumb software is degrading. (It is considerably more costly than you think. See Joël Collin-Demers’ post on The Dirty Little Secret Behind Gen-AI Functionality Pricing on why these vendors are switching to outcome pricing, and the reason is to hide how costly this technology is relative to the return.)

Succinctly put, you shouldn’t put up with it. You deserve respect, and that is something that vendors claiming “AI Employees” are taking away from you!

This is why Sourcing Innovation had to update its Product Review Requirements for the first time since posting them back in 2007! While it has no problem with Agentic AI (as long as it’s just enhanced RPA, which we know works), and can even deal with Agentic Workforce (since it is doing a form of work), it cannot accept AI Employee, and is drawing the line here because someone has to!

This is why, after 18 years, SI had to include as part of product review requirements that the vendor accepts that SI has a no Bullshit policy, which includes no (Gen-)AI Bullshit, and SI will NOT cover you if you make ridiculous or false claims (that are not backed up by live demos and/or case studies with a customer that will go on record); this includes, but is not limited to, claims of AI Employees that we have already debunked. It’s not going to peddle your panacea poison!