A recent CIO article claiming that AI Overcomes 10 Common Procurement Challenges drew my ire because it oversimplified the problems and overstated the benefits of AI. Let’s take the challenges one by one.
Procurement Takes Too Long, Slowing Innovation: According to the article, AI-driven platforms can generate RFPs, accelerate sourcing, automate approvals, and reduce cycle times … which is mostly true. Properly applied, AI can accelerate sourcing, reduce cycle times, and automate approvals … but not all approvals. As for RFP generation, that claim is very limited: LLMs can generate an RFP from a simple prompt, but not necessarily a good RFP. The best RFPs are designed by humans (and then automation, which may or may not use AI, can pull in data from supporting documents as needed). As for acceleration, it depends on the project: AI can’t speed up supplier qualification where humans need to inspect the products and verify the requirements.
Moreover, a rush to AI can make things worse, not better. If AI generates an RFP that misses a key requirement (required certifications, performance criteria, production capacity, etc.), it can invalidate the entire RFP process. If no human notices the omission until an award is offered, a request for the certification or capacity is delivered, and a “sorry, we don’t have / can’t do that” comes back, months of effort are wasted.
Legal and Budget Complexities Create Bottlenecks: Budget tracking systems and rules-based automation allow for instantaneous budgetary approvals. Contract negotiation software can automate redlining, compliance checks, and so on, but it cannot handle a complex negotiation for a complex project where each side has many requirements and multiple parties to satisfy. AI speeds up the technical drudgery, but not the human interaction.
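To see why rules-based budgetary approval can be instantaneous while complex cases still escalate, here is a minimal sketch. The thresholds, category names, and `Requisition` fields are all hypothetical, not from any particular system:

```python
# Minimal sketch of rules-based budgetary approval (hypothetical thresholds).
from dataclasses import dataclass

@dataclass
class Requisition:
    amount: float
    category: str
    budget_remaining: float

def auto_approve(req: Requisition) -> str:
    """Return 'approved', 'rejected', or 'escalate' based on simple rules."""
    if req.amount > req.budget_remaining:
        return "rejected"     # no budget left: instant rejection
    if req.category in {"capital", "legal"}:
        return "escalate"     # complex categories still need a human
    if req.amount <= 10_000:
        return "approved"     # small spend within budget: instant approval
    return "escalate"         # larger spend goes to a human approver

print(auto_approve(Requisition(5_000, "office supplies", 20_000)))  # approved
```

The point of the sketch is the last two branches: the rules engine clears the routine volume instantly, and everything it cannot decide with certainty goes to a human rather than being auto-approved.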
Moreover, if you turn negotiations over to software, you have no idea what the end result will be. If you let it negotiate based on market data and the cost data is off, you could be committing to a bad deal. If you let it predict timeframes based on how it expects prices to rise or stay high, and it’s off by two years, it could lock you into a three-year deal when you only need a one-year deal. And so on.
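A toy calculation makes the lock-in risk concrete. All the numbers below are assumed for illustration: a three-year lock at a “discounted” rate that looks smart in year one, until the market moves the other way:

```python
# Toy numbers (assumed) illustrating the lock-in risk described above.
annual_volume = 100_000          # units per year
locked_price = 9.50              # 3-year locked rate, "discounted" vs today's 10.00
market_price_after_year1 = 8.00  # prices actually fell instead of rising

# Year 1: the lock-in looks good.
year1_saving = annual_volume * (10.00 - locked_price)

# Years 2-3: you only needed a one-year deal, but you're committed.
overpay = 2 * annual_volume * (locked_price - market_price_after_year1)

print(f"Year-1 saving: ${year1_saving:,.0f}")     # $50,000
print(f"Years 2-3 overpayment: ${overpay:,.0f}")  # $300,000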
CIOs Need to Upskill Their Teams in AI and Cybersecurity: Just because “AI” can simplify processes with guided intelligence, that doesn’t mean the team is upskilled in the process. The reality is, there is no incentive for users to learn anything if they think the system will guide them in everything they need to do.
Thus, consider what happens if you over-invest in AI, especially the kind that guides users through every task, works quite well on the basic tasks they do daily, and doesn’t screw up the first half dozen or so moderately complex ones. The user will come to believe the system is almost flawless, start to trust it implicitly, stop questioning it as time goes on, conclude there is no need to learn anything else because the system knows it all, and, over time, stop thinking. And then, instead of improving, performance will decline … and that decline might be accompanied by a major financial loss if a bad contract is signed or a major risk is ignored.
Data Inaccuracy Leads to Poor Procurement Decisions: While it’s true that over three quarters of organizations struggle with unreliable data, AI doesn’t magically fix the problem. It can help with cleansing, validation, and procurement trend analysis, but ask any spend analysis vendor who has tried to apply an LLM to unclassified vendors about the classification accuracy (which tends to top out around 70%): good data still requires manual cleansing and classification, even where the system reports good confidence. AI can definitely help, but it doesn’t take the onus off of the human.
In other words, if you believe you can plug in a magic AI black box and that it will fix your data, you are gravely mistaken. Sure, it will tell you that it has cleansed, classified, and validated all of your data, but if it’s only 70% accurate, it has only made matters worse if you trust the data 100% and don’t know which 30% is inaccurate. When you base your decisions on bad data, you are bound to make bad decisions. The question is, how bad? You don’t know. And that’s a big problem!
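One practical way to keep the human in the loop is to gate auto-classification on confidence and queue everything else for review, rather than trusting the black box wholesale. A minimal sketch, where `classify` and `fake_classify` are hypothetical stand-ins for whatever AI/LLM classifier is in use:

```python
# Sketch: confidence-gated classification with a human-review queue.
# classify() is a stand-in for any AI/LLM classifier (hypothetical
# interface returning a (category, confidence) pair).

def route(transactions, classify, threshold=0.9):
    """Auto-accept only high-confidence classifications; queue the rest."""
    accepted, review = [], []
    for tx in transactions:
        category, confidence = classify(tx)
        if confidence >= threshold:
            accepted.append((tx, category))  # still worth spot-auditing
        else:
            review.append((tx, category))    # human validates before use
    return accepted, review

# Demo with a fake classifier standing in for the black box:
def fake_classify(tx):
    return ("office supplies", 0.95) if "paper" in tx else ("unknown", 0.40)

accepted, review = route(["paper order", "misc invoice"], fake_classify)
print(len(accepted), len(review))  # 1 1
```

Note the comment on the accepted branch: since reported confidence can itself be wrong, even the high-confidence slice should be sampled and audited before spend decisions rest on it.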
B2B Software Selection is Increasingly Complex: Despite the claims, AI-powered vendor analysis doesn’t really help that much; see Pierre Mitchell’s crazy conversations with DeepSeek-R1. Note how it not only recommends inappropriate vendors, but also recommends vendors that don’t even exist anymore. AI can help you discover potential vendors, but you still need human reviews and deep pricing intelligence (from expert SaaS optimizers).
Trusting AI to select your software is worse than trusting an analyst firm map! And we know all of the problems those maps contain: they mention the same 10 to 20 vendors year after year, ignoring the dozens of other vendors that might be more appropriate for you. AI cannot understand your needs, cannot truly map needs to requirements, cannot truly map requirements to features, cannot truly assess how relevant a solution is, and definitely can’t assess how well a provider’s culture will match yours.
Come back Thursday for Part II!
