Monthly Archives: April 2025

It’s Not Just Public Procurement Offices That Should Avoid Tech Fads

A recent article over on State Tech Magazine boldly stated that State Procurement Offices Should Carefully Avoid Tech Fads. The headline, and the author, were right. But it’s not just public procurement offices that should be avoiding tech fads; private sector offices should be avoiding them too. More on this later.

The author noted that Artificial Intelligence is everywhere these days, and that news, advertising [and marketing] may leave you feeling [more than a bit] pressured to join the crowd and be an early adopter. But, as the article points out, and as THE REVELATOR would also be quick to point out, successful IT procurement involves engaging with a comprehensive list of stakeholders, conducting thorough research and careful implementation planning, and, as THE REVELATOR reminds us on a regular basis, understanding what you need in the first place.

As the author notes, emerging technologies often present unforeseen challenges and novel issues that procurement offices must be aware of and prepared for. Failure to do due diligence can lead to embarrassing or costly results. Not only do you have the extremely high failure rates of advanced tech projects, exceeding 85% according to Gartner, but, as the author points out, in the public sector technology breakdowns can have much more consequential impacts. The Air Canada lawsuit is just the first example of what is to come from the inevitable failures of AI that is not ready for prime time.

It’s not about the hype, it’s about the value the solution will provide, which includes, as the author notes, the total cost of ownership and longevity. The solution must fulfill the organizational need, not the hype. Otherwise, the total cost of ownership will be high, as no value is delivered, and the longevity will be very limited.

But this should be just as true in the private sector. After all, a solution that could get you sued if it fails, that doesn’t solve the problem, that is worthless from the minute it is implemented, and that will paralyze you until a replacement is found and implemented is not something you should ever, ever want in private industry either. So don’t fall for the hype, and stay on the course that’s right — real solutions that solve real problems.

To Manage Innovation, Governments Must Fix Procurement … And Take Care Where AI is Concerned!

A recent article on Civil Service World noted two things that attracted my attention:

  1. To manage innovation, governments must fix procurement
  2. Too often, contracts in AI do not give governments powers to investigate algorithms or the data they are trained on. As a result, they risk taking the blame when things go wrong without the means to find out why.

Public Procurement is expensive. Very expensive. It represents 12% of the annual GDP of an average developed economy, which is a huge amount of spend. The overspend in most departments of most jurisdictions is likely as bad as in the private sector, which, depending on the category, means it is likely in the 4% to 6% range at a minimum (based on the results high-performing organizations see when implementing best-in-class processes and technology). That means a minimum of half a percent of GDP is being wasted annually, and given that most public sector projects exceed their initial budgets and timelines, we’d bet the overspend is double that: at least 1% of annual GDP. That’s a lot of waste: roughly 770 Billion across the top 10 economies. Furthermore, that assumes all of the spend is necessary and well planned. (There are likely considerably more savings to be found with better demand planning, more operational efficiency, better project planning, etc. We’re just stating that the savings on committed spend alone is likely 10%.)
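For those who want to check the arithmetic, here is a quick back-of-the-envelope version. All of the inputs are the round figures cited above, and the roughly $77 Trillion of combined GDP for the top 10 economies is my own rough assumption used to reproduce the 770 Billion figure, so treat this as an illustration, not a measurement:

```python
# Back-of-the-envelope estimate of annual public procurement waste,
# using the round figures cited above (assumptions, not measurements).

procurement_share = 0.12    # public procurement ~12% of GDP in a developed economy
overspend_low     = 0.04    # low end of the typical 4%-6% category overspend range

waste_floor  = procurement_share * overspend_low   # 0.0048, i.e. roughly 0.5% of GDP
waste_likely = 2 * waste_floor                      # budget/timeline overruns: roughly 1% of GDP

top10_gdp_billions = 77_000                         # assumed ~$77T combined GDP, top 10 economies
waste_top10 = top10_gdp_billions * round(waste_likely, 2)   # ~1% of ~$77T, i.e. ~$770 Billion

print(f"floor:  {waste_floor:.2%} of GDP")
print(f"likely: {waste_likely:.2%} of GDP")
print(f"top 10: ~${waste_top10:,.0f} Billion per year")
```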

The article notes that despite the strategic importance of Procurement, it’s rarely seen as a priority and is more often treated as a standardized compliance function rather than a tool for strategic investment. In some cases it has become synonymous with absurdity, as an accumulation of rules so complex that even those administering them cannot interpret them creates the perverse incentive of doing the least risky thing to avoid individual liability. As a result, governments end up buying obsolete technologies that, because innovation evolves so rapidly, make them vulnerable and force them to buy more. The cycle repeats, budgets balloon, and public capabilities diminish.

And, unfortunately, public procurement is a brick-and-mortar process, still more suited to bulk-buying precisely describable goods, accounting for them, and moving on to the next purchase. Innovation is different: you do not know today what is going to be possible tomorrow, even when you are the one inventing the tech. While governments work in one-off projects, innovation is made of ever-changing, always-fleeting products.

Furthermore, those in charge of procuring these technologies are not technologists. Public procurement is professionalized in only 38% of OECD countries, so even if officials had the incentive to experiment, they would not have the expertise.

To combat this, the authors of the article propose that Procurement systems should be like good software: fluid, flexible, and constantly evolving. However, as they note, this will take more than changing rules. It will take talent who are experts in what they are buying. It will take the treatment of Procurement as a strategic function, with clear lines for advancement for all personnel (as studies have shown that even a marginal improvement in skill can yield significant reductions in costs, times, and contracting complexity). Thirdly, it will take a federated data environment to make use of modern technology. (Especially if they want to use AI.)

This is just the start of what is necessary. There needs to be regular training. There needs to be specialization for different types of functions and purposes. There needs to be a rewrite of the rules to focus on the right outcomes, not just a plethora of rules designed to prevent previously undesirable outcomes. There need to be clear paths from buyer to public organization CPO to department head, not just paths of advancement within the Procurement function. There needs to be a focus on what’s best for the public being served, not what best minimizes the risk to the buyer. And there needs to be a willingness to accept that there may be a few mistakes made here and there as new buyers learn the ropes, along with a willingness to weed out anyone who “makes a mistake” in order to give a contract to a supplier who is not the best fit (and does so in exchange for a kickback).

But most importantly, if they acquire AI technology, they also need to acquire the right to investigate the algorithms being used, the data they are trained on, and the results of prior training, as well as the right to inspect any changes to the algorithms, data, and training. Otherwise, you can never trust any AI technology you might want to acquire.

Because governments need to apply the most appropriate AI-enhanced technology even more than the private sector does, but are the least likely to be able to use it properly.

Data Governance is Essential to Good Data Management …

… so why is there still so little of it in most organizations?

Good data is becoming ever more essential to business and Procurement success, especially if you want to use any sort of predictive analytics or AI, yet so many organizations have so little data governance, if they have any at all. With good data, you can get great insight into current operations, opportunities, and ordeals. Without good data, you have no clue what you’re buying or selling, what processes are going on at any point in time, or what problems are festering, about to explode and cause major issues.

But good data is a rarity in most organizations, getting rarer by the day due to rapidly increasing data volumes (in excess of 400 million terabytes of data being generated daily across the globe), lack of controls in legacy systems, poor data processes, and lack of good IT talent with enough history to know what the data is, what it’s used for, and how to qualify it as good, or bad.

Why? Because organizations are putting systems in place before understanding what data those systems will need, where it will live, how it will be validated, how it will be maintained, how it will be archived, and how it will eventually be retired.

In most organizations, when they need data for an analytics-based project, the current answer is to get a “data warehouse”, “data lake”, or “data lakehouse”; dump all the organizational data to that warehouse, lake, or lakehouse; possibly run a simple AI-cleansing/enrichment algorithm, and hope for the best. However, this is not governance, and, in fact, exacerbates the problem more than it solves it. Now there are two copies of bad data, no strategy for pushing back any data that is cleansed, and if the data is changed in the source system before any eventual synch with the data warehouse, which data is correct? Chances are neither record is fully accurate, and any synch has to be done at the field level, if you have enough data to validate which field is correct (as you can’t just use time stamps, because if some data was updated by AI and unvalidated, it may not be right).
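As a toy illustration of why any sync back to the source has to be arbitrated at the field level, and why timestamps alone can’t do it, here is a minimal sketch. The record structure, field names, and rules are hypothetical, for illustration only:

```python
from dataclasses import dataclass
from datetime import datetime
from typing import Optional

@dataclass
class FieldValue:
    value: str
    updated_at: datetime
    validated: bool      # verified by a human or a trusted third-party source?
    source: str          # e.g. "erp" or "lakehouse-ai-enrichment"

def reconcile(source_sys: FieldValue, warehouse: FieldValue) -> Optional[FieldValue]:
    """Pick the surviving value for a single field of a single record.

    Newest-wins only applies when both values are validated; an unvalidated
    AI-enriched value never overrides a validated one, no matter how recent.
    Returns None when neither value can be trusted and manual review is needed.
    """
    if source_sys.validated and not warehouse.validated:
        return source_sys
    if warehouse.validated and not source_sys.validated:
        return warehouse
    if source_sys.validated and warehouse.validated:
        return max(source_sys, warehouse, key=lambda f: f.updated_at)
    return None  # both unvalidated: flag the record for manual review

# The AI-enriched lakehouse value is newer, but unvalidated, so the older,
# validated ERP value survives; a pure timestamp rule would get this wrong.
erp  = FieldValue("123 Main St", datetime(2025, 3, 1), validated=True, source="erp")
lake = FieldValue("123 Main Street, Suite 4", datetime(2025, 4, 1), validated=False,
                  source="lakehouse-ai-enrichment")
print(reconcile(erp, lake).value)   # -> 123 Main St
```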

Governance is not just maintaining data in systems as you use it, occasionally validating it against third party databases or by manual review, and occasionally enriching it.

Governance is

  • defining what data the organization needs for its various functions
  • defining what data will be collected
  • defining what systems it will be maintained in, and, if the data is in multiple systems, which system is master
  • defining which data fields are critical and how they will be validated
  • defining when and how critical fields will be revalidated
  • defining the process for any data migration from master systems

And doing it BEFORE

  • collecting the data
  • installing a new system
  • starting an analytics / AI project

NOT AFTER!

But how many organizations do that? Most don’t even do a proper RFP (taken in by the FREE RFP scam), even though the solution to good software (which is critical to maintaining good data) is an Affordable RFP.

Moreover, part of the RFP for any software solution should define the data management strategy as it impacts, and is impacted by, the solution.
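One lightweight way to capture that strategy up front, covering the governance decisions listed above, is as a machine-readable data dictionary that ships with the RFP. The entity, systems, fields, and rules below are purely illustrative assumptions, not a prescription:

```python
# Illustrative (hypothetical) governance definition for one entity, written down
# BEFORE any system is selected: what data is needed, where it is mastered,
# which fields are critical, how they are validated, and how they are retired.
SUPPLIER_DATA_GOVERNANCE = {
    "entity": "supplier",
    "master_system": "erp",                      # the single system of record
    "also_present_in": ["srm", "p2p", "lakehouse"],
    "critical_fields": {
        "legal_name":   {"validate_against": "company_registry", "revalidate": "annually"},
        "tax_id":       {"validate_against": "tax_authority",    "revalidate": "annually"},
        "bank_account": {"validate_against": "penny_test",       "revalidate": "on_change"},
    },
    "migration": {
        "direction": "master_to_consumers_only",  # cleansed data flows back to the master first
        "conflict_rule": "validated_beats_recent",
    },
    "retention": {"archive_after_years": 7, "purge_after_years": 10},
}
```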

Who Needs The Beef?

For those of you who have been following my rants, especially on intake-to-orchestrate (which really is clueless for the popular kids, as it doesn’t do anything unless you already have all the systems you need and just don’t know how to connect them), you’ll know that one of my big qualms, to this day, is Where’s the Beef?, because while the intake and orchestrate buns are nice and fluffy and likely very tasty, they aren’t filling. If you want a full stomach, you need the beef (or at least a decent helping of Tofu, which, unless you are vegetarian, won’t taste as good or be quite as filling, but will give you the sustenance you need).

And you need the filling. Specifically, you need the part of the application that does something — that takes the input data (possibly properly transformed), applies the complex algorithms, and produces the output you need for a transaction or to make a strategic decision. That’s not intake-to-orchestrate, that’s not a fancy UI/UX, that’s not an agent that can perform transactional tasks that fall within scope, and that’s NOT a fancy bun. It’s the beef.

But, apparently, at least as far as THE PROPHET is concerned, (bio) re-engineering is going to eliminate the need for the beef. Apparently, the buns are going to have all the nutrients (or data processing abilities) you need to function and do your job.

In THE PROPHET‘s latest analogy, today’s enterprise technology burger consists of:

  • the patty: (not to be mistaken for the paddy) which combines enterprise technology and labour (which means it really should be the patty [labour] and the trimmings [technology] in this analogy)
  • the upper bun: and
  • the lower bun: which collectively provide you a way to cleanly get a grip on the patty

But tomorrow’s enterprise technology burger will consist of:

  • the upper bun: which will be replaced by a new type of technology that fuses co-pilots and agentic systems to power autonomous agents and replaces the patty [labour] and part of the trimmings
  • the lower bun: which will represent the next generation data store and information supply chain and build in “self-healing” technology for data maintenance and replace the other part of the trimmings

… and that’s it. NO BEEF! Just two co-dependent buns that are destined to fuse into a roll … and not a very tasty one at that. Because this roll will, apparently, operate fully autonomously and never get anywhere near you, leaving you perpetually hungry.

Now, apparently, not all parts of the patty (with its complex amino acid chains and protein structures) will be capable of being (bio) re-engineered into the buns right away and the patty won’t disappear all at once, just shrink bit by bit over the next decade until there’s nothing left and the last protein structure is absorbed (or replaced by a good enough AI-generated facsimile — they can do that now too). In THE PROPHET‘s view, legacy systems of record (ERP/MRP, payment platforms, etc.) will be the last to be replaced, and those will survive along with the legacy labour to maintain them until they can finally be split up into components and absorbed into the bun.

In other words, in THE PROPHET‘s view, you don’t need the patty, and, more specifically, you don’t need (or even want) the beef. I have to argue this is NOT the case.

1. You Need the Beef

Thinking that the patty can be completely absorbed into the buns is what results from a lack of understanding of enterprise software architecture best practices and software development in general.

The best architecture we have, which took years to get to, is MVC, which stands for

  • Model: specifically, data model, which should be at the bottom (and could be absorbed into a data bun)
  • View: specifically, the UI/UX we interact with (and could be absorbed into a soft, warm, sweet smelling sourdough bun)
  • Controller: the core algorithms and data processing, which needs to be its own layer that supports the UX (and allows the UX to reconfigure the processing steps and outputs as needed) and can be cross-adapted to the best available data sources (which need to remain independent); a minimal sketch follows this list
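To make the separation concrete, here is a deliberately tiny, hypothetical sketch of the three layers. Real enterprise systems are orders of magnitude larger, and the class names and data are invented, but the layering is the point: the Controller (the beef) is its own layer, not something smeared into the data store or the UI:

```python
# Minimal, hypothetical MVC sketch: the Controller (the "beef") is its own layer.

class SpendModel:                   # Model: data access only, no business logic
    def __init__(self, rows):
        self._rows = rows           # e.g. rows loaded from the system of record
    def by_category(self, category):
        return [r for r in self._rows if r["category"] == category]

class SpendController:              # Controller: the algorithms / data processing
    def __init__(self, model):
        self._model = model
    def total_spend(self, category):
        return sum(r["amount"] for r in self._model.by_category(category))

class SpendView:                    # View: UI/UX only, no algorithms, no data access
    def render(self, category, total):
        print(f"Total {category} spend: ${total:,.2f}")

rows = [{"category": "logistics", "amount": 120_000.0},
        {"category": "logistics", "amount": 80_000.0},
        {"category": "it",        "amount": 50_000.0}]
model, view = SpendModel(rows), SpendView()
controller = SpendController(model)
view.render("logistics", controller.total_spend("logistics"))
```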

Moreover, even Bill Gates, who predicts AI will have devastating effects across all industries, realizes that you can’t replace coders, energy experts, and biologists and, by extension, any jobs that require constantly evolving code, organic structures, or energy expertise to complete. So you will still need labour that creates, and relies on, highly specialized algorithms and expert interpretations of outputs to do its job. That also means that, in our field, strategic sourcing and procurement professionals cannot be replaced, but tactical AP clerks are on their way out as AP software automatically processes 99% to 99.9% of invoices with no human involvement, even those with missing data and errors, handling the return, correction, negotiation, etc. until all of the data matches and costs are within tolerance.

2. You Want the Beef!

The whole point of modern architectures and engineering is to minimize legacy code / technical debt and maximize tactical data processing and system throughput (and have the system do as much thunking as possible, which is what it’s good at). If you try to push too much into the lower bun, you don’t have separation of data and processing, which means it’s almost impossible to validate the data, as it’s not raw data you’re getting but processed data. The system might then be continually pushing wrong data to the upper bun, even with good data fed in, due to a bug deep in the transformation and normalization code, and your automatic checks and fail-safes would never catch it, because you’ve turned what should be a crystal (clear) box into a black box! If you try to push too much processing into the upper bun, you have to replicate common functionality across every agent and application, leading to a lot of replication and bloat that consumes too much space, uses too much energy, and makes the systems even harder to maintain than the legacy applications of today.
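A toy illustration of that black-box risk, with entirely made-up numbers and a deliberately planted bug, just to show why checks on the output alone are not enough:

```python
# Good data in, wrong data out: an output-only check never notices the bug.

RAW_INVOICES_EUR = [1000.00, 2500.00, 990.00]   # clean, validated source data
EUR_TO_USD = 1.08

def normalize_to_usd(amounts_eur):
    # Deliberately planted bug: the conversion is applied twice.
    return [a * EUR_TO_USD * EUR_TO_USD for a in amounts_eur]

processed = normalize_to_usd(RAW_INVOICES_EUR)

# A "fail safe" that only sees the processed output: every value still looks plausible.
print("output-only range check passed:",
      all(0 < a < 1_000_000 for a in processed))        # True, yet the values are wrong

# A check that still has the raw data and the stated transformation catches it at once.
expected = [a * EUR_TO_USD for a in RAW_INVOICES_EUR]
bad = [(p, e) for p, e in zip(processed, expected) if abs(p - e) > 0.01]
print("raw-vs-processed check found", len(bad), "incorrect values")   # 3
```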

So while the burger of tomorrow might be different, with a much leaner, more protein-rich patty (with less sauce and fewer unhealthy trimmings), and the bread might be a super-healthy, natural, yeast-free multi-grain flatbread, making for a smaller (and, on the surface, possibly less appetizing) burger, it still needs to be a burger, and anyone who thinks otherwise has joined the pretty fly Gen-AI in hallucination land!

Why Aren’t You Realizing the Full Value of Your Sourcing Efforts?

It’s been a well known statistic, going back all the way to 2009 when Mickey North Rizza of AMR Research (acquired by Gartner in 2010 in an acquisition game of the 64,000,000 pyramid) published her classic 3-part series on Reaching Sourcing Excellence with Part 1 titled How to Keep 30 Cents of Every Dollar Spent, that at least 30% (and often 40%) of identified value in a sourcing event is never realized. The reality is that many leading organizations adopted strategic sourcing quickly during its first heyday in the mid-2000s, often before Procurement, because of the huge savings opportunities that were identified with good reverse auction platforms (in markets where supply exceeded demand) and good sourcing optimization (regardless of market conditions), as sourcing optimization consistently identified average savings of 12% (compared to reverse auctions, which saw significant drops every time they were applied to the same category). Yet most of these leaders who identified savings of 10% or more never saw half of the identified savings. This is because savings requires more than just identification and a signature on a contract: it requires execution!

Execution that, at a minimum, requires:

  • making sure you order on the contract
  • … on time to receive delivery on time using the preferred shipping method
  • making sure you receive defect-free goods that meet the spec before paying for them
  • making sure the amount you are billed is the amount as per the contract
  • … and that you are not billed for expediting fees or surcharges you DID NOT agree to
  • making sure you pay on time (to avoid penalties)
  • … and only ever pay for any good or service once (using an m-way match; see the sketch after this list)
  • making sure you terminate or renegotiate before an evergreen renewal
  • … and that you have verified the supplier has all the certifications and insurances in place before placing an order or renewing the contract
  • … etc.
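As a minimal sketch of what one of those checks looks like in practice, here is the match step, with hypothetical record shapes, field names, and tolerances (a real m-way match would also pull in the contract, the service entry sheet, and so on):

```python
from dataclasses import dataclass

@dataclass
class Line:
    sku: str
    quantity: int
    unit_price: float

def three_way_match(po: Line, receipt: Line, invoice: Line,
                    price_tolerance: float = 0.01) -> list[str]:
    """Return the reasons an invoice line should NOT be paid (an empty list means pay it)."""
    issues = []
    if invoice.sku != po.sku or receipt.sku != po.sku:
        issues.append("SKU mismatch")
    if invoice.quantity > receipt.quantity:
        issues.append("billed for more than was received")
    if invoice.unit_price > po.unit_price * (1 + price_tolerance):
        issues.append("unit price above the contracted / PO price")
    return issues

po      = Line("WID-42", 100, 9.50)
receipt = Line("WID-42",  90, 9.50)
invoice = Line("WID-42", 100, 9.95)
print(three_way_match(po, receipt, invoice))
# -> ['billed for more than was received', 'unit price above the contracted / PO price']
```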

It comes back to the concept of the perfect order which must be

  • on time,
  • complete,
  • damage free,
  • correctly documented,
  • correctly billed, and
  • adherent to all contract terms

This is not easy to do unless you

  • have a good procurement system
  • have a (carrier that has a) good WIMS (Warehousing and Inventory Management) system

and, the part that most people miss,

  • have a good contract lifecycle management system that manages the contract execution post signing

And when you look at the majority of contract management systems, they tend to fall into three categories:

  • a glorified e-filing cabinet / document repository where you can store your contracts and search their metadata (and literally no better than what a high school student with Microsoft Access and minimal coding skills could build 20 years ago)
  • a contract creation system that will allow you to quickly draft contracts using:
    • contract templates, from your, or their, legal department, tagged by region and category they can be used for,
    • clause libraries and templates, possibly with multiple version support based on territories and categories, or, today
    • Gen-AI drafting of templates through specification of category, region, requirements, and risks that must be covered as well as e-versions of all previously signed contracts in the category, region, business requirement, or risk categories (which then need to be mildly to moderately edited by a Legal expert)
  • a signatory platform with negotiation support (version control, dynamic redlining, audit trails, etc.)

Which is all fine and dandy, and, well implemented, can make your Legal and Sourcing teams considerably more productive during the negotiation process, but does diddly squat when it comes time to actually help you manage the contract execution. Now, you might think that you can do that in the Supplier Management system, because you’re ultimately managing a supplier, or the Risk Management system, because you’re ultimately managing a risk, and you can, to a point, and specifically the point at which those systems allow you to define contract tasks, but none of these are set up to let you holistically manage a contract — contract 360 if you will. This is especially the case if you have a master contract with a number of sub-contracts, and those sub-contracts have sub-contracts as well. This will happen if you are buying off of a contract tied to a GPO master contract or a holding company master contract (if your company is part of a group of companies), or in the construction / engineering / shipbuilding industries, where your main supplier will need to subcontract to a number of smaller suppliers for custom parts or services and your organization needs to manage that for regulatory or risk reasons.

In other words, the only contract lifecycle management solution that is truly valuable to Procurement is the solution that allows the contract to be managed from post signature to termination, helping the organization ensure all of the obligations are met and rights are received.
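To make that kind of post-signature obligation tracking across a contract hierarchy (contract 360) a little more concrete, here is one hypothetical way it could be modelled, with every name and field invented purely for illustration:

```python
from dataclasses import dataclass, field
from datetime import date

@dataclass
class Obligation:
    description: str
    due: date
    met: bool = False

@dataclass
class Contract:
    name: str
    obligations: list[Obligation] = field(default_factory=list)
    sub_contracts: list["Contract"] = field(default_factory=list)

    def open_obligations(self) -> list[tuple[str, Obligation]]:
        """Walk the whole tree (master, subs, subs of subs) and return unmet obligations."""
        found = [(self.name, o) for o in self.obligations if not o.met]
        for sub in self.sub_contracts:
            found.extend(sub.open_obligations())
        return found

master = Contract("GPO Master", [Obligation("annual price benchmark", date(2025, 12, 1))])
sub = Contract("Regional Supply Agreement",
               [Obligation("insurance certificate on file", date(2025, 6, 30), met=True)])
sub.sub_contracts.append(Contract("Custom Parts Subcontract",
                                  [Obligation("quality audit of fabricator", date(2025, 9, 15))]))
master.sub_contracts.append(sub)

for owner, ob in master.open_obligations():
    print(f"{owner}: {ob.description} (due {ob.due})")
```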