The Best Article Xavier Olivera Has Ever Written!

Writing on what “good” looks like today, and what it enables next, Xavier says:


The next phase of P2P evolution will not be defined by who adds the most AI features fastest. It will be defined by who builds systems that make better decisions easier, safer and more repeatable, without losing the discipline that P2P was designed to enforce in the first place.

Truer words have never been spoken, especially in the Age of AI hype where the A.S.S.H.O.L.E. floods us with AI BS faster than we’ve ever been flooded with tech propaganda before!

Gen-AI LLMs (which now power the AGI craze, because if the first offering flops, you just tweak it, relaunch it with a few new buzzwords, and claim it only needed more time, processing power, and tuning) are not intelligent. They are not even reliable. Hallucination is a core function. Predictions are based on whatever data is available, even when that data is incomplete, incorrect, or reflects actions known to be wrong for the situation in question, which is typically an exception to the rule (or pattern). And many actions these systems can take automatically cannot be reversed: not only is there no rollback mechanism, but once they trigger an external event, the ability to reverse an incorrect action is completely out of your control.

Given this harsh reality, while they can monitor and make suggestions on how to govern, they cannot govern, and they do not count as governance. Governance is the only way to get to better, safer, and repeatable decisions. In reality, these Gen-AI / AGIs count as risk. Any error made with respect to a commitment (a transaction, obligation, contract, or large financial transfer) is an error that increases organizational jeopardy!

Governance is predictability, determinism, explainability, and traceability. That is not a modern LLM-based Gen-AI / AGI system; it is a traditional RPA or modern ARPA system (where every suggested rule or workflow change, and every adaptation to prevent a future exception from recurring, must be approved by a human): a system where all actions are governed by unbreakable rules, all exceptions are approved by a human, and all actions are completely traceable and 100% explainable, with no lies.
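The three properties above can be made concrete. What follows is a minimal sketch, not any vendor's implementation: a hypothetical deterministic rule engine in the RPA/ARPA spirit, where every action is checked against hard rules, anything that fails a rule is routed to a human, and every decision is written to an audit log. All names, rules, and thresholds here are illustrative assumptions.

```python
from dataclasses import dataclass, field
from typing import Callable

@dataclass
class Action:
    kind: str              # e.g. "pay_invoice", "change_bank_details"
    amount: float = 0.0
    details: dict = field(default_factory=dict)

@dataclass
class Decision:
    action: Action
    outcome: str           # "auto_approved" or "human_review"
    failed_rules: list

# Every decision is recorded: traceability is a side effect of the design,
# not a feature bolted on afterwards.
AUDIT_LOG: list[Decision] = []

# Unbreakable rules (hypothetical examples): each returns True if the action
# passes. No probability score can override a failing rule.
RULES: list[tuple[str, Callable[[Action], bool]]] = [
    ("payments_under_10k_only",
     lambda a: a.kind != "pay_invoice" or a.amount < 10_000),
    ("bank_changes_always_reviewed",
     lambda a: a.kind != "change_bank_details"),
]

def govern(action: Action) -> Decision:
    """Apply every rule; any failure sends the action to a human."""
    failed = [name for name, rule in RULES if not rule(action)]
    outcome = "auto_approved" if not failed else "human_review"
    decision = Decision(action, outcome, failed)
    AUDIT_LOG.append(decision)
    return decision
```

Because the rules are plain predicates, the outcome is deterministic and the explanation is exact: the decision record names the rules that failed, so "why did this happen?" always has a complete, truthful answer.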

Remember that when you’re looking for your next Procurement solution, or you’ll end up with one that is worse, more dangerous, and less repeatable than the last-generation solution you have now.

For example, let’s say you implement an agent that monitors the inbound email channel for supplier communications regarding payment instructions and invoices. A communication comes in requesting a change of banking details for a supplier. The IPs and source domain look good, and the new account is at another bank local to the supplier (one they did business with in the past), so the agent approves the change and the update is sent to the AP system.

The next day, an invoice comes in from the supplier for 10 times the number of units on the last PO. It’s from a supplier whose shipment quantities never match the PO and whose discrepancies the buyer always approves, so the invoice is automatically paid. The day after that, another request comes in to change the bank account back to the original. It also passes the AI’s sniff test, so it happens.

No one notices that a multi-million-dollar payment was made to a fake supplier on a fake invoice until the real invoice comes in a few days later and gets rejected because the PO has already been matched. The supplier flags an issue two weeks later, when its AR team finally gets around to processing the exception; the AP team investigates and tells the supplier an invoice was paid; a back-and-forth occurs; and when the supplier finally gets the “proof”, it informs the buyer that this is NOT their bank account. By now, over three weeks have passed, and the funds are unrecoverable, as the thieves transferred the money out of the country and closed the fake account the day the fake invoice was paid.

This is the “governance” you’ll get from an unintelligent agentic solution (masquerading as an AI employee) that does everything on probabilities.
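Every step of that fraud chain fails a simple deterministic check that no plausibility score should be allowed to override. As a sketch only (the function names, thresholds, and windows are all hypothetical assumptions, not any product's controls), three such hard rules:

```python
from datetime import date, timedelta

def bank_change_allowed(verified_by_callback: bool) -> bool:
    # Rule 1: bank-detail changes are NEVER applied on email evidence alone;
    # they require out-of-band verification with a known supplier contact.
    # This blocks step one of the scenario, however good the IPs look.
    return verified_by_callback

def invoice_within_tolerance(invoice_qty: int, po_qty: int,
                             tolerance: float = 0.10) -> bool:
    # Rule 2: invoice quantity may exceed the PO only within a fixed
    # tolerance. "The buyer always approves discrepancies" is a pattern,
    # not a rule; a 10x invoice fails this check no matter the history.
    return invoice_qty <= po_qty * (1 + tolerance)

def reversal_suspicious(changed_on: date, reverted_on: date,
                        window_days: int = 30) -> bool:
    # Rule 3: a bank account changed and then changed back within a short
    # window is a classic fraud signature and must escalate to a human.
    return (reverted_on - changed_on) <= timedelta(days=window_days)
```

Applied to the story: the initial change fails Rule 1 and never reaches AP, the 10x invoice fails Rule 2 and is never paid, and the quick reversal trips Rule 3 and lands on a human's desk. Three boring predicates beat one clever probability engine.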