Forbes recently published an article, “Responsible AI Procurement: A Practical Guide For Selecting Trustworthy AI Vendors.” It wasn’t bad, but it missed the point.
Today, there’s only one way to responsibly address AI in Procurement.
JUST SAY NO!
1) We don’t really understand proper AI Governance — especially when most vendors rely on third parties that are illegally scraping content, not checking for bias, and tweaking models on the fly without considering the new problems those on-the-fly tweaks will cause.
Plus, it’s not just about ethical codes of conduct; it’s about agreeing on what the ethics are and, most importantly, making sure the models are transparent and unbiased — and we don’t know how to do that today, since all these models are huge black boxes.
2) You can demand all the evidence you want from the vendor to back up its claims, but if you can’t verify that evidence, how can you trust it?
3) These models require huge datasets to train. Even if you know the dataset and the processing method used, how can you be sure every element was properly vetted? Just as one bad apple can spoil the bunch, a single bad element can spoil an entire clustering or optimization model. Just one! It takes only a small amount of bad data to poison a model, regardless of the model type.
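To make the “one bad apple” point concrete, here’s a minimal sketch (all names and numbers are made up for illustration). The “model” is just a one-dimensional cluster centroid — the mean of some spend values — but the same effect hits k-means, regression, and any other model fit by averaging or optimization:

```python
# A minimal sketch: one unvetted record skews a model's learned parameter.
# The "model" here is a 1-D centroid (the mean of a cluster of spend values).
from statistics import mean

# Hypothetical vendor spend figures (in thousands) -- all plausible values.
clean_spend = [10.2, 9.8, 10.5, 10.1, 9.9, 10.0]

# The same dataset with ONE bad element, e.g. a mis-keyed entry
# where cents were recorded as whole units.
dirty_spend = clean_spend + [10_000.0]

clean_centroid = mean(clean_spend)   # ~10.1
dirty_centroid = mean(dirty_spend)   # ~1437.2

print(f"centroid from vetted data: {clean_centroid:.1f}")
print(f"centroid with one bad row: {dirty_centroid:.1f}")
```

One element out of seven drags the centroid two orders of magnitude off target, and every downstream decision keyed to that centroid is now wrong — and in a billion-row training set, nobody will spot the row that did it.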
4) These models can fail, and sometimes fail spectacularly. If you don’t understand the model, you don’t understand where it can fail, and thus what to look for. Also, many minor incidents (which can foretell future catastrophic failures) will go unnoticed if a human isn’t checking everything.
5) These models are not secure … an LLM can leak its training data at any time, without warning. Your vendor can hold every security certification under the sun, and it will all be for naught if they use LLMs.
So, JUST SAY NO!