Continuing on with our statement that sometimes you have to listen to a lawyer, a recent article over on Bloomberg Law noted that Companies Should Ask These Risk Questions When Procuring AI Tools and gave us four particularly good questions:
Do I Understand the Data?
The article gets it right when it says that AI tools are only as robust as the data they’re trained on, and that you need to know what data is collected, how it is collected, and whether all rights are respected in doing so. What it misses is that the data also determines which models and techniques can be used, and which models won’t be effective or reliable. A vendor sales rep will tell you that whatever technique the vendor is using is just right for your problem, but the reality is that the sales rep likely doesn’t have anywhere close to the mathematical knowledge to know whether it’s appropriate, especially since that salesperson may have barely passed remedial junior math (as not all US states require remedial senior math to graduate from high school).

Furthermore, there’s no guarantee that even the tech teams know whether the model is appropriate. If the company just hired a bunch of developers with maybe a year of university math, gave them access to a bunch of libraries, and all they did was test various machine learning models until one appeared to work to a sufficient degree of accuracy on the test suites they compiled, that doesn’t mean they understand the model, why it worked, or the characteristics of the data set that allowed the model to work. It just means they can say that for data sets that “look like this,” it should work. (And what does “look like this” mean, exactly?) You need to understand the data, and find someone who understands which models are appropriate for it.
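To make that concrete, here is a minimal sketch of the failure mode. Everything in it is invented for illustration: a quadratic “true process” stands in for the real-world relationship, and a plain least-squares line stands in for the model the team settled on because it scored well on the slice of data they happened to test against.

```python
import numpy as np

# Hypothetical illustration: a model that "appears to work" on the data it
# was tuned against can fail badly once the data drifts outside that range.
# The true relationship here is quadratic, but over the narrow training
# range a straight line looks good enough to pass a naive accuracy check.

def true_process(x):
    return x ** 2  # the real (unknown-to-the-team) relationship

x_train = np.linspace(0.0, 1.0, 50)
y_train = true_process(x_train)

# "Try models until one works": a linear fit scores well on this slice...
slope, intercept = np.polyfit(x_train, y_train, 1)
train_rmse = np.sqrt(np.mean((slope * x_train + intercept - y_train) ** 2))

# ...but the same model is badly wrong on data with different characteristics.
x_new = np.linspace(2.0, 3.0, 50)
new_rmse = np.sqrt(np.mean((slope * x_new + intercept - true_process(x_new)) ** 2))

print(f"RMSE on training range: {train_rmse:.3f}")  # small
print(f"RMSE on shifted data:   {new_rmse:.3f}")    # much larger
```

The test suite never lied; it was just compiled from data that all “looked like” the training range, so nobody learned what characteristics of the data the model actually depended on.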
Have I Considered Regulatory Scrutiny?
Not only are the Department of Justice, the Federal Trade Commission, and other regulators focused on whether technology companies and their tools create anti-competitive environments or put consumers at a disadvantage, but many jurisdictions are considering or implementing laws against black-box technology whose output determines whether a person can get a loan, be insured, or even apply for a job or government program, and where the logic behind the decisions and the rules that were applied cannot be explained. You could also be in trouble if the process is fully automated and there isn’t a human in the loop to validate the decision, if the system uses (third-party) data it has no right to use, or if output generated from protected input data can be reverse engineered and is not itself sufficiently protected.
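As a rough illustration of the kind of traceability those laws are pushing toward, here is a hypothetical decision function (the rules, names, and thresholds are all invented) that returns the rules it applied alongside the outcome, instead of a bare black-box yes/no:

```python
# Hypothetical sketch: a decision function that can explain itself.
# Every rule and threshold below is invented for illustration only.

def loan_decision(income, debt, credit_score):
    """Return (approved, reasons) so the logic behind the decision
    and the rules that were applied can be shown to a regulator."""
    reasons = []
    approved = True
    if credit_score < 620:
        approved = False
        reasons.append("credit score below 620 minimum")
    if debt > 0.4 * income:
        approved = False
        reasons.append("debt exceeds 40% of income")
    if approved:
        reasons.append("all criteria met")
    return approved, reasons

approved, reasons = loan_decision(income=50_000, debt=30_000, credit_score=700)
print(approved, reasons)  # rejected, with the specific rule that fired
```

A deep neural network producing the same yes/no cannot, in general, produce that list of reasons, which is exactly what the black-box laws take aim at.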
Have I Mitigated Security Risks?
It’s not just traditional cyber attacks on the system; it’s well-calculated queries that slightly perturb the system over time, until the 10th, 100th, or 1,000th slight, imperceptible perturbation results in an output the system never should have given in the first place, such as approving a ten-million-dollar loan to a high-risk foreigner who will take the money and run, or denying insurance to everyone with a genetic defect likely to result in a specific condition down the road that can only be treated by a single drug owned by a single pharmaceutical company that will drive people into bankruptcy for a pill that costs $5 to make.
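That style of attack can be sketched in toy form. Everything here is invented for illustration: a fixed logistic “loan approval” score with made-up weights, and an attacker who nudges the inputs by amounts small enough to look like normal variation, repeating until the decision flips.

```python
import numpy as np

# Hypothetical toy: a fixed "loan approval" score, and a series of tiny,
# individually imperceptible input perturbations that accumulate until
# the decision flips. Weights and threshold are invented for illustration.

weights = np.array([0.8, -1.2, 0.5])   # imagined model parameters
threshold = 0.5

def approve(x):
    score = 1.0 / (1.0 + np.exp(-(weights @ x)))  # logistic score
    return score >= threshold

applicant = np.array([0.2, 1.5, 0.1])  # starts out firmly rejected

# Each query nudges the input in the direction that raises the score,
# by an amount small enough to look like ordinary variation.
step = 0.01 * np.sign(weights)
x = applicant.copy()
queries = 0
while not approve(x) and queries < 1000:
    x = x + step
    queries += 1

print(f"Decision flipped after {queries} tiny perturbations")
```

No single query looks suspicious; only the accumulated drift does, which is why monitoring individual requests is not enough to mitigate this class of risk.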
Did I Include Best Practices in the Contract?
More specifically, did you include the best practices you want followed in the contract? Don’t leave best practices up to the vendor to define however it wants. Make sure you cover all necessary security measures, compliance with all government and regulatory guidelines on AI in the regions where you intend to use it (and, where there are none, open standards or guidelines from the UN, the Responsible AI Institute, or something similar), and so on.
And these are great questions, but the first question you should always ask is:
Do I Really Need AI?
And only when you choose the wrong answer, and say yes, do you need to ask the questions above. The reality is that you don’t ever need AI. AI means that you, or the vendor, were just unwilling to take the time to understand the problem and design an appropriate solution. Remember that when you try to jump on the AI bandwagon heading off the cliff (for the sixth decade in a row).