Despite the fact that machines aren’t intelligent, can’t think, and know nothing more about themselves and their surroundings than we program them to know, cognitive is the new buzzword, and it seems cognitive is inching its way into every aspect of Procurement. It’s become so common that over on SpendMatters UK, the public defender has stated that “this house believes that robots will run (and rule) procurement by 2020”. Not movie robots, but automated systems that, like hedge fund trading algorithms, will automate the acquisition and buying of products and services for the organization.
And while machine learning and automated reasoning are getting better by the day, they are still a long way from anything resembling true intelligence, and just because a system’s trend-prediction algorithms are right 95% of the time, that doesn’t mean they are right 100% of the time, or that they are smarter than Procurement pros. Maybe those pros are only right 80% of the time, but the real question is: how much does it cost when those pros are wrong versus how much does it cost when a robot is wrong and makes an allocation to a supplier about to go bankrupt, forcing the organization to quickly find a new source of supply at a 30% premium when supply goes from abundant to tight?
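To see why raw accuracy isn’t the whole story, consider a back-of-the-envelope expected-cost comparison. The 95% and 80% accuracy figures come from the paragraph above; the category spend and per-error cost figures below are purely illustrative assumptions, sketched in Python:

```python
# A minimal sketch of the expected-cost argument above. The accuracy figures
# (95% for the robot, 80% for the pro) come from the paragraph; the spend and
# error-cost figures are illustrative assumptions, not real data.

ANNUAL_SPEND = 10_000_000  # assumed category spend

def expected_error_cost(accuracy: float, cost_per_error: float) -> float:
    """Expected cost of being wrong = probability of error x cost of an error."""
    return (1.0 - accuracy) * cost_per_error

# A pro's mistakes tend to be small and caught early: assume a 2% cost overrun.
pro_cost = expected_error_cost(accuracy=0.80, cost_per_error=0.02 * ANNUAL_SPEND)

# A robot's rare mistakes can be catastrophic: an award to a supplier about to
# go bankrupt, forcing a re-source at a 30% premium (per the paragraph).
robot_cost = expected_error_cost(accuracy=0.95, cost_per_error=0.30 * ANNUAL_SPEND)

print(f"Pro   (80% right, small errors): expected cost ${pro_cost:,.0f}")
print(f"Robot (95% right, huge errors):  expected cost ${robot_cost:,.0f}")
# Pro: 0.20 * 200,000 = $40,000; Robot: 0.05 * 3,000,000 = $150,000.
# Higher accuracy does not mean lower risk when the failure mode is severe.
```

Under these (assumed) numbers, the “smarter” robot is almost four times as expensive to be wrong with, precisely because its rare failures are the devastating kind.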
The reality is that a machine only knows what it knows; it doesn’t know what it doesn’t know, and that’s the first problem. The second problem is that when these systems work great, and do so the first dozen or so times, we don’t want to think about the day they won’t. We want the results, especially when they come with little or no effort on our part. It’s too easy to forget that, as great as these systems can be, they can also be bad. Very bad. Even badder than Mr. Thorogood, who claims to be bad to the bone.
We forget because it’s deeply uncomfortable to simultaneously think about how much these systems can save us when they identify trends we miss while also realizing that when they screw up, they screw up so badly it’s devastating. So, rather than suffer this cognitive dissonance, we simply forget about the bad if it hasn’t reared its ugly head in a while and dwell on the good. And if we’ve never experienced the real bad, it’s all too easy to proclaim these systems’ virtues to those who don’t understand how bad things can be when they fail. And this is problematic. Because one of these days, those who don’t understand will select these systems not to augment our abilities (as we would, using them only for decision support), but to replace part of us, and that will be bad. Very bad indeed.
So don’t let your cognitive dissonance get in the way. Always proclaim the value of these systems as decision support and tactical execution guidance, but never proclaim their ability to always get it right. They give us what we need to make the right decision (and when they don’t, we’re smart enough to realize it, feed them more data, or just go the other way). They should never make the decision for us.
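As a concrete illustration of that decision-support pattern, here is a minimal sketch in Python of a system that recommends but never executes on its own. Every name in it (Recommendation, approve_award, the sample supplier) is hypothetical, not any real product’s API:

```python
# A minimal sketch of the decision-support pattern described above: the system
# recommends, a human decides. All names here are hypothetical illustrations.

from dataclasses import dataclass

@dataclass
class Recommendation:
    supplier: str
    confidence: float   # the system's own estimate of how sure it is
    rationale: str      # the trend/data the system based the call on

def approve_award(rec: Recommendation) -> bool:
    """The human stays in the loop: the system never executes on its own."""
    print(f"System recommends {rec.supplier} "
          f"({rec.confidence:.0%} confident): {rec.rationale}")
    answer = input("Award to this supplier? [y/N] ")
    return answer.strip().lower() == "y"

rec = Recommendation(
    supplier="Acme Components",
    confidence=0.95,
    rationale="lowest landed cost and stable lead times over 24 months",
)

if approve_award(rec):
    print("Award confirmed by the buyer.")  # execution only after human sign-off
else:
    print("Buyer overrode the system; feed it more data or go the other way.")
```

The key design choice is that the final decision belongs to the buyer, not the algorithm: the system surfaces its reasoning, and the human can accept it, challenge it, or override it entirely.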