Monthly Archives: August 2017

There are 4 Modes of Innovation, But Only Two Types!

A recent article over on HBR.org on the 4 types of innovation and the problems they solve didn’t really discuss the types of innovation, but rather the modes. The author, who broke innovation down into the age-old 2×2 matrix, with domain definition on one axis and problem definition on the other, indicated that there was basic research — typically carried out by or with academia; breakthrough innovation — typically accomplished by skunk-works projects; sustaining innovation — typically done by R&D labs; and disruptive innovation — which often comes out of VC-funded innovation labs.

As you can see, these are not really “types” but modes of innovation, each of which can lead to innovations that might be classified as basic, sustaining, breakthrough, or even disruptive (so the names are quite confusing), and this leaves the question: what are the real types of innovation, and how does innovation happen? (An academic might come up with a disruptive way to create a new communications technology, while the best-funded VC lab might, after years of research, come up with nothing more than a way to make a fabrication process more efficient, saving 20% of the time and 10% of the cost, without discovering a single revolution.)

So how is innovation accomplished? These days, it’s fundamentally accomplished in one of two ways: either using the tried-and-true method of good old-fashioned human ingenuity or the new method of deep learning, which can discover patterns, formulas, or correlations that humans can miss. But is this the kind of innovation we need? Or even want?

As per our last article, where we asked if the end of the wild digital west was in sight, while these deep learning systems can, with enough data, make predictions that are much more accurate than those of the best human experts, the fact that they cannot explain their reasoning is very disturbing. Very disturbing indeed. Do we really want to trust them with a new drug formula that, while having the potential to save thousands, also has the potential to kill hundreds, with no knowledge of which individuals are at risk of instant death? the doctor hopes not!

While it’s okay to use these systems to identify the most likely directions of success, it’s not okay to use them to blindly choose those directions without independent verification and confirmation backed by rational, deterministic explanations. In other words, while we should use every tool at our disposal, we should never replace human intelligence and ingenuity with dumb systems. Because, while there are two types of innovation in use these days, there’s only one real type of innovation: human innovation. the doctor hopes that we never forget it and that we return to the glory days when all innovation was human innovation.

One Hundred and Ten Years Ago Today …

The first taxicabs began operation in New York City, imported by Harry N. Allen, a thirty-year-old businessman who, as per this great NY Times article on The Creation of the Taxi Man, became incensed when a hansom cab driver charged him $5 for a three-quarter-mile trip from a Manhattan restaurant to his home.

The vehicles were imported from France because Allen wanted reliable, improved automobiles, superior to the American versions derided as “smoke-wagons”, and he bought them using part of the eight million dollars in capital he raised to start the business; the first taxicab went into operation on August 13, 1907. (Source: 6sqft) Less than two months later, on October 1, 1907, Allen orchestrated a parade of sixty-five shiny new red gasoline-powered French Darracq cabs, equipped with fare meters, down Fifth Avenue, which could be interpreted as the grand opening of the taxicab revolution in New York and the United States in general.

It was an important milestone in the evolution of the supply chain, as it allowed the people who run it to get around more quickly and more predictably.

Is the End of the Wild Digital West in Sight? I Hope So!

The MIT Technology Review recently published a great article on The Dark Secret at the Heart of AI, which notes that decisions made by an AI based on deep learning cannot be explained by that AI and, more importantly, that even the engineers who build these apps cannot fully explain their behaviour.

The reality is that AI based on deep learning uses artificial neural networks with hidden layers, and these neural networks are collections of nodes that identify patterns using probabilistic equations whose weights change over time as similar patterns are recognized over and over again. Moreover, these systems are usually trained on very large data sets (much larger than a human can comprehend) and then programmed with the ability to train themselves as data is fed into them over time, leading to systems that have evolved with little or no human intervention and that have, effectively, programmed themselves.
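To make “hidden layers with shifting weights” concrete, here is a minimal sketch, in Python with numpy, of a tiny network that adjusts its weights as examples are fed in. The network size, data set, and learning rate are all illustrative assumptions; a real deep-learning system is this toy scaled up by many orders of magnitude:

    import numpy as np

    # A tiny feed-forward network with one hidden layer, trained on XOR.
    # Everything here (sizes, data, learning rate) is an illustrative toy.
    rng = np.random.default_rng(0)

    X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)  # inputs
    y = np.array([[0], [1], [1], [0]], dtype=float)              # targets

    W1 = rng.normal(size=(2, 4))   # input-to-hidden weights
    W2 = rng.normal(size=(4, 1))   # hidden-to-output weights

    def sigmoid(z):
        return 1.0 / (1.0 + np.exp(-z))

    for step in range(10_000):
        # Forward pass: each node is a weighted sum squashed by a sigmoid.
        h = sigmoid(X @ W1)        # hidden-layer activations
        out = sigmoid(h @ W2)      # the network's predictions

        # Backward pass: nudge every weight to reduce the prediction error.
        # These drifting weights are the only "knowledge" the network has.
        d_out = (out - y) * out * (1 - out)
        d_h = (d_out @ W2.T) * h * (1 - h)
        W2 -= 0.5 * h.T @ d_out
        W1 -= 0.5 * X.T @ d_h

    print(out.round(3))  # close to [0, 1, 1, 0]

Even in this toy, the learned “knowledge” is just two grids of numbers; nothing in W1 or W2 says why (1, 0) maps to 1, and that is the explainability problem in miniature.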

And what these systems are doing is scary. As per the article, last year a new self-driving car was released onto New Jersey roads (presumably because the developers felt it couldn’t drive any worse than the locals) that didn’t follow a single instruction provided by an engineer or programmer. Specifically, the self-driving car ran entirely on an algorithm that had taught itself to drive by watching a human do it. Ack! The whole point of AI is to develop something flawless that will prevent accidents, not to create a system that mimics us error-prone humans! And, as the MIT article asks, what if someday it [the algorithm] did something unexpected — crashed into a tree? There’s nothing to stop the algorithm from doing so, and no warning will be coming our way. If it happens, it will just happen.
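That “learning by watching” is, mechanically, just supervised learning on logged human behaviour, a technique often called behaviour cloning. A deliberately tiny sketch, with fabricated “sensor” data and a plain least-squares fit standing in for the real deep network:

    import numpy as np

    # Behaviour cloning in miniature: fit a model to (observation, action)
    # pairs logged while a human drives. All data here is fabricated.
    rng = np.random.default_rng(1)

    obs = rng.normal(size=(500, 3))    # fake per-frame "camera" features
    human_steer = (obs @ np.array([0.8, -0.3, 0.1])
                   + 0.05 * rng.normal(size=500))  # human's steering angles

    # The "policy" is whatever weights best mimic the human's actions.
    w, *_ = np.linalg.lstsq(obs, human_steer, rcond=None)

    new_frame = rng.normal(size=3)
    print("car steers:", new_frame @ w)

Nothing in w encodes “avoid the tree”; the policy only reproduces what it has seen, which is exactly the worry.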

And the scarier thing is that these algorithms aren’t just being used to set insurance rates, but to determine who gets insurance, who gets a loan, and who gets, or doesn’t get, parole. Wait, what? Yes, they are even used to project recidivism rates and influence parole decisions based on data that may or may not be complete or correct. And they are likely being used to determine whether you even get an interview, let alone a job, in this new economy.

And that’s scary, because a company might reject you for something you deserved just because the computer said so, and you deserve a better explanation than that. And, fortunately for us, the European Union thinks so too. So much so that companies operating there may soon be required to provide an adequate, and accurate, explanation for decisions that automated systems reach. The EU is considering making it a legal right for individuals to know exactly why they were accepted, or declined, for anything based on the decision of an AI system.

This will, of course, pose a problem for those companies that want to continue using deep-learning based AI systems, but the doctor thinks that is a good thing. If the system is right, we really need to understand why it is right. We can continue to use these systems to detect patterns or possibilities that we would otherwise miss, many of which will likely be correct, but we can’t make decisions based on them until we identify the [likely] reasons behind them. We have to either develop tests that will allow us to make a decision, or use other learning systems to find the correlations that will allow us to arrive at the same decision in a deterministic, and identifiable, fashion. And if we can’t, we can’t deny people their rights on an AI’s whim, as we all know that AIs just give us probabilities, not actualities. We cannot forget the wisdom of the great Benjamin Franklin, who said that it is better 100 guilty persons should escape than that one innocent person should suffer, and if we accept the un-interrogable word of an AI, that person will suffer. In fact, many such persons will suffer — and all without a reason why.
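One common way to get there, offered here as a general interpretability technique rather than anything the article or the EU prescribes, is to train a small, fully inspectable model to reproduce the black box’s decisions and only trust it where the two agree. A rough sketch in Python with scikit-learn, on entirely synthetic data with made-up feature names:

    from sklearn.datasets import make_classification
    from sklearn.ensemble import RandomForestClassifier
    from sklearn.tree import DecisionTreeClassifier, export_text

    # Synthetic "applications"; the feature names are invented labels.
    X, y = make_classification(n_samples=2000, n_features=4, random_state=0)
    features = ["income", "tenure", "utilization", "inquiries"]

    # Stand-in for the opaque system (any black-box model would do here).
    black_box = RandomForestClassifier(n_estimators=200, random_state=0)
    black_box.fit(X, y)

    # Train a shallow, human-readable surrogate on the black box's own
    # decisions, not on the ground truth: the goal is to explain the
    # system, not the world.
    surrogate = DecisionTreeClassifier(max_depth=3, random_state=0)
    surrogate.fit(X, black_box.predict(X))

    # Fidelity: how often the readable rules agree with the black box.
    fidelity = (surrogate.predict(X) == black_box.predict(X)).mean()
    print(f"surrogate matches the black box on {fidelity:.1%} of cases")
    print(export_text(surrogate, feature_names=features))

Where the surrogate and the black box disagree, a deterministic explanation simply does not exist yet and, per the argument above, neither should the decision.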

So, in terms of AI, the doctor truly hopes that the EU stands up and brings us out of the wild digital west and into the modern age. Deep learning is great, but only as a way to help us find our way off the dark paths it can take us down and onto the lighted paths we need.

Will Cognitive Dissonance Lead to the Inadvertent Rise of Cognitive Procurement?

Despite the fact that machines aren’t intelligent, can’t think, and know nothing more about themselves and their surroundings than we program them to know, cognitive is the new buzzword, and it seems cognitive is inching its way into every aspect of Procurement. It’s become so common that over on Spend Matters UK, the public defender has stated that this house believes that robots will run (and rule) procurement by 2020. Not movie robots, but automated systems that, like hedge-fund trading algorithms, will automate the acquisition and buying of products and services for the organization.

And while machine learning and automated reasoning are getting better by the day, they are still a long way from anything resembling true intelligence, and just because a system’s trend-prediction algorithms are right 95% of the time, that doesn’t mean they are right 100% of the time or that they are smarter than Procurement pros. Maybe those pros are only right 80% of the time, but the real question is: how much does it cost when those pros are wrong versus how much does it cost when a robot is wrong and makes an allocation to a supplier about to go bankrupt, forcing the organization to quickly find a new source of supply at a 30% increase when supply goes from abundant to tight?
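To see why the accuracy headline alone is misleading, compare expected losses under hypothetical numbers; the 95% and 80% figures come from the paragraph above, and every cost below is an assumption for illustration only:

    # All figures are illustrative assumptions, not benchmarks.
    category_spend = 10_000_000   # annual spend on the category, in $

    # Robot: right 95% of the time, but its failure mode is catastrophic,
    # e.g. awarding to a failing supplier and forcing a 30% re-buy premium.
    robot_error_rate, robot_error_cost = 0.05, 0.30 * category_spend

    # Pro: right only 80% of the time, but errors are modest overpayments,
    # assumed here to cost 3% of the category.
    pro_error_rate, pro_error_cost = 0.20, 0.03 * category_spend

    print("robot expected loss:", robot_error_rate * robot_error_cost)  # 150,000
    print("pro expected loss:  ", pro_error_rate * pro_error_cost)      #  60,000

Under these entirely made-up numbers, the “dumber” human is the cheaper bet, because what matters is the error rate times the cost of an error, not the accuracy headline.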

The reality is that a machine only knows what it knows; it doesn’t know what it doesn’t know, and that’s the first problem. The second problem is that when the systems work great, and do so the first dozen or so times, we don’t want to think about the fact that someday they won’t. We want the results, especially when they come with little or no effort on our part. It’s too easy to forget that, as great as these systems can be, they can also be bad. Very bad. Much worse than Mr. Thorogood, who claims to be bad to the bone.

We forget because it’s very discomforting to simultaneously think about how much these systems can save us when they identify trends we miss while also realizing that when they screw up, they screw up so badly that it’s devastating. So, rather than suffer this cognitive dissonance, we just forget about the bad if it hasn’t reared its ugly head in a while and dwell on the good. And if we’ve never experienced the real bad, it’s all too easy to proclaim their virtues to those who don’t understand how bad things can be when these systems fail. And this is problematic, because one of these days those who don’t understand will select these systems not to augment our ability (as we would if we only used them for decision support), but to replace part of us, and that will be bad. Very bad indeed.

So don’t let your cognitive dissonance get in the way. Always proclaim the value of these systems as decision support and tactical execution guidance, but never proclaim their ability to always get it right. They give us what we need to make the right decision (and when they don’t, we’re smart enough to realize it, feed them more data, or just go the other way); they should never make it for us.

Whichever Moron Decided that “Touched Spend” Was a Good Metric Should be “Touched” Out of a Job!

Last week over on Spend Matters, the maverick pointed out that there is a new benchmarking metric being collected by CAPS Research called “touched spend”, which is supposedly defined as a new metric to encapsulate sourceable spend and managed spend … and then some. Specifically:

  • Sourceable Spend
    All company-wide external purchases that could be sourced by supply management (whether they currently are or are not). Does not include such items as taxes, fees, legal judgments or charitable contributions.
  • Managed Spend
    Purchases made following policies, procedures or commercial framework developed by the supply management group.
  • Touched Spend
    Total of all spend that has been bid, negotiated, indexed or influenced in any way by the supply management group during the reporting period.

The first two metrics, appropriately defined (using the definitions provided by the maverick), are quite good, but the latter, not so much. the maverick points out that it could use a “little” touch-up and, in particular, that the word “influence” should be removed, as influence could refer to any policy/procedure/system/tactic/whim put in place by Procurement but executed by someone else, who might have a completely different definition or interpretation of that policy/procedure/system/tactic/whim and source using a methodology that would not be approved by any Procurement personnel under any market (or mental) condition.

But that alone wouldn’t make the definition meaningful. There’s still that word “indexed”. Just because you “index” something doesn’t mean you actually take, directly or indirectly, any action to manage, or even influence, the spend. The index can be completely ignored by anyone doing the actual buying. Or, worse yet, interpreted as “the minimum” one should pay (instead of the maximum) and lead to all sorts of problems.

And even though what remains after removing the words “influence” and “indexed” is a decent definition of a spend metric, it shouldn’t be called “touched” spend, because that name conveys a definition so loose that anything qualifies.

If we examine the definition of touch, which is to come so close to (an object) as to be or come into contact with it, then my way, your way, anything goes tonight. You sent a policy to an end user and they half read it? The spend was touched! You passed an engineering manager in the hall and gave him a tip on a metals category that he may or may not have taken into account? The spend was touched! You compiled an index that no one ever looked at? Hey, the spend was touched! Because, in each case, you came so close that you could have come into contact with it … even though you did squat. See how stupid this metric is? How blazingly stupid? If you report this metric to any CFO with half a brain (and the vast majority have a full brain, by the way), your credibility is shot … forever.
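To put numbers on just how easy, classify some hypothetical spend records under a literal “influenced in any way” reading; every record, flag, and amount below is made up:

    # Hypothetical spend records: (amount, how Procurement was "involved").
    records = [
        (5_000_000, {"bid"}),              # competitively sourced
        (3_000_000, {"negotiated"}),       # contract negotiated by Procurement
        (2_000_000, {"policy_sent"}),      # a policy memo was emailed, once
        (4_000_000, {"hallway_tip"}),      # a tip in the hall, maybe heard
        (6_000_000, {"index_published"}),  # an index nobody ever consulted
    ]

    managed = {"bid", "negotiated"}
    # "Influenced in any way" sweeps in everything else too.
    touched = managed | {"policy_sent", "hallway_tip", "index_published"}

    total = sum(amt for amt, _ in records)
    managed_spend = sum(amt for amt, flags in records if flags & managed)
    touched_spend = sum(amt for amt, flags in records if flags & touched)

    print(f"managed spend: {managed_spend / total:.0%}")  # 40%
    print(f"touched spend: {touched_spend / total:.0%}")  # 100%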

It’s just too damn easy to touch too much.

And that’s why whichever moron decided touched spend was a good metric should be touched, where, to be specific, we use the other definition of touch, which is to handle in order to manipulate, alter, or otherwise affect, especially in an adverse way, where we define “adverse way” to mean shown the door.