Category Archives: Market Intelligence

Is the End of the Wild Digital West in Sight? I Hope So!

The MIT Technology Review recently published a great article on The Dark Secret at the Heart of AI which notes that decisions made by an AI based on deep learning cannot be explained by that AI and, more importantly, that even the engineers who build these applications cannot fully explain their behaviour.

The reality is that AI based on deep learning uses artificial neural networks with hidden layers, and these neural networks are collections of nodes that identify patterns using probabilistic equations whose weights change over time as similar patterns are recognized again and again. Moreover, these systems are usually trained on very large data sets (far larger than a human can comprehend) and then programmed with the ability to keep training themselves as new data is fed into them over time, leading to systems that have evolved with little or no human intervention and that have, effectively, programmed themselves.
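To make that opacity concrete, here is a minimal sketch (in Python, on a hypothetical toy data set) of the kind of weight-update loop sitting at the heart of these systems: a tiny network with one hidden layer teaches itself to separate two classes, but the weight matrices it ends up with are just grids of numbers with no human-readable meaning.

```python
import numpy as np

# Hypothetical toy data: four samples, two features, XOR-style labels.
X = np.array([[0., 0.], [0., 1.], [1., 0.], [1., 1.]])
y = np.array([[0.], [1.], [1.], [0.]])

rng = np.random.default_rng(7)
W1, b1 = rng.normal(size=(2, 8)), np.zeros(8)   # input -> hidden layer
W2, b2 = rng.normal(size=(8, 1)), np.zeros(1)   # hidden -> output


def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))


for _ in range(20000):
    # Forward pass through the hidden layer.
    h = sigmoid(X @ W1 + b1)
    out = sigmoid(h @ W2 + b2)

    # Backpropagate the error and nudge every weight a little.
    d_out = (out - y) * out * (1 - out)
    d_h = (d_out @ W2.T) * h * (1 - h)
    W2 -= 0.5 * h.T @ d_out
    b2 -= 0.5 * d_out.sum(axis=0)
    W1 -= 0.5 * X.T @ d_h
    b1 -= 0.5 * d_h.sum(axis=0)

print(out.round(2).ravel())  # should be close to [0, 1, 1, 0]: it has "learned" the pattern
print(W1)                    # ... but these weights explain nothing to a human
</code>
```

Scale those two small matrices up to millions of weights across dozens of hidden layers and you have a system whose behaviour not even its builders can fully explain.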

And what these systems are doing is scary. As per the article, last year a new self-driving car was released onto New Jersey roads (presumably because the developers felt it couldn't drive any worse than the locals) that didn't follow a single instruction provided by an engineer or programmer. Specifically, the self-driving car ran entirely on an algorithm that had taught itself to drive by watching a human do it. Ack! The whole point of AI is to develop something flawless that will prevent accidents, not to create a system that mimics us error-prone humans! And, as the MIT article states, what if someday it [the algorithm] did something unexpected — crashed into a tree? There's nothing to stop the algorithm from doing so, and no warning will be coming our way. If it happens, it will just happen.

And the scarier thing is that these algorithms aren't just being used to set insurance rates, but to determine who gets insurance, who gets a loan, and who gets, or doesn't get, parole. Wait, what? Yes, they are even used to project recidivism rates and influence parole decisions based on data that may or may not be complete or correct. And they are likely being used to determine whether you even get an interview, let alone a job, in this new economy.

And that's scary, because a company might reject you for something you deserve simply because the computer said so, and you deserve a better explanation than that. Fortunately for us, the European Union thinks so too. So much so that companies therein may soon be required to provide an adequate, and accurate, explanation for decisions that automated systems reach. The EU is considering making it a legal right for individuals to know exactly why they were accepted for, or declined, anything based on the decision of an AI system.

This will, of course, pose a problem for those companies that want to continue using deep-learning based AI systems, but the doctor thinks that is a good thing. If the system is right, we really need to understand why it is right. We can continue to use these systems to detect patterns or possibilities that we would otherwise miss, many of which will likely be correct, but we can't make decisions based on them until we identify the [likely] reasons behind them. We have to either develop tests that will allow us to make a decision, or use other learning systems to find the correlations that will allow us to arrive at the same decision in a deterministic, and identifiable, fashion. And if we can't, we can't deny people their rights on an AI's whim, as we all know that AIs just give us probabilities, not actualities. We cannot forget the wisdom of the great Benjamin Franklin, who said that it is better 100 guilty persons should escape than that one innocent person should suffer, and if we accept the un-interrogable word of an AI, that person will suffer. In fact, many such persons will suffer, and all for want of a reason why.
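One hedged illustration of that "other learning system" idea, assuming scikit-learn and a stand-in black-box model (the model, data and feature names below are purely hypothetical): fit a shallow decision tree to the black box's own outputs so that any decision it suggests can be traced back to explicit, reviewable rules.

```python
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier  # stand-in for the black box
from sklearn.tree import DecisionTreeClassifier, export_text

# Hypothetical applicant data: the six features could be income, history length, etc.
X, y = make_classification(n_samples=2000, n_features=6, random_state=0)

black_box = RandomForestClassifier(n_estimators=200, random_state=0).fit(X, y)

# Train an interpretable surrogate on the black box's *predictions*, not the
# original labels, so we learn the rules the black box is implicitly applying.
surrogate = DecisionTreeClassifier(max_depth=3, random_state=0)
surrogate.fit(X, black_box.predict(X))

# Every path in this printout is an explicit, auditable decision rule.
print(export_text(surrogate, feature_names=[f"feature_{i}" for i in range(6)]))
```

The surrogate won't match the black box perfectly, but where it does, it gives us the identifiable reasons we need; where it doesn't, that's precisely where we should be developing tests rather than taking the AI's word for it.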

So, in terms of AI, the doctor truly hopes that the EU stands up and brings us out of the wild digital west and into the modern age. Deep learning is great, but only as a way to help us find our way out of the dark paths it can lead us down and onto the lighted paths we need.

Will Cognitive Dissonance Lead to the Inadvertent Rise of Cognitive Procurement?

Despite the fact that machines aren't intelligent, can't think, and know nothing more about themselves and their surroundings than we program them to, cognitive is the new buzzword and it seems cognitive is inching its way into every aspect of Procurement. It's become so common that over on SpendMatters UK, the public defender has stated that this house believes that robots will run (and rule) procurement by 2020. Not movie robots, but automated systems that, like hedge fund trading algorithms, will automate the acquisition and buying of products and services for the organization.

And while machine learning and automated reasoning are getting better by the day, they are still a long way from anything resembling true intelligence, and just because a system's trend prediction algorithms are right 95% of the time, that doesn't mean they are right 100% of the time or that they are smarter than Procurement pros. Maybe those pros are only right 80% of the time, but the real question is: how much does it cost when those pros are wrong versus how much does it cost when a robot is wrong and makes an allocation to a supplier about to go bankrupt, forcing the organization to quickly find a new source of supply at a 30% increase when supply goes from abundant to tight?
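To put some (entirely hypothetical) numbers on that question, here is a quick back-of-the-envelope expected-cost comparison, assuming a 10M category, a pro whose mistakes are caught early and cost about 2% of spend, and a robot whose rare failure forces a re-source at a 30% premium:

```python
# All figures below are hypothetical, purely for illustration.
category_spend = 10_000_000          # annual spend on the category

# The pro: wrong 20% of the time, but errors are caught early and cost ~2% of spend.
pro_error_rate, pro_error_cost = 0.20, 0.02 * category_spend

# The robot: wrong only 5% of the time, but a failure means re-sourcing
# in a tight market at a 30% premium on the full category spend.
robot_error_rate, robot_error_cost = 0.05, 0.30 * category_spend

print("Expected cost of the pro's errors:  ", pro_error_rate * pro_error_cost)
print("Expected cost of the robot's errors:", robot_error_rate * robot_error_cost)
# 0.20 * 200,000 = 40,000 vs. 0.05 * 3,000,000 = 150,000
```

Under these assumptions, the robot that is right 95% of the time still costs more than three times as much in expectation as the pro who is right only 80% of the time. Different numbers will tell a different story, which is exactly why the question has to be asked.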

The reality is that a machine only knows what it knows; it doesn't know what it doesn't know, and that's the first problem. The second problem is that when the systems work great, and do so the first dozen or so times, we don't want to think about the fact that someday they won't. We want the results, especially when they come with little or no effort on our part. It's too easy to forget that, as great as these systems can be, they can also be bad. Very bad. Much badder than Mr. Thorogood, who claims to be bad to the bone.

We forget because it's very discomforting to simultaneously think about how much these systems can save us when they identify trends we miss while also realizing that when they screw up, they screw up so badly that it's devastating. So, rather than suffer this cognitive dissonance, we just forget about the bad if it hasn't reared its ugly head in a while and dwell on the good. And if we've never experienced the real bad, it's all too easy to proclaim the virtues of these systems to those who don't understand how bad things can be when they fail. And this is problematic. Because one of these days those that don't understand will select these systems not to augment our ability (as we would if we only used them as decision support), but to replace part of us, and that will be bad. Very bad indeed.

So don't let your cognitive dissonance get in the way. Always proclaim the value of these systems as decision support and tactical execution guidance, but never proclaim their ability to always get it right. They give us what we need to make the right decision (and when they don't, we're smart enough to realize it, feed them more data, or just go the other way). They should never make it for us.

Whichever Moron Decided that “Touched Spend” Was a Good Metric Should be “Touched” Out of a Job!

Last week over on Spend Matters, the maverick pointed out that there is a new benchmarking metric being collected by CAPS Research called “touched spend”, which is supposedly defined as a new metric to encapsulate sourceable spend and managed spend … and then some. Specifically:

  • Sourceable Spend
    All company-wide external purchases that could be sourced by supply management (whether they currently are or are not). Does not include such items as taxes, fees, legal judgments or charitable contributions.
  • Managed Spend
    Purchases made following policies, procedures or commercial framework developed by the supply management group.
  • Touched Spend
    Total of all spend that has been bid, negotiated, indexed or influenced in any way by the supply management group during the reporting period.

The first two metrics, appropriately defined (using the definitions provided by the maverick), are quite good, but the latter, not so much. the maverick points out that it could use a “little” touch-up and, in particular, that the word “influence” should be removed, as influence could refer to any policy/procedure/system/tactic/whim put in place by Procurement but executed by someone else, who might have a completely different definition or interpretation of that policy/procedure/system/tactic/whim and source using a methodology that would not be approved by any Procurement personnel under any market (or mental) condition.

But that alone wouldn't make the definition meaningful. There's still that word “indexed”. Just because you “index” something doesn't mean you actually take, directly or indirectly, any action to manage, or even influence, the spend. The index can be completely ignored by anyone doing the actual buying. Or, worse yet, it can be interpreted as “the minimum” one should pay (instead of the maximum) and lead to all sorts of problems.

And even though what remains after removing the words “influence” and “indexed” is a decent definition of a spend metric, it shouldn’t be called “touched” spend because that just conveys a definition so loose that anything qualifies.

If we examine the definition of touch, which is to come so close to (an object) as to be or come into contact with it, then my way, your way, anything goes tonight. You sent a policy to an end user, and they half read it? The spend was touched! You passed an engineering manager in the hall and gave him a tip on a metals category that he may or may not have taken into account? The spend was touched! You compiled an index that no one looked at? Hey, the spend was touched! Because, in each case, you came so close that you could have come into contact with it … even though you did squat. See how stupid this metric is. How blazingly stupid. If you report this metric to any CFO with half a brain (and the vast majority have a full brain, by the way), your credibility is shot … forever.
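To see just how easily “touched” inflates, consider a quick sketch with a handful of hypothetical spend lines, each tagged with how Procurement actually interacted with it (the flags and figures are invented purely for illustration):

```python
# Hypothetical spend lines, each tagged with how Procurement interacted with it.
spend_lines = [
    {"amount": 500_000, "sourceable": True,  "under_policy": True,  "interaction": "negotiated"},
    {"amount": 250_000, "sourceable": True,  "under_policy": False, "interaction": "index_published"},
    {"amount": 100_000, "sourceable": True,  "under_policy": False, "interaction": "hallway_tip"},
    {"amount":  75_000, "sourceable": False, "under_policy": False, "interaction": "none"},  # e.g. taxes
]

sourceable = sum(l["amount"] for l in spend_lines if l["sourceable"])
managed    = sum(l["amount"] for l in spend_lines if l["under_policy"])
# "Touched" counts anything bid, negotiated, indexed or "influenced in any way" ...
touched    = sum(l["amount"] for l in spend_lines if l["interaction"] != "none")

print(sourceable, managed, touched)  # 850000 500000 850000
```

Only the first line is genuinely managed, yet the ignored index and the hallway tip count as “touched” all the same, so the “touched” number comes out equal to every sourceable dollar in the list.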

It’s just too damn easy to touch too much.

And that's why whichever moron decided touched spend was a good metric should be touched, where, to be specific, we use the other definition of touch: to handle in order to manipulate, alter, or otherwise affect, especially in an adverse way, where we define “adverse way” to mean shown the door.

The UX One Should Expect from Best-in-Class Spend Analysis … Part V

In this post we wrap up our deep dive into spend analysis and what is required for a great user experience. We take our vertical torpedo as far as it can go and wrap the series up with insights beyond what you’re likely to find anywhere else. We’ve described necessary capabilities that go well beyond the capabilities of many of the vendors on the market, and more will fall by the wayside today. But that’s okay. The best will get up, brush off the dirt, and keep moving forward. (And the rest will be eaten by the vultures.)

And forward momentum is absolutely necessary. One of the keys to Procurement's survival (unless it really wants to meet its end in the Procurement Wasteland we described in bitter detail last week) is an ability to continually identify value in excess of 10% year-over-year. Regardless of what eventually comes to pass, the individuals who are capable of always identifying value will survive in the organizations of the future.

But if this level of value is to be identified, buyers are going to need powerful, usable analytics — much more powerful and usable than what the average buyer has today. Much more.

As per our series to date, this requires over a dozen key usability features, many of which are not found in your average first, or even second generation, “reporting” and “business intelligence” analytics tool. In our brief overview series to date here on SI (on The UX One Should Expect from Best-in-Class Spend Analysis … Part I, Part II, Part III, and Part IV) we've covered four key features:

  • real, true dynamic dashboards,
  • simultaneous support for multiple cubes,
  • real-time idiot-proof data categorization, and
  • descriptive, predictive, and prescriptive analytics

And deep details on each were provided in the linked posts. But even prescriptive analytics, which, for many vendors, is really pushing the envelope, is not enough. Great solutions push the envelope even further. For example, the most advanced solutions will also offer permissive analytics. As the doctor has recently explained in his two-part series (Are We About to Enter the Age of Permissive Analytics and When Selecting Your Prescriptive, and Future, Permissive, Analytics System), a great spend analysis system goes beyond prescriptive and uses AR and a rules engine to enable a permissive system that will not only prescribe opportunities to find value but also initiate action on those opportunities.

For example, if the opportunity is a tail-spend opportunity best captured by a spot auction, with approved products that fit the bill and approved suppliers that can automatically be invited to an auction to provide them, the system will automatically set up the auction and invite the suppliers and, if the total spend is within an acceptable amount, automatically offer an award (subject to pre-defined standard terms and conditions).
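As a minimal sketch of what such a permissive rule might look like in code (every function name, threshold and data structure below is hypothetical, not any vendor's actual API):

```python
from dataclasses import dataclass, field


@dataclass
class Opportunity:
    category: str
    estimated_spend: float
    approved_products: list = field(default_factory=list)
    approved_suppliers: list = field(default_factory=list)


AUTO_AWARD_LIMIT = 50_000  # hypothetical threshold set by Procurement policy


def handle_tail_spend(opp, auction_service):
    """Permissive rule: act on a tail-spend opportunity without waiting for a human."""
    if not (opp.approved_products and opp.approved_suppliers):
        return "escalate_to_buyer"  # prescriptive only: a human has to decide

    # Permissive: the system initiates the action itself.
    auction = auction_service.create_spot_auction(opp.category, opp.approved_products)
    auction_service.invite(auction, opp.approved_suppliers)

    if opp.estimated_spend <= AUTO_AWARD_LIMIT:
        # Auto-award to the winning bid under pre-defined standard terms.
        auction_service.enable_auto_award(auction, terms="standard")
    return "auction_launched"
```

The point of the sketch is simply that the rules, the thresholds, and the escalation path are all explicit and reviewable; the system only acts on its own where policy says it may.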

And that's just the tip of the iceberg. For more insight into just how much a permissive analytics platform can offer, check out the doctor and the prophet's fifth and final instalment on What To Expect from Best-in-Class Spend Analysis Technology and User Design (Part V) over on Spend Matters Pro (membership required). It's worth it. And maybe, just maybe, when you identify, and adopt, the right solution, you won't end up wandering the Procurement Wasteland.

The University is Still Here Because …

A couple of years ago TechCrunch wrote an article that asked Why is the University Still Here? in a time when information is universally accessible, knowledge can be compiled by experts and shared far and wide in a reviewed and verified form, and instruction can be conveyed directly from an expert in Oxford (England) to an able learner in Liberal (Kansas), if both are ready, willing, and able, thanks to virtual classrooms with audio-visual conferencing and screen sharing.

Then, earlier this decade, we saw the launch of massive open online courses (MOOCs), where anyone can register for a course from a leading professor, get the lectures, complete assignments, send them to TAs (teaching assistants) half a world away, get graded (automatically for multiple choice and by a human for essay or problem-solving questions), and work towards what is supposed to be the equivalent of a University degree. But is it?

First, universities, even with remote learning aspects, have always been based on classroom learning. Second, advanced programs have always been based on one-on-one instruction between teacher and student. Third, they have always been based on carefully structured curriculums designed to ensure a student gets an appropriate depth and breadth of knowledge. Fourth, the testing is always done in a manner that makes cheating or plagiarism difficult.

MOOCs are the antithesis of the University. They are trying to abolish classrooms. There is no personal one-on-one instruction between a recorded lecture and a semi-engaged viewer. The student can design their own haphazard curriculum that ensures neither depth nor breadth in the appropriate subject matter. And anyone can submit a document created by anyone else, and there is no way to know.

But the failure of MOOCs to displace universities is not, on its own, an argument for the continued existence of universities. Just because X does not displace Y, that doesn't mean that Y is superior. It just means that the masses do not believe that X is superior. In our case, that's not enough of a case for universities.

To make the case, we look at where MOOCs failed. As per the TechCrunch article, they failed in keeping a user's interest. Most people who registered for, and even started, a course never completed it. Most who completed didn't come back. They weren't motivated. The reasoning in the article is that, because learning was part time and on their own time for the majority of learners, it never got primacy, and without primacy, efforts get abandoned.

And that's part of the reason MOOCs failed and part of the reason we still need Universities. When you go to University, you make education a primary focus of your life. But the other reason is that a real, established, prestigious University provides something no other form of education can — a well-rounded, full-featured educational experience with primacy, one-on-one instruction from an expert, great curriculums, and, most important, a community to share the experience with. This last aspect is key — you are part of a dedicated group of people who are there to learn, share the experience of learning, and better each other in the process. And while that group shrinks a bit over the years, by the end you have your own support group, and possibly a few colleagues for life, who got you there and will take you further. That's something you'll never get from a MOOC.

And that’s why Universities still exist and need to continue to exist.