A recent article in Strategy + Business on cleaning the crystal ball, which discussed the challenges associated with forecasting, reminds us how the old game of estimating the number of jelly beans in a jar illustrates the innate wisdom of the crowd. In a class of 50 to 60 students, the average of the individual guesses will typically be better than all but one or two of the individual guesses. Furthermore, not only can you not identify the best guesser in advance, but that “expert” may not be the best guesser for the next jar, because the first result likely reflected a bit of random luck. Then there’s the fact that research by James Shanteau, professor of psychology at Kansas State, has shown that expert judgments are often logically inconsistent. For example, medical pathologists presented with the same evidence twice reached a different conclusion 50% of the time.
However, teams of forecasters often generate better results (and decisions) than individuals, as long as the team includes a sufficient diversity of information and perspectives. This is partly because a naive forecaster often frames the question differently, and thinks more deeply about the fundamental drivers of the forecast, than an expert who has developed an intuitive, but often overconfident, sense of what the future holds. But you can’t just throw a group of people together in a room and ask them to reach a consensus, because the most vocal or senior person is likely to dominate the discussion and skew the outcome; most people put too much confidence in the opinion of the most senior or highest-paid person in the room.
So how do we harness the wisdom of the crowd and ensure that no one voice dominates the forecast when the future is inherently risky and unpredictable? We look to a place we are probably most familiar with: e-Sourcing. Send a blind RFX survey to an interdisciplinary mix of carefully chosen and randomly chosen individuals who, given a scenario description, past sales, and expected market trends (from third-party analyst firms), are asked to provide their input to the short-, medium-, and long-term forecasts at the SKU and group level. Then simply average all of the responses, giving a slightly higher, but individually equal, weight to the carefully chosen respondents (who collectively form an interdisciplinary team and forecast as part of their jobs) and a slightly lower, but individually equal, weight to the randomly selected individuals asked to weigh in with outside opinions. It won’t be perfect, but it will be substantially better than all but a few individual guesses, and since you can’t know in advance which guesses will be good, it will substantially reduce your risk.
Thoughts? Comments? Criticisms?