As one of the flagship publications in the space, this is one that, for better or worse, a lot of people turn to come decision-making time. So, just like SI tackled the Gartner Quadrant last year, it’s going to tackle the latest Forrester Wave on eProcurement Solutions (Q1 2011) to help you understand what’s good, what’s bad, and, in some cases, what’s downright ugly. Because, in the end, if you don’t know how to ride the wave, you might just end up digging your own grave.
First off, I agree with Jason (who commented that Forrester’s eProcurement Wave Captures the State of the Market) on three points: the Forrester ranking methodology generally does a better job than Gartner’s because it provides more transparency into the criteria that contribute to a ranking on each axis; this report in particular does a solid high-level job of creating a credible segmentation for a subset of the vendors in the market; and “there was little to broadly differentiate” among the providers included in the report, at least on a feature/function level. But better is not sufficient, high-level segmentations are pretty easy, and if you limit your report to the 800-lb Gorillas, all of the solutions are going to look pretty much alike. Consider:
- if you have to get from New York to Los Angeles quickly, rail is better than car (because even though the train makes lots of stops, it runs 24 hours a day and you can’t drive 24 hours a day), but it doesn’t match the efficiency of a direct flight
- there are lots of ways to credibly segment vendors — product focus vs service focus, e-Procurement focus vs ERP focus, generic solution vs vertical solutions — but such segmentation is meaningless to a buyer if it doesn’t segment according to the buyer’s particular needs
- if you limit your search to silvery mid-sized sedans, from a distance, there’s not much difference among a Toyota Camry, a Ford Fusion, a Nissan Altima, a Honda Accord, and a Hyundai Sonata (and you’re likely to confuse them if you’re driving fast and only take a quick look)
In other words, while this was a little better than last year’s Tragic Quadrant from Gartner — where strict guidelines were set down but vendors were allowed to slip in on exceptions or technicalities anyway, where some of the evaluation criteria didn’t make any sense at all, and where some non-standard definitions were used — it wasn’t much better.
Basically, for just about every fundamental it got right, there was an accompanying flaw. And while most of the flaws weren’t that bad individually, the net result is that the overall report isn’t that useful unless you’re a 1000-lb Gorilla trying to figure out which 800-lb Gorilla you should buy from. And since there are only 1000 companies in the Fortune 1000 club, the companies that will find this report useful are few and far between. As usual, the burgeoning middle-market, where most of the need is, goes unserved again, and the tsunami you might have been expecting is nothing more than a weak 6-foot wave that won’t do anything more than get you a little wet.
So what were the (major) flaws? That’s the subject of tomorrow’s post.