Daily Archives: June 16, 2023

Dear Analyst Firms: Please stop mangling maps, inventing award categories, and evaluating what you don’t understand!


If there’s a place you got to go
I’m the one you need to know
I’m the map
I’m the map, I’m the map
If there’s a place you got to get
I can get you there, I bet
I’m the map
I’m the map, I’m the map

… but if there’s a tool you want to score
I’m the one you must ignore
I’m the map
I’m the map, I’m the map
if there’s a tool you got to get
I’ll lead you astray, I bet
I’m the map
I’m the map, I’m the map

It’s map time! (It’s always map time!) The 2×2 onslaught isn’t over yet (and may never be)! Prepare to be continually overwhelmed with cool graphics, big company names and logos, and no information you can actually use (as is). Why? Because when you collapse 6+ criteria or dimensions of information down to a single dimension, and 12+ dimensions of information down to a 2×2 grid, the result is meaningless. All you know is which vendor had a total score, across two sets of 6+ criteria, that landed in the top percentile. But you don’t know if that’s because they are good across the board on those 6 criteria; or top score on 3 of those dimensions (in the analyst’s opinion) and average score on the other 3; or top score on 3 of those criteria, average score on 2 criteria, and below average score on the last one, which happens to be the core technology criterion that also happens to be the most important criterion to you!
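The information loss is easy to demonstrate with arithmetic. In this sketch (all scores and criterion names are made up for illustration), two hypothetical vendors get the identical aggregate "execution" score, and therefore the identical axis position on the 2×2, despite having very different profiles on the criterion that may matter most to a buyer:

```python
# Two hypothetical vendors scored on six execution criteria (made-up numbers, 1-5 scale).
vendor_a = {"product": 4, "viability": 4, "sales": 4,
            "marketing": 4, "responsiveness": 4, "operations": 4}
vendor_b = {"product": 2, "viability": 5, "sales": 5,
            "marketing": 5, "responsiveness": 4, "operations": 3}

# Both collapse to the same total, so both land at the same spot on the axis...
total_a = sum(vendor_a.values())
total_b = sum(vendor_b.values())
print(total_a, total_b)  # both 24

# ...even though vendor B is below average on the one criterion
# (core product capability) that the buyer may care about most.
print(vendor_a["product"], vendor_b["product"])  # 4 vs. 2
```

The aggregate score tells you the totals match; it cannot tell you that one vendor got there on marketing and sales while scoring poorly on the product itself.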

It might not be so bad if all the criteria were different aspects of a single criteria category, such as core architecture, product features, and integration under technology; or product innovation, service delivery, and operational efficiency under innovation. Instead, one big name map squishes a mish-mash of scores on seven different dimensions (product capability, market viability, sales execution / funnel success rate, marketing execution / visibility, market responsiveness, corporate operations, and overall customer experience) into a single execution dimension, and another big name map squishes a mish-mash of product specific capabilities, related application offering, integrations, globalization, technology, and customer references into an offering dimension. It’s crazy! And useless.

And it’s also mind-boggling when you consider the significant effort some of these firms put into their research, the detailed reports they produce, and the great work that often results otherwise. (You may not agree with the analysts’ opinion of what a good strategy is, what true innovation is, what the appropriate product features are, or the scoring scales; but as long as all of the vendors are scored consistently, it’s still valuable insight that you could use in differentiating vendors to find the ones that might be the most right for your organization and your challenges IF all these scores weren’t mangled into one meaningless score you can’t use.)

So, dear analyst firms, please stop! You don’t need to do this. You can provide much more value by not creating these 2×2 mangled maps and instead doing one of the following:

  • use a graphing technique that was made for comparing multiple dimensions visually, like a spider graph
  • score fewer dimensions and then do multiple 2×2s on the different dimension pairs
    (after all, when customers want to buy a solution, do those customers really care about how good a vendor’s marketing is or how successful the salesperson is? heck no! they care only about how good the product is, how well the vendor can serve them, how stable the vendor is, and maybe about how innovative the vendor is if they are forward thinking and want longevity)
  • create bar, or similar, charts on the different dimensions and then give customers a tool to build their own weightings meaningful to them
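The last suggestion is simple to build. Here is a minimal sketch of a buyer-weighted ranking tool, assuming the analyst publishes raw per-dimension scores (all vendor names, dimensions, scores, and weights below are hypothetical):

```python
# Hypothetical per-dimension analyst scores for two vendors (made-up, 1-5 scale).
scores = {
    "Vendor A": {"product": 4.5, "service": 3.0, "stability": 4.0, "innovation": 2.5},
    "Vendor B": {"product": 3.5, "service": 4.5, "stability": 3.0, "innovation": 4.0},
}

def weighted_rank(scores, weights):
    """Rank vendors by a buyer-supplied weighting of the raw dimension scores."""
    total_weight = sum(weights.values())
    ranked = sorted(
        scores.items(),
        key=lambda item: sum(item[1][d] * w for d, w in weights.items()) / total_weight,
        reverse=True,
    )
    return [name for name, _ in ranked]

# A product-first buyer sees one ordering...
print(weighted_rank(scores, {"product": 5, "service": 2, "stability": 2, "innovation": 1}))
# ...while a service-first buyer sees another, from the exact same analyst scores.
print(weighted_rank(scores, {"product": 1, "service": 5, "stability": 2, "innovation": 2}))
```

Same underlying research, two different shortlists, each meaningful to the buyer who generated it. That is exactly what a single fixed-weight 2×2 cannot do.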

It’s bad enough these map-creating analyst firms are eliminating vendors from their maps based on criteria that range from somewhat to completely arbitrary, including, but not limited to:

  • an arbitrary minimum on overall revenue in the prior year on software alone
  • an arbitrary client minimum
  • an arbitrary minimum on the average number of users per client
  • an arbitrary minimum on customer size for a % of the customer base
  • an arbitrary minimum on license fees (for the majority of the customer base)
  • an arbitrary list of core “features” that are absolute
  • an arbitrary exclusion of any solution deemed too narrow or too industry-focussed
  • some other arbitrary requirement merely to limit the number of vendors that have to be included … which might actually eliminate the vendor with the best or most innovative product or service! (Which entirely misses the point, doesn’t it?)

Given all of this, these firms could at least produce maps meaningful to the average buyer, maps from which those buyers could extract useful information as is!

“Two by two they’re still coming down
… the satellite circus never leaves town …”

Holy smoke holy smoke,
plenty bad mappers for the doctor to stoke
Feed ’em in feet first, this is no joke
This is thirsty work, making holy smoke, yeah
Holy smoke
Smells good

The only thing as annoying as these meaningless 2×2s is other analyst firms inventing award categories just to create attention for themselves. These award categories are totally meaningless and useless to end customers, who have no clue what they mean or what they are evaluating (especially when the categories often mix vendors with completely different solutions): think “insight”, “innovation”, “customer-centric”, and/or “growth”. While we can be sure that every vendor wants to be seen as “insightful”, “innovative”, “customer-focussed”, and “growing”, that doesn’t tell the customer if the vendor offers a product or a service, or if that product is e-Sourcing, e-Procurement, Risk Monitoring, or a simple carbon calculator. And if that’s the only category the vendor is listed in, well, that’s just useless.

I want to run, I want to hide
I wanna tear down the walls that keep them outside
I wanna reach out and set the flame
Where the sheets have no name, ha, ha, ha

I wanna see insight on the page
And see confusion disappear without a trace
I wanna take shelter, I can’t ascertain
Where the sheets have no name, ha, ha, ha

As a postscript, the doctor isn’t annoyed by all of the 2×2 maps (just the majority). Although they aren’t perfect, he finds that the Spend Matters Solution Maps, which, in full disclosure, he co-created (and with which he no longer has any association), are still useful as they are still focussed entirely on two dimensions: product (& underlying technology) evaluation and customer score. (As of V3, released Fall 2021, not due for update until [at least] Fall 2023.) The product evaluation is against an extremely well defined set of criteria, where each criterion has a scoring scale that at least defines fledgling through industry standard capability (and usually above standard as well), and the customer evaluation is done entirely by the customer completing surveys with no analyst interaction whatsoever (as any survey done by an analyst introduces bias based on the way the analyst asks the question and the tone the analyst uses).

The Solution Maps use two, and only two, dimensions: one that can be consistently scored by any analyst on the product side, and one consistently scored against perceived value on the customer side. Are they perfect? Of course not! The product side contains some services questions, which are soft and more open to interpretation (but were less than 5% of the questions); the customer side can be very subjective based upon cultural norms for that customer, the customer’s stage in the relationship (new vs. longer term), and the service level the customer subscribed to (and, thus, if there are only a few customer scores, one really bad or really good, out-of-range score can really affect the average); and the weightings for the maps are still analyst interpretation of what criteria are most important for each market size. But it’s one relatively pure dimension mapped against another relatively pure dimension, consistently scored, and consistently weighted. And that’s still considerably more useful than any other map currently is.

Plus, at least when the doctor was involved, there was only ONE requirement for participation: have a standalone solution you are willing to openly demo (without an NDA) and sign a form committing to participation regardless of where you end up falling on the map (which is all mathematically, and not subjectively, computed). So while you can’t say the top vendor is the one for you, you can say any vendor who makes the map likely has the core tech you need (as they need to be at least industry average) and likely enough customer service to get you going on it. You can produce a short list of comparable vendors that produce a solution of the type you are looking for, of various sizes (not just the biggest vendors), and know that the solutions are reasonably comparable. This allows you to focus on the other value drivers relevant to your organization in the RFP. And if the other maps gave you similarly granular insight into service, innovation, and any other dimension relevant to you, think how useful they could be!