A post late last month on LinkedIn started off as follows:
“If you’ve ever read any research papers or solution maps on procurement tech, you’ve probably figured out a couple of things.
1. It’s confusing and overly complex
2. It doesn’t cover the basic, most obvious-of-the-obvious fundamentals that everyone needs to consider.
These are:
– User interface and user experience (UI/UX)
– Ease and speed of implementation
Why don’t they do this?
Honestly, I don’t know the answer.
The cynic in me says it’s because their biggest paymasters have a horrible UI/UX and require a very complex and lengthy implementation.”
This really bothered me, not because UX and implementation time aren’t super important (they are, and they are among the biggest determinants of adoption, which is critical to success), but because anyone would think an analyst firm should address this.
The reality is that no proper analyst will attempt to score these, because they are completely subjective! Specifically:
- There is no objective, function-based/capability-based scale that could be scored consistently by any knowledgeable analyst on the subject, and
- what is a great experience to one person, with a certain expectation of tech based upon prior experience and knowledge of their function, can be complete CR@P to another person.
Now, some firms do bury such subjective evaluations of UX and implementation time in their 2×2s, where they squish an average of six subjective ratings into a single dimension, but that is why those maps are complete garbage! (See: Dear Analyst Firms: Please stop mangling maps, inventing award categories, and evaluating what you don’t understand!) So no self-respecting analyst should do it.

As an example, one analyst might like solutions with absolutely minimalist design, with everything hidden and everything automated against pre-built rules (rules that may, or may not, be right for your organization, and that may result in an automated sourcing solution placing a million-dollar order, with payment up front for a significant early-payment discount, with a supplier that subsequently files for bankruptcy and doesn’t deliver your goods). A second might like full user control through a multi-screen, multi-step interface for what could be a one-screen, one-step function. A third might like to see as much capability and information as possible squished into every screen, and long for the days of text-based green screens where you weren’t distracted by graphics, animations, and design. Each of these analysts would score the same UX completely differently! On a 10-point scale, for a given UX design, three analysts in the same firm could give scores of 1, 5, and 10, which average out to roughly 5 … and how is that useful? It’s not!
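To make the math concrete, here is a minimal sketch in Python (using the made-up scores from the example above; nothing here comes from any real analyst firm) of why averaging divergent subjective scores produces a number that tells you nothing:

```python
import statistics

# Hypothetical UX scores from three analysts in the same firm,
# rating the SAME interface on a 10-point scale.
scores = [1, 5, 10]

mean = statistics.mean(scores)     # ~5.3: reads as a middling verdict
spread = statistics.stdev(scores)  # ~4.5: nearly half the entire scale

print(f"average: {mean:.1f}, standard deviation: {spread:.1f}")
# The average suggests "mediocre UX", but the spread shows the
# analysts fundamentally disagree; the single published number
# hides that disagreement entirely.
```

The averaged score looks authoritative on a map, but any time the spread rivals the scale itself, the average is noise, not signal.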
(And while analysts can define scales of maturity for the technology a UX is built on, just because a vendor is using the latest technology doesn’t mean its UX is any good. New technology can be just as horrendously misused as old technology.)
The same goes for implementation time. An analyst who mainly focuses on simple sourcing/procurement, where you should just be able to flick a SaaS switch and go, would think an implementation time of more than a week is abysmal, but an analyst who primarily analyzes CLM (contract lifecycle management) and SMDM (supplier master data management) would call BS on anything less than six weeks and expect three months. That’s because, for CLM, you have to find all the contracts, feed them in, run them through AI for automated metadata extraction, do manual review, and set up new processes, while for SMDM you have to integrate half a dozen systems, do data integrations, cleansing, and enrichment through cross-referencing with third-party sources, create golden records, do manual spot-check reviews, and push the data back.

Implementation time depends on the solution, the architecture, what it does, what data it needs, what systems it needs to be integrated with, what support there is for data extraction and loading in those legacy systems, etc. It needs to be judged against the minimum amount of time to do it effectively, which is also customer dependent. Expecting an analyst to understand all the potential client situations is ridiculous. Expecting them to craft an “average customer situation”, base an implementation time on this, and score a set of random vendors accordingly is even more ridiculous.
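To illustrate just how customer-dependent this is, here is a hypothetical back-of-envelope estimator for an SMDM-style rollout (every function name, weight, and threshold below is invented for illustration; it is not any vendor’s or analyst’s actual model):

```python
# Hypothetical estimator: all weights and categories are invented
# for illustration only, not drawn from any real methodology.

def smdm_weeks(systems_to_integrate: int,
               supplier_records: int,
               clean_extracts: bool) -> float:
    """Very rough implementation time, in weeks, for a supplier MDM rollout."""
    base = 2.0                                # project setup and configuration
    integration = 1.5 * systems_to_integrate  # one ERP/P2P feed at a time
    cleansing = supplier_records / 50_000     # dedupe, enrich, build golden records
    if not clean_extracts:
        cleansing *= 2                        # messy legacy extracts double the effort
    return base + integration + cleansing

# A mid-size buyer with two systems and clean data:
print(smdm_weeks(2, 20_000, True))     # ~5.4 weeks
# A global enterprise with eight systems and messy data:
print(smdm_weeks(8, 400_000, False))   # ~30.0 weeks
```

Same product, same vendor, timelines an order of magnitude apart. The variable is the customer, which is exactly why a single analyst-assigned “implementation time” score is meaningless.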
The factors ARE absolutely vital, but they need to be judged by the buying organization as part of the review cycle, AFTER they’ve verified that the vendor can offer a solution that will meet
- their current, most pressing, needs as an organization,
- their evolving needs as they will need to get other problems under control, and
- do so with a solution that is technically sound and complete with respect to the two requirements above, while also being capable of scaling up and evolving over time (as well as of being plugged into an appropriate platform-based ecosystem through a fully open API).
A good analyst can guide you on ways to judge this and on what you might want to consider, but that’s it … you have to be the final judge, not them.
That’s why, when the doctor co-designed Solution Map as a Consulting Analyst for Spend Matters, it focused on scoring the technological foundations, which could be judged on an objective scale based on the evolution of the underlying technology over the past two-plus decades and/or the evolution of functionality to address a specific problem over the same period. It’s up to you, not the analyst, whether you like the UX or not, think the implementation time frames are good or not, believe the vendor is innovative or not, and are satisfied with the vendor size and maturity. Those are business viewpoints that are business dependent.

Analysts should score capabilities and foundations, particularly where buyers are ill-equipped to do so. This also means that analysts scoring technology MUST be trained technologists with a formal educational background in technology (computer science, engineering, etc.) and experience in software development or implementation. And yes, the doctor realizes this is not always the case, which is probably why most of the analyst maps are squished dimensions across half a dozen subjective factors: the analysts are not capable of properly evaluating what they claim to be subject matter experts in. As a comparison, having a journalist or historian or accountant rate modern SaaS platforms is the equivalent of having a plumber certify your electrical wiring or a landscaper judge the strength of the framing in your new house. Sure, they’re trade pros, but do you really want to trust their opinion that the wiring is NOT going to start an electrical fire and burn your house down, or that the frame is strong enough for the 3,000 pounds of appliances you intend to put on the 2nd floor? the doctor would hope not!
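For contrast with the subjective UX example above, here is a minimal sketch of what an objective, capability-based scale can look like (the levels below are invented for illustration and are NOT the actual Solution Map rubric): each level names a capability that can be verified in a demo, so any two trained analysts examining the same product should land on the same score.

```python
# Hypothetical capability-based maturity scale for one functional area
# (spend analysis). Levels invented for illustration; NOT the actual
# Solution Map rubric. Each level is a verifiable capability, not an
# opinion, so scoring is repeatable across analysts.

SPEND_ANALYSIS_LEVELS = [
    "exports flat files only; no built-in analytics",       # level 0
    "static, pre-built reports on a single spend cube",     # level 1
    "user-defined reports and filters on that cube",        # level 2
    "user-defined cubes, dimensions, and measures",         # level 3
    "multi-cube analysis with user-defined mapping rules",  # level 4
]

def score(verified: list[bool]) -> int:
    """Score = highest contiguous level whose capability was verified in a demo."""
    level = 0
    for has_capability in verified:
        if not has_capability:
            break
        level += 1
    return max(level - 1, 0)

# A product that demonstrably has levels 0 through 2, but not 3:
print(score([True, True, True, False, False]))  # -> 2
```

Subjective questions (“is this cube builder pleasant to use?”) stay out of the scale entirely; they belong to the buyer’s own evaluation.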
The cynic might say they don’t want to embarrass their sponsors, but the realist will recognize that analysts can’t effectively judge vendors on these factors, and the smart analysts won’t even try. Instead, they will guide you on the factors you should consider and look for when evaluating potential solutions, and help you build a shortlist of vendors that provide the right type of solution and are technically sound, vs. three random vendors from a Google search that don’t even offer the same type of solution.