Category Archives: Analyst

Don’t Trust an Analyst Firm to Score UX and Implementation Time!

A post late last month on LinkedIn started off as follows:

“If you’ve ever read any research papers or solution maps on procurement tech, you’ve probably figured out a couple of things.

1. It’s confusing and overly complex
2. It doesn’t cover the basic, most obvious-of-the-obvious fundamentals that everyone needs to consider.

These are:

– User interface and user experience (UI/UX)
– Ease and speed of implementation

Why don’t they do this?

Honestly, I don’t know the answer.

The cynic in me says it’s because their biggest paymasters have a horrible UI/UX and require a very complex and lengthy implementation.”

This really bothered me, not because UX and implementation time aren’t super important (they are, and they are among the biggest determinants of adoption, which is critical to success), but because anyone would think an analyst firm should address them.

The reality is that no proper analyst will attempt to score these because they are completely subjective! As a result:

  1. There is no objective, function-based/capability-based scale that could be scored consistently by any knowledgeable analyst on the subject and
  2. What is a great experience to one person, with a certain expectation of tech based upon prior experience and knowledge of their function, can be complete CR@P to another person.

Now, some firms do bury such subjective evaluations of UX and implementation time in their 2*2s, where they squish an average of 6 subjective ratings into a single dimension, but that is why those maps are complete garbage! (See: Dear Analyst Firms: Please stop mangling maps, inventing award categories, and evaluating what you don’t understand!) So no self-respecting analyst should do it. As an example, one analyst might like solutions with an absolutely minimalist design, with everything hidden and everything automated against pre-built rules (that may, or may not, be right for your organization, and may result in an automated sourcing solution placing a million-dollar order, with payment up front for a significant early payment discount, with a supplier that subsequently files for bankruptcy and doesn’t deliver your goods). A second might like full user control through a multi-screen, multi-step interface for what could be a one-screen, one-step function. A third might like to see as much capability and information as possible squished into every screen, and long for the days of text-based green-screens where you weren’t distracted by graphics, animations, and design. Each of these analysts would score the same UX completely differently! On a 10-point scale, for a given UX design, three analysts in the same firm could give scores of 1, 5, and 10, averaged to 5 … and how is that useful? It’s not!

(And while analysts can define scales of maturity for the technology the UX is based on, just because a vendor is using the latest technology, that doesn’t mean their UX is any good. New technology can be just as horrendously misused as old technology.)

The same goes for implementation time. An analyst that mainly focuses on simple sourcing/procurement, where you should just be able to flick a SaaS switch and go, would think that an implementation time of more than a week is abysmal, but an analyst that primarily analyzes CLM and SMDM would call BS on anything less than six weeks and expect three months. This is because, for CLM, you have to find all the contracts, feed them in, run them through AI for automated meta-data extraction, do manual review, and set up new processes, while for SMDM you have to integrate half a dozen systems, do data integrations, cleansing, and enrichment through cross-referencing with third party sources, create golden records, do manual spot-check reviews, and push the data back. Implementation time is dependent on the solution, the architecture, what it does, what data it needs, what systems it needs to be integrated with, what support there is for data extraction and loading in those legacy systems, etc. Implementation time needs to be judged against the minimum amount of time to do it effectively, which is also customer dependent. Expecting an analyst to understand all the potential client situations is ridiculous. Expecting them to craft an “average customer situation”, base an implementation time on it, and score a set of random vendors accordingly is even more ridiculous.

The factors ARE absolutely vital, but they need to be judged by the buying organization as part of the review cycle, AFTER they’ve verified that the vendor can offer a solution that will meet

  • their current, most pressing, needs as an organization,
  • their evolving needs as they will need to get other problems under control, and
  • do so with a solution that is technically sound and complete with respect to the two requirements above while also being capable of scaling up and evolving over time (as well as capable of being plugged into an appropriate platform-based ecosystem through a fully Open API).

A good analyst can guide you on ways to judge this and what you might want to consider, but that’s it … you have to be the final judge, not them.

That’s why, when the doctor co-designed Solution Map as a Consulting Analyst for Spend Matters, the Solution Map focussed on scoring the technological foundations, which could be judged on an objective scale based on the evolution of the underlying technology over the past two-plus decades and/or the evolution of functionality to address a specific problem over the same period. It’s up to you whether you like the UX or not, think the implementation time frames are good or not, believe the vendor is innovative or not, and are satisfied with the vendor size and maturity, not the analyst. Those are business viewpoints that are business dependent. Analysts should score capabilities and foundations, particularly where buyers are ill-equipped to do so. (This also means that analysts scoring technology MUST be trained technologists with a formal educational background in technology (computer science, engineering, etc.) and experience in software development or implementation. And yes, the doctor realizes this is not always the case, and that’s probably why most of the analyst maps are squished dimensions across half-a-dozen subjective factors, as the analysts are not capable of properly evaluating what they claim to be subject matter experts in. As a comparison, having a journalist or historian or accountant rate modern SaaS platforms is the equivalent of having a plumber certify your electrical wiring or a landscaper judge the strength of the framing in your new house. Sure, they’re trade pros, but do you really want to trust their opinion that the wiring is NOT going to start an electrical fire and burn your house down, or that the frame is strong enough for the 3,000 pounds of appliances you intend to put on the 2nd floor? the doctor would hope not!)

The cynic might say they don’t want to embarrass their sponsors, but the realist will realize the analysts can’t effectively judge vendors on this and the smart analysts won’t even try (but will instead guide you on the factors you should consider and look for when evaluating potential solutions on the shortlist they can help you build by giving you a list of vendors that provide the right type of solution and are technically sound, vs. three random vendors from a Google search that don’t even offer the same type of solution).

Have the Analyst Firms Finally Admitted They Don’t Know What They’re Doing?

the doctor recently went on a big rant about the analyst firms and the utter lack of usefulness in the maps they release, the focus they put on what they don’t understand, and the award categories they invent because, even though they have/had some great talent (and should be doing incredible work), what they’ve publicly released has been mostly valueless to the market they’ve been trying to serve (when it wouldn’t be too hard to provide a lot of value based on all the research and work they do). In the doctor‘s view, this is very sad because if they could demonstrate the value they provide, they would be more relevant across the market (and likely get a lot more business from smaller and/or more innovative providers who think that, because of the budgets the big players like Oracle, SAP, and Coupa have, the analysts are always going to recommend those companies anyway).

However, now he’s gone from sad to mad about something he has just heard from a couple of vendors regarding one of the biggest firms because, if true, it means not only do they not have a clue about what is and is not valuable in tech, but they are also unnecessarily sowing confusion and penalizing technology that may still be best in class.

So what have they done now? Well, apparently they are now basing 30% of the score on whether or not the vendor has “AI” in their platform, something which they’ve repeatedly proven they have ZERO ability to score whatsoever! So, either a vendor makes false, grandiose claims (and tries to use Applied Indirection to fool the Analyst Idiot that they have more than Artificial Idiocy in their Application Implementation), or they get scored low even if they have the best technology built on best practices, proven algorithms, and consistent results that give their customers a 5X to 10X ROI.

True AI adds value, but, in the doctor‘s experience,

  • up to 80% of AI claims are Applied Indirection (at best) or Artificial Idiocy (at worst); in fact, some of the “AI” in spend analysis is still the “AI” they used in the early 2000s, and the doctor would rather not spell out that sad, but still true for some vendors, racial slur
  • up to 80% of the rest, or up to 16% of tech that claims AI, is level one Assistive Intelligence; and this is typically just classic RPA (Robotic Process Automation) using human-defined parameter-based rules, and the “AI” is the automatic parameter adjustment based on user overrides … not very intelligent, eh?
  • up to 80% of the rest, or up to 4% of the tech that claims AI, is level 2 Augmented Intelligence, which is the first level of AI where the tech can learn from human feedback and provide better insights and recommendations over time on one or more specific tasks, and the first level of AI that you should even consider as AI
  • up to 80% of the rest, up to 1% of the tech that claims AI, and the highest level modern technology has generally achieved, is level 3, Apperceptive Intelligence, or Cognitive Intelligence, where the systems can not only learn from specific human feedback to recommendations but from general knowledge and intelligence available to it from integrated data sources to mimic the performance of the best human experts over time, even evolving processes, behaviours, and actions within well-defined bounds
  • and then the rest, 0.1% or less, is nearing level 4, Autonomous Intelligence, where the system can learn, evolve, adapt, and maintain itself over time without human intervention … and hopefully execute meaningful, appropriate decisions grounded in best process and fact that consider all of the relevant information available (and not go off the rails and advise you to commit suicide because you feel bad, “Heil Hitler”, or sacrifice a trolley full of people and a cross-walk full of pedestrians because there might be a cat in the road — all things AI has already done)
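The tiering above is essentially a geometric cascade: each level claims up to 80% of whatever remains from the level above. A minimal sketch that formalizes that “up to 80% of the rest” rule (the tier names come from the list above; note the exact cascade yields 3.2% and 0.64% for levels 2 and 3, which the rounded “up to 4%” and “up to 1%” figures above bound from above):

```python
def ai_claim_cascade(tiers, take=0.8):
    """Return each tier's upper-bound share (in %) of all 'AI' claims,
    where every tier captures up to `take` of the pool remaining above it."""
    remaining = 100.0
    shares = {}
    for tier in tiers:
        shares[tier] = remaining * take   # this tier's slice of the remainder
        remaining *= (1 - take)           # what's left for lower tiers
    shares["level 4: Autonomous Intelligence (and beyond)"] = remaining
    return shares

shares = ai_claim_cascade([
    "Applied Indirection / Artificial Idiocy",
    "level 1: Assistive Intelligence",
    "level 2: Augmented Intelligence",
    "level 3: Apperceptive Intelligence",
])
for tier, pct in shares.items():
    print(f"{tier}: up to {pct:.2f}%")
```

Running it prints 80%, 16%, 3.20%, 0.64%, and 0.16% respectively, which is where the “0.1% or less” for anything nearing level 4 comes from.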

And even where a platform has semblances of real AI, chances are that the AI (which the vendor is now forced to include or be arbitrarily relegated to the dustbin because, apparently, it’s not solutions but buzz-acronyms that matter now) is producing worse results than the best traditional algorithm or methodology on expert-curated data sets and dimensions. For example, the vast majority of the market believes AI improves forecasting. It doesn’t. The best AI is still inferior to the best techniques developed in the 70s when applied to the right data dimensions. All the “AI”, which is just fancy, souped-up classical machine learning (using algorithms developed in the 80s and 90s for which we didn’t have enough computing power until recently), does is run all of the data through a model that integrates classification with prediction to filter out the most relevant dimensions and select the best curve-fitting technique, as all these algorithms, at the core, are based on 50+ year old statistics! This means that, at the end of the day, their best-case performance is something a human genius figured out 50+ years ago.
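To make the point concrete, here is a minimal sketch of one of those tried-and-true classical techniques: simple exponential smoothing, from the Brown/Holt family of methods that predates modern “AI” forecasting by decades. The demand figures and smoothing factor are hypothetical, for illustration only:

```python
def simple_exponential_smoothing(series, alpha):
    """One-step-ahead forecast via simple exponential smoothing.

    level_t = alpha * x_t + (1 - alpha) * level_{t-1}
    The next-period forecast is simply the latest smoothed level.
    """
    if not 0 < alpha <= 1:
        raise ValueError("alpha must be in (0, 1]")
    level = series[0]                  # initialize with the first observation
    for x in series[1:]:
        level = alpha * x + (1 - alpha) * level
    return level

# Hypothetical monthly demand figures
demand = [10, 12, 11, 13]
print(simple_exponential_smoothing(demand, alpha=0.5))  # → 12.0
```

A handful of lines, fully explainable, verifiable at any time, and tunable by an expert who understands exactly what alpha does — which is the point of the paragraph above.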

But to achieve that best case, the developers have to implement the right AI algorithms, tune them properly, allow them to run long enough to correctly fit (but not over-fit) the training data sets, and monitor those algorithms over time … and to do that they need to be an expert in those algorithms, which they probably aren’t. So, in order to “check a box”, and sell you a product, they are ultimately integrating algorithms that will give you an inferior result (while requiring considerably more computing power that runs up your cloud utilization bill), versus sticking to tried-and-true algorithms and processes that their experts tweaked over years and that their experts can explain and verify at any time.

And this is an almost reasonable example of what a technology vendor might do (as the best predictive algorithms are not untested “AI” but based on classical, tried-and-true, statistical or optimization functions). Most of what the doctor has seen is MUCH worse than this. And the fact that some big analyst firms are now forcing vendors with good tech to integrate underdeveloped, unproven, and often untested AI just to get a rating, make a map, or be recommended is downright stupid.

SHAME ON ANY ANALYST FIRM THAT DOES THIS! Buzzwords are not products, and unproven tech is not value. Analysts should be recommending the best solutions, regardless of the tech they are based on. the doctor is simply appalled!

A 60 Minute Call is NOT Due Diligence!

It used to be that the doctor would only get a request once every month or so for a “call with a client looking for some insight into the space from an expert”, but now he’s getting these every week, often multiple times a week, from yet another firm that “specializes in connecting clients looking for insight with experts” or some other such meaningless gobbledygook from a knock-off Dilbert Mission Statement Generator.

Maybe it wouldn’t be so bad in the grand scheme of events except,

  1. You can’t learn anything meaningful in 60 minutes. (We’re talking enterprise software solutions, not the results of an investigative whodunnit.)
  2. These requests are now coming from kids so young the doctor is wondering if they are still in high school (despite the fancy LinkedIn titles their firms give them) … and not to be ageist, but there’s no way these kids have any deep understanding of any industry domain or of what makes an expert (and how to judge whether that expert has the right education and experience).
  3. It seems companies are using a handful of these calls as “due diligence” on a space or a company.

And a 60-minute call is NOT due diligence. the doctor does product and technical due diligence, and even a high-level due diligence on a company (which is just looking for potential red flags and yellow flags that will have to be watched) takes weeks of manpower (as the team he worked with did market, strategy, product, and technology diligence, and even though the doctor can do an entire product and technical diligence in S2P on his own — no team of 6 to 10 needed — it’s at least a week of effort on a single module to get enough certainty that there are no red flags and the important yellow flags have been identified). This is because a due diligence involves process reviews, document reviews, code reviews, focussed interviews, etc., and comparisons to standards, best practices, and market norms.

Given this, just what are you going to learn from a few calls with external “experts” who don’t have any access to documents, processes and practices, or the internal stakeholders who make the decisions? Opinions. Maybe. But most likely, absolutely nothing!

In other words, if you need deep insight, find an analyst, diligence, or strategy firm that knows the space and, if you are interested in a particular company, find an analyst, diligence, or strategy firm that knows that company AND that company’s peers. And go with them. Don’t pay for the privilege of paying to talk to someone who won’t end up being that useful to you. Especially if you need to be able to back up a(n investment [related]) decision that involves the company and prove you did your homework and that the stars were aligned as well as they could have been when the decision was made (since no one can predict the future, just play the odds).

In other words, these firms, which the doctor will have nothing to do with, need to go away. A consultant who has the expertise to find the right analyst / diligence / consultancy for you and introduces you to the right individual in that firm deserves a finder’s fee, but doesn’t deserve a fee for the privilege of hooking you up with a random yahoo who can’t help you at all. (And even if that individual is an expert in their area, if it’s not the area you need, and they know next to nothing of relevance in the area and company relevant to you, they’re a yahoo from your perspective.)

And as you probably figured out by now, if you reach out to the doctor and he’s not the right expert for you, he’ll pass you on to someone he believes can help (which could be one of the 40 experts he explicitly mentioned, and linked to, in yesterday’s post). It’s not about “sign the contract at all costs and hope to figure it out later”, it’s about helping your prospect because, even if they don’t become a client today, when their need is appropriate, they will become a client tomorrow.

Seeking an Analyst? Who does the doctor recommend?

In our last two posts, we asked how relevant is the analyst firm and then answered that it’s not the analyst firm that’s relevant, but the senior analysts in its rank that are relevant. (And if the firm doesn’t have any in your Source-to-Pay/Supply Chain (related) area, it doesn’t matter how many employees it has, how many countries it is in, how many Billions or Trillions its customers spend, etc. because it won’t be able to help you get your message right, hit your market, or enhance your strategy.)

So, in their honour, here are forty analysts (even if they didn’t work for analyst firms and mainly did behind-the-scenes analyst work as a consultant/advisor) that the doctor has trusted and often referred inquiries to over the years, past and present, and (some of) their (former) areas of specialty:

Present (note that many [14/25] are independent and NOT with a big firm):

Andrew Bartels – Back-Office Tech-Driven Business Transformation – Forrester
Andrew Karpie – CWM/VMS/HCS – Independent
Bertrand Maltaverne – Source-to-Contract – Spend Matters
Bob Derocher – SC/Procurement Process Transformation & Digitization – Independent
Bob Ferrari – Supply Chain, Manufacturing, Digitization Strategy – Independent
Brian Sommer – ERP, HR, & Finance (Transformation) – Independent
Chris Sawchuk – Metric-Based Procurement Modernization Advisory – Hackett Group
Doug Smock – Supply Chain Evolution – Independent
Garry Mansell – Entrepreneurship and Business Growth in S2P and SC / Global Supply Chain Design and Management – Independent
Jason Busch – Broader Source-to-Pay Market Strategy – Spend Matters
Katie Evans – AI Ethics – Independent
Kelli Coviello – Business Growth, Diversity, Work/Life Balance – Independent
Lora Cecere – Supply Chain – Supply Chain Insights
Mickey North Rizza – Sourcing, Procurement, Commerce, SRM, Risk – IDC
Navroop Sahdev – Digital Economy – Digital Economist
Noha Tohamy – Logistics, Supply Chain Digitization, Analytics – Gartner
Patrick Connaughton – (Ecosystem) Enterprise Applications – Gartner
Pete Loughlin – Purchase to Pay / Procurement / Coupa & Ariba – Independent
Peter Smith – Best Practices, Sustainability, Procurement with Purpose – Independent
Phil Fersht – Emerging Technologies, Automation, Outsourcing, Global Business, and Horses – HFS Research
Pierre Mitchell – Procurement and Services Benchmarking & Transformation – Spend Matters
Robert Rudzki – Strategic Advisory and Procurement Transformation – Independent
Sigi Osagie – Business Growth through Personal and Team-Based Growth – Independent
Vinnie Mirchandani – Enterprise Applications and Outsourcing – Independent
Voitek Szewczyk – Strategic Sourcing, Procurement Transformation, Eastern Europe – Independent
Xavier Olivera – Procure-to-Pay/LATAM Market – Spend Matters

Past ([semi-]retired, out of the analyst world, and/or working for a vendor; 4 independent):

Charles Dominick – Procurement and Procurement Training – Independent
Debbie Wilson – ERP & Finance – Independent
Dick Locke – Operations, Strategic Sourcing, and International Trade – Independent
Doug Hudgeon – Back Office Integration & Modernization / Australasia Market – Managed Functions
Duncan Jones – Procurement – Independent
Geraint John – Sourcing, Procurement, SRM, and Risk – Interos
Jon W. Hansen – Procurement – Independent
Magnus Bergfors – Strategic Sourcing, Strategic Procurement, SRM – Keelvar
Mark Perera – Procurement and SRM – Vizibl
Nick Heinzmann – Procurement, CLM, Sustainability, Fraud Risk, and Startups – Zip
Sudy Bharadwaj – Direct Supply Chain, Source-to-Supply, Entrepreneurship – SAP
Tim Minahan – Procurement & Supply Chain, Business Performance Benchmarks, Best Practices – Citrix
Tony Poshek – Strategic Sourcing – Simfoni
Vance Checketts – Supply Chain, Operations – Built for Teams
Viktoria Sadlovska Anshu – Supply Chain, Trade Finance, Analytics – RepTrak Company
Vishal Patel – CLM and P2P – Ivalua

It’s not the Analyst Firm. It’s the Analyst!

In our post on Friday that asked How relevant is the Analyst Firm?, we noted that, these days, I’m hearing far too often from new companies or smaller companies that weren’t acquired in the M&A mania that their [marketing] strategy is to “get on Analyst Firm Map XYZ” or “get in front of the big analyst firms as fast as possible and, hopefully get written up“.

And this scares me because,

1) as pointed out in our last post, they think “the firm” is the answer when, in fact, it’s not the firm but the analyst, because “the firm” will only get it right IF the analyst gets it right (and, if you get a junior analyst, you may find that they over-promote a competitor with great marketing and misleading AI claims but limited capability over a unique solution you offer, because the analyst isn’t able to grasp the subtle power at the solution’s core)

2) for an analyst to get it right, that analyst needs, at least, a dozen skill sets that, combined, require
a) an education that sometimes goes beyond an average PhD and includes
i) the equivalent of a bachelor’s in mathematics
ii) the equivalent of a bachelor’s in computer science or engineering
iii) the equivalent of a Master’s in Procurement or Supply Chain or Advanced Operations Management (a Bachelor’s in business ain’t enough)
b) at least a decade of experience in the space to understand the breadth of technology, industries, and current capabilities
c) exceptional analytical skills (and questioning skills)
d) great writing skills (in a day where it seems no one can write anything without AI, but AI is only as good as the content sources fed into it, and those raw sources have to come from … that’s right … a human!!!)

3) senior analysts with the right education and experience have always been few and far between (with even the biggest firms never having more senior analysts in our space than you can count on the fingers of one hand at any one time), but with the departures / retirements of the majority of the best analysts in our space from Gartner, Forrester, and Hackett*2 over the last few years, and almost half of the senior analysts from Spend Matters …

that doesn’t leave many senior analysts, or viable analyst firms, and, at least in the doctor’s view, all of the firms except Spend Matters have been gutted*3 in our space at least once over the last few years. Given the breadth and depth of requirements to be a good analyst, where’s the next generation of senior analysts going to come from?

[Unless another visionary in our space with a strong tech background, a couple of decades of domain experience, and great analytical and writing skills is willing to jump the fence to the analyst side, we’re not going to be getting many new senior analysts that we can rely on. They’re not at other analyst firms (outside our space), they’re not at consultancies, and the reality is that there’s only a handful of visionaries left that didn’t make their millions and retire already or who are still thrilled by the space and want to stay in it as long as possible.]

It wouldn’t be unrealistic to say that Bertrand Maltaverne could be the last great analyst in our space.

In other words, if you want to be sure you’re getting the right coverage, review, or feedback, you need to STOP assuming the analyst firm is the answer and start looking at the analyst inside, or outside, that firm (and further remember that many of the senior analysts who are still in the game are on their own for various reasons), find the right analyst for you, and make sure you get in front of that analyst (or don’t bother with the firm at all). And you need to further realize that it’s not possible for every analyst to be an expert in every technology in the Source-to-Pay space. You need the right review and guidance from the right senior analyst, or the end result will be worse than no review and guidance at all.


*2 but, in fairness, we will point out that Gartner and Forrester have been aggressively working on replacing them, although this has often required poaching from other peer firms, so the number of senior analysts hasn’t increased by much 🙁

*3 one has to remember that, in addition to vendor poaching, there was M&A in the analyst space too, and this wasn’t always for the better! Especially when the acquirer worked to a beat or a model that was different from the acquired firm, which itself was only successful because it was different and had the right people who worked well under their own unique beat or model.