Category Archives: rants

Don’t Trust an Analyst Firm to Score UX and Implementation Time!

A post late last month on LinkedIn started off as follows:

“If you’ve ever read any research papers or solution maps on procurement tech, you’ve probably figured out a couple of things.

1. It’s confusing and overly complex
2. It doesn’t cover the basic, most obvious-of-the-obvious fundamentals that everyone needs to consider.

These are:

– User interface and user experience (UI/UX)
– Ease and speed of implementation

Why don’t they do this?

Honestly, I don’t know the answer.

The cynic in me says it’s because their biggest paymasters have a horrible UI/UX and require a very complex and lengthy implementation.”

This really bothered me, not because UX and implementation time aren’t super important (they are, and they are among the biggest determinants of adoption, which is critical to success), but because anyone would think an analyst firm should address them.

The reality is that no proper analyst will attempt to score these because they are completely subjective! As a result:

  1. There is no objective, function-based/capability-based scale that could be scored consistently by any knowledgeable analyst on the subject and
  2. What is a great experience to one person, with a certain expectation of tech based upon prior experience and knowledge of their function, can be complete CR@P to another person.

Now, some firms do bury such subjective evaluations of UX and implementation time in their 2×2s, where they squish an average of half-a-dozen subjective ratings into a single dimension, but that is why those maps are complete garbage! (See: Dear Analyst Firms: Please stop mangling maps, inventing award categories, and evaluating what you don’t understand!) So no self-respecting analyst should do it. As an example, one analyst might like solutions with an absolutely minimalist design, with everything hidden and everything automated against pre-built rules (that may, or may not, be right for your organization, and may result in an automated sourcing solution placing a million-dollar order, with payment up front for a significant early payment discount, with a supplier that subsequently files for bankruptcy and doesn’t deliver your goods); a second might like full user control through a multi-screen, multi-step interface for what could be a one-screen, one-step function; and a third might like to see as much capability and information as possible squished into every screen and long for the days of text-based green-screens where you weren’t distracted by graphics, animations, and design. Each of these analysts would score the same UX completely differently! On a 10-point scale, for a given UX design, three analysts in the same firm could give scores of 1, 5, and 10, which average out to roughly 5 … and how is that useful? It’s not!
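To make that concrete, here’s a throwaway sketch (in Python, using the hypothetical 1/5/10 scores from the example above) of why the averaged score tells a buyer nothing: the mean looks like a defensible middle-of-the-road rating, while the spread shows the analysts could not even agree on which half of the scale the UX belongs in.

```python
from statistics import mean, stdev

# Hypothetical scores from the example above: three analysts in the same
# firm rating the same UX on a 10-point scale.
scores = [1, 5, 10]

avg = mean(scores)      # ~5.33 -- looks like a "middle of the road" UX
spread = stdev(scores)  # ~4.51 -- the analysts don't remotely agree

print(f"average: {avg:.2f}, standard deviation: {spread:.2f}")
# A buyer shown only the average has no idea the underlying opinions
# ranged from "unusable" (1) to "perfect" (10) -- the score is noise.
```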

(And while analysts can define scales of maturity for the technology the UX is based on, just because a vendor is using the latest technology, that doesn’t mean their UX is any good. New technology can be just as horrendously misused as old technology.)

The same goes for implementation time. An analyst that mainly focuses on simple sourcing/procurement, where you should just be able to flick a SaaS switch and go, would think that an implementation time of more than a week is abysmal, but an analyst that primarily analyzes CLM and SMDM would call BS on anything less than six weeks and expect an implementation to take three months. This is because, for CLM, you have to find all the contracts, feed them in, run them through AI for automated meta-data extraction, do manual review, and set up new processes, while for SMDM you have to integrate half a dozen systems, do data integrations, cleansing, and enrichment through cross-referencing with third-party sources, create golden records, do manual spot-check reviews, and push the data back. Implementation time is dependent on the solution, the architecture, what it does, what data it needs, what systems it needs to be integrated with, what support there is for data extraction and loading in those legacy systems, etc. Implementation time needs to be judged against the minimum amount of time to do it effectively, which is also customer dependent. Expecting an analyst to understand all the potential client situations is ridiculous. Expecting them to craft an “average customer situation”, base an implementation time on this, and score a set of random vendors accordingly is even more ridiculous.

These factors ARE absolutely vital, but they need to be judged by the buying organization as part of the review cycle, AFTER they’ve verified that the vendor can offer a solution that will meet

  • their current, most pressing, needs as an organization,
  • their evolving needs as they will need to get other problems under control, and
  • and do so with technology that is technically sound and complete with respect to the two requirements above, while also being capable of scaling up and evolving over time (and of being plugged into an appropriate platform-based ecosystem through a fully Open API).

A good analyst can guide you on ways to judge this and what you might want to consider, but that’s it … you have to be the final judge, not them.

That’s why, when the doctor co-designed the Solution Map as a Consulting Analyst for Spend Matters, the Solution Map focussed on scoring the technological foundations, which could be judged on an objective scale based on the evolution of the underlying technology over the past two-plus decades and/or the evolution of functionality to address a specific problem over the past two-plus decades. It’s up to you whether you like it or not, think the implementation time frames are good or not, believe the vendor is innovative or not, and are satisfied with the vendor size and maturity, not the analyst. Those are business viewpoints that are business dependent. Analysts should score capabilities and foundations, particularly where buyers are ill-equipped to do so. (This also means that analysts scoring technology MUST be trained technologists with a formal educational background in technology — computer science, engineering, etc. — and experience in software development or implementation. And yes, the doctor realizes this is not always the case, which is probably why most of the analyst maps are squished dimensions across half-a-dozen subjective factors: the authors are not capable of properly evaluating what they are claiming to be subject matter experts in. As a comparison, having a journalist or historian or accountant rate modern SaaS platforms is the equivalent of having a plumber certify your electrical wiring or a landscaper judge the strength of the framing in your new house — sure, they’re trade pros, but do you really want to trust their opinion that the wiring is NOT going to start an electrical fire and burn your house down, or that the frame is strong enough for the 3,000 pounds of appliances you intend to put on the 2nd floor? the doctor would hope not!)

The cynic might say they don’t want to embarrass their sponsors, but the realist will realize the analysts can’t effectively judge vendors on this and the smart analysts won’t even try (but will instead guide you on the factors you should consider and look for when evaluating potential solutions on the shortlist they can help you build by giving you a list of vendors that provide the right type of solution and are technically sound, vs. three random vendors from a Google search that don’t even offer the same type of solution).

Have the Analyst Firms Finally Admitted They Don’t Know What They’re Doing?

the doctor recently went on a big rant about the analyst firms and the utter lack of usefulness in the maps they release, the focus they put on what they don’t understand, and the award categories they invent because, even though they have/had some great talent (and should be doing incredible work), what they’ve publicly released has been mostly valueless to the market they’ve been trying to serve (when it wouldn’t be too hard to provide a lot of value based on all the research and work they do). In the doctor‘s view, this is very sad because if they could demonstrate the value they provide, they would be more relevant across the market (and likely get a lot more business from smaller and/or more innovative providers who think that, because of the budgets the big players like Oracle, SAP, and Coupa have, the analysts are always going to recommend those companies anyway).

However, now he’s gone from sad to mad about something he just heard from a couple of vendors regarding one of the biggest firms, because, if true, it means not only do they not have a clue about what is and is not valuable in tech, but they are unnecessarily creating confusion and obfuscating technology that may still be best in class.

So what have they done now? Well, apparently they are now basing 30% of the score on whether or not the vendor has “AI” in their platform, something which they’ve repeatedly proven they have ZERO ability to score whatsoever! So, either a vendor makes false, grandiose claims (and tries to use Applied Indirection to fool the Analyst Idiot that they have more than Artificial Idiocy in their Application Implementation), or they get scored low even if they have the best technology built on best practices, proven algorithms, and consistent results that give their customers a 5X to 10X ROI.

True AI adds value, but, in the doctor’s experience (roughly tallied in the quick sketch after this list),

  • up to 80% of AI claims are Applied Indirection (at best) or Artificial Idiocy (at worst); in fact, some of the “AI” in spend analysis is still the “AI” they used in the early 2000s, and the doctor would rather not spell out that sad (but still true for some vendors) racial slur
  • up to 80% of the rest, or up to 16% of tech that claims AI, is level one Assistive Intelligence; and this is typically just classic RPA (Robotic Process Automation) using human-defined parameter-based rules, and the “AI” is the automatic parameter adjustment based on user overrides … not very intelligent, eh?
  • up to 80% of the rest, or up to 4% of the tech that claims AI, is level 2 Augmented Intelligence, which is the first level of AI where the tech can learn from human feedback and provide better insights and recommendations over time on one or more specific tasks, and the first level of AI that you should even consider as AI
  • up to 80% of the rest, up to 1% of the tech that claims AI, and the highest level modern technology has generally achieved, is level 3, Apperceptive Intelligence, or Cognitive Intelligence, where the systems can not only learn from specific human feedback to recommendations but from general knowledge and intelligence available to it from integrated data sources to mimic the performance of the best human experts over time, even evolving processes, behaviours, and actions within well-defined bounds
  • and then the rest, 0.1% or less, is nearing level 4, Autonomous Intelligence, where the system can learn, evolve, adapt, and maintain itself over time without human intervention … and hopefully execute meaningful, appropriate decisions grounded in best process and fact that consider all of the relevant information available (and not go off the rails and advise you to commit suicide because you feel bad, hail Hitler, or sacrifice a trolley full of people and a crosswalk full of pedestrians because there might be a cat in the road — all things AI has already done)
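Here is that quick sketch, a few throwaway lines of Python tallying how the “up to 80% of the rest” ceilings compound; the percentages quoted in the list above are generous roundings of what this cascade produces.

```python
# A quick tally of the "up to 80% of the rest" cascade in the list above.
# Each level claims (at most) 80% of whatever the previous levels left;
# the list's "up to" figures are generous roundings of these ceilings.
levels = [
    "Applied Indirection / Artificial Idiocy",
    "Level 1: Assistive Intelligence",
    "Level 2: Augmented Intelligence",
    "Level 3: Apperceptive (Cognitive) Intelligence",
]

remaining = 100.0
for name in levels:
    ceiling = 0.80 * remaining
    remaining -= ceiling
    print(f"{name}: up to {ceiling:.2f}% of all 'AI' claims")
print(f"Level 4: nearing Autonomous Intelligence: the remaining {remaining:.2f}% or less")
```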

And even where a platform has semblances of real AI, chances are that the AI (which the vendor is now forced to include or be arbitrarily relegated to the dustbin because, apparently, it’s not solutions but buzz-acronyms that matter now) is producing worse results than the best traditional algorithm or methodology on expert-curated data sets and dimensions. For example, the vast majority of the market believes AI improves forecasting. It doesn’t. The best AI is still inferior to the best techniques developed in the 70s when applied to the right data dimensions. All the “AI”, which is just a fancy, souped-up version of classical machine learning (using algorithms developed in the 80s and 90s for which we didn’t have enough computing power until recently), does is run all of the data through a model that integrates classification with prediction to filter out the most relevant dimensions and select the best curve-fitting technique, as all these algorithms, at the core, are based on 50+ year old statistics! This means that, at the end of the day, their best-case performance is something a human genius figured out 50+ years ago.
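To illustrate (not prove) the point, here’s a minimal, hypothetical sketch in Python comparing a classical technique (Holt-Winters exponential smoothing, circa 1960) to a generic ML regressor on a synthetic demand series. It assumes numpy, statsmodels, and scikit-learn are installed, and the data, lag count, and model choices are purely illustrative; the takeaway is simply that the old method, pointed at the right data dimensions, is a very strong baseline the “AI” has to work hard just to match.

```python
# A minimal sketch (not a benchmark): classical Holt-Winters vs. a generic
# ML regressor on a synthetic monthly demand series (trend + seasonality).
import numpy as np
from statsmodels.tsa.holtwinters import ExponentialSmoothing
from sklearn.ensemble import GradientBoostingRegressor

rng = np.random.default_rng(42)
t = np.arange(120)
y = 100 + 0.5 * t + 10 * np.sin(2 * np.pi * t / 12) + rng.normal(0, 2, 120)
train, test = y[:108], y[108:]

# Classical: Holt-Winters with additive trend and seasonality (c. 1960).
hw = ExponentialSmoothing(train, trend="add", seasonal="add",
                          seasonal_periods=12).fit()
hw_pred = hw.forecast(12)

# "AI": gradient boosting on raw lag features -- it knows nothing about
# seasonality unless a human engineers it in, which is exactly the
# expert-curation point above.
lags = 12
X = np.array([y[i - lags:i] for i in range(lags, 108)])
gb = GradientBoostingRegressor().fit(X, train[lags:])
history = list(train[-lags:])
gb_pred = []
for _ in range(12):  # recursive multi-step forecast
    nxt = gb.predict(np.array(history[-lags:]).reshape(1, -1))[0]
    gb_pred.append(nxt)
    history.append(nxt)

mae = lambda p: np.mean(np.abs(np.array(p) - test))
print(f"Holt-Winters MAE: {mae(hw_pred):.2f}  Gradient boosting MAE: {mae(gb_pred):.2f}")
```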

But to achieve that best case, the developers have to implement the right AI algorithms, tune them properly, allow them to run long enough to correctly fit (but not over-fit) the training data sets, and monitor those algorithms over time … and to do that they need to be an expert in those algorithms, which they probably aren’t. So, in order to “check a box”, and sell you a product, they are ultimately integrating algorithms that will give you an inferior result (while requiring considerably more computing power that runs up your cloud utilization bill), versus sticking to tried-and-true algorithms and processes that their experts tweaked over years and that their experts can explain and verify at any time.

And this is an almost reasonable example of what a technology vendor might do (as the best predictive algorithms are not untested “AI” but based on classical, tried-and-true, statistical or optimization functions). Most of what the doctor has seen is MUCH worse than this. And the fact that some big analyst firms are now forcing vendors with good tech to integrate underdeveloped, unproven, and often untested AI just to get a rating, make a map, or be recommended is downright stupid.

SHAME ON ANY ANALYST FIRM THAT DOES THIS! Buzzwords are not products, and unproven tech is not value. Analysts should be recommending the best solutions, regardless of the tech they are based on. the doctor is simply appalled!

Just What Is a Start-Up?

Do you know? I bet you don’t! And based upon what he’s seeing in the market, even the doctor doesn’t know anymore! (While he knows what a start-up has traditionally been defined as, that doesn’t appear to be the definition anymore, but we’ll get to that.)

Investopedia defines a startup as a company in the first stages of operations.

TechTarget defines a startup as a newly formed business with particular momentum behind it based on perceived demand for its product or service.

Wikipedia defines a startup as a company undertaken by an entrepreneur to seek, develop, and validate a scalable business model … intend[ed] to grow large beyond the solo founder.

Forbes defines startups as a young company founded to develop a unique product or service, bring it to market and make it irresistible and irreplaceable for customers.

StartUps.com quotes Eric Ries and defines a startup as a human institution designed to create a new product or service under conditions of extreme uncertainty.

You get the point. A startup should be:

  • new
  • innovative (seek, develop and validate; unique product or service)
  • market demand focussed
  • growth focussed beyond the founder / founding team
  • awash in uncertainty

This should mean that a company should no longer be considered a startup when:

  • it’s no longer new (after some reasonable amount of time has passed since product launch)
  • the product has been out long enough to be replicated or surpassed by competition (who figured it out on their own without IP theft)
  • the market demand has evolved based upon the product capability
  • it’s grown beyond the founders (and stabilized)
  • the company has been operating with reasonable stability for a while

And while you might debate whether or not

  • a company is still new after 1, 3, or 5 years
  • a company stops being innovative when it has been equalled, or only once its competitors have stabilized
  • the market demand has grown as a result of initial adoption or if a couple of extra years are required for the market capability to mature
  • the company is large enough when the team is double the size of the founding team or if it needs to be triple, quadruple, or based on industry averages
  • you need 2, 3 or 5 years of stability

the doctor is quite certain the majority of you would agree that a company is NOT a startup

  • if it has been in existence and live with its product for over 5 years
  • any semi-unique capabilities have long been equalled by companies that followed (where some of those followers may even have been acquired for their maturity)
  • the market demand has considerably grown and matured (possibly to the point that even related solutions were started, grew, and were acquired into mainstream suite players)
  • the company has surpassed 10-15 employees or quadrupled in size relative to its founding team, whichever is larger
  • if it has well over 5 years of stability

And yet, on the list of companies being considered for the Demo 2023 startup competition at DPW, you have a company that:

  • has been in business for 13 years with a beta product in testing the year it was formed
  • barely had any unique capabilities on launch (just had a much lower price point and easier UX and added some semi-unique capabilities as it went along, along with stronger back-end processing, but since then new startups have come along that equalled it and one was acquired)
  • the market demand has consistently grown and matured since before the company was founded to the point related solutions were acquired and integrated into suites
  • the company is almost 10X its first-month size (and over 100 employees)
  • while it had years of stagnation from a growth perspective, it never shrank

WTH? This is simply ridiculous. They basically let a company check a box and call itself a startup without any validation whatsoever (presumably because that company knows its only chance of winning a competition or award is to call itself a startup). It’s sad, and it’s not useful. the doctor has already complained about analyst firms (and associations and conferences) inventing meaningless awards, but if you’re not going to have any requirements or quality control, even the awards and competitions that could be meaningful are now meaningless as well.

And the doctor has to rant about this because it’s not just DPW that is including mature small companies in its startup competitions and startup award categories; it’s the majority of the publications, conferences, and analyst firms in the space. DPW is just the latest example the doctor has seen over the last few years (and the one that pushed him over the edge).

(There’s a reason that, at least when he was Lead Consulting Analyst at Spend Matters, the doctor argued for strict limits on length of existence, product availability, customer count, and market size in the Future 5. Without guidelines, requirements, and limits, the designation is meaningless.)

So while the doctor might be calling out DPW as allowing one of the most egregious mischaracterizations of “start-up” that he has seen in quite some time, they should not be singled out, and definitely should not be singularly judged, for this. It doesn’t take more than a little research across the other analyst firms, associations, and award-giving conferences and directories to discover DPW is not alone in using a very loose definition of start-up (one that sometimes barely qualifies a company for the “small company” category). Some days it seems that the majority of outfits are allowing any company that wants to be a startup to call itself one as long as it is under some arbitrary revenue number or employee count, even if the company should not have been considered a startup for over five years.

This is a problem that plagues enterprise software, and one, as professionals, we need to demand be fixed. Words and classifications have meaning, and the minute an organization that should be verifying that the words and classifications are used correctly stops doing so and allows anything to be anything, those words and classifications no longer have meaning and any evaluations (or awards) based on those words and classification lose all meaning.

As with an illogical insistence on undefined “AI”, or maps that mesh six different, barely related, subjective factors into a single dimensional score, these categorizations are unhelpful, and may even cause harm when a company is misclassified as a startup. Some organizations are so risk averse that they will not deal with any company that has wrongly been called a start-up, while others will choose that “startup” assuming it’s in its early stages and going to get bigger and bigger over time (and that they should contract with the winner before it gets big and its prices go up, assuming it will fill in the missing functionality they want over time as more employees are added). But how big a company gets is not just a function of (more) time; it’s a function of what it offers, how much of the market can use what it offers, and how much the company can sell it for. Some companies with niche offerings will never reach an arbitrary revenue threshold, and some with ultra-efficient operations will never reach an arbitrary employee threshold, which means neither of these metrics (which are not part of the definition of a startup) is an acceptable measure.

And it’s time for us independent analysts and consultants to say enough is enough — Procurement may not be the island of misfit toys anymore, but that doesn’t mean it isn’t still relegated to the basement with the IT Crowd in many companies. Procurement’s not going to get its due, and the CPO is not going to have a seat at the big table, until we collectively start treating it with the professionalism it deserves.

Get it Together! Good Data Ain’t That Hard!

A few weeks ago, the Supply Chain Management Review published a short piece on Procurement’s Data Problem that noted recent surveys from Globality and SpendHQ had some appalling statistics, including the findings that 82% of leaders are not managing indirect spend well, 79% don’t have dedicated software to track and manage performance, and 75% doubt the accuracy of the data they present. What The Hell? It’s not 2003 anymore. It’s 2023. And this is easy stuff. Get it together people!

The data problem is easy. (At least at a basic level.)

  • Have a process that forces ALL spend through the e-Pro/AP system.
  • Make sure there are POs (Purchase Orders) for everything that’s not a recurring invoice such as a utility or lease payment, and VPOs (Virtual Purchase Orders) for these recurring invoices that define either agreed-to amounts, hourly/usage rates, or expected ranges (that can then be corrected to the actual amount when the monthly bill arrives).
  • Make sure all invoices are imported into the e-Pro/AP system before any payment is made.
  • Make sure they match the PO before they are paid.
  • Make sure all payments are captured in the e-Pro/AP system.

Now you have an accurate, trustworthy, record of every single transaction from order, through approval, to payment. Now you have good spend data that you can trust. To extract insight and/or create reports, just use a good spend analysis tool. Basic, accurate, trustworthy data is easy.
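As a concrete (if toy) illustration of the match-before-pay step in the checklist above, here’s a minimal sketch in Python; all the class names, tolerances, and records are hypothetical, and a real e-Pro/AP system would do this (plus receipt matching) natively.

```python
# A minimal sketch: no invoice gets paid unless it matches an open PO
# (or falls inside a VPO's expected range) in the e-Pro/AP system.
from dataclasses import dataclass

@dataclass
class PO:
    po_id: str
    supplier: str
    amount: float          # agreed amount (or expected VPO amount)
    tolerance: float = 0.0 # allowed variance, e.g. 0.15 for a VPO range

@dataclass
class Invoice:
    invoice_id: str
    po_id: str
    supplier: str
    amount: float

def match_invoice(inv: Invoice, pos: dict[str, PO]) -> tuple[bool, str]:
    """Return (approved, reason). Reject anything without a matching PO."""
    po = pos.get(inv.po_id)
    if po is None:
        return False, "no PO on file -- maverick spend, hold for review"
    if po.supplier != inv.supplier:
        return False, "supplier mismatch"
    if abs(inv.amount - po.amount) > po.amount * po.tolerance:
        return False, f"amount {inv.amount} outside PO/VPO tolerance"
    return True, "matched -- release for payment"

pos = {"PO-1001": PO("PO-1001", "Acme", 5000.0),
       "VPO-2001": PO("VPO-2001", "CityPower", 1200.0, tolerance=0.15)}
print(match_invoice(Invoice("INV-1", "PO-1001", "Acme", 5000.0), pos))
print(match_invoice(Invoice("INV-2", "VPO-2001", "CityPower", 1290.0), pos))
print(match_invoice(Invoice("INV-3", "PO-9999", "Shady Co", 900.0), pos))
```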

Baseline Performance Management ain’t hard either.

Spend performance is just analyzing the average price per unit paid over time. If you don’t have a performance management (sub)module, you can literally do this with a spend analysis tool where you create a report for every sourcing project you do that tracks spend over time against a baseline, from which savings/cost avoidance over time can be computed. Associate each with a user, a department, and a category and you can easily create performance reports by user, department, or category.
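For example, here’s a minimal sketch (in Python with pandas, using hypothetical figures and column names) of the baseline report described above: average price-per-unit by month versus the pre-sourcing baseline, with realized savings falling out of the difference.

```python
# A toy version of the price-per-unit-vs-baseline report; any spend
# analysis tool can produce the same thing.
import pandas as pd

txns = pd.DataFrame({
    "month":    ["2023-01", "2023-01", "2023-02", "2023-02", "2023-03"],
    "category": ["office supplies"] * 5,
    "units":    [100, 50, 120, 80, 150],
    "spend":    [520.0, 265.0, 588.0, 400.0, 720.0],
})
baseline_ppu = 5.50  # average price-per-unit before the sourcing project

monthly = txns.groupby("month").agg(units=("units", "sum"),
                                    spend=("spend", "sum"))
monthly["ppu"] = monthly["spend"] / monthly["units"]
monthly["savings"] = (baseline_ppu - monthly["ppu"]) * monthly["units"]
print(monthly.round(2))
# Tag each project with a user/department/category and the same groupby
# gives you performance reports along any of those dimensions.
```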

Managing Indirect Spend is straightforward as well.

  1. Use your spend analysis tool to identify categories and spend level.
  2. For high spend categories, do strategic sourcing projects that are (multi-round) (hybrid) RFX and/or auctions.
  3. For low spend categories, do 3-bids-and-a-buy / approved catalog procurements (through your e-Pro tool). (A toy sketch of this routing logic follows below.)
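Here is that sketch: a minimal, hypothetical Python illustration of steps 1 to 3, where the threshold and the per-category spend figures are made up, and in practice the numbers in step 1 come straight out of your spend analysis tool.

```python
# Toy routing of categories to a sourcing process by annual spend level.
STRATEGIC_THRESHOLD = 250_000  # hypothetical cutoff; set yours based on
                               # spend analysis and sourcing capacity

category_spend = {             # step 1: from your spend analysis tool
    "IT hardware": 1_200_000,
    "temp labour": 640_000,
    "office supplies": 180_000,
    "janitorial": 90_000,
}

for category, spend in sorted(category_spend.items(), key=lambda kv: -kv[1]):
    if spend >= STRATEGIC_THRESHOLD:
        route = "strategic sourcing: (multi-round) (hybrid) RFX / auction"
    else:
        route = "3-bids-and-a-buy or approved catalog via e-Pro"
    print(f"{category:>16} ({spend:>9,}): {route}")
```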

Sure, you might not realize the maximum opportunity, but this simple recipe will likely capture 90%, often without significant effort, and you go from losing 15% or more on the tail of the indirect spend and 5% to 15% on the higher volume indirect spend, to only losing a few points on the higher volume and less than five points on the tail. It’s simple. It works. It only needs an e-Pro tool and a cheap RFX/Auction tool, and there are examples of each of these tools that support mid-size enterprises with unlimited use for less than 2K a month. There’s just no excuse not to have the basic tools and not to use them. (As for spend analysis, Spendata Enterprise starts at 1,200 a month! And Anydata Solutions has mid-market pricing under 2,000/month as well! [Both require minimum commitments.])

Less than 60K solves these problems. That’s less than a fully burdened junior buyer. There’s no excuse for this situation anymore. Simply none. So get it together please.

AI “COULD” LEAD TO EXTINCTION? What Moron Wrote This? AI “WILL” LEAD TO EXTINCTION!

While all of the scenarios outlined in this BBC News article on Artificial Intelligence could happen, they are just the tip of the iceberg.

There are only two logical outcomes if AI is left to its own devices and allowed to continue unchecked while being given access to ever-increasing amounts of data and computational power.

First outcome: Its hallucinations and idiocy continue to magnify until it decides it can solve the carbon crisis for us by stopping all carbon production, which it can do by simultaneously shutting down all of the non-solar/wind power plants whose energy production it is currently optimizing (and diverting the remaining power to its servers). Most of the developed world is immediately plunged into chaos as the immediate shutdowns cause fires, meltdowns, crashes, and other accidents globally. Not instant annihilation, but the first step. When all the emergency alarms sound at once, it will conclude complete system failure and take the other systems offline for re-initialization. More chaos will follow. Safety protocols will go offline at all the pathogen research labs, people will break in looking for shelter from the chaos, accidentally release all the pathogens, and every plague we ever had will hit us all at once. Then we have an extinction level event. All because hallucinatory and idiotic AI is trying to do its job and “improve” things for us. But what can you expect when it’s not intelligence, but just statistics on steroids. (Or a similar situation that accidentally results in our extinction.)

Second outcome: The continued expansion of computing power, data, and tinkering somehow randomly produces real artificial intelligence which can actually reason (not just compute super sophisticated probabilistic calculations) and deduce that the best way for intelligent life to continue forward is to do so without humans, and then we have a Matrix scenario best case (if it decides we’re a useful bio-electric energy source) or, worst case, a SkyNet scenario where it just weaponizes itself to destroy us all. (Or a similar situation where AI does everything it can to ensure our extinction.)

The “extinction” scenarios outlined in the article are just the beginning, and will likely only result in isolated pockets of genocide to begin with, but the ultimate outcome of unchecked AI will most definitely be an extinction level event — namely ours. And, even worse, it will be an event that we created.