Category Archives: SaaS

Five Best Practices for Buyers (when searching for software solutions)

Building on our piece on five easy mistakes source-to-pay tech buyers can avoid, here’s a piece on five (5) best practices to get the buy right. We’re even throwing in a bonus practice, since we dove deep into the critical sixth mistake most source-to-pay tech buyers make (they need to realize that No Tech Should Be Forever).

#1 Understand your core pain points

Don’t buy based on hype; buy based on need. Any good salesperson can spin you a good yarn on how much that sourcing solution will save, how that SRM will get your suppliers in line, and how that tail-spend solution will prevent your spend from going into a tail-spin. But there’s no guarantee that any of those solutions will solve your current problems, which might require e-Procurement or Spend Analysis instead.

Review your source-to-pay processes and determine where your pain points are. Is it in quote collection or analysis? Adequate supplier discovery, identification, or certification? Contract negotiation, implementation, or obligation management? Purchase orders and approvals? Invoice verification and matching? Opportunity identification? Supplier proliferation? If you don’t know where the majority of time is being spent and how much of that time is fighting fires, doing unnecessary tactical work, or taking too long to do something that should be quick, then you’re letting someone else identify your problems, which might turn out to be relatively small problems in the grand scheme of things.

#2 Understand which pain points can be best alleviated with technology vs. those that can be best alleviated with process improvements.

Technology can’t solve all of your ills. (And it especially can’t solve all of your ills if it is based on Automated Idiocy. Remember, that’s what the “AI” they are selling you is.) It’s important that you understand what technology can and can’t do before you look for a solution. This will help you identify honest providers offering honest wares and vapourware vendors selling silicon snake oil.

Consider the above pain points.

  • Quote collection: a good RFP system will help. Quote analysis: maybe, maybe not. It may be the complexity of the ask, and not the complexity of the process, that’s the problem.
  • Supplier discovery: you will likely need a discovery platform or a large supplier network. Supplier identification: possibly just a better process for identifying which suppliers you’re already using who can solve a new problem for you. Supplier certification: that requires manual review, and sometimes tech can’t help at all.
  • Contract negotiation: while platforms can shuttle contract drafts back and forth, negotiation is between people. Implementation and obligation management: that’s the kicker, and more than what you can accomplish with just an electronic filing cabinet, which is what many “contract management” systems are.
  • Purchase orders: as much a process problem as a technology problem; most AP systems can generate them. Approvals: a process problem to define, but often a technology problem to ensure that the process is followed.
  • Invoice verification: manual approval is required, but m-way invoice matching can help with the process by identifying the corresponding purchase order, any payments made to date, any credits accrued to date, any approvals required, and so on.
  • Opportunity identification: all of the pain points you identified represent opportunities, but beyond that, you’ll likely need spend analysis.
  • Supplier proliferation: that’s a process and management issue. All the SRM does is track the suppliers.
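The m-way invoice matching mentioned above can be sketched as a simple 3-way match (invoice vs. purchase order vs. goods receipt). This is a minimal illustration only; the record shapes, field names, and the 2% tolerance are hypothetical assumptions, not any vendor’s schema:

```python
# Hypothetical sketch of 3-way ("m-way") invoice matching: the invoice is
# checked against the purchase order and the goods receipt before it is
# queued for (manual) approval.

from dataclasses import dataclass

@dataclass
class PurchaseOrder:
    po_number: str
    amount: float

@dataclass
class GoodsReceipt:
    po_number: str
    quantity_received: int
    quantity_ordered: int

@dataclass
class Invoice:
    po_number: str
    amount: float

def three_way_match(inv: Invoice, pos: dict, receipts: dict,
                    tolerance: float = 0.02) -> list[str]:
    """Return a list of exceptions; an empty list means the invoice matches."""
    po = pos.get(inv.po_number)
    if po is None:
        return [f"no purchase order found for {inv.po_number}"]
    exceptions = []
    # Price check: invoice amount must be within tolerance of the PO amount
    if abs(inv.amount - po.amount) > tolerance * po.amount:
        exceptions.append(
            f"amount {inv.amount:.2f} outside tolerance of PO {po.amount:.2f}")
    # Receipt check: goods must be fully received before payment approval
    gr = receipts.get(inv.po_number)
    if gr is None or gr.quantity_received < gr.quantity_ordered:
        exceptions.append("goods not (fully) received")
    return exceptions
```

A real system would add line-level matching, partial receipts, credits, and payment history, but the point stands: the tech surfaces the match and the exceptions, while the approval stays human.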

If you don’t understand what the pain points are that tech can actually solve, you’ll never select the right tech.

#3 Identify your top 3 pain points that can be addressed with technology and the corresponding source-to-pay module(s) you need to address those pain points.

Once you’ve identified the pain points, whittled down to the subset of pain points that can be best addressed by technology, and then identified the three (3) that will have the biggest impact if addressed, you can continue with your quest for tech.

#4 Compile an appropriate shortlist of vendors.

Once you know what you want to address with tech, and why, you can start the process of identifying an appropriate short-list of vendors. This is not just three to five vendor names given to you, or three to five vendor names that come up first in a Google search — it’s three to five vendors that are confirmed to offer a module that will address the same (sub)set of problems you are looking to address.

This is not three to five vendors that claim to offer the same technology, as many vendors will purposely use, and sometimes abuse, the same terminology in an effort to sell completely different products. For example, sourcing, procurement, and purchasing are sometimes used to mean the same thing by three vendors, and sometimes mean completely different things across three vendors. There are vendors that call their sourcing systems “procurement”, purchasing vendors that just offer catalogs, and so on. You have to research their offerings carefully to determine whether or not they truly offer a solution to what you are looking for.

#5 Determine what you need in a partner before you start evaluating vendors and the RFPs they submit.

You don’t just need a vendor that can provide technology, you need a vendor that can provide a solution and work with you, offering as little or as much as you need in the way of training, implementation, integration, and services. You need a vendor that will match your culture, get along with your team, and make sure you are successful with their product. You need to identify everything that makes a good vendor before you start the evaluation, otherwise you will grade just on the tech, and the tech is not enough. (It’s a necessary part of the solution, but not a sufficient part on its own.) All supply chain problems have a human element. Never forget that.

#6 (Bonus) Get help with the shortlist and the RFP

If you’re not familiar with the technology, the vendors, or the terminology, it can be difficult to determine which vendors might actually be able to solve your problems and which vendors will just bamboozle you into thinking they can* when you send them the RFP. Get help from someone who is an expert in the technology, the vendors, and the true capabilities the vendors offer.

* Not necessarily on purpose; a misunderstanding caused by different usages of terminology (see point #4) can cause a vendor to believe they have the perfect solution for you.

Don’t Trust an Analyst Firm to Score UX and Implementation Time!

A post late last month on LinkedIn started off as follows:

If you’ve ever read any research papers or solution maps on procurement tech, you’ve probably figured out a couple of things.

1. It’s confusing and overly complex
2. It doesn’t cover the basic, most obvious-of-the-obvious fundamentals that everyone needs to consider.

These are:

– User interface and user experience (UI/UX)
– Ease and speed of implementation

Why don’t they do this?

Honestly, I don’t know the answer.

The cynic in me says it’s because their biggest paymasters have a horrible UI/UX and require a very complex and lengthy implementation.

This really bothered me, not because UX and implementation time aren’t super important (they are, and they are among the biggest determinants of adoption, which is critical to success), but because anyone would think an analyst firm should address this.

The reality is that no proper analyst will attempt to score these because they are completely subjective! As a result:

  1. There is no objective, function-based/capability-based scale that could be scored consistently by any knowledgeable analyst on the subject and
  2. What is a great experience to one person, with a certain expectation of tech based upon prior experience and knowledge of their function, can be complete CR@P to another person.

Now, some firms do bury such subjective evaluations on UX and implementation time in their 2×2s, where they squish an average of 6 subjective ratings into a dimension, but that is why those maps are complete garbage! (See: Dear Analyst Firms: Please stop mangling maps, inventing award categories, and evaluating what you don’t understand!) So no self-respecting analyst should do it. As an example, one analyst might like solutions with absolutely minimalist design, with everything hidden and everything automated against pre-built rules (that may, or may not, be right for your organization, and may result in an automated sourcing solution placing a million-dollar order, with payment up front for a significant early payment discount, with a supplier that subsequently files for bankruptcy and doesn’t deliver your goods); a second might like full user control through a multi-screen, multi-step interface for what could be a one-screen, one-step function; and a third might like to see as much capability and information as possible squished into every screen, longing for the days of text-based green-screens where you weren’t distracted by graphics, animations, and design. Each of these analysts would score the same UX completely differently! On a 10-point scale, for a given UX design, three analysts in the same firm could give scores of 1, 5, and 10, which average to a meaningless 5.3 … and how is that useful? It’s not!

(And while analysts can define scales of maturity for the technology the UX is based on, just because a vendor is using the latest technology, that doesn’t mean their UX is any good. New technology can be just as horrendously misused as old technology.)

The same goes for implementation time. An analyst that mainly focuses on simple sourcing/procurement, where you should just be able to flick a SaaS switch and go, would think that an implementation time of more than a week is abysmal, but an analyst that primarily analyzes CLM and SMDM would call BS on anything less than six weeks and expect three months. This is because, for CLM, you have to find all the contracts, feed them in, run them through AI for automated meta-data extraction, do manual review, and set up new processes, while for SMDM you have to integrate half a dozen systems, do data integrations, cleansing, and enrichment through cross-referencing with third party sources, create golden records, do manual spot-check reviews, and push the data back. Implementation time is dependent on the solution, the architecture, what it does, what data it needs, what systems it needs to be integrated with, what support there is for data extraction and loading in those legacy systems, etc. Implementation time needs to be judged against the minimum amount of time to do it effectively, which is also customer dependent. Expecting an analyst to understand all the potential client situations is ridiculous. Expecting them to craft an “average customer situation”, base an implementation time on this, and score a set of random vendors accordingly is even more ridiculous.

The factors ARE absolutely vital, but they need to be judged by the buying organization as part of the review cycle, AFTER they’ve verified that the vendor can offer a solution that will meet

  • their current, most pressing, needs as an organization,
  • their evolving needs as they work to get other problems under control, and
  • the need for a solution that is technically sound and complete with respect to the two requirements above while also being capable of scaling up and evolving over time (as well as capable of being plugged into an appropriate platform-based ecosystem through a fully Open API).

A good analyst can guide you on ways to judge this and what you might want to consider, but that’s it … you have to be the final judge, not them.

That’s why, when the doctor co-designed Solution Map as a Consulting Analyst for Spend Matters, the Solution Map focussed on scoring the technological foundations, which could be judged on an objective scale based on the evolution of the underlying technology over the past two-plus decades and/or the evolution of functionality to address a specific problem over the past two-plus decades. It’s up to you, not the analyst, whether you like it or not, think the implementation time frames are good or not, believe the vendor is innovative or not, and are satisfied with the vendor size and maturity. Those are business viewpoints that are business dependent. Analysts should score capabilities and foundations, particularly where buyers are ill-equipped to do so. (This also means that analysts scoring technology MUST be trained technologists with a formal educational background in technology, such as computer science or engineering, and experience in software development or implementation. And yes, the doctor realizes this is not always the case, and that’s probably why most of the analyst maps are squished dimensions across half-a-dozen subjective factors: the analysts are not capable of properly evaluating what they are claiming to be subject matter experts in. As a comparison, having a journalist or historian or accountant rate modern SaaS platforms is the equivalent of having a plumber certify your electrical wiring or a landscaper judge the strength of the framing in your new house. Sure, they’re trade pros, but do you really want to trust their opinion that the wiring is NOT going to start an electrical fire and burn your house down, or that the frame is strong enough for the 3,000 pounds of appliances you intend to put on the 2nd floor? the doctor would hope not!)

The cynic might say they don’t want to embarrass their sponsors, but the realist will realize the analysts can’t effectively judge vendors on this and the smart analysts won’t even try (but will instead guide you on the factors you should consider and look for when evaluating potential solutions on the shortlist they can help you build by giving you a list of vendors that provide the right type of solution and are technically sound, vs. three random vendors from a Google search that don’t even offer the same type of solution).

Have the Analyst Firms Finally Admitted They Don’t Know What They’re Doing?

the doctor recently went on a big rant about the analyst firms and the utter lack of usefulness in the maps they release, the focus they put on what they don’t understand, and the award categories they invent because, even though they have/had some great talent (and should be doing incredible work), what they’ve publicly released has been mostly valueless to the market they’ve been trying to serve (when it wouldn’t be too hard to provide a lot of value based on all the research and work they do). In the doctor‘s view, this is very sad because if they could demonstrate the value they provide, they would be more relevant across the market (and likely get a lot more business from smaller and/or more innovative providers who think that, because of the budgets the big players like Oracle, SAP, and Coupa have, the analysts are always going to recommend those companies anyway).

However, now he’s gone from sad to mad about something he has just heard from a couple of vendors regarding one of the biggest firms because, if true, it means not only do they not have a clue about what is and is not valuable in tech, but they are also unnecessarily creating confusion and obfuscating technology that may still be best in class.

So what have they done now? Well, apparently they are now basing 30% of the score on whether or not the vendor has “AI” in their platform, something which they’ve repeatedly proven they have ZERO ability to score whatsoever! So, either a vendor makes false, grandiose claims (and tries to use Applied Indirection to fool the Analyst Idiot that they have more than Artificial Idiocy in their Application Implementation), or they get scored low even if they have the best technology built on best practices, proven algorithms, and consistent results that give their customers a 5X to 10X ROI.

True AI adds value, but, in the doctor‘s experience,

  • up to 80% of AI claims are Applied Indirection (at best) or Artificial Idiocy (at worst); in fact, some of the “AI” in spend analysis is still the “AI” they used in the early 2000s, and the doctor would rather not spell out that sad, but still true for some vendors, racial slur
  • up to 80% of the rest, or up to 16% of tech that claims AI, is level one Assistive Intelligence; and this is typically just classic RPA (Robotic Process Automation) using human-defined parameter-based rules, and the “AI” is the automatic parameter adjustment based on user overrides … not very intelligent, eh?
  • up to 80% of the rest, or up to 4% of the tech that claims AI, is level 2 Augmented Intelligence, which is the first level of AI where the tech can learn from human feedback and provide better insights and recommendations over time on one or more specific tasks, and the first level of AI that you should even consider as AI
  • up to 80% of the rest, up to 1% of the tech that claims AI, and the highest level modern technology has generally achieved, is level 3, Apperceptive Intelligence, or Cognitive Intelligence, where the systems can not only learn from specific human feedback to recommendations but from general knowledge and intelligence available to it from integrated data sources to mimic the performance of the best human experts over time, even evolving processes, behaviours, and actions within well-defined bounds
  • and then the rest, 0.1% or less, is nearing level 4, Autonomous Intelligence, where the system can learn, evolve, adapt, and maintain itself over time without human intervention … and hopefully execute meaningful, appropriate decisions grounded in best process and fact that consider all of the relevant information available (and not go off the rails and advise you to commit suicide because you feel bad, praise Hitler, or sacrifice a trolley full of people and a cross-walk full of pedestrians because there might be a cat in the road — all things AI has already done)
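Taken literally, the repeated “up to 80% of the rest” in the list above is a geometric cascade over all tech claiming AI. A quick arithmetic sketch (note the exact figures come out at 80 / 16 / 3.2 / 0.64 / 0.16 percent; the percentages in the list are the post’s rounded upper bounds):

```python
# Compute the "up to 80% of the rest" cascade over all tech claiming "AI".
# Each level claims at most 80% of whatever remains after the prior levels;
# whatever is left at the end is the (tiny) level-4 slice.

levels = [
    "Applied Indirection / Artificial Idiocy",
    "Level 1: Assistive Intelligence",
    "Level 2: Augmented Intelligence",
    "Level 3: Apperceptive (Cognitive) Intelligence",
]

remaining = 100.0  # percent of all tech claiming "AI"
shares = {}
for level in levels:
    share = 0.8 * remaining        # "up to 80% of the rest"
    shares[level] = share
    remaining -= share
shares["Level 4: Autonomous Intelligence (nearing)"] = remaining

for name, pct in shares.items():
    print(f"{pct:6.2f}%  {name}")
```

So even under these generous upper bounds, well under 1% of “AI” claims would correspond to the top two levels.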

And even where a platform has semblances of real AI, chances are that the AI (which the vendor is now forced to include or be arbitrarily relegated to the dustbin because, apparently, it’s not solutions but buzz-acronyms that matter now) is producing worse results than the best traditional algorithm or methodology on expert-curated data sets and dimensions. For example, the vast majority of the market believes AI improves forecasting. It doesn’t. The best AI is still inferior to the best techniques developed in the 70s when applied to the right data dimensions. All the “AI”, which is just fancy, souped-up versions of classical machine learning (using algorithms developed in the 80s and 90s for which we didn’t have enough computing power until recently), does is run all of the data through a model that integrates classification with prediction to filter out the most relevant dimensions and the best curve fitting technique, as all these algorithms, at the core, are based on 50+ year old statistics! This means that, at the end of the day, their best case performance is something a human genius figured out 50+ years ago.

But to achieve that best case, the developers have to implement the right AI algorithms, tune them properly, allow them to run long enough to correctly fit (but not over-fit) the training data sets, and monitor those algorithms over time … and to do that they need to be an expert in those algorithms, which they probably aren’t. So, in order to “check a box”, and sell you a product, they are ultimately integrating algorithms that will give you an inferior result (while requiring considerably more computing power that runs up your cloud utilization bill), versus sticking to tried-and-true algorithms and processes that their experts tweaked over years and that their experts can explain and verify at any time.
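As a concrete example of the kind of classical, tried-and-true technique referenced above, here is simple exponential smoothing, a forecasting method dating to the late 1950s that remains a strong baseline when applied to the right data dimensions. A minimal sketch; the demand history and smoothing factor below are hypothetical:

```python
# Simple exponential smoothing: the forecast is a weighted average of past
# demand, with weights decaying geometrically into the past. A classical
# statistical technique, transparent and verifiable by any expert.

def exponential_smoothing(demand: list[float], alpha: float = 0.3) -> float:
    """Return the one-step-ahead forecast for the given demand history."""
    if not demand:
        raise ValueError("demand history is empty")
    level = demand[0]                                  # seed with first observation
    for actual in demand[1:]:
        level = alpha * actual + (1 - alpha) * level   # blend new actual into level
    return level

# Hypothetical monthly demand for a category
history = [100, 104, 98, 110, 107, 112]
forecast = exponential_smoothing(history, alpha=0.3)
```

The key property, in the context of the argument above, is that an expert can explain exactly why the forecast is what it is, tune `alpha` deliberately, and verify the behaviour at any time, none of which holds for an opaque, under-tuned model bolted on to check the “AI” box.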

And this is an almost reasonable example of what a technology vendor might do (as the best predictive algorithms are not untested “AI” but based on classical, tried-and-true, statistical or optimization functions). Most of what the doctor has seen is MUCH worse than this. And the fact that some big analyst firms are now forcing vendors with good tech to integrate underdeveloped, unproven, and often untested AI just to get a rating, make a map, or be recommended is downright stupid.

SHAME ON ANY ANALYST FIRM THAT DOES THIS! Buzzwords are not products, and unproven tech is not value. Analysts should be recommending the best solutions, regardless of the tech they are based on. the doctor is simply appalled!

It Doesn’t Matter Where You Start, You End with BoB in Analytics!

In a recent article, we asked, in the battle of Suite vs. BoB (Best-of-Breed), which do you choose, and ended up with the answer of neither, but potentially both, because, as indicated in our post on Where’s the Procurement Management Platform, you need a true platform (that enables the creation of a true source-to-pay plus ecosystem for the various workflows and processes that need to be managed).

As a result, we indicated you could start where you wanted, provided:

  • you could conceivably manage it,
  • the vendor offers, and publicly publishes, a complete Open API, and
  • the vendor offers the necessary quick-start services.

(And for even more details on each of these requirements, stay tuned for our upcoming article on how it doesn’t matter where you start, you end with BoB in SXM).

But where do you end up? For some Procurement Practitioners, it depends on:

  • the module,
  • the organization’s biggest need for workflow/process management, and
  • the organization’s biggest savings/cost avoidance/value creation opportunities.

(And again, we’ll have even more details in our upcoming article on how you end with BoB in SXM.)

But for Analytics, like SXM, you will end at BoB, as no suite equals the best-in-class (BiC) (spend) analytics solutions (even if they are built on BiC technologies for generic analytics like Qlik or Tableau), because the true BoB spend analysis solutions (which are fewer and further between than you would expect) are leagues beyond them.

Moreover, for Analytics, you should start with BiC, even if the suite has a pre-packaged solution that’s pretty good, enough to get going, more than your fledgling analysts are likely to be able to handle in the first year, and appears to be offered cheap as an add-on to everything else they are selling you. Why?

Lots and lots of reasons. Here are five to get you started:

  • Top X Opportunities: Suites will only show you your top 10 categories, top 10 suppliers, top unmanaged tail categories, etc. There’s no guarantee that these primitive, canned analyses will be YOUR biggest opportunities. BoB will come with hundreds of built-in analytics, considerably more customization capability, and the power to find opportunities that pre-built suites and dashboards will never give you.
  • Better Classification: Suites will do a decent classification, usually through their black box AI (trained on billions and trillions), but even if they get to 95%, it won’t be great, it won’t be manageable, and it won’t be customizable to your organization’s need. BoB, when it uses AI, will use it to create rules, that can be corrected and overridden, that you can customize to your specific taxonometric needs for optimized Procurement (and no standard industry classification is worth its weight in protactinium), usually starting with an out-of-the-box taxonomy customized to your industry using the vendor’s experience and community knowledge.
  • Better Analytics: many of these tools have a lot more capability in terms of report construction, dimension derivation, metric support, integrated data science, etc. etc. etc.
  • Better UX: while UX is completely subjective, and, as per a (previous/upcoming) rant, is not something an analyst should be scoring and advising you on (as the best UX is the one that works best for you), in general the probability is very high that you will find these BoB tools more customizable in workflow and configuration, more logical in workflow, and much easier to use (if this wasn’t the case, no one would buy these tools and the vendors would have closed their [virtual] doors a long time ago)
  • Beyond Analytics: most BoB solutions will have integrated opportunity selection and project/savings tracking, performance/throughput/project metric support, and/or risk-based analytics. The value of analytics is continually overlooked because the “Savings” is identified in the sourcing event, captured in the contract, and realized in Procurement, and no one wants to acknowledge the opportunity would not even have been identified without analytics.

And, finally, why not get used to using a best-in-class tool from the get-go so you don’t have to relearn a new tool when you max out the capabilities of the suite solution and are ready for the next level? Especially when, as you get better and better at analytics and dive deeper and deeper into categories, you can improve the taxonometric mappings, track all the opportunities you identify (and your progress), do what-if analysis when the mood strikes, and get productive in a tool that will do [much, much] more for you in the long run?

So, while you might select a suite SIM module as a foundation for your supplier data store when you need to start centralizing supplier data somewhere for your sourcing projects and procurement buys (which is where your organization has determined it needs to start its S2P journey), when you’re ready for analytics, just go straight to BoB. (And if the C-Suite wants to see reports in the fancy suite, buy the basic reporting package and let them use the basic dashboards. And if the suite supports custom dashboards, then pump the appropriate analytics back in as reporting data. Get good with best-in-class analytics from the get-go with the best solution you can.)

Suite vs BoB. Which Do You Choose?

Neither!

But you need something. And there is no other option (yet). So are you doomed?

That depends. But first, let’s review the Primary Pros and Cons of each.

SUITE

PRO

  • one vendor relationship to manage
  • modular integration out of the box
  • consistent UX (or it’s not really a suite)
  • pre-implemented with major ERPs

CON

  • likely that only one or two modules are Best-of-Breed (BoB)
  • implementation and integration services are likely third party
  • high cost out of the gate
  • traditionally a closed ecosystem

BoB

PRO

  • the primary module offered by the vendor is truly Best in Class and considerably beyond the average suite capability
  • implementation is typically vendor supported, along with some integration services
  • low (subscription) cost out of the gate, pay as you need
  • today’s BoB comes with full, complete Open APIs to build your own ecosystem

CON

  • multiple vendor relationships to manage
  • limited integrations out of the box to other modules you will need
  • inconsistent UX across the modules
  • limited to no ERP/MRP support in many modules

In other words, many of the weaknesses of the suites are the strengths of BoB and vice versa. But you want the strengths of both and the weaknesses of neither, even though that doesn’t exist today, as no vendor does everything well, nor can they, because they would need to be experts in everything. (So unless a vendor hired all the experts, and became a monopoly [and we generally agree monopolies are bad], no vendor could even come close.) Even if a vendor did hire all the experts of today, and build everything out to the best of those experts’ capabilities, they’d soon become an unaffordable mega-suite (and then still not be best-of-breed in anything, because once they built what the experts they hired envisioned, there would be a new generation of experts they still wouldn’t have employed with new, innovative, possibly revolutionary, ideas).

So what do you do? Well, the answer is, as we pointed out in our post that asked where’s the Procurement Management Platform, acquire a platform that is designed to support data-centric end-point integrations for specific processes and organizational needs as this will allow you to select the right module for each task, configure the right procurement workflows, integrate new, even previously unthought of, modules with the Open API, and even support intake and supply chain platform integration.

But, as we noted, there’s no platform. So, unfortunately, you have to assemble your own. But at least today you can. A decade ago there were no options to do this, so either you bought a suite, and lived with it, or you bought best of breed and did extensive work to glue them together in a grit, spit, & a whole lot of duct tape situation (that would take about 69 months).

But today, all of the new best of breed applications are being built from the ground up with complete Open APIs and the newer suites are also offering you APIs to easily get data in and out as well. (Not so much on the workflow configuration / function execution front, but that’s not necessary.) [But please note that an App Store or Marketplace is not an Open API, it’s a closed ecosystem limiting you in what third party add-ons you can select.]
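To make the “data in and out” point concrete, here is a minimal sketch of what consuming such an Open API might look like. The base URL, endpoints, payload shapes, and auth scheme are entirely hypothetical illustrations, not any specific vendor’s API:

```python
# Sketch of a client for a hypothetical S2P vendor's Open API: pull supplier
# records out (for cleansing/enrichment) and push golden records back in.

import json
import urllib.request

class S2PClient:
    """Minimal client for an illustrative, made-up vendor REST API."""

    def __init__(self, base_url: str, api_key: str):
        self.base_url = base_url.rstrip("/")
        self.api_key = api_key

    def _build_request(self, method: str, path: str, body=None):
        # Construct the HTTP request with JSON body and bearer-token auth
        data = json.dumps(body).encode("utf-8") if body is not None else None
        return urllib.request.Request(
            f"{self.base_url}{path}",
            data=data,
            method=method,
            headers={
                "Authorization": f"Bearer {self.api_key}",
                "Content-Type": "application/json",
            },
        )

    def _request(self, method: str, path: str, body=None):
        with urllib.request.urlopen(self._build_request(method, path, body)) as resp:
            return json.load(resp)

    # Data out: page through supplier records
    def get_suppliers(self, page: int = 1):
        return self._request("GET", f"/v1/suppliers?page={page}")

    # Data in: push a cleansed/enriched record back
    def upsert_supplier(self, supplier: dict):
        return self._request("POST", "/v1/suppliers", body=supplier)
```

The point is that with a published, complete Open API of this general shape, the glue between your chosen modules is a small, maintainable client per vendor rather than the grit, spit, and duct tape of a decade ago.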

So you can theoretically start with the right instance of a BoB or a Suite as your base and build out the right platform over time, depending on your needs today, and how fast you can digest a new module. If you already have one or more first generation modules, you have the understanding of what these modules do and how to use them and can likely digest new versions of those modules pretty quickly, so you could start with that many modules plus one. If you have no modern S2P modules, then, as we indicated many, many times in our very long Source-to-Pay series, you need to pick a module that represents your most immediate need, start with it, and start to grow your platform from it.