Category Archives: Technology

Don’t Trust an Analyst Firm to Score UX and Implementation Time!

A post late last month on LinkedIn started off as follows:

If you’ve ever read any research papers or solution maps on procurement tech, you’ve probably figured out a couple of things.

1. It’s confusing and overly complex
2. It doesn’t cover the basic, most obvious-of-the-obvious fundamentals that everyone needs to consider.

These are:

– User interface and user experience (UI/UX)
– Ease and speed of implementation

Why don’t they do this?

Honestly, I don’t know the answer.

The cynic in me says it’s because their biggest paymasters have a horrible UI/UX and require a very complex and lengthy implementation.

This really bothered me. Not because UX and implementation time aren’t super important (they are, and they are among the biggest determinants of adoption, which is critical to success), but because anyone would think an analyst firm should address them.

The reality is that no proper analyst will attempt to score these because they are completely subjective! As a result:

  1. There is no objective, function-based/capability-based scale that could be scored consistently by any knowledgeable analyst on the subject and
  2. What is a great experience to one person, with a certain expectation of tech based upon prior experience and knowledge of their function, can be complete CR@P to another person.

Now, some firms do bury such subjective evaluations of UX and implementation time in their 2*2s, where they squish an average of 6 subjective ratings into a dimension, but that is why those maps are complete garbage! (See: Dear Analyst Firms: Please stop mangling maps, inventing award categories, and evaluating what you don’t understand!) So no self-respecting analyst should do it.

As an example, one analyst might like solutions with absolute minimalist design, with everything hidden and everything automated against pre-built rules (that may, or may not, be right for your organization, and may result in an automated sourcing solution placing a million-dollar order, with payment up front for a significant early payment discount, to a supplier that subsequently files for bankruptcy and doesn’t deliver your goods). A second might like full user control through a multi-screen, multi-step interface for what could be a one-screen, one-step function. And a third might like to see as much capability and information as possible squished into every screen, and long for the days of text-based green-screens where you weren’t distracted by graphics, animations, and design. Each of these analysts would score the same UX completely differently! On a 10-point scale, for a given UX design, three analysts in the same firm could give scores of 1, 5, and 10, averaged to 5 … and how is that useful? It’s not!
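The arithmetic makes the point on its own. In this hypothetical sketch (the scores are invented for illustration), one vendor whose UX the analysts fundamentally disagree on and one whose UX they broadly agree is average land on exactly the same average score, and only the spread reveals that one of those averages is meaningless:

```python
from statistics import mean, stdev

# Hypothetical 10-point UX scores from three analysts in the same firm.
vendor_a = [1, 5, 10]  # analysts fundamentally disagree on the same UX
vendor_b = [5, 5, 6]   # analysts broadly agree the UX is merely average

# Both vendors land on the same average score ...
print(round(mean(vendor_a), 1), round(mean(vendor_b), 1))  # 5.3 5.3
# ... but the spread shows one "score" reflects consensus and one doesn't.
print(round(stdev(vendor_a), 1), round(stdev(vendor_b), 1))  # 4.5 0.6
```

A map that publishes only the averages throws away exactly the information (the disagreement) that tells you the score is subjective.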

(And while analysts can define scales of maturity for the technology the UX is based on, just because a vendor is using the latest technology, that doesn’t mean their UX is any good. New technology can be just as horrendously misused as old technology.)

The same goes for implementation time. An analyst that mainly focuses on simple sourcing/procurement, where you should just be able to flick a SaaS switch and go, would think that an implementation time of more than a week is abysmal, but an analyst that primarily analyzes CLM and SMDM would call BS on anything less than six weeks and expect three months for an implementation. This is because, for CLM, you have to find all the contracts, feed them in, run them through AI for automated meta-data extraction, do manual review, and set up new processes, while for SMDM you have to integrate half a dozen systems, do data integrations, cleansing, and enrichment through cross-referencing with third party sources, create golden records, do manual spot-check reviews, and push the data back. Implementation time is dependent on the solution, the architecture, what it does, what data it needs, what systems it needs to be integrated with, what support there is for data extraction and loading in those legacy systems, etc. Implementation time needs to be judged against the minimum amount of time to do it effectively, which is also customer dependent. Expecting an analyst to understand all the potential client situations is ridiculous. Expecting them to craft an “average customer situation”, base an implementation time on it, and score a set of random vendors accordingly is even more ridiculous.

These factors ARE absolutely vital, but they need to be judged by the buying organization as part of the review cycle, AFTER they’ve verified that the vendor can offer a solution that will

  • meet their current, most pressing, needs as an organization,
  • meet their evolving needs as they work to get other problems under control, and
  • do so with a solution that is technically sound and complete with respect to the two requirements above, while also being capable of scaling up and evolving over time (as well as capable of being plugged into an appropriate platform-based ecosystem through a fully Open API).

A good analyst can guide you on ways to judge this and what you might want to consider, but that’s it … you have to be the final judge, not them.

That’s why, when the doctor co-designed Solution Map as a Consulting Analyst for Spend Matters, Solution Map focussed on scoring the technological foundations, which could be judged on an objective scale based on the evolution of the underlying technology over the past two-plus decades and/or the evolution of functionality to address a specific problem over the past two-plus decades. It’s up to you, not the analyst, whether you like the UX or not, think the implementation time frames are good or not, believe the vendor is innovative or not, and are satisfied with the vendor size and maturity. Those are business viewpoints that are business dependent. Analysts should score capabilities and foundations, particularly where buyers are ill-equipped to do so. (This also means that analysts scoring technology MUST be trained technologists with a formal educational background in technology, such as computer science or engineering, and experience in software development or implementation. And yes, the doctor realizes this is not always the case, and that’s probably why most of the analyst maps are squished dimensions across half-a-dozen subjective factors, as the analysts are not capable of properly evaluating what they are claiming to be subject matter experts in. As a comparison, having a journalist or historian or accountant rate modern SaaS platforms is the equivalent of having a plumber certify your electrical wiring or a landscaper judge the strength of the framing in your new house. Sure, they’re trade pros, but do you really want to trust their opinion that the wiring is NOT going to start an electrical fire and burn your house down, or that the frame is strong enough for the 3,000 pounds of appliances you intend to put on the 2nd floor? the doctor would hope not!)

The cynic might say they don’t want to embarrass their sponsors, but the realist will realize the analysts can’t effectively judge vendors on this and the smart analysts won’t even try (but will instead guide you on the factors you should consider and look for when evaluating potential solutions on the shortlist they can help you build by giving you a list of vendors that provide the right type of solution and are technically sound, vs. three random vendors from a Google search that don’t even offer the same type of solution).

Have the Analyst Firms Finally Admitted They Don’t Know What They’re Doing?

the doctor recently went on a big rant about the analyst firms and the utter lack of usefulness in the maps they release, the focus they put on what they don’t understand, and the award categories they invent because, even though they have/had some great talent (and should be doing incredible work), what they’ve publicly released has been mostly valueless to the market they’ve been trying to serve (when it wouldn’t be too hard to provide a lot of value based on all the research and work they do). In the doctor‘s view, this is very sad because if they could demonstrate the value they provide, they would be more relevant across the market (and likely get a lot more business from smaller and/or more innovative providers who think that, because of the budgets the big players like Oracle, SAP, and Coupa have, the analysts are always going to recommend those companies anyway).

However, now he’s gone from sad to mad about something he has just heard from a couple of vendors regarding one of the biggest firms because, if true, it means not only do they not have a clue about what is and is not valuable in tech, but they are unnecessarily creating confusion around, and obfuscating, technology that may still be best in class.

So what have they done now? Well, apparently they are now basing 30% of the score on whether or not the vendor has “AI” in their platform, something which they’ve repeatedly proven they have ZERO ability to score whatsoever! So, either a vendor makes false, grandiose claims (and tries to use Applied Indirection to fool the Analyst Idiot that they have more than Artificial Idiocy in their Application Implementation), or they get scored low even if they have the best technology built on best practices, proven algorithms, and consistent results that give their customers a 5X to 10X ROI.

True AI adds value, but, in the doctor‘s experience,

  • up to 80% of AI claims are Applied Indirection (at best) or Artificial Idiocy (at worst); in fact, some of the “AI” in spend analysis is still the “AI” they used in the early 2000s, and the doctor would rather not spell out that sad, but still true for some vendors, racial slur
  • up to 80% of the rest, or up to 16% of tech that claims AI, is level 1 Assistive Intelligence; and this is typically just classic RPA (Robotic Process Automation) using human-defined parameter-based rules, and the “AI” is the automatic parameter adjustment based on user overrides … not very intelligent, eh?
  • up to 80% of the rest, or up to 4% of the tech that claims AI, is level 2 Augmented Intelligence, which is the first level of AI where the tech can learn from human feedback and provide better insights and recommendations over time on one or more specific tasks, and the first level of AI that you should even consider as AI
  • up to 80% of the rest, up to 1% of the tech that claims AI, and the highest level modern technology has generally achieved, is level 3, Apperceptive Intelligence, or Cognitive Intelligence, where the systems can not only learn from specific human feedback to recommendations but from general knowledge and intelligence available to it from integrated data sources to mimic the performance of the best human experts over time, even evolving processes, behaviours, and actions within well-defined bounds
  • and then the rest, 0.1% or less, is nearing level 4, Autonomous Intelligence, where the system can learn, evolve, adapt, and maintain itself over time without human intervention … and hopefully execute meaningful, appropriate decisions grounded in best process and fact that consider all of the relevant information available (and not go off the rails and advise you to commit suicide because you feel bad, Heil Hitler, or sacrifice a trolley full of people and a cross-walk full of pedestrians because there might be a cat in the road, all things AI has already done)
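Taking the “up to 80% of the rest” cascade above literally, each tier claims up to 80% of whatever share of “AI” claims remains unclassified. (Note the post rounds the later tiers upward to “up to 4%” and “up to 1%”; the literal upper bounds come out slightly lower.)

```python
# The "up to 80% of the rest" cascade: each tier takes up to 80%
# of whatever fraction of "AI" claims the previous tiers left over.
shares = []
remaining = 1.0
for _ in range(4):          # tiers: AI-in-name-only, level 1, level 2, level 3
    tier = 0.8 * remaining  # "up to 80% of the rest"
    shares.append(tier)
    remaining -= tier

print([f"{s:.2%}" for s in shares])  # ['80.00%', '16.00%', '3.20%', '0.64%']
print(f"{remaining:.2%}")            # '0.16%' left nearing level 4 Autonomous
```

In other words, under these upper bounds, less than 1% of everything sold as “AI” even reaches the apperceptive/cognitive level, which is consistent with the post’s overall point.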

And even where a platform has semblances of real AI, chances are that the AI (which the vendor is now forced to include or be arbitrarily relegated to the dustbin because, apparently, it’s not solutions but buzz-acronyms that matter now) is producing worse results than the best traditional algorithm or methodology on expert-curated data sets and dimensions. For example, the vast majority of the market believes AI improves forecasting. It doesn’t. The best AI is still inferior to the best techniques developed in the 70s when applied to the right data dimensions. All the “AI”, which is just fancy, souped-up versions of classical machine learning (using algorithms developed in the 80s and 90s for which we didn’t have enough computing power until recently), does is run all of the data through a model that integrates classification with prediction to filter out the most relevant dimensions and select the best curve-fitting technique, as all these algorithms, at their core, are based on 50+ year old statistics! This means that, at the end of the day, their best-case performance is something a human genius figured out 50+ years ago.
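As a concrete instance of the kind of decades-old classical technique being referred to, here is simple exponential smoothing, a statistical forecasting method dating back to the 1950s–60s. This is a minimal illustrative sketch (the demand series is invented), not any particular vendor’s implementation:

```python
def ses_forecasts(series, alpha=0.3):
    """One-step-ahead forecasts via simple exponential smoothing.

    Each new forecast blends the latest observation with the previous
    forecast; alpha controls how quickly older observations are forgotten.
    """
    forecast = series[0]      # seed with the first observation
    out = [forecast]
    for observation in series[1:]:
        forecast = alpha * observation + (1 - alpha) * forecast
        out.append(forecast)
    return out                # out[i] is the forecast made after seeing series[i]

demand = [100, 102, 101, 130, 128, 131]  # hypothetical monthly demand
print([round(f, 1) for f in ses_forecasts(demand)])
```

A few lines of 50+ year old statistics, explainable and verifiable at any time, which is exactly the bar any bolted-on “AI” forecaster has to beat.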

But to achieve that best case, the developers have to implement the right AI algorithms, tune them properly, allow them to run long enough to correctly fit (but not over-fit) the training data sets, and monitor those algorithms over time … and to do that they need to be an expert in those algorithms, which they probably aren’t. So, in order to “check a box”, and sell you a product, they are ultimately integrating algorithms that will give you an inferior result (while requiring considerably more computing power that runs up your cloud utilization bill), versus sticking to tried-and-true algorithms and processes that their experts tweaked over years and that their experts can explain and verify at any time.

And this is an almost reasonable example of what a technology vendor might do (as the best predictive algorithms are not untested “AI” but based on classical, tried-and-true, statistical or optimization functions). Most of what the doctor has seen is MUCH worse than this. And the fact that some big analyst firms are now forcing vendors with good tech to integrate underdeveloped, unproven, and often untested AI just to get a rating, make a map, or be recommended is downright stupid.

SHAME ON ANY ANALYST FIRM THAT DOES THIS! Buzzwords are not products, and unproven tech is not value. Analysts should be recommending the best solutions, regardless of the tech they are based on. the doctor is simply appalled!

Seven Easy Mistakes Source-to-Pay Tech Vendors Can Avoid

A few weeks ago we wrote about Five Easy Mistakes Source-to-Pay Tech Buyers Can Avoid in their effort to procure a fit-for-purpose technology solution to help them with their current challenges because the wrong solution can often be worse than having no solution at all.

However, and this is one thing the doctor knows very well, it’s not just buyers that make mistakes. Vendors do too. Lots of them. Lots more than they’ll care to admit, and these mistakes cost them time, money, sleep, and, sometimes, satisfied customers — which is ultimately the most important thing as satisfied customers will renew software subscriptions indefinitely (while unsatisfied customers will try to end the subscription as soon as possible).

Especially newer vendors, and especially those that haven’t built and/or run a company in our space before. And while some mistakes will be unavoidable (innovation doesn’t happen the first time, some things can only be learned the hard way, etc.), most aren’t. (In fact, the vast majority aren’t.) Usually all that is needed is research and insight, which can often be obtained by overworked founders without enough time by engaging the right advisor*.

So, to help these vendors understand overlooked areas where they are likely making mistakes, and where they should at least get an independent review, so that they can bring you better solutions, we’re bringing to them (and you, so you ask the right questions when considering their solution) the most common mistakes the doctor has seen over and over (and over) in his long career as an (independent) industry analyst, blogger, technical solution reviewer, consultant, researcher, CTO, etc.

Lack of market understanding and the real needs of their potential market

The first time the doctor talks to a new company, whether for an introduction, review, or due diligence, one of the first things he hears (or will ask about if he doesn’t) is why the founders started the company. And the answer he gets the most by far (so much so that he’s lost count of how many times he’s heard it, and struggles to point to significant companies where he didn’t) is: because XYZ didn’t do this function we needed to be efficient, so we figured there was an opportunity. This would be a perfectly logical response if:

  • XYZ was a company/product that was designed to serve the function the founders were trying to address
  • there weren’t already two dozen products out there that addressed the function already and, at the baseline, did the same thing; literally, the same thing

This becomes especially prevalent during every M&A frenzy where a PE firm decides they need a company that does X, like (accounts) payable(s) during the last frenzy (exacerbated by COVID, when PE firms realized/decided that business needed to be conducted entirely online, and that they all needed a virtual collaboration and online payment solution in their network). And the doctor doesn’t want to tell you how many times he heard that payments company X was started because bill.com or quickbooks didn’t do basic accounts payable functionality, or how few (read: almost none) did any real research which, in just a few hours, would have uncovered two dozen plus companies with payables capability the founders were sure didn’t exist, when the real opportunity was only in differentiation, specific country/regulatory support, or price-point (as there weren’t a lot of solutions at an affordable price point for smaller mid-size businesses until a few years ago). And even worse, many of these founders thought analysts and potential buyers should be super impressed that they essentially re-invented the software process wheel for a particular function for the twenty-fourth time.

So, dear vendor, before you go to market, do your research (or contract someone who can do it properly for you)! And if you don’t understand your real value, contract an analyst that can identify it for you. The market is fickle, unforgiving, and easily swayed by a better presentation (even when from a competitor with lesser technology). Given that the knowledge and resources are out there, there’s no reason NOT to get it right.

Lack of competitive landscape knowledge and the real needs going unserved overall or at an affordable price-point for their target market

Building on our last point, it’s not just knowing what’s out there and what it does, but where your competition is strong and weak, what markets they are going after, what markets you should be going after based on your relative strengths and weaknesses, and what price point that market can easily afford and buy within a reasonable length sales cycle.

the doctor realizes this can be very time consuming, but this is where an implementation consultant or the right analyst can be extremely valuable, as they can quickly provide you with this information based on publicly available knowledge of currently released products, demos and product reviews they have done, (feedback from) implementations they have been associated with, (feedback from) integrations that they (or consultants they work with) needed to do, and (feedback from) buyers. A good analyst can do this without sharing any roadmap or non-public details on not-yet-released capabilities, and should do just that, as roadmaps and unreleased capabilities might never be released, and are not something you should be basing your direction on.

Not knowing your true capitalization needs pre-profitability

While we should applaud companies that can bootstrap, and provide a standing ovation to companies that can raise angel/VC capital early to accelerate development, we should ONLY do so if they make an effort to understand their true cost of development, how long it’s really going to take to make that first sale, how long after that until they will make enough sales to support the minimum headcount they will need to sell to and support those customers, and how much cash it’s going to take to get them there, and then raise that amount, or at least pre-negotiate follow-on raises/loans to get there after the first investment.

Too many good companies fail because

  • they don’t take the time to estimate the true cost
  • they do, but don’t stick to their guns and when the investors say “final offer” at 70% of the estimated amount, they say “we’ll make it work”

And they try. They make a valiant effort. And as money dwindles, they put in 80 hour work weeks, developing more, faster. They amp-up cold-calling, content generation, reach outs, etc. They make their most heroic efforts. But all for naught. You can rush development, but you can’t rush a sales cycle. People need to realize they need a solution, do their research, qualify you, get a budget, go out to RFP, follow an archaic corporate process, and, then, hopefully, they can buy your product. If you’re lucky, you’re entering the process after they get the budget, but then you are fighting against a favoured “incumbent” that they plan to buy from (once they eliminate you), but usually it’s before, which means, on average, you are waiting six months for them to get a budget in the next cycle. If you’re a few months away from closing the doors, that doesn’t help you.

So if you can’t get what you need, don’t start. We all know entrepreneurship sounds great. We all know it’s a great experience to have on your resume. But it’s stupid to start something you know has no chance of succeeding. After all, there’s always another opportunity out there where you stand a chance of success. (And even if you have enough for the typical case scenario, pandemics can happen, disasters can happen, markets can shift, or a better solution can be released halfway through your development by someone else who had the idea before you and is currently developing it in stealth mode with double your funding and a marketing budget out of the gate.) So, dear vendor, wait until you have a true chance. Otherwise, you’re wasting your contribution and letting down your early adopters when you close your doors (and that hurts us all when they lose faith in smaller companies and go back to ERP).

Overvaluing the tech (and AI)

A good tool is worth good money, and a great tool is worth great money. And if the great tool significantly increases efficiency, identifies meaningful cost avoidance, and delivers a 5X ROI, such a tool can be worth hundreds of thousands (or even millions of dollars if it is used by hundreds, or thousands, of users globally). But good and great is relative to what it does, how many people in the business it’s used by, the value it is delivering, and, ultimately, the budget the business class you are targeting can afford to pay based on the first three factors.

Tech for the sake of tech, while cool, has no value beyond being cool. Even if you have a lot of actual “AI” baked in (and let’s face it, if you do, the “AI” is only solving a very focussed, niche, problem), it’s still valueless unless it delivers value. It doesn’t matter how long you took to build it, how much it cost you (which can be a very poor measure because if you didn’t have a good team, overpaid that team, didn’t have the product or goal well designed when you started, p!ss3d hundreds of thousands away on Class A office space and big parties, it might have cost you tens of millions to build something a smarter, more focussed, cost conscious team could have built for two million), or how unique it is — in business, it needs value.

And before you try to sell it, you need to understand that value from a customer’s viewpoint, otherwise you’re going to have quite a challenge and customers who would otherwise jump on something fairly priced will not buy it even when it could be the best solution for them. (It’s not what the competition is selling it for, it’s what it should be sold for … one of the reasons too many Procurement departments don’t have modern tech is that they can’t get the budget for software priced using traditional enterprise software pricing that only the F500s/G3000s can afford.)

Undervaluing the tech (and AI)

Again, a good tool is worth good money and a great tool is worth great money when it delivers the right efficiency gains, cost avoidance, and value to a business that is losing a lot of money due to inefficiency and lack of insight.

Thus, you also have to be careful NOT to undervalue the tool or slash the price in an effort to get customers in the door quickly or sell to smaller organizations than you should be selling to, especially if the tech was expensive to build and no other organization could build it for less than 80% (or more) of what your organization invested into it and the cost of maintenance/continued development (due to advanced tech or unique capabilities or lots of integrations) is high. The reality is that once you set a price, that doesn’t become the floor, it becomes the ceiling, and if the price is not sustainable, you will go out of business and that will hurt not only you, but any early adopter that buys into you (and, again, that will hurt us all when they lose faith in smaller companies and go back to ERP).

Overestimating the DiY nature of the tech

Easy for you is not easy for a buyer. Remember, you’re the expert in the tech (you built it), an expert in the inefficiencies of the tech that came before (that’s why you built it), and an expert in the workflows that work well (that’s how you built it). You have the deep knowledge of the tech, the deep knowledge of the best practices, and the deep knowledge to know when a problem is best addressed by the tool and when it’s not, how out-of-the-box the support is, and how much has to be customized.

On the other hand, your potential client might be spending most of their time in Gmail and Excel, have never used the previous tool, and have no knowledge of current best practices. The customer may need training on the best practices, the workflows, and the tech, as well as a large reference library to remind themselves on how to use the tool if certain aspects of the process are not done very often (like once a month at most).

If services are needed, customers are not going to respond well to a software-only pitch, or believe a low-cost quote when they know they will need the services. Understand the balance, present it appropriately, and sell it appropriately.

Misunderstanding the average customer capability & TQ

Building on the above, in addition to not overestimating the DiY nature of the tech, you should not misunderstand the average customer capability and the Technical Quotient of your target market. As we noted above, not all Procurement departments are advanced in the tech they have access to, and not all Procurement Professionals are adept with / used to modern tech. One has to remember that, for the longest time, Procurement was literally the island of misfit toys, and their understanding of technology and technology-enabled best practices was literally non-existent (as they typically didn’t use technology beyond the fax machine).

Even today, they may not be familiar with much more than basic consumer software for searching, e-reading, e-commerce, email, and gaming. Customized, deep, enterprise software may not be in their experience or repertoire.

Alternatively, if you are selling to risk management or data analytics departments at big companies, they may have hired data scientists with deep training in mathematics and computer science who are used to not only using difficult mathematical platforms (like Matlab and Octave) and analytics platforms (like Qlik and Tableau), but building their own using open source analytics and data science platforms (like SciKit and Dataiku).

Know your audience and what they are capable of.

Failing to put the relationship first

In consumer software, it’s a transaction. But in enterprise software, it’s a relationship that you need to build, support, and adapt with. If the customer wants a transaction, they’ll use mass-market, user-subscription-based software or shareware. They’re coming to you because they need services and support from a software provider that is expert in the technology and the process, can help them achieve their goals, and will keep the SaaS platform relevant.

* (HINT! HINT! STARTUPS/BEST-OF-BREEDS, STOP ASSUMING YOU KNOW IT ALL AND CAN DO IT ALL IN HOUSE! YOU CAN’T! YOU DON’T HAVE THE BUDGET FOR A FULL TIME EXPERT IN EVERY AREA. BUT YOU CAN OFTEN GET AWAY WITH PART TIME/SHORT TERM CONSULTING / ADVISORY. SO JUST DO IT!)

Suite vs BoB. Which Do You Choose?

Neither!

But you need something. And there is no other option (yet). So are you doomed?

That depends. But first, let’s review the Primary Pros and Cons of each.

SUITE

PROs:

  • one vendor relationship to manage
  • modular integration out of the box
  • consistent UX (or it’s not really a suite)
  • pre-implemented with major ERPs

CONs:

  • likely that only one or two modules are Best-of-Breed (BoB)
  • implementation and integration services are likely third party
  • high cost out of the gate
  • traditionally a closed ecosystem

BoB

PROs:

  • the primary module offered by the vendor is truly Best in Class and considerably beyond the average suite capability
  • implementation is typically vendor supported, along with some integration services
  • low (subscription) cost out of the gate, pay as you need
  • today’s BoB comes with full, complete Open APIs to build your own ecosystem

CONs:

  • multiple vendor relationships to manage
  • limited integrations out of the box to other modules you will need
  • inconsistent UX across the modules
  • limited to no ERP/MRP support in many modules

In other words, many of the weaknesses of the suites are the strengths of BoB and vice versa. But you want the strengths of both and the weaknesses of neither, even though that doesn’t exist today as no vendor does everything well, nor can they because they would need to be experts in everything. (So unless a vendor hired all the experts, and became a monopoly [and we generally agree monopolies are bad], no vendor could even come close.) Even if a vendor did hire all the experts of today, and build everything out to the best of those experts’ capabilities, they’d soon become an unaffordable mega-suite (and then still not be best in breed in anything because once they built what the experts they hired envisioned, there would be a new generation of experts they still wouldn’t have employed with new, innovative, possibly revolutionary, ideas).

So what do you do? Well, the answer is, as we pointed out in our post that asked where’s the Procurement Management Platform, acquire a platform that is designed to support data-centric end-point integrations for specific processes and organizational needs as this will allow you to select the right module for each task, configure the right procurement workflows, integrate new, even previously unthought of, modules with the Open API, and even support intake and supply chain platform integration.
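One way to picture such a data-centric platform is as a thin routing layer over a common module contract. This is a hypothetical sketch (the module names and the interface are invented for illustration, not any vendor’s API): every module, whether from a suite or a BoB vendor, sits behind one small contract, so the platform can route each task to the right module and swap a module out without rewiring the workflow.

```python
from typing import Protocol

class Module(Protocol):
    """Hypothetical contract the platform requires of any plugged-in module."""
    def handle(self, task: str, payload: dict) -> dict: ...

class SourcingModule:
    def handle(self, task: str, payload: dict) -> dict:
        # In reality this would call the module's Open API over HTTPS.
        return {"module": "sourcing", "task": task, **payload}

class AnalyticsModule:
    def handle(self, task: str, payload: dict) -> dict:
        return {"module": "analytics", "task": task, **payload}

# The platform is just a routing table from task type to module;
# swapping in a different best-of-breed module is a one-line change.
platform: dict[str, Module] = {
    "run_event": SourcingModule(),
    "classify_spend": AnalyticsModule(),
}

result = platform["classify_spend"].handle("classify_spend", {"rows": 12})
print(result)
```

The point of the sketch is the shape, not the code: as long as data goes in and out through open end-points, the “right module for each task” becomes a configuration decision rather than an integration project.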

But, as we noted, there’s no platform. So, unfortunately, you have to assemble your own. But at least today you can. A decade ago there were no options to do this, so either you bought a suite, and lived with it, or you bought best of breed and did extensive work to glue them together in a grit, spit, & a whole lot of duct tape situation (that would take about 6 to 9 months).

But today, all of the new best of breed applications are being built from the ground up with complete Open APIs and the newer suites are also offering you APIs to easily get data in and out as well. (Not so much on the workflow configuration / function execution front, but that’s not necessary.) [But please note that an App Store or Marketplace is not an Open API, it’s a closed ecosystem limiting you in what third party add-ons you can select.]

So you can theoretically start with the right instance of a BoB or a Suite as your base and build out the right platform over time, depending on your needs today, and how fast you can digest a new module. If you already have one or more first generation modules, you have the understanding of what these modules do and how to use them and can likely digest new versions of those modules pretty quickly, so you could start with that many modules plus one. If you have no modern S2P modules, then, as we indicated many, many times in our very long Source-to-Pay series, you need to pick a module that represents your most immediate need, start with it, and start to grow your platform from it.

Per Year, How Much Should You Outlay for a Multi-National Enterprise Source-to-Pay? Good Question! Poor Answer. 500K+

Whereas we were willing and able to put a real, actual number, or at least a very tight range, on the mid-market, the situation gets trickier for a multi-national enterprise with 1 Billion+ in revenue.

Why? Aren’t they using the same advanced tech as a large mid-market, except using advanced capabilities across the board? And, because of this, shouldn’t it max out at 500K? Well, yes, but there are additional considerations you don’t have in the mid-market.

[01] If you are using a lot of decision optimization, semantic analysis, network modelling, etc., then you are using a lot of computing power, and that drives the vendor's hosting costs well beyond the mid-market's. Now, at some point the cost maxes out on a dedicated-machine basis, but it's still higher.

[02] As a true multi-national enterprise, you are going to need a vendor that has extensive multi-lingual and multi-currency support in the product AND at the help desk when suppliers and third parties have difficulties using the solution to bid, provide information, submit invoices, etc. And while it's only a one-time cost for a suite built for true internationalization to add another language pack and currency, a vendor that offers help-desk support in those languages usually has to add headcount to do so, and that adds cost.

[03] You’re not only going to have a larger Procurement team using it, but you’re going to have a decentralized global team with a lot more differentiation in capability, with a lot less people capable of being full DIY. They’re going to need more support on a regular basis, and you’re going to need to contract for this up front.

[04] You’re going to need a lot more data. You’re going to be subject to a lot more regulations and you’re going to need to collect, and verify, a lot of data on your partners and suppliers. A LOT of data. You’re going to need a number of data subscriptions on business identifiers, (beneficial) owners, and credit scores for verification that they aren’t on any embargo lists, involved in any legal suits, and acceptable to your insurance provider. Then you need data on their human/workers’ rights practices, compliance, and third party assessments for those countries with laws enforcing compliance and putting you responsible for your supply chain actions. Then you need Carbon/GHG data for countries with reporting requirements or limits. Then you need other ESG/Risk data for your own internal risk assessments. And so on. These subscriptions add up.

So even though the suite itself should still be within that 250K to 500K per year range, when you add up the additional support needs, additional data needs, and dedicated computing power needs, you're going to double or triple that cost. That said, before you sign on the dotted line, especially if the quote gets close to 7 figures (one million), you need to do your expected ROI calculation. If you're not going to see at least a 5X ROI per year on a conservative estimate, with an expected ROI of 7X to 10X per year by years 2 and 3, you need to step back and decide if you need all the functionality, all of the support, and all of the data subscriptions you're asking for / being quoted, and, if so, whether you've included the right vendors in your RFX for (a) technology solution(s). The reality is that you should NOT be paying a million plus annually for an extended S2P suite unless you're getting the ROI.
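That ROI gate is simple enough to encode as a sanity check. The 5X conservative-estimate threshold comes from the paragraph above; the quote and savings figures in the usage lines are hypothetical numbers for illustration only.

```python
def roi_multiple(estimated_annual_savings, annual_solution_cost):
    """ROI expressed as a multiple of the annual solution cost."""
    return estimated_annual_savings / annual_solution_cost

def clears_roi_gate(year1_savings, year1_cost, min_multiple=5.0):
    """Apply the conservative year-1 gate from the text: at least 5X."""
    return roi_multiple(year1_savings, year1_cost) >= min_multiple

# A quote near seven figures needs correspondingly large savings:
ok_low  = clears_roi_gate(4_000_000, 950_000)  # 4M / 950K is about 4.2X -> False
ok_high = clears_roi_gate(5_000_000, 950_000)  # about 5.3X -> True
```

If the conservative estimate fails the gate, that is the cue to revisit the functionality, support, and data subscriptions being quoted before signing.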

Also, be sure to build that model in-house or engage a third party that is not a reseller or implementer of the suite you're considering. First, their savings averages are not guaranteed to apply to your situation. Second, their staffing requirement and reduction averages might not fit your business either. Third, because they often build their savings model as a rollup of savings models across the different modules/functions, many of these suite models end up double-counting resource time or savings numbers by design.

(Please note our choice of wording here: "end up". Usually the provider or consulting organization is not trying to deceive you; they often don't realize that their roll-up model is double counting. What we've seen happen is that they take the best calculators they have access to through consultant or analyst relationships in each area they are selling in the suite (sourcing, SXM, CLM, etc.) and then roll them up. But they fail to understand that the attribution of a savings percentage in each model always favours the solution/module being sold, and may also assume some baseline functionality from another module. As a result, especially since the savings opportunity often changes based on which technologies are available and applied together, all the estimates will be "Best Case" for the selected modules, and when you add those up across five or six modules, you will sometimes get a total "Best Case" that is as much as double what is actually reasonable. For example, you could have a category where, if you applied only spend analysis to the RFX, you could identify 6% savings; if you applied only supplier risk profiling, eliminated the high-risk suppliers, and took the low bids, you could identify 4% savings; and if you applied only strategic sourcing decision optimization, you could get 10%. But if you applied all three, you might only achieve 9%, since optimization finds everything spend analysis finds, and the risk assessment resulted in the manual elimination of a high-risk supplier that optimization didn't catch, lowering the initially identified savings opportunity. Rolling up the three separate models would produce a 20% savings opportunity when it was in fact 9%. Now, in other situations, the rollup could undershoot the actual result of combining the technologies: the RFX alone is projected to identify only 3%, and the negotiation module to save 3% on overheads, but applying both to a targeted subset of suppliers deemed most willing to negotiate based on volume could allow for an 8% reduction. But overall, these rollups don't average out; they usually over-count.)

[Also, most vendors feel they have to do it this way since most buyers don’t buy all the modules and they don’t have enough average savings data across the application of all advanced modules to all categories to have reliable numbers. So you really need to do your own models based on your own situation and come up with realistic estimates.]
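The double-counting effect is easiest to see with the numbers from the example above laid out explicitly. This sketch just reproduces that worked example (6%, 4%, and 10% standalone estimates versus a combined 9%); the figures are the article's illustration, not averages from any real dataset.

```python
# Reproduce the rollup double-counting example from the text: three
# module-level "Best Case" savings estimates summed naively, versus
# what applying all three technologies together actually yields.

standalone_estimates = {
    "spend_analysis": 0.06,         # 6% if applied alone to the RFX
    "risk_profiling": 0.04,         # 4% if applied alone
    "decision_optimization": 0.10,  # 10% if applied alone
}

naive_rollup = sum(standalone_estimates.values())  # 20% "opportunity"

combined_actual = 0.09  # optimization subsumes spend analysis, and the
                        # risk-based supplier elimination trims the
                        # optimized result, so all three together find 9%

overstatement = naive_rollup / combined_actual  # roughly 2.2x overstated
```

The naive rollup more than doubles the realistic figure, which is exactly why a suite-wide savings model built as a sum of per-module calculators should be treated with suspicion.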

Depending on your current state of affairs, current market conditions, and the technologies available that your organization is actually capable of utilizing, that could be an overall estimated cost reduction of 3% against all spend expected to be put through the platform in the first year, or it could be 5% (or more, or less). Even 3% is good if you're spending 1 Billion a year, 500 Million of that is addressable, and you think you can address 20% of the addressable spend, or 100 Million, in the first year. That's an estimated savings of 3 Million, and if your year-1 cost was 750K, that's a reasonable ROI of 4X. It's especially reasonable if you expect increased efficiency in year 2 with familiarity: if you address 200 Million in year 2 and the estimated savings percentage rises to 4%, that's 8 Million in savings and an 8X multiple, even if you need to add more data subscriptions and support that bring the total solution cost up to One Million.
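The arithmetic in that scenario is worth laying out step by step; the figures below are exactly the ones in the paragraph above.

```python
# Year 1: 1B total spend, 500M addressable, 20% of that addressed,
# 3% savings on the addressed spend, against a 750K solution cost.
addressed_y1 = 1_000_000_000 * 0.50 * 0.20  # 100M through the platform
savings_y1 = addressed_y1 * 0.03            # 3M estimated savings
roi_y1 = savings_y1 / 750_000               # about 4X

# Year 2: 200M addressed at 4% savings, with the solution cost growing
# to 1M as data subscriptions and support are added.
savings_y2 = 200_000_000 * 0.04             # 8M estimated savings
roi_y2 = savings_y2 / 1_000_000             # about 8X
```

Note how the multiple improves in year 2 even though the cost rises by a third, because both the addressed spend and the savings rate grow with familiarity.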

Probably not the answer you wanted, since the mid-market looks to be getting off cheap. But mid-market organizations are also spending less overall, addressing less of that spend, seeing fewer volume or consolidation opportunities, fielding fewer resources to tackle the mid-size categories, and losing on the tail since they can't effectively manage it beyond catalogs, budgets, and hoping the 3-bids-and-a-buy are fair (and not rigged through collusion). They might pay less for their S2P solution suite, but their total savings potential is also (considerably) less and, thus, their typical ROI is limited compared to yours.

But, well chosen, at least you’ll get an open, modern, usable solution for One Million dollars per annum — not something you can say in all areas of enterprise software.