
Yes, Jon. Some Analyst Firms Do Stink!

Last Saturday, Jon The Revelator penned a piece on how Going “off-map” is the key to finding the best solution providers, which he correctly said was critical because Gartner reports that 85% of all AI and ML projects fail to produce a return for the business. As per a Forbes article, the reasons often cited for the high failure rate include poor scope definition, bad training data, organizational inertia, lack of process change, mission creep, and insufficient experimentation, and, in the doctor’s view, should also include inappropriate (and sometimes just bad) technology.

The Revelator asked for thoughts and, of course, the doctor was happy to oblige.

He started off with the observation that, while it is impossible to give precise numbers since companies are always starting up, merging, getting acquired, and shutting their doors in our space, statistically, if we look at the average number of logos on a module-quadrant map (about 20) and the average number of providers with that module (about 100, ranging from about 50 for true analytics to about 200 for some variant of SXM), then for every provider “paying” to get that shiny dot, there are 4 going overlooked. And given that the map represents the “average”, that says, statistically, 2 of those 4 are going to be better.

Furthermore, maps should NEVER be used for solution selection (for the many, many reasons the doctor has been continually putting forth here on SI, heck just search for any post with “analyst” in the title over the past year). A good map can be used to discover vendors with comparable solutions, and nothing more.

The Revelator replied that the first thought that came to his mind was the urgency with which we buy the dots on the map without realizing that many of those vendors have paid a considerable sum to get the logo spot, and recounted the story of why he sold his company in 2001. After the successful results of his first big implementation, he was approached by Meta Group (an analyst firm eventually acquired by Gartner), who said his company was on the leading edge and that they wanted to “cover” it. The short story is that, when he said it sounded great, the Meta rep said “terrific, let’s get started right away and the next step is you sign a contract and pay the $20,000 [$36,000 today] invoice that we will send immediately and then we can begin“. Not something easy for a small company to swallow then, and even less so today, when it now costs at least 50% more (in today’s dollars), according to some of the small companies he’s talked to, if they want Gartner attention. [And that’s just for basic coverage. Guaranteed inclusion on some of the big firm maps generally requires a “client” relationship that runs $150,000 or more!]

the doctor’s response to this is that it’s still definitely a pay-to-play game with most of these firms, as per his recent posts where he noted that dozens of the smaller vendors he talked to this year (who keep asking “so, what’s the catch?” when the doctor says he wants to cover them on SI) said they were being quoted between $50K and $70K for any sort of coverage. Wow!

Furthermore, while Duncan Jones insists it is likely just a few bad apples, those bad apples are so rotten that many of these smaller firms steadfastly believed they couldn’t even brief an analyst if they didn’t pay up (as the rep wouldn’t let them). And it wasn’t just one firm whose name the doctor heard over and over … 4 (four) different firms each received over 3 (three), sometimes very angry, mentions across the two to three dozen conversations where the smaller vendor was willing to indicate which firm was quoting them $50,000 to $70,000+ or was not willing to talk to them unless they signed a client agreement. (the doctor has reached out to over 100 small companies over the past year, and almost every response indicated that they expected there would be a fee for coverage based on their analyst firm interactions; when he asked why, the majority said they were quoted (high) fees by one or more other firms that said they wanted to “cover” them.)

So yes, most of the smaller firms without big bank accounts aren’t making it onto these maps (because they hired people who could actually build products vs. people who could bullsh!t investors and raise the money to pay these “analyst” firms). (Especially since an analyst from at least one firm has admitted that they were only allowed to feature clients in their maps, and an analyst from another firm has admitted that they had to design the criteria for inclusion to maximize client exposure and minimize the chances of a non-client qualifying, as they were limited in the number of non-clients they could include in the map [to low single digits].)

And, furthermore, when you look at those vendors that did make it, The Revelator is correct in the implication that some of them can’t carry more than a tune or two (despite claiming to carry 20).

And it seems that the doctor’s punch hit a little harder than The Revelator expected, because he followed it up with a post on Monday where he asked us Is This True?!? Of course the doctor, who had already recounted his tales of rage on LinkedIn in response to his posts that asked Are Traditional Analyst and Consulting Models Outdated and/or Unethical? and Does it Matter if Analyst Firms Aren’t Entirely Pay-to-Play if the Procurement Space Thinks They Are, couldn’t let this one go (because, to be even more blunt, he doesn’t like being accused of being an unethical jackass just because he’s an analyst, because not all jackasses are unethical, and things would be different if all analysts and consultants were as honest and hardworking as a real jackass [can you say foreshadowing?]) and responded thusly:

Well, this was the first year doing his (own) reach-outs [as the client relations team did them at Spend Matters, so he really hasn’t done many since 2017] where a few companies said they wouldn’t talk to him and/or show him anything because, if they weren’t being charged, then the doctor was just going to “steal their information and sell it to their competitors“, like a certain other analyst firm whose name won’t be mentioned.

Yes, the doctor is getting a lot more “what’s the catch?” than he ever did! Apparently analysts/bloggers don’t do anything out of the goodness of their hearts anymore; there’s always a price. (Even when the doctor tells them the catch is “the doctor chooses who he covers, when, and DOES NOT actively promote the piece since no one is paying for it“, some still don’t believe him, even when he follows it up with a further explanation that, even if he covers them, he likely won’t cover them again for at least two years, because he wants to give all innovative or underrepresented vendors a chance, and may even ignore them completely during that time, even if they reach out, because, again, they’re not a client and his goal is to give a shot to as many companies as he can.)

Also, in the doctor‘s view, this is a big reason that analyst firms need to step up and help fix the Procurement Stink, but you can guess the response he received to the following post on The Procurement Stink (and if you can’t, ask the crickets).

The Revelator concluded his question with a reference to a John Oliver assertion about McKinsey, a firm that bluntly stated, “We don’t learn from clients. Their standards aren’t high enough. We learn from other McKinsey partners.” (and if this isn’t a reason to never use McKinsey again, then what is?), and asked if this was true across the analyst and consulting space. the doctor’s response was that the John Oliver assertion was representative of a different problem: the Big X consultancies are too busy sniffing their own smug to realize that, hey, sometimes their clients are smarter (and so are a few analysts as well, but the problem is not nearly as common in analyst firms as it is in Big X consulting firms).

Our problem as independent analysts who try to be fair and ethical is that a few of these big analyst firm sales reps are ruining our reputation. And the fact that these big firms don’t immediately throw out the rotting trash that these sales reps are is why some analyst firms do stink!

To this The Revelator promptly replied that as always, the doctor isn’t pulling his punches, which is true because …

we’re getting older. We don’t have the stamina to dance around all day pulling punches, only enough to hit fast and hard, especially since that’s the only chance of pulling some of these young whipper-snappers out of the daze they are in as a result of the market madness and inflated investments (ridiculous 10X to 20X+ valuations were not that uncommon during COVID when everyone was looking for SaaS to take business online).

Not to mention, we’ve heard and seen it all at least twice before, probably three times on The Revelator‘s end (sorry!), and we know that there is very little that’s truly new in our space under the sun. With most companies, it’s just the new spin they manage to find every few years to bamboozle the market into thinking their solution will find considerably more value (with less functionality) than the less glitzy solution that came before (and which has already been proven to work, when used correctly, at dozens of clients).

Of course, we first have to accept there is no big red easy button and that, gasp, we have to go back to actually TRAINING people on what needs to be done!

Another problem we have is that when the Big X listen to Marvin Gaye and Tammi Terrell, they hear:

𝘓𝘪𝘴𝘵𝘦𝘯 𝘣𝘢𝘣𝘺, 𝘢𝘪𝘯’𝘵 𝘯𝘰 𝘮𝘰𝘶𝘯𝘵𝘢𝘪𝘯 𝘩𝘪𝘨𝘩
𝘢𝘪𝘯’𝘵 𝘯𝘰 𝘷𝘢𝘭𝘭𝘦𝘺 𝘭𝘰𝘸, 𝘢𝘪𝘯’𝘵 𝘯𝘰 𝘳𝘪𝘷𝘦𝘳 𝘸𝘪𝘥𝘦 𝘦𝘯𝘰𝘶𝘨𝘩, 𝘤𝘭𝘪𝘦𝘯𝘵
𝘧𝘰𝘳 𝘮𝘦 𝘵𝘰 𝘴𝘢𝘺 𝘵𝘩𝘢𝘵 𝘺𝘰𝘶 𝘮𝘪𝘨𝘩𝘵 𝘬𝘯𝘰𝘸 𝘴𝘰𝘮𝘦𝘵𝘩𝘪𝘯𝘨 𝘐 𝘥𝘰𝘯’𝘵
𝘕𝘰 𝘮𝘢𝘵𝘵𝘦𝘳 𝘩𝘰𝘸 𝘴𝘮𝘢𝘳𝘵 𝘺𝘰𝘶 𝘳𝘦𝘢𝘭𝘭𝘺 𝘢𝘳𝘦
𝘴𝘪𝘯𝘤𝘦 𝘵𝘩𝘦𝘳𝘦’𝘴 𝘯𝘰 𝘨𝘳𝘦𝘦𝘥, 𝘵𝘩𝘢𝘵 𝘳𝘶𝘯𝘴 𝘯𝘦𝘢𝘳𝘭𝘺 𝘢𝘴 𝘥𝘦𝘦𝘱
𝘢𝘴 𝘵𝘩𝘦 𝘨𝘳𝘦𝘦𝘥 𝘪𝘯 𝘰𝘶𝘳 𝘩𝘦𝘢𝘳𝘵𝘴

The whole reason for a Big X firm’s existence is to continually sell you on what they know, whether it’s better or worse, because, once they have a foot in the door, you’re their cash cow … and the sooner they can convince you that you’re dependent on them, the better. Remember, they’ve stuffed their rafters with young turkeys that they need to get off the bench (or fall prey to the consulting bloodbath described by THE PROPHET in the linked article), and the best way to keep them off the bench is to make you reliant on them.

Unlike independent consultants like us (or small niche, specialist consultancies with limited resources), they don’t want to go in, do the job, deliver a result, and move on to something better (or a new project where they can create additional value for a client) … these Big X consultancies want to get in, put dozens of resources in a shared services center, and bill you 3X on them for life.

(If the doctor or The Revelator sticks with you for more than 12 to 24 months, it’s because we keep moving onto new projects that deliver new sources of value, not because we want to monitor your invoice processing for the rest of our lives, or star in the remake of “Just Shoot Me!”)

(And to those of you who told the doctor he was mean, he’d like to point out that while he was, and is, being brutally honest because that IS the modus operandi of the Big X consultancies, he wasn’t mean as he didn’t call them, or anyone they employ, a F6ckW@d. [That requires more than just following your playbook that is well known to anyone who wants to do their research.])

This brought the reply from The Revelator that:

In another article, he discussed the fact that VPs of Sales and Marketing change jobs every two to three years.

As far as The Revelator could tell, they are put in an unwinnable position of hitting unreasonable targets based on transactional numbers rather than developing relationships and solving client problems. That is not a fair position because the focus shifts from what’s working for the client to hitting quarterly targets where, in many instances, the only client success is found in the press release.

The Revelator remembers many years ago talking with a top sales rep from Oracle who said that with his company you are only as good as your last quarter. When he said good, he was really talking about job security.

Ultimately, the most powerful testimony regarding the inherent flaws of the above approach is Gartner’s recent report that 85% of all AI and ML initiatives fail.

To this the doctor could only respond:

He can’t argue that. This is one of the problems with taking VC/PE money at ridiculous valuations (of more than 5X to 6X, which is the max that one can expect to recoup in 5 (five) years at an achievable year-over-year growth rate of roughly 40% without significantly increasing price tags for no additional functionality). The problem for Sales and Marketing is they now have to tell the market that, suddenly, their product is worth 3X what they quoted before the company took the VC/PE money at the ridiculous multiple, and that if the customer pays only 2X (for NO new functionality, FYI), the customer is getting a deal. The problem is that the investors expect their money back in a short time frame WHILE significantly bumping up overhead (on overpriced Sales and Marketing), which is not achievable unless the company can double or triple the price of what it sells. This is often just not doable even by the best of marketers or sales people, which forces them out the door on a 2 to 3 year cycle. (Because, as you noted, as soon as they manage to get a few sales at that price tag, the investors realize they also need to add more implementation and support personnel, increasing overhead further, so the Sales and Marketing quotas go up even more, and all of the reps will eventually break if they don’t get out and go somewhere else just before they can no longer hit the unachievable target.)
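To spell out the arithmetic behind that 5X to 6X ceiling, here’s a quick back-of-the-envelope sketch using the 40% year-over-year growth rate and five-year horizon quoted above (the 10X and 20X entry multiples are just the ones mentioned earlier in the exchange; this is an illustration, not a valuation model):

```python
# Back-of-the-envelope: at roughly 40% year-over-year revenue growth, how much of a
# valuation multiple can a vendor "grow into" in five years without raising prices?

growth_rate = 0.40
years = 5

grow_into = (1 + growth_rate) ** years
print(f"revenue multiple after {years} years: {grow_into:.2f}X")    # ~5.38X

# A 10X to 20X entry multiple therefore leaves roughly a 2X to 4X gap that can only
# be closed by doubling or tripling prices for the same functionality, as argued above.
for entry_multiple in (10, 20):
    print(f"{entry_multiple}X entry multiple needs ~{entry_multiple / grow_into:.1f}X price uplift")
```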


The Revelator also noted that he now makes money through a low monthly fee that includes his experience and sales expertise, a model he first used when he started his blog, because making money is not bad when it is reasonably priced in relation to the services and value being delivered, and the doctor wholeheartedly agrees …

in fact, the doctor used to do that too … but good luck finding more than one or two companies these days that honestly care about reader education and not just pushing their marketing message down a target’s throat … that’s why SI sponsorships are still suspended (and will be until he finds 4 companies that are willing to return to the good old days where education came first — which, FYI, is one of the keys to long term success).

SI sponsorships included a day of advisory every quarter and a post covering the vendor’s solution (in the doctor’s words, not theirs), which could be updated semi-annually if warranted, plus a new article whenever the vendor released a new module or significant new functionality (and it was the doctor’s call as to what counted as significant).

At least The Revelator can still go back to Coupa or Zycus for sponsorship … EVERY SINGLE SPONSOR SI had pre-Spend Matters (when sponsorships were suspended on SI for obvious reasons) was eventually acquired (which is why they all eventually dropped off).

(Just to be clear, the doctor is NOT saying it was the SI sponsorship, or even the doctor’s advisory, that resulted in their success [although he hopes it contributed], but he is saying that companies who are willing to listen and learn from experts, and who care more about educating and helping clients than just shoving a message down their throat, tend to do very well in the long run. Very, very well.

This is something the new generation of know-it-all thirty-somethings popping up start-ups in our space every other week don’t seem to get yet and likely won’t until they have their first failure! It’s just too bad they are going to take good investors, good employees, and beta/early clients down with them when there is no need.)

The Gen AI Fallacy

For going on 7 (seven) decades, AI cult members have been telling us that if they just had more computing power, they’d solve the problem of AI. For going on 7 (seven) decades, they haven’t.

They won’t as long as we don’t fundamentally understand intelligence, the brain, or what is needed to make a computer brain.

Computing will continue to get exponentially more powerful, but it’s not just a matter of more powerful computing. The first AI program had a single core to run on. Today’s AI programs have 10,000-core super clusters. The first AI programmer had only his salary and elbow grease to code and train the model. Today’s AI companies have hundreds of employees and billions in funding, and have spent $200M to train a single model … which, upon release to the public, told us we should all eat one rock per day. (Which shouldn’t be unexpected, as the number of cores we have today powering a single model is still less than the number of neurons in a pond snail.)

Similarly, the “models” will get “better”, relatively speaking (just like deep neural nets got better over time), but if they are not 100% reliable, they can never be used in critical applications, especially when you can’t even reliably predict confidence. (Or, even worse, you can’t even have confidence the result won’t be 100% fabrication.)

When the focus was narrow machine learning and focussed applications, and we accepted the limitations we had, progress was slow, but it was there, it was steady, and the capabilities and solutions improved yearly.

Now the average “enterprise” solution is decreasing in quality and application, which is going to erase decades of building trust in the cloud and reliable AI.

And that’s the fallacy. Adding more cores and more data just accelerates the capacity for error, not improvement.

Even a smart Google Engineer said so. (Source)

opstream: taking the orchestration dream direct!

INTRODUCTION

opstream was founded in 2021 to bring enterprise-level orchestration, previously only available to IT, to Procurement in a manner that supported all of Procurement’s needs, regardless of what those needs were. In the beginning, this meant allowing Procurement not only to create end-to-end workflows from request through requisition through reaping, but also to create workflows out into logistics tracking, inventory management, and supply chain visibility, as required. Workflows that incorporated not just forms, but processes, intelligence, and, most importantly, team collaboration.

In other words, on an initial analysis, not much different from most of the other intake-to-orchestration platforms that were popping up faster than the moles in whack-a-mole, especially since every platform was, and still is, claiming full process support, limitless integration, and endless collaboration while bringing intelligence to your Procurement function. (There was one big difference, and we’ll get to that later.)

However, unlike many founders who came from Procurement and assumed they knew everything that was needed, or who came from enterprise IT orchestration solutions and thought they knew everything Procurement needed, the founders of opstream admitted on day one that they didn’t know Procurement, interviewed over 400 professionals during development, and also realized that the one thing their orchestration platform had to do, if it was to be valuable to their target customers, was support direct. And they were 100% right. Of the 700+ solutions out there in S2P+ (see the Sourcing Innovation MegaMap), less than 1/20th address direct in any significant capacity, and none of the current intake-to-orchestrate platforms were designed to support direct from the ground up on day one.

So what does opstream do?

SOLUTION SUMMARY

As with any other intake-to-orchestrate platform, the platform has two parts, the user-based “intake” and the admin-based “orchestrate”.

We’ll start with the primary components of the admin-based orchestrate solution.

Intake Editor

The intake editor manages the intake schemas that define the intake workflows, which are organized by request type. In opstream, an intake workflow will contain a workflow process for the requester, the approver(s), the vendor, and the opstream automation engine, as well as a section that defines the extent to which the approvers can impact the workflow.

The workflow builder is form-based and allows the administrator to build as many steps as needed, with as many questions of any type as needed, using pre-built blocks that can accelerate the process and any and all available data sources for request creation and validation. Any of these sources, from all integrated systems, can also be used for conditional logic validations, and this logic can determine whether or not a step or a question is shown, what values are accepted, or how an answer influences a later question on the form.

Building a workflow is as easy as building an RFX, as the user just selects from a set of basic elements such as a text block, numeric field, date/time object, multiple choice, data source, logic block, etc.

The data source element allows a user to select any object definition in an integrated data source, select an associated column, and restrict accepted values to entries in that column. The form will then dynamically update and adjust as the underlying data source changes.
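To make the conditional logic and data-source-backed fields above concrete, here is a minimal sketch in Python. The schema, field names, and the show_if convention are entirely hypothetical illustrations of the concept, not opstream’s actual configuration format:

```python
# Hypothetical illustration only: a request-type schema where each question can
# carry a "show_if" condition and a data-source-backed value restriction.

laptop_request = {
    "request_type": "laptop_purchase",
    "steps": [
        {
            "name": "basics",
            "questions": [
                {"id": "department", "type": "data_source",
                 "source": ("erp", "departments", "name")},   # restrict answers to ERP department names
                {"id": "budget_owner_approval", "type": "multiple_choice",
                 "options": ["yes", "no"],
                 "show_if": ("department", "in", ["R&D", "Engineering"])},  # conditional question
                {"id": "amount", "type": "numeric", "max": 5000},
            ],
        },
    ],
}

def visible_questions(step, answers):
    """Return the questions to display given the answers captured so far."""
    shown = []
    for question in step["questions"]:
        condition = question.get("show_if")
        if condition is None:
            shown.append(question)
            continue
        field, operator, values = condition
        if operator == "in" and answers.get(field) in values:
            shown.append(question)
    return shown

print([q["id"] for q in visible_questions(laptop_request["steps"][0], {"department": "R&D"})])
# -> ['department', 'budget_owner_approval', 'amount']
```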

In addition to having workflows that adjust as related systems and data sources change, the platform also has one other unique capability when it comes to building workflows for Procurement requests: it understands multiple item types.

  • inventory item, non-inventory item, and service, which are understood by many platforms
  • other charge, a capability for capturing non-PO spend that only a few deep Procurement platforms understand
  • most importantly, assembly/bill of materials, an option the doctor hasn’t seen anywhere else (and which enables true direct support)

As long as the organization has an ERP/MRP or similar system that defines a bill of materials, the opstream platform can model that bill of materials and allow the administrators to build workflows where the manufacturing department can request orders, or reorders, against part, or all, of the bill of materials.
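As a purely illustrative sketch of the idea (the data structures and the reorder_request helper below are hypothetical, not opstream’s API), a reorder against part of a bill of materials mirrored from an ERP/MRP might look something like this:

```python
# Hypothetical illustration: a bill of materials mirrored from an ERP/MRP and a
# reorder request raised against a subset of its components.

bom = {
    "assembly": "PUMP-1000",
    "components": {
        "HOUSING-A":  {"qty_per_assembly": 1, "uom": "ea"},
        "IMPELLER-B": {"qty_per_assembly": 2, "uom": "ea"},
        "SEAL-KIT-C": {"qty_per_assembly": 4, "uom": "ea"},
    },
}

def reorder_request(bom, assemblies_needed, components=None):
    """Build request lines for all, or a subset of, the BOM components."""
    selected = components or bom["components"].keys()
    return [
        {
            "part": part,
            "quantity": bom["components"][part]["qty_per_assembly"] * assemblies_needed,
            "uom": bom["components"][part]["uom"],
            "parent_assembly": bom["assembly"],
        }
        for part in selected
    ]

# Manufacturing only needs impellers and seal kits for the next run of 50 assemblies.
for line in reorder_request(bom, 50, components=["IMPELLER-B", "SEAL-KIT-C"]):
    print(line)
```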

In addition, if the organization orders a lot of products that need to be customized, such as computer/server builds, 3D printer assemblies, or fleet vehicles, the admins can define special assembly / configurator workflows that can allow the user to specify every option they need on a product or service when they make the request.

The approval workflows can be as sparse or as detailed as the request process, and can have as few or as many checks as desired. This can include verifications against budgets, policies, and data in any integrated system. As with any good procurement system, approvals can be sequential or parallel, restricted to users or opened to teams, and can be short-circuited by super-approvers.

In addition, workflows can also be set up for vendors to get requests from the buyer, provide information, and execute their parts of the workflow, including providing integration information for their systems for automatic e-document receipt, transmission, update, and verification.

Finally, the automation workflow can be set up to automate the creation and distribution of complete requisitions for approval and complete purchase orders for delivery, the receipt and acknowledgement of vendor order acknowledgements, advance shipping notices, and invoices, and the auto-transmission of ok-to-pay and payment verifications.

But it doesn’t have to stop there. One big differentiator of the opstream platform, because it was built to be an enterprise integration platform at its core, is that (as we’ve already hinted in our discussion of how, unlike pretty much every other intake/orchestrate platform, it supports assembly/bill of materials out of the box by integrating with ERPs/MRPs) it doesn’t have to stop at the “pay” in source-to-pay. It can pull in the logistics management/monitoring system to track shipments and inventory en route. It can be integrated with the inventory management system to track current inventory, help a Procurement organization manage requisitions against inventory, and guide buyers when inventory needs to be replenished. It can also integrate with quality management and service tracking systems to track the expected quality and lifespan of the products that come from inventory and warn buyers if quality issues or the number of service issues is increasing at the time of requisition or reorder.

Data Source Manager

opstream comes with hundreds of systems integrated out of the box, but it’s trivial for opstream to add more platforms as needed (as long as those platforms have an open API), as the opstream platform has built-in data model discovery capabilities and support for the standard web connection protocols. This means that adding a new data source is simply a matter of specifying the connection strings and parameters; the source will then be integrated and its public data model auto-discovered. The admin can then configure exactly what is available for use in the opstream solution, who can see/use what, and the sync interval.
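For illustration only, a connector registration along these lines might look like the following sketch. The connector fields, the discover_data_model helper, and the endpoint are all assumptions invented for the example, not opstream’s actual interface:

```python
# Hypothetical illustration: registering a new data source by supplying connection
# details, auto-discovering its public data model, and exposing only selected
# objects on a sync interval. None of these names are opstream's actual interface.

import json
import urllib.request

def discover_data_model(base_url, api_key):
    """Fetch the source's OpenAPI document and list the object types it exposes."""
    request = urllib.request.Request(
        f"{base_url}/openapi.json",
        headers={"Authorization": f"Bearer {api_key}"},
    )
    with urllib.request.urlopen(request) as response:
        spec = json.load(response)
    return sorted(spec.get("components", {}).get("schemas", {}).keys())

connector = {
    "name": "acme_wms",                                   # hypothetical warehouse management system
    "base_url": "https://wms.example.com/api/v2",
    "auth": {"type": "bearer", "api_key": "REDACTED"},
    "sync_interval_minutes": 30,
    "exposed_objects": ["Shipment", "InventoryLevel"],    # what admins allow workflows to use
}

# The discovery call is commented out because the endpoint above is fictitious:
# connector["exposed_objects"] = [
#     o for o in discover_data_model(connector["base_url"], connector["auth"]["api_key"])
#     if o in ("Shipment", "InventoryLevel")
# ]
```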

Now we’ll discuss the primary components of the user-based intake solution.

Requests

The request screen centralizes all of the requests a user has access to, which include, but are not limited to, requests created by, assigned to, archived by, and departmentally associated with the user. They can be filtered by creator, assignee, status, request type, category, time left, date, and other key fields defined by the end user organization.

Creating a new request is simple. The user simply selects a request type from a predefined list and steps through the workflow. The workflows can be built very intelligently such that, whenever the user selects an option, all other options are filtered accordingly. If the user selects a product that can only be supplied by three vendors, only those vendors will be available in the requested vendor drop-down. Alternatively, if a vendor is selected first, only the products the vendor offers will be available for selection. Products can be limited to budget ranges, vendors to preferred status, and so on. Every piece of information is used to determine what is, and is not, needed and to make the process as simple as possible for the user. If the vendor or the product is not preferred, and there is a preferred vendor or product, the workflow can be coded to proactively alert the requester before the request is made. The buyer can also define any quotes, certifications, surveys, or other documentation required from the supplier before the PO is cut. (And then, once the requisition is approved, the vendor work stream will kick off.) And, once the vendor work stream is complete, the final approvals can be made and the system will automatically send the purchase order and push the right documentation into the right systems.
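As a toy sketch of that mutual filtering (the catalogue, functions, and preferred-vendor rule below are invented for illustration and are not opstream’s implementation):

```python
# Hypothetical illustration: mutual filtering of vendor and product options as the
# requester makes selections. The catalogue and rules are invented for this sketch.

catalogue = [
    {"product": "14-inch laptop",  "vendor": "Vendor A", "preferred": True},
    {"product": "14-inch laptop",  "vendor": "Vendor B", "preferred": False},
    {"product": "27-inch monitor", "vendor": "Vendor B", "preferred": True},
    {"product": "Docking station", "vendor": "Vendor C", "preferred": True},
]

def vendor_options(selected_product=None):
    """Vendors still eligible given the (optional) product already selected."""
    rows = [r for r in catalogue if selected_product in (None, r["product"])]
    return sorted({r["vendor"] for r in rows})

def product_options(selected_vendor=None):
    """Products still eligible given the (optional) vendor already selected."""
    rows = [r for r in catalogue if selected_vendor in (None, r["vendor"])]
    return sorted({r["product"] for r in rows})

def preferred_alert(product, vendor):
    """Alert if the chosen combination is not preferred but a preferred one exists."""
    chosen = next(r for r in catalogue if r["product"] == product and r["vendor"] == vendor)
    return (not chosen["preferred"]) and any(
        r["preferred"] for r in catalogue if r["product"] == product
    )

print(vendor_options("14-inch laptop"))               # ['Vendor A', 'Vendor B']
print(product_options("Vendor B"))                    # ['14-inch laptop', '27-inch monitor']
print(preferred_alert("14-inch laptop", "Vendor B"))  # True -> warn before the request is made
```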

Vendors

Vendors provides the user with central access to all loaded organizational vendors and a quick summary of onboarding/active status, preferred status, number of associated products, number of associated requests, and total spend. Additional summary fields can be added as required by the buying organization.

Documents

Documents acts as a central document repository for all relevant vendor, product, and Procurement related information, from request through receipt, vendor onboarding through offboarding, and product identification through end-of-life retirement of the final unit. Documents have categories, associated requests, vendors, and/or products, status information, dates of validity, and other metadata relevant to the organization. Documents can be organized into any categorization scheme the buying organization wants and can include compliance documents, insurance, NDAs, contracts, security reports, product specifications, product certifications, sustainability reports, and so on.

Analytics

The analytics component presents a slew of ready-made dashboards that summarize the key process, spend, risk, compliance, and supply chain inventory metrics that aren’t available in any individual platform. Right now, there is no DIY capability to build dashboards, and all of them have to be created by opstream, but opstream can create new custom dashboards very quickly during rollout, and you can get cross-platform insights that include, but are not limited to, the following (a toy sketch of one such metric follows the list):

  • process time from purchase order (e-Pro) to carrier pickup to warehouse arrival (TMS) to distribution time to the retail outlets (WIMS)
  • contract price (CMS) to ePro (invoice) to payment (AP) as well as logistics cost (invoice) to tax (AP) to build total cost pictures for key products relative to negotiated unit prices
  • risk pictures on a supplier that include financial (D&B), sustainability (Ecovadis), quality (QMS), failure rate (customer support), geolocation (supply chain risk), geopolitical risk (supply chain risk), transportation risk (OTD from the TMS), etc.
  • compliance pictures that pull data from the insurer, regulatory agencies, internal compliance department, and third party auditor
  • supply chain inventory metrics that include contractual commitment (CLM), orders (ePro), fulfillments (inventory management), current inventory (inventory management), commitments (ERP/MRP), etc.
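As promised above, here is a toy sketch of the second bullet, a total cost picture for one part built by joining records from the CLM, e-Pro, AP, and TMS (the fields and figures are invented for illustration; this is not opstream’s data model):

```python
# Hypothetical illustration: a total cost picture for one part, built by joining
# records pulled from the CLM, e-Pro, AP, and TMS. All values are invented.

records = {
    "clm":  {"part": "SEAL-KIT-C", "contract_unit_price": 11.50},
    "epro": {"part": "SEAL-KIT-C", "invoiced_unit_price": 12.10, "units": 200},
    "ap":   {"part": "SEAL-KIT-C", "tax": 242.00},
    "tms":  {"part": "SEAL-KIT-C", "freight": 180.00},
}

units = records["epro"]["units"]
landed_unit_cost = (
    records["epro"]["invoiced_unit_price"]
    + (records["ap"]["tax"] + records["tms"]["freight"]) / units
)
variance_vs_contract = landed_unit_cost - records["clm"]["contract_unit_price"]

print(f"landed unit cost:     {landed_unit_cost:.2f}")      # 14.21
print(f"variance vs contract: {variance_vs_contract:.2f}")  # 2.71
```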

In addition, since all data is available through the built-in data bus, if the user wants to build her own dashboards, she can push all of the data into a (spend) analytics application to do her own analysis, and with opstream‘s ability to embed third party analytics apps (PowerBI for now, more coming), the user can even see the analytics inside the opstream platform.

This is the second main differentiator of opstream that a user will notice. The founders realized that not only is data key, so is integrated analytics, and they built a foundation to enable it.

Which leads us to the third and final differentiator, the one you don’t see: the data model. The data model is automatically discovered and built for each organization as their systems are integrated. Beyond a few core entities and core identifiers upon which the data model is automatically built, opstream doesn’t impose a rigid data model that all pieces of data need to map to (or get left out of the system). This ensures that an organization always has full access to all of its integrated data upon which to do cross-platform analytics on process, spend, inventory, and risk.

CONCLUSION

opstream understands there is no value in intake or orchestration on its own, and that, to be relevant to Procurement, it has to do more than just connect indirect S2P systems together. As a result, opstream has built in support for direct, dynamic data model discovery, and integration with the end-to-end enterprise systems that power the supply chain, allowing an organization to go beyond simple S2P and identify value not identifiable in S2P systems alone. As a result, opstream should definitely be on your shortlist if you are looking for an integration/orchestration platform (to connect your last generation systems to next generation systems through the cloud) that will allow you to increase overall system value (vs. just increasing overall system cost).

Challenging the Data Foundation ROI Paradigm

Creactives SpA recently published a great article Challenging the ROI Paradigm: Is Calculating ROI on Data Foundation a Valid Measure, which was made even greater by the fact that they are technically a Data Foundation company!

In a nutshell, Creactives is claiming that trying to calculate direct ROI on investments in data quality itself as a standalone business case is absurd. And they are totally right. As they say, the ROI should be calculated based on the total investment in data foundation and the analytics it powers.

The explanation they give cuts straight to the point.

It is as if we demand an ROI from the construction of an industrial shed that ensures the protection of business production but is obviously not directly income-generating. ROI should be calculated based on the total investment, that is, the production machines and the shed.

In other words, there’s no ROI on Clean Data or on Analytics on their own.

And they are entirely correct — and this is true whether you are providing a data foundation for spend analysis, supplier discovery and management, or compliance. If you are not actually doing something with that data that benefits from better data and better foundations, then the ROI of the data foundation is ZERO.

Creactives is helping to bring to light three fallacies that the doctor sees all the time in this space. (This is very brave of them, considering that they are the first data foundation company to admit that their value is zero unless embedded in a process that will require other solutions.)

Fallacy #1. A data cleansing/enrichment solution on its own delivers ROI.

Fallacy #2. You need totally cleansed data before you can deploy a solution.

Fallacy #3. Conversely, you can get ROI from an analytics solution on whatever data you have.

And all of these are, as stated, false!

ROI is generated from analytics on cleansed and enriched data. And that holds true regardless of the type of analytics being performed (spend, process, compliance, risk, discovery, etc.).

And that’s okay, because this is a situation where the ROI from both is often exponential, and considerably more than the sum of its parts. Especially since analytics on bad data sometimes delivers a negative return! What the analytics companies don’t tell you is that the quality of the result is fully dependent on the quality, and completeness, of the input. Garbage in, garbage out. (Unless, of course, you are using AI, in which case, especially if Gen-AI is any part of that equation, it’s garbage in, hazardous waste out.)

So compute the return on both. (And it’s easy to partition the ROI by investment: if the data foundation is 60% of the investment, credit it with 60% of the return, and its share of the overall ROI is simply 0.6 × Return/Investment.)
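A quick worked example of that partition, with invented figures purely for illustration:

```python
# Invented figures: a $600K data foundation plus a $400K analytics layer that
# together unlock $3M in identified savings.

investment_foundation = 600_000
investment_analytics  = 400_000
total_investment = investment_foundation + investment_analytics
total_return = 3_000_000

overall_roi = total_return / total_investment                  # 3.0X on the combined spend
foundation_share = investment_foundation / total_investment    # 0.6

# Attribute the return pro rata to each component's share of the investment.
foundation_return = foundation_share * total_return            # $1.8M credited to the foundation
foundation_roi = foundation_return / investment_foundation     # ... which is the same 3.0X

print(f"overall ROI: {overall_roi:.1f}X")
print(f"foundation share of investment: {foundation_share:.0%}")
print(f"return credited to the foundation: ${foundation_return:,.0f}")
print(f"foundation ROI on its own investment: {foundation_roi:.1f}X")
```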

Then, find additional analytics-based applications that you can run on the clean data, increase the ROI exponentially (while decreasing the cost of the data foundation in the overall equation), and watch the value of the total solution package soar!

Procurement 2024 or Procurement’s Greatest Hits? McKinsey’s on the money, but … Part 4

… in some cases this is money you should have been on a decade ago!

Let’s backtrack. As we noted in Part 1, McKinsey ended Q1 by publishing a piece on Procurement 2024: The next ten CPO actions to meet today’s toughest challenges, which had some great advice, but in some cases these were actions that your Procurement organization should have been taking five, if not ten, years ago. And, if your organization was already taking those actions, it should be moving on to the true next actions the article didn’t even address.

So, as you probably guessed, we’re in the midst of discussing each one, giving credit where credit is due (they are pretty good at strategy, after all), indicating where they missed a bit, and telling you what to do next if you are already doing the actions you should have been doing years ago. And, just like we did with THE PROPHET‘s predictions, we’re grading them. In this final instalment, we’ll tackle the last two actions, which they group under the heading of:

OPERATING MODEL OF THE FUTURE

9. Digitize end-to-end procurement processes. A-

This is yet another action that you should have been working on since the first Procurement platform hit the market over 25 years ago, but an action that you likely couldn’t have completed until recently, when the introduction of orchestration platforms made the interconnection of all of the systems used by Procurement affordable for the average company, even when the company needed connectivity with systems in Finance, Logistics, Risk, and Supply Chain to access the pieces of information Procurement needs to adequately do its job. (Before these systems, you needed to be a large enterprise with a huge IT budget to afford the integration work to attempt this.) Plus, the “AI” tools you need to digitize old-school paper documents, classify data, process contracts to identify potential issues, etc. used to be too pricey — now that seemingly every vendor has them, they are affordable too.

10. Build new capabilities for the buyer of the future. A+

Prepare the organization for Procurement’s future by investing in new abilities for advanced market research, integrated technology, and talent development. Equip procurement professionals with deep insights and tools to understand and address supply market dynamics, risks, economics, and ESG. Given how supply chains have been in constant flux since the start of COVID, with no end in sight as a result of geo-political conflict, natural events (drought in the Panama Canal) and disasters, supply shortages, rampant cost increases, and so on, the organization is going to need every capability available today, and a few not invented yet, to survive tomorrow. Start researching, testing, and developing now … before it’s too late. (Just leave Gen-AI out of the picture!)

So, putting it all together, the grades were B, B, A, B+, A-, A, B+, A+, A-, A+. Not a bad report card.