CF Suite for your Consumer-Friendly Source-to-Contract Needs

Founded in 2004 to help public and private sector companies save money through reverse auctions, the Curtis Fitch solution has since expanded into a full source-to-contract procurement suite, which includes extensive supplier onboarding and evaluation, supplier performance management, contract lifecycle management, and spend analysis. Curtis Fitch offers the following capabilities in its solution.

Supplier Insights

CF Supplier Insights is their supplier registration, onboarding, information, and relationship management solution. It supports the creation and delivery of customized questionnaires, which can be associated with organizational categories anywhere in the supported four-level hierarchy, so that suppliers are only asked to provide the information the organization needs for their qualification. You can track insurance and key certification requirements, with due dates for auto-reminders, enabling suppliers to self-serve. Supplier Insights offers task-oriented dashboards to help a buyer or evaluator focus on what needs to be done.

The supplier management module presents supplier profiles in a clear and easy-to-view way, showing company details, registration audit, location, contact information, etc. You can quickly view an audit trail of any activity the supplier is linked to in CF Suite, including access to onboarding questionnaires, insurance and certification documents, events they were involved in, quotes they provided, contracts that were awarded, categories they are associated with, and balanced scorecards.

When insurance and certifications are requested, so is the associated metadata like coverage, award date, expiry date, and insurer/granter. This information is monitored, and both the buyer and supplier are alerted when the expiration date is approaching. The system defines default metadata for all suppliers, but buyers can add their own fields as needed.

It’s easy to search for suppliers by name, status, workflow stage, and location, or simply scan through them by name. The buyer can choose to “hide” suppliers that have not completed the registration process and they will not be available for sourcing events or contracting.

eSourcing

CF eSourcing is their sourcing project management and RFx platform, where a user can define event and RFx templates, create multi-round sourcing projects, evaluate responses using weighted scoring and multi-party ratings, define awards, and track procurement spend against savings. All of the supplier metadata is also available for scorecards, contracting, and event creation, so if a supplier doesn’t have the necessary coverage or certification, the supplier can be filtered out of the event, or the buyer can proactively ensure they are not invited.

Events can be created from scratch but are usually created from templates to support standardization across the business. An RFx template can define stakeholders, suppliers (or categories), and any sourcing information, including important documentation. In addition, a procurement workplan can be designed to reflect any sign-off gates needed to support the public sector requirements some buying organizations must adhere to.

Building RFx templates is easy, and there is a variety of question styles available, depending on the response required from the vendor (e.g. free text, multi-choice, file upload, financial, etc.). RFxs can be built by importing question sets, linking to supplier onboarding information, or via a template. The tool offers tender evaluation with auto-weighting and scoring functionality (based on values or pre-defined option selections). Their clients’ buyers can invite stakeholders to evaluate a tender, and what each evaluator scores can be pre-defined. In addition, when it comes to RFQs for gathering quotes, it supports total cost breakdowns and arbitrary formulas. Supplier submissions and quotes can be exported to Excel, including any supplier documents.
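To illustrate the sort of total cost breakdown and formula support described above, here is a minimal sketch in Python; the quote lines, field names, and landed-cost formula are illustrative assumptions, not CF Suite’s actual schema:

```python
# Hypothetical RFQ quote lines; field names are illustrative, not CF Suite's schema.
quote_lines = [
    {"item": "Widget A", "unit_price": 4.50, "qty": 1000, "freight": 120.00},
    {"item": "Widget B", "unit_price": 2.25, "qty": 5000, "freight": 300.00},
]

def line_total(line):
    # An "arbitrary formula": landed cost = unit price * quantity + freight
    return line["unit_price"] * line["qty"] + line["freight"]

total_cost = sum(line_total(line) for line in quote_lines)
print(total_cost)  # 4620.0 + 11550.0 = 16170.0
```

The point is simply that each line can carry its own cost components, and the quote total is a formula over them rather than a single lump-sum bid.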

The one potential limitation is that there is not a lot of built-in side-by-side price comparison in Sourcing, as most buyers prefer to either do their analysis in Excel or use custom dashboards in analytics.

In addition, eSourcing events can be organized into projects that not only group related sourcing events and provide an overarching workflow, but can also track actuals against the historical baseline and forecast for a realized savings calculation.

Auctions

CF Suite also includes CF Auctions. Four styles of auction are available for running both forward and reverse auctions: English, Sequential, Dutch, and Japanese, all of which can be executed and managed in real time. Auctions are easy to define and very easy for the buying organization to monitor: buyers can see the current bid for each supplier alongside baseline and target information that is hidden from the suppliers, allowing them to track progress against not only starting bids but also goals, and to see a real-time evaluation of the benefit associated with a bid.

Suppliers get easy-to-use bidding views and, depending on the settings, will either see their current rank or their distance from the lowest bid, and can easily update their submissions or ask questions. Buyers can respond to suppliers one-on-one or send messages to all suppliers during the auction.
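The two supplier visibility modes described above (current rank vs. distance from the lowest bid) can be sketched as follows; the bid data and the `mode` setting are hypothetical, not CF Auctions’ actual API:

```python
# Illustrative reverse-auction state: lowest bid wins.
bids = {"Supplier A": 980.0, "Supplier B": 1010.0, "Supplier C": 955.0}

def supplier_view(name, mode):
    # 'mode' is a hypothetical event setting: "rank" or "distance"
    if mode == "rank":
        ordered = sorted(bids, key=bids.get)  # ascending: lowest bid first
        return {"rank": ordered.index(name) + 1}
    # Otherwise show how far this supplier is from the current lowest bid
    return {"distance_from_lowest": bids[name] - min(bids.values())}

print(supplier_view("Supplier A", "rank"))      # {'rank': 2}
print(supplier_view("Supplier A", "distance"))  # {'distance_from_lowest': 25.0}
```

Either view gives the supplier enough feedback to improve a bid without revealing competitors’ actual prices.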

In addition, if something goes wrong, buyers can manage the event in real time and pause it, extend it, change owners, change supplier reps, and so on to ensure a successful auction.

Contract Management

CF Contracts enables procurement to build high-churn contracts with limited or no clause changes, for example, NDAs or Terms of Service. CF Contracts has a clause library, workflow for internal sign-off, and integrated redline tracking. Procurement can negotiate with suppliers through the tool, and once a contract has been drafted in CF Suite, the platform can be used to track versions, see redlines, accept a version for signing, and manage the e-Signature process. If CF Suite was used for sourcing and a contract is awarded off the back of an event, the contract can be linked with the award information from the sourcing module.

Most of their clients focus on using contracts as a central contract repository database to improve visibility of key contract information, and to feed into reporting outputs to support the management of the contract pipeline, including contract spend and contract renewals.

The contract database includes a pool of common fields (e.g. contract title, start and end dates, contract values, etc.), and their clients can create custom fields to ensure the contract records align with their business data. Buyers can create automated contract renewal alerts that can be shared with the contract manager, business stakeholders, or the contract management team, as one would expect from a contract management module.

Supplier Scorecards

CF Scorecards is their compliance, risk, and performance management solution that collates ongoing supplier risk management information into a central location. CF Suite uses all of this data to create a 360-degree supplier scorecard for managing risk, performance, and development on an ongoing basis.

The great thing about scorecards is that you can select the questionnaires and third-party data you want to include, define the weightings, define the stakeholders who will score the responses that can’t be auto-scored, and get a truly custom 360-degree scorecard on risk, compliance, and/or performance. You can attach associated documents, contracts, supplier onboarding questionnaires, third-party assessments, and audits as desired to back up the scorecard, which provides a solid foundation for supplier performance, risk, and compliance management and development plan creation.
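The weighting logic described above boils down to a weighted average over auto-scored and stakeholder-scored components. A minimal sketch, with purely hypothetical component names, scores, and weights:

```python
# Hypothetical scorecard components (scores 0-100); names and weights are
# illustrative only, not CF Scorecards' actual data model.
components = {
    "onboarding_questionnaire": {"score": 90, "weight": 0.2},  # auto-scored
    "third_party_risk_feed":    {"score": 70, "weight": 0.3},  # auto-scored
    "stakeholder_performance":  {"score": 80, "weight": 0.5},  # human ratings
}

# Weights should cover the whole scorecard
assert abs(sum(c["weight"] for c in components.values()) - 1.0) < 1e-9

weighted_total = sum(c["score"] * c["weight"] for c in components.values())
print(weighted_total)  # 90*0.2 + 70*0.3 + 80*0.5 = 79.0
```

The same pattern applies whether the scorecard is risk-, compliance-, or performance-oriented; only the component list and weights change.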

Data Analytics

Powered by Qlik, CF Analytics provides out-of-the-box dashboards and reports to help analyze spend, manage contract pipelines and lifecycles, track supplier onboarding workflow and status, and manage ongoing supplier risk. Client organizations can also create their own dashboards and reports as required, or Curtis Fitch can create additional dashboards and reports for the client on implementation. Curtis Fitch has API integrations available as standard for those clients that wish to analyse data in their preferred business intelligence tool, like Power BI or Tableau.

The out-of-the-box dashboards and reports are well designed and take full advantage of the Qlik tool. The process management, contract/supplier status, and performance management dashboards are especially well thought out and designed. For example, the project management dashboard will show you the status of each sourcing project by stage and task, how many tasks are coming due or overdue, the total value of projects in each stage, and so on. Other process-oriented dashboards for contracts and supplier management are equally well done. For example, the contract management dashboard allows you to filter by supplier category or contract grouping and see upcoming milestones in the next 30, 60, and 90 days, as well as overdue milestones.

The spend dashboards include all the standard dashboards you’d expect in a suite, and they are very easy to use, with built-in filtering capability to quickly drill down to the precise spend you are interested in. The only downside is that they are OLAP-based, and updates are daily. However, Curtis Fitch is considering adding support for one or more best-of-breed (BoB) spend analysis platforms for those that want more advanced analytics capability.


It’s clear that the Curtis Fitch platform is a mature, well-thought-out, fully fleshed-out source-to-contract platform for indirect and direct services in both the public and private sector, and a great solution not only for the global FTSE 100 companies they support but also for the mid-market and enterprise market. It’s also very likely to be adopted, a key factor for success, because, as we pointed out in our headline, it’s very consumer friendly. While the UI design might look a bit dated (just like the design of Sourcing Innovation), it was designed that way because it’s extremely usable and, thus, very consumer friendly.

Curtis Fitch have an active roadmap, following development best practices, alongside scoping workshops where they partner with their clients to ensure new features and benefits are based on user requirements. Many modern applications with flashy UIs, modern hieroglyphs, and text-based conversational interfaces might look cool, but at the end of the day sourcing professionals want to get the job done and don’t want to be blinded by vast swathes of functionality when looking for a specific feature. Procurement professionals want a well-designed, intuitive, guided workflow, a ‘3-clicks and I’m there’ style application that will get the job done efficiently and effectively. This is what CF Suite offers.


While there are some limitations in award analysis (as most users prefer to do that in Excel) and analytics (as it’s built on QlikSense), and not a lot of functionality that is truly unique compared to the market overall, it is one of the broadest and deepest mid-market+ suites out there and can provide a lot of value to a lot of organizations. In addition, Curtis Fitch also offers consulting and managed auction/RFX services, which can be very helpful to an understaffed organization: they can get staff augmentation / event support while retaining full visibility into the process and the ability to take over fully when they are ready. If you’re looking for a tightly integrated, highly usable, easily adopted Source-to-Contract platform with more contract and supplier management capability than you might expect, include CF Suite in the RFP. It’s certainly worth an investigation.

Yes, Jon. Some Analyst Firms Do Stink!

Last Saturday, Jon The Revelator penned a piece on how Going “off-map” is the key to finding the best solution providers, which he correctly said was critical because Gartner reports that 85% of all AI and ML projects fail to produce a return for the business. As per a Forbes article, the reasons often cited for the high failure rate include poor scope definition, bad training data, organizational inertia, lack of process change, mission creep, and insufficient experimentation and, in the doctor’s view, should also include inappropriate (and sometimes just bad) technology.

The Revelator asked for thoughts and, of course, the doctor was happy to oblige.

Starting off with the observation that, while it is impossible to give precise numbers since companies are always starting up, merging, getting acquired, and shutting their doors in our space, statistically, if we look at the average number of logos on a module-quadrant map (about 20) and the average number of providers with that module (about 100, ranging from about 50 for true analytics to about 200 for some variant of SXM), then for every provider “paying” to get that shiny dot, there are 4 going overlooked. And given that the map represents the “average”, that says, statistically, 2 of those overlooked providers are going to be better.
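The back-of-the-envelope math here can be made explicit (the 20-logo and 100-provider figures are the rough averages stated above):

```python
providers_per_module = 100  # rough average number of vendors offering a module
logos_per_map = 20          # rough average number of logos on a quadrant map

# For every vendor on the map, how many are left off it?
overlooked_per_logo = (providers_per_module - logos_per_map) / logos_per_map
print(overlooked_per_logo)  # (100 - 20) / 20 = 4.0

# If map inclusion is uncorrelated with quality, the mapped vendors are only
# "average", so roughly half of each dot's overlooked peers are better:
print(overlooked_per_logo / 2)  # 2.0
```

The "half are better" step is a statistical simplification, of course, but it makes the point: a map that selects on payment rather than quality leaves better options off the page.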

Furthermore, maps should NEVER be used for solution selection (for the many, many reasons the doctor has been continually putting forth here on SI, heck just search for any post with “analyst” in the title over the past year). A good map can be used to discover vendors with comparable solutions, and nothing more.

The Revelator replied to this that the first thought that came to his mind was the urgency with which we buy the dots on the map without realizing that many have paid a considerable sum to get the logo spot. He then recounted the story of why he sold his company in 2001 after he was approached by Meta (an analyst firm eventually acquired by Gartner), in response to the successful results of his first big implementation, who said his company was on the leading edge and that they wanted to “cover” his company. The short story was that, when he said it sounded great, the Meta rep said “terrific, let’s get started right away; the next step is you sign a contract and pay the $20,000 [$36,000 today] invoice that we will send immediately, and then we can begin”. Not something easy to swallow for a small company, and even less easy today, when it now costs at least 50% more (in today’s dollars), according to some of the small companies he’s talked to, if they want Gartner attention. [And that’s just for basic coverage. Guaranteed inclusion on some of the big firm maps generally requires a “client” relationship that runs $150,000 or more!]

the doctor’s response to this is that it’s still definitely a pay-to-play game with most of these firms, as per his recent posts where he noted that dozens of the smaller vendors he talked to this year (who keep asking “so, what’s the catch?” when the doctor says he wants to cover them on SI) said they were being quoted between 50K and 70K for any sort of coverage. Wow!

Furthermore, while Duncan Jones insists it is likely just a few bad apples, those bad apples are so rotten that many of these smaller firms steadfastly believed they couldn’t even brief an analyst if they didn’t pay up (as the rep wouldn’t let them). And it wasn’t just one firm whose name the doctor heard over and over … 4 (four) different firms each got over 3 (three), sometimes very angry, mentions across the two to three dozen conversations where the smaller vendor was willing to indicate which firm was quoting them 50,000 to 70,000+ or not willing to talk to them unless they signed a client agreement. (the doctor has reached out to over 100 small companies over the past year, and almost every response indicated that they expected there would be a fee for coverage based on their analyst firm interactions; when he asked why, the majority of them said they were quoted (high) fees by one or more other firms that said they wanted to “cover” them.)

So yes, most of the smaller firms without big bank accounts aren’t making it onto these maps (because they hired people who could actually build products vs. people who could bullsh!t investors and raise the money to pay these “analyst” firms). (Especially since an analyst from at least one firm has admitted that they were only allowed to feature clients in their maps, and an analyst from another firm has admitted that they had to design the criteria for inclusion to maximize client exposure and minimize the chances of a non-client qualifying, as they were limited in the number of non-clients they could include in the map [to low single digits].)

And, furthermore, when you look at those vendors that did make it, The Revelator is correct in the implication that some of them can’t carry more than a tune or two (despite claiming to carry 20).

And it seems that the doctor’s punch hit a little harder than The Revelator expected, because he followed it up with a post on Monday where he asked us Is This True?!?, and of course the doctor, who already recounted his tales of rage on LinkedIn in response to his posts that asked Are Traditional Analyst and Consulting Models Outdated and/or Unethical? and Does it Matter if Analyst Firms Aren’t Entirely Pay-to-Play if the Procurement Space Thinks They Are, couldn’t let this one go (because, to be even more blunt, he doesn’t like being accused of being an unethical jackass just because he’s an analyst, because not all jackasses are unethical, and things would be different if all analysts and consultants were as honest and hardworking as a real jackass [can you say foreshadowing?]) and responded thusly:

Well, this was the first year doing his (own) reach-outs [as the client relations team did them at Spend Matters, so he really hasn’t done many since 2017] where a few companies said they wouldn’t talk to him and/or show him anything because, if they weren’t being charged, then the doctor was just going to “steal their information and sell it to their competitors”, like a certain other analyst firm whose name won’t be mentioned.

Yes, the doctor is getting a lot more “what’s the catch?” than he ever did! Apparently analysts/bloggers don’t do anything out of the goodness of their hearts anymore; there’s always a price. (Even when the doctor tells them the catch is “the doctor chooses who he covers, when, and DOES NOT actively promote the piece since no one is paying for it”, some still don’t believe him, even when he follows it up with a further explanation that even if he covers you, he likely won’t cover you again for at least two years, because he wants to give all innovative or underrepresented vendors a chance, and may even ignore them completely during that time, even if they reach out, because, again, they’re not a client and his goal is to give a shot to as many companies as he can.)

Also, in the doctor’s view, this is a big reason that analyst firms need to step up and help fix the Procurement Stink, but you can guess the response he received to his post on The Procurement Stink (and if you can’t, ask the crickets).

The Revelator concluded his question with a reference to a John Oliver assertion about McKinsey, a firm that bluntly stated, “We don’t learn from clients. Their standards aren’t high enough. We learn from other McKinsey partners.” (and if this isn’t a reason to never use McKinsey again, then what is?) and asked if this was true across the analyst and consulting space. the doctor’s response was that the John Oliver assertion was representative of a different problem: the Big X consultancies are too busy sniffing their own smug to realize that, hey, sometimes their clients are smarter (and so are a few analysts as well, but the problem is not nearly as common in analyst firms as it is in Big X consulting firms).

Our problem as independent analysts who try to be fair and ethical is that a few of these big analyst firm sales reps are ruining our reputation. And the fact that these big firms don’t immediately throw out the rotting trash that these sales reps are is why some analyst firms do stink!

To this The Revelator promptly replied that as always, the doctor isn’t pulling his punches, which is true because …

we’re getting older. We don’t have the stamina to dance around all day pulling punches; we only have enough to hit fast and hard, especially since that’s the only chance of pulling some of these young whipper-snappers out of the daze they are in as a result of the market madness and inflated investments (ridiculous 10X to 20X+ valuations were not that uncommon during COVID when everyone was looking for SaaS to take business online).

Not to mention, we’ve heard and seen it all at least twice before, probably three times on The Revelator‘s end (sorry!), and we know that there is very little that’s truly new in our space under the sun. With most companies, it’s just the new spin they manage to find every few years to bamboozle the market into thinking their solution will find considerably more value (with less functionality) than the less glitzy solution that came before (and which has already been proven to work, used correctly, at dozens of clients).

Of course, we first have to accept there is no big red easy button and that, gasp, we have to go back to actually TRAINING people on what needs to be done!

Another problem we have is that when the Big X listen to Marvin Gaye and Tammi Terrell, they hear:

𝘓𝘪𝘴𝘵𝘦𝘯 𝘣𝘢𝘣𝘺, 𝘢𝘪𝘯’𝘵 𝘯𝘰 𝘮𝘰𝘶𝘯𝘵𝘢𝘪𝘯 𝘩𝘪𝘨𝘩
a𝘪𝘯’𝘵 𝘯𝘰 𝘷𝘢𝘭𝘭𝘦𝘺 𝘭𝘰𝘸, 𝘢𝘪𝘯’𝘵 𝘯𝘰 𝘳𝘪𝘷𝘦𝘳 𝘸𝘪𝘥𝘦 𝘦𝘯𝘰𝘶𝘨𝘩, 𝘤𝘭𝘪𝘦𝘯𝘵
𝘧𝘰𝘳 𝘮𝘦 𝘵𝘰 𝘴𝘢𝘺 𝘵𝘩𝘢𝘵 𝘺𝘰𝘶 𝘮𝘪𝘨𝘩𝘵 𝘬𝘯𝘰𝘸 𝘴𝘰𝘮𝘦𝘵𝘩𝘪𝘯𝘨 𝘐 𝘥𝘰𝘯’𝘵
𝘕𝘰 𝘮𝘢𝘵𝘵𝘦𝘳 𝘩𝘰𝘸 𝘴𝘮𝘢𝘳𝘵 𝘺𝘰𝘶 𝘳𝘦𝘢𝘭𝘭𝘺 𝘢𝘳𝘦
s𝘪𝘯𝘤𝘦 𝘵𝘩𝘦𝘳𝘦’𝘴 𝘯𝘰 𝘨𝘳𝘦𝘦𝘥, 𝘵𝘩𝘢𝘵 𝘳𝘶𝘯𝘴 𝘯𝘦𝘢𝘳𝘭𝘺 𝘢𝘴 𝘥𝘦𝘦𝘱
𝘢𝘴 𝘵𝘩𝘦 𝘨𝘳𝘦𝘦𝘥 𝘪𝘯 𝘰𝘶𝘳 𝘩𝘦𝘢𝘳𝘵𝘴

The whole reason for a Big X firm’s existence is to continually sell you on what they know, whether it’s better or worse, because, once they have a foot in the door, you’re their cash cow … and the sooner they can convince you that you’re dependent on them, the better. Remember, they’ve stuffed their rafters with young turkeys that they need to get off the bench (or fall prey to the consulting bloodbath described by THE PROPHET in the linked article), and the best way to keep them off the bench is to make you reliant on them.

Unlike independent consultants like us (or small niche, specialist consultancies with limited resources), they don’t want to go in, do the job, deliver a result, and move on to something better (or a new project where they can create additional value for a client) … these Big X consultancies want to get in, put dozens of resources in a shared services center, and bill you 3X on them for life.

(If the doctor or The Revelator sticks with you for more than 12 to 24 months, it’s because we keep moving onto new projects that deliver new sources of value, not because we want to monitor your invoice processing for the rest of our lives, or star in the remake of “Just Shoot Me!”)

(And to those of you who told the doctor he was mean, he’d like to point out that while he was, and is, being brutally honest because that IS the modus operandi of the Big X consultancies, he wasn’t mean as he didn’t call them, or anyone they employ, a F6ckW@d. [That requires more than just following your playbook that is well known to anyone who wants to do their research.])

This brought the reply from The Revelator that:

In another article, he discussed the fact that VP Sales and Marketing people change jobs every two to three years.

As far as The Revelator could tell, they are put in an unwinnable position of hitting unreasonable targets based on transactional numbers rather than developing relationships and solving client problems. That is not a fair position because the focus shifts from what’s working for the client to hitting quarterly targets where, in many instances, the only client success is found in the press release.

The Revelator remembers many years ago talking with a top sales rep from Oracle who said that with his company you are only as good as your last quarter. When he said good, he was really talking about job security.

Ultimately, the most powerful testimony regarding the inherent flaws of the above approach is Gartner’s recent report that 85% of all AI and ML initiatives fail.

To this the doctor could only respond:

He can’t argue with that. This is one of the problems with taking VC/PE money at ridiculous valuations (of more than 5X to 6X, which is the max that one can expect to recoup in 5 (five) years at an achievable year-over-year growth rate of roughly 40%, without significantly increasing price tags for no additional functionality). The problem for Sales and Marketing is that they now have to tell the market that, suddenly, their product is worth 3X what they quoted before the company took the VC/PE money at the ridiculous multiple, and that if the customer pays only 2X (for NO new functionality, FYI), the customer is getting a deal. The investors expect their money back in a short time frame WHILE significantly bumping up overhead (on overpriced Sales and Marketing), which is not achievable unless the company can double or triple the price of what it sells. This is often just not doable even by the best of marketers or sales people, which forces them out the door on a 2 to 3 year cycle. (Because, as you noted, as soon as they manage to get a few sales at that price tag, the investors realize they also need to add more implementation and support personnel, increasing overhead further; the Sales and Marketing rep quotas go up even more, and all of them will eventually break if they don’t get out and go somewhere else just before they can’t hit the unachievable target.)
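The valuation math in the parenthetical checks out: at roughly 40% year-over-year growth, five years of compounding lands just under 5.4X, which is why 5X to 6X is about the ceiling an investor can realistically expect to recoup in that window:

```python
growth_rate = 1.40  # achievable ~40% year-over-year growth
years = 5

# Compound growth over the investment horizon
multiple = growth_rate ** years
print(round(multiple, 2))  # 5.38 -> roughly the 5X-6X ceiling cited above
```

Anything bought at 10X to 20X therefore has to be made up by price hikes rather than organic growth, which is exactly the squeeze on Sales and Marketing described here.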

The Revelator also noted that he now makes money through a low monthly fee that includes his experience and sales expertise, and that it’s a model he first used when he started his blog, because making money is not bad when it is reasonably priced in relation to the services and value being delivered, and the doctor wholeheartedly agrees …

in fact, the doctor used to do that too … but good luck finding more than one or two companies these days that honestly care about reader education and not just pushing their marketing message down a target’s throat … that’s why SI sponsorships are still suspended (and will be until he finds 4 companies that are willing to return to the good old days where education came first — which, FYI, is one of the keys to long-term success).

SI sponsorships included a day of advisory every quarter and a post covering the vendor’s solution (in the doctor’s words, not theirs), which could be updated semi-annually if warranted, and a new article whenever the vendor released a new module or significant new functionality (and it was the doctor’s call as to what was significant).

At least The Revelator can still go back to Coupa or Zycus for sponsorship … EVERY SINGLE SPONSOR SI had pre-Spend Matters (when sponsorships were suspended on SI for obvious reasons) was eventually acquired (which is why they all eventually dropped off).

(Just to be clear, the doctor is NOT saying it was the SI sponsorship, or even the doctor’s advisory, that resulted in their success [although he hopes it contributed], but he is saying that companies who are willing to listen and learn from experts, and who care more about educating and helping clients than just shoving a message down their throats, tend to do very well in the long run. Very, very well.

This is something the new generation of know-it-all thirty-somethings popping up with start-ups in our space every other week don’t seem to get yet, and likely won’t until they have their first failure! It’s just too bad they are going to take good investors, good employees, and beta/early clients down with them when there is no need.)

The Gen AI Fallacy

For going on 7 (seven) decades, AI cult members have been telling us that if they just had more computing power, they’d solve the problem of AI. For going on 7 (seven) decades, they haven’t.

They won’t as long as we don’t fundamentally understand intelligence, the brain, or what is needed to make a computer brain.

Computing will continue to get exponentially more powerful, but it’s not just a matter of more powerful computing. The first AI program had a single core to run on. Today’s AI programs have 10,000-core super clusters. The first AI programmer had only his salary and elbow grease to code and train the model. Today’s AI companies have hundreds of employees and billions in funding, and have spent $200M to train a single model … which told us we should all eat one rock per day upon release to the public. (Which shouldn’t be unexpected, as the number of cores we have today powering a single model is still less than the number of neurons in a pond snail.)

Similarly, the “models” will get “better”, relatively speaking (just like deep neural nets got better over time), but if they are not 100% reliable, they can never be used in critical applications, especially when you can’t even reliably predict confidence. (Or, even worse, you can’t even have confidence the result won’t be 100% fabrication.)

When the focus was narrow machine learning and focussed applications, and we accepted the limitations we had, progress was slow, but it was there, it was steady, and the capabilities and solutions improved yearly.

Now the average “enterprise” solution is decreasing in quality and application, which is going to erase decades of building trust in the cloud and reliable AI.

And that’s the fallacy. Adding more cores and more data just accelerates the capacity for error, not improvement.

Even a smart Google Engineer said so. (Source)

opstream: taking the orchestration dream direct!


opstream was founded in 2021 to bring enterprise-level orchestration, previously only available to IT, to Procurement in a manner that supported all of Procurement’s needs, regardless of what that need was. In the beginning, this meant allowing Procurement to not only create end-to-end workflows from request through requisition through reaping, but also create workflows out into logistics tracking, inventory management, and supply chain visibility, as required. Workflows that incorporated not just forms, but processes, intelligence, and, most importantly, team collaboration.

In other words, at an initial analysis, not much different than most of the other intake-to-orchestration platforms that were popping up faster than the moles in whack-a-mole, especially since every platform was, and still is, claiming full process support, limitless integration, and endless collaboration while bringing intelligence to your Procurement function. (There was one big difference, and we’ll get to that later.)

However, unlike many founders who came from Procurement and assumed they knew everything that was needed, or who came from enterprise IT orchestration solutions and thought they knew everything Procurement needed, the founders of opstream admitted day one that they didn’t know Procurement, interviewed over 400 professionals during development, and also realized the one thing their orchestration platform had to support when it came to Procurement intake and orchestration, if the platform was to be valuable to their target customers, was direct. And they were 100% right. Of the 700+ solutions out there in S2P+ (see the Sourcing Innovation MegaMap), less than 1/20th address direct in any significant capacity, and none of the current intake-to-orchestrate platforms were designed to support direct from the ground up on day one.

So what does opstream do?


As with any other intake-to-orchestrate platform, the platform has two parts, the user-based “intake” and the admin-based “orchestrate”.

We’ll start with the primary components of the admin-based orchestrate solution.

Intake Editor

The intake editor manages the intake schemas that define the intake workflows, which are organized by request type. In opstream, an intake workflow contains a workflow process for the requester, approver(s), vendor, and the opstream automation engine, as well as a section defining the extent to which approvers can impact the workflow.

The workflow builder is form-based and allows the builder to create as many steps as needed, with as many questions of any type as needed, using pre-built blocks to accelerate the process. Any available data source, from any integrated system, can be used for request creation and validation, and any of these sources can also drive conditional logic. This logic can determine whether a step or a question is shown, what values are accepted, or how a later question on the form behaves.
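To make the conditional logic concrete, here is a minimal sketch of how a step with show/hide conditions might be modelled. The schema shape, field names, and thresholds are invented for illustration, not opstream's actual data model:

```python
# Hypothetical model of one intake step whose questions appear conditionally
# based on the requester's earlier answers.

STEP = {
    "name": "Software request",
    "questions": [
        {"id": "spend", "type": "numeric", "label": "Estimated annual spend"},
        {"id": "pii", "type": "multiple_choice", "label": "Handles personal data?",
         "options": ["yes", "no"]},
        # Only shown when an earlier answer triggers the condition.
        {"id": "dpa", "type": "text", "label": "Data processing details",
         "show_if": lambda answers: answers.get("pii") == "yes"},
        {"id": "security_review", "type": "text", "label": "Security review notes",
         "show_if": lambda answers: answers.get("spend", 0) > 50_000},
    ],
}

def visible_questions(step, answers):
    """Return the questions a requester should see given their answers so far."""
    return [q for q in step["questions"]
            if q.get("show_if", lambda a: True)(answers)]

shown = visible_questions(STEP, {"pii": "yes", "spend": 10_000})
print([q["id"] for q in shown])  # ['spend', 'pii', 'dpa']
```

The same condition functions could just as easily restrict accepted values or pre-fill a later question rather than hide it.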

Building a workflow is as easy as building an RFX, as the builder is just selecting from a set of basic elements such as a text block, numeric field, date/time object, multiple choice, data source, logic block, etc.

A data source element allows the builder to select any object definition in an integrated data source, select an associated column, and restrict values to entries in that column. The form will then dynamically update and adjust as the underlying data source changes.
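That dynamic behaviour can be sketched roughly as follows; the field shape, the `column_values` helper, and the ERP snapshot are all hypothetical stand-ins:

```python
# Hypothetical sketch: a form field whose allowed values are read live from a
# column of an integrated data source, so the form adjusts as the source does.

def column_values(source, obj, column):
    """Collect the allowed values from one column of one object definition."""
    return {row[column] for row in source[obj]}

# Stand-in for a synced snapshot of an integrated ERP's vendor table.
erp = {"vendors": [{"name": "Acme", "status": "active"},
                   {"name": "Globex", "status": "active"}]}

vendor_field = {"id": "vendor", "type": "data_source",
                "allowed": lambda: column_values(erp, "vendors", "name")}

print(sorted(vendor_field["allowed"]()))   # ['Acme', 'Globex']

# When a vendor is added in the ERP, the form picks it up on next evaluation:
erp["vendors"].append({"name": "Initech", "status": "active"})
print(sorted(vendor_field["allowed"]()))   # ['Acme', 'Globex', 'Initech']
```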

In addition to having workflows that adjust as related systems and data sources change, the platform has one other unique capability when it comes to building workflows for Procurement requests: it understands multiple item types:

  • inventory item, non-inventory item, and service, which are understood by many platforms
  • other charge, a capability for capturing non-PO spend that only a few deep Procurement platforms understand
  • and, most importantly, assembly/bill of materials, an option the doctor hasn't seen yet (and which enables true direct support)

As long as the organization has an ERP/MRP or similar system that defines a bill of materials, the opstream platform can model that bill of materials and allow administrators to build workflows where the manufacturing department can request orders, or reorders, against part, or all, of the bill of materials.
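A rough sketch of what ordering against part of a bill of materials might look like; the BOM structure, part numbers, and quantities are invented for illustration:

```python
# Hypothetical bill of materials as it might be mirrored from an ERP/MRP.
BOM = {
    "assembly": "SERVER-RACK-01",
    "components": [
        {"part": "CHASSIS-2U", "qty_per": 1},
        {"part": "PSU-800W", "qty_per": 2},
        {"part": "DIMM-32GB", "qty_per": 8},
    ],
}

def reorder_request(bom, assemblies, parts=None):
    """Build order lines for `assemblies` units, optionally limited to some parts."""
    lines = []
    for c in bom["components"]:
        if parts is None or c["part"] in parts:
            lines.append({"part": c["part"], "qty": c["qty_per"] * assemblies})
    return lines

# Reorder only the memory needed for 5 assemblies:
print(reorder_request(BOM, 5, parts={"DIMM-32GB"}))
# [{'part': 'DIMM-32GB', 'qty': 40}]
```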

In addition, if the organization orders a lot of products that need to be customized, such as computer/server builds, 3D printer assemblies, or fleet vehicles, the admins can define special assembly / configurator workflows that can allow the user to specify every option they need on a product or service when they make the request.

The approval workflows can be as sparse or as detailed as the request process, and can have as few or as many checks as desired, including verifications against budgets, policies, and data in any integrated system. As with any good procurement system, approvals can be sequential or parallel, restricted to individual users or opened up to teams, and can be short-circuited by super-approvers.
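The sequential/parallel/super-approver combination described above can be sketched as follows; the stage names, roles, and evaluation rules are assumptions for illustration:

```python
# Hypothetical sketch of sequential approval stages, each of which may require
# all (parallel sign-off) or any one of its approvers, with a super-approver
# that short-circuits the whole chain.

from dataclasses import dataclass

@dataclass
class Stage:
    name: str
    approvers: set            # who can sign off on this stage
    require_all: bool = True  # True: all must approve; False: any one suffices

def approval_status(stages, approvals, super_approvers=frozenset()):
    """approvals: set of users who have approved so far.
    Returns 'approved' or the name of the first stage still waiting."""
    if approvals & super_approvers:  # super-approver short-circuits everything
        return "approved"
    for stage in stages:
        signed = stage.approvers & approvals
        done = signed == stage.approvers if stage.require_all else bool(signed)
        if not done:
            return f"waiting: {stage.name}"
    return "approved"

stages = [
    Stage("budget check", {"finance"}),
    Stage("legal + security", {"legal", "security"}, require_all=True),
]
print(approval_status(stages, {"finance", "legal"}))              # waiting: legal + security
print(approval_status(stages, {"cpo"}, super_approvers={"cpo"}))  # approved
```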

In addition, workflows can also be set up for vendors to receive requests from the buyer, provide information, and execute their parts of the workflow, including providing integration information for their systems to enable automatic e-document receipt, transmission, update, and verification.

Finally, the automation workflow can be set up to automate the creation and distribution of complete requisitions for approval, the creation of complete purchase orders for deliveries, the receipt and acknowledgement of vendor order acknowledgements, advance shipping notices, and invoices, and the auto-transmission of ok-to-pay and payment verifications.

But it doesn't have to stop there. One big differentiator of the opstream platform is that it was built to be an enterprise integration platform at the core. As we've already hinted in our discussion of how, unlike pretty much every other intake/orchestrate platform, it supports assembly/bill of materials out of the box by integrating with ERPs/MRPs, it doesn't have to stop at the "pay" in source-to-pay. It can pull in the logistics management/monitoring system to track shipments and inventory en route. It can be integrated with the inventory management system to track current inventory, help a Procurement organization manage requisitions against inventory, and guide buyers when inventory needs to be replenished. It can also integrate with quality management and service tracking systems to track the expected quality and lifespan of the products that come from inventory, and warn buyers if quality issues or service issues are increasing at the time of requisition or reorder.

Data Source Manager

opstream comes with hundreds of systems integrated out of the box, and it's trivial for opstream to add more platforms as needed (as long as those platforms have an open API), as the opstream platform has built-in data model discovery capabilities and support for standard web connection protocols. This means that adding a new data source is simply a matter of specifying the connection strings and parameters; the source will be integrated and the public data schema auto-discovered. The admin can then configure exactly what is available for use in the opstream solution, who can see and use what, and the sync interval.
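A sketch of what such registration plus discovery might look like; the `/openapi.json` endpoint, the config field names, and the discovery helper are all assumptions for illustration, not opstream's actual interface:

```python
# Hypothetical connection spec plus a discovery step that reads an
# OpenAPI-style schema document to learn which objects a source exposes.

import json
from urllib import request

def objects_in_spec(spec):
    """List the object definitions exposed by an OpenAPI-style schema document."""
    return sorted(spec.get("components", {}).get("schemas", {}))

def discover(base_url, api_key):
    """Fetch the schema document and return the discoverable objects.
    Assumes (hypothetically) the platform publishes it at /openapi.json."""
    req = request.Request(f"{base_url}/openapi.json",
                          headers={"Authorization": f"Bearer {api_key}"})
    with request.urlopen(req) as resp:
        return objects_in_spec(json.load(resp))

source_config = {
    "name": "erp",
    "base_url": "https://erp.example.com/api",
    "auth": {"type": "bearer"},
    "sync_interval_minutes": 60,
    # An admin would then trim this list to what users may actually see/use:
    "exposed_objects": [],
}

# Offline example of the discovery step against a tiny schema document:
spec = {"components": {"schemas": {"Vendor": {}, "PurchaseOrder": {}}}}
print(objects_in_spec(spec))  # ['PurchaseOrder', 'Vendor']
```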

Now we'll discuss the primary components of the user-based intake solution.


The request screen centralizes all of the requests a user has access to, which include, but are not limited to, requests created by, assigned to, archived by, or departmentally associated with the user. Requests can be filtered by creator, assignee, status, request type, category, time left, date, and other key fields defined by the end-user organization.

Creating a new request is simple. The user selects a request type from a predefined list and steps through the workflow. The workflows can be built very intelligently, so that whenever the user selects an option, all other options are filtered accordingly. If the user selects a product that can only be supplied by three vendors, only those vendors will be available in the requested-vendor drop-down. Alternatively, if a vendor is selected first, only the products that vendor offers will be available for selection. Products can be limited to budget ranges, vendors to preferred status, and so on. Every piece of information is used to determine what is, and is not, needed and to make the process as simple as possible for the user. If the selected vendor or product is not preferred, and a preferred vendor or product exists, the workflow can be coded to proactively alert the user before the request is made.

The buyer can also define any quotes, certifications, surveys, or other documentation required from the supplier before the PO is cut. Once the requisition is approved, the vendor work stream kicks off, and once the vendor work stream is complete, the final approvals can be made and the system will automatically send the purchase order and push the right documentation into the right systems.
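The mutual filtering described above is essentially constraint propagation over a catalogue; here is a minimal sketch, with a made-up catalogue, of how each selection narrows the remaining options:

```python
# Hypothetical catalogue linking products to the vendors that supply them.
CATALOGUE = [
    {"product": "laptop", "vendor": "Acme", "preferred": True},
    {"product": "laptop", "vendor": "Globex", "preferred": False},
    {"product": "monitor", "vendor": "Acme", "preferred": True},
    {"product": "desk", "vendor": "Initech", "preferred": False},
]

def options(catalogue, **selected):
    """Return the vendors/products still compatible with what's been selected."""
    rows = [r for r in catalogue
            if all(r.get(k) == v for k, v in selected.items())]
    return {"vendors": sorted({r["vendor"] for r in rows}),
            "products": sorted({r["product"] for r in rows})}

print(options(CATALOGUE, product="laptop"))
# {'vendors': ['Acme', 'Globex'], 'products': ['laptop']}
print(options(CATALOGUE, vendor="Acme"))
# {'vendors': ['Acme'], 'products': ['laptop', 'monitor']}
```

The same mechanism extends naturally to budget ranges or preferred status: `options(CATALOGUE, preferred=True)` would drop the non-preferred rows before anything else is offered.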


Vendors provides the user with central access to all loaded organizational vendors, along with a quick summary of onboarding/active status, preferred status, number of associated products, number of associated requests, and total spend. Additional summary fields can be added as required by the buying organization.


Documents acts as a central document repository for all relevant vendor, product, and Procurement-related information, from request through receipt, vendor onboarding through offboarding, and product identification through end-of-life retirement of the final unit. Documents have categories, associated requests, vendors, and/or products, status information, dates of validity, and other metadata relevant to the organization. Documents can be organized into any categorization scheme the buying organization wants, and can include compliance documents, insurance, NDAs, contracts, security reports, product specifications, product certifications, sustainability reports, and so on.


The analytics component presents a slew of ready-made dashboards that summarize the key process, spend, risk, compliance, and supply chain inventory metrics that aren't available in any individual platform. Right now there is no DIY capability to build dashboards, and all of them have to be created by opstream, but opstream can create new custom dashboards very quickly during rollout, and you can get cross-platform insights that can include, but are not limited to:

  • process time from purchase order (ePro) to carrier pickup to warehouse arrival (TMS) to distribution time to the retail outlets (WIMS)
  • contract price (CMS) to ePro (invoice) to payment (AP) as well as logistics cost (invoice) to tax (AP) to build total cost pictures for key products relative to negotiated unit prices
  • risk pictures on a supplier that include financial (D&B), sustainability (Ecovadis), quality (QMS), failure rate (customer support), geolocation (supply chain risk), geopolitical risk (supply chain risk), transportation risk (OTD from the TMS), etc.
  • compliance pictures that pull data from the insurer, regulatory agencies, internal compliance department, and third party auditor
  • supply chain inventory metrics that include contractual commitment (CLM), orders (ePro), fulfillments (inventory management), current inventory (inventory management), commitments (ERP/MRP), etc.

In addition, since all data is available through the built-in data bus, if the user wants to build her own dashboards, she can push all of the data into a (spend) analytics application to do her own analysis, and with opstream's ability to embed third-party analytics apps (PowerBI for now, more coming), she can even see those analytics inside the opstream platform.

This is the second main differentiator of opstream that a user will notice. The founders realized that not only is data key, but so is integrated analytics, and they built a foundation to enable it.

Which leads us to the third and final differentiator, the one you don't see: the data model. The data model is automatically discovered and built for each organization as their systems are integrated. Beyond a few core entities and core identifiers, upon which the data model is automatically built and generated, opstream doesn't force a rigid data model that all pieces of data need to map to (or get left out of the system). This ensures that an organization always has full access to all of its integrated data upon which to do cross-platform analytics on process, spend, inventory, and risk.


opstream understands that there is no value in intake or orchestration on its own, and that for the platform to be relevant to Procurement, it has to do more than just connect indirect S2P systems together. As a result, they have built in support for direct, dynamic data model discovery, and integration with the end-to-end enterprise systems that power the supply chain, allowing an organization to go beyond simple S2P and identify value not identifiable in S2P systems alone. They should definitely be on your shortlist if you are looking for an integration/orchestration platform (to connect your last-generation systems to next-generation systems through the cloud) that will allow you to increase overall system value (vs. just increasing overall system cost).

Challenging the Data Foundation ROI Paradigm

Creactives SpA recently published a great article, Challenging the ROI Paradigm: Is Calculating ROI on Data Foundation a Valid Measure, which was made even greater by the fact that they are technically a Data Foundation company!

In a nutshell, Creactives is claiming that trying to calculate direct ROI on investments in data quality itself, as a standalone business case, is absurd. And they are totally right. As they say, the ROI should be calculated based on the total investment in the data foundation and the analytics it powers.

The explanation they give cuts straight to the point.

It is as if we demand an ROI from the construction of an industrial shed that ensures the protection of business production but is obviously not directly income-generating. ROI should be calculated based on the total investment, that is, the production machines and the shed.

In other words, there’s no ROI on Clean Data or on Analytics on their own.

And they are entirely correct — and this is true whether you are providing a data foundation for spend analysis, supplier discovery and management, or compliance. If you are not actually doing something with that data that benefits from better data and better foundations, then the ROI of the data foundation is ZERO.

Creactives is helping to bring to light three fallacies that the doctor sees all the time in this space. (This is very brave of them, considering that they are the first data foundation company to admit that their value is zero unless embedded in a process that requires other solutions.)

Fallacy #1. A data cleansing/enrichment solution on its own delivers ROI.

Fallacy #2. You need totally cleansed data before you can deploy a solution.

Fallacy #3. Conversely, you can get ROI from an analytics solution on whatever data you have.

And all of these are, as stated, false!

ROI is generated from analytics on cleansed and enriched data. And that holds true regardless of the type of analytics being performed (spend, process, compliance, risk, discovery, etc.).

And that's okay, because this is a situation where the combined ROI is often exponential, and considerably more than the sum of its parts. Especially since analytics on bad data sometimes delivers a negative return! What the analytics companies don't tell you is that the quality of the result is fully dependent on the quality, and completeness, of the input. Garbage in, garbage out. (Unless, of course, you are using AI, in which case, especially if Gen-AI is any part of that equation, it's garbage in, hazardous waste out.)

So compute the return on both. (And it's easy to partition the ROI by investment: if the data foundation is 60% of the investment, it is responsible for 60% of the return, so its attributed ROI is simply 0.6 × Return over 0.6 × Investment, the same ratio as the package as a whole.)
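A quick worked example of that attribution, with made-up numbers, shows why the partition is easy:

```python
# Hypothetical figures: the return is attributed to each component in
# proportion to its share of the total investment.

total_investment = 1_000_000   # data foundation + analytics, invented
total_return = 2_500_000       # total value delivered by the package, invented

shares = {"data_foundation": 0.60, "analytics": 0.40}

for component, share in shares.items():
    attributed_return = share * total_return
    attributed_roi = attributed_return / (share * total_investment)
    print(f"{component}: return {attributed_return:,.0f}, ROI {attributed_roi:.1f}x")

# Under proportional attribution every component carries the same 2.5x ROI as
# the package as a whole, which is the point: neither part generates it alone.
```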

Then find additional analytics-based applications that you can run on the clean data, increase the ROI exponentially (while decreasing the data foundation's share of the cost in the overall equation), and watch the value of the total solution package soar!