
Have all the Big X fallen for Gen-AI? Or is this their new insidious plan to hook you for life?

McKinsey. Accenture. Capgemini. KPMG. Deloitte. Kearney. BCG. etc. etc. etc.

Every single one is putting “Gen-AI” adoption in its top 10 (or top 5) strategic imperatives for Procurement and its future, claiming it’s essential for analytics (gasp) and automation (WTF?!?). One of these firms even announced it is going to train 80,000 f6ckw@ds on this bullcr@p.

It’s absolutely insane. First of all, there are almost no valid uses for Gen-AI in business (unless, of course, your corporation is owned by Dr. Evil), and even fewer valid uses for Gen-AI in Procurement.

Secondly, the “Gen” in “Gen-AI” stands for “Generative”, which literally means MAKE STUFF UP. It DOES NOT analyze anything. Furthermore, automation is about predictability and consistency, and Gen-AI gives you neither! How the heck could you automate anything with it? You CAN NOT! Automation requires a completely different AI technology built on classical (and predictable) machine learning (where you can accurately calculate confidences and break/stop when the confidence falls below a threshold).
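
To make the contrast concrete, here is a minimal sketch of confidence-gated automation with a classical classifier. The data, model, and threshold are all hypothetical (not any particular vendor's implementation), but it shows the break/stop behaviour that makes classical ML automatable:

```python
# Minimal sketch of confidence-gated automation with a classical classifier --
# hypothetical data, model, and threshold, not any specific vendor's product.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 4))                  # e.g. invoice features
y = (X[:, 0] + X[:, 1] > 0).astype(int)        # e.g. "auto-approve" vs "review"

model = LogisticRegression().fit(X, y)

def route(invoice_features: np.ndarray, threshold: float = 0.95) -> str:
    proba = model.predict_proba(invoice_features.reshape(1, -1))[0]
    confidence, label = proba.max(), proba.argmax()
    # Automate only when confidence clears the threshold; otherwise break/stop
    # and hand off to a human -- the predictability Gen-AI does not offer.
    if confidence >= threshold:
        return f"automated: class {label} (confidence {confidence:.2f})"
    return f"stopped: confidence {confidence:.2f} below {threshold}, route to human"

print(route(np.array([2.0, 1.5, 0.0, 0.0])))    # clear-cut case, automated
print(route(np.array([0.01, -0.02, 0.0, 0.0]))) # borderline case, stopped
```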

Which begs the question: are they complete idiots who have completely fallen for the marketing bullcr@p? Or is this their new insidious plan to get you on a never-ending work order? After all, when it inevitably fails a few days after implementation, they have their excuses ready to go (which are the same excuses being given by the companies spending tens of millions on marketing), which are the same excuses that have been given to us since Neural Nets were invented: “it just needs more content for training“, “it just needs better prompting“, “it just needs more integration with your internal data sources“ … rinse, lather, and repeat … ad infinitum.

And, every year it will get a few percentage points better, but if it gets only 2% better (relative) per year, and the best Gen-AI instance now is scoring (slightly) less than 34% on the SOTA scale, it will be (at least) 9 (NINE) years before you reach 40% accuracy. In comparison, if you had an intern who only performed a task acceptably 40% of the time, how long would he last? Maybe 3 weeks. But the Big X know that once you sink seven (7) figures on a license, implementation, integration, and custom training, you’re hooked, and you will keep pumping in six to seven figures a year even though you should have dropped the smelly rotten hot potato the minute you saw the demo.
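
A quick back-of-the-envelope check of the nine-year figure above, assuming “2% better per year” means a 2% relative improvement on the current score:

```python
import math

# Starting at ~34% accuracy and improving 2% per year (relative),
# how many years until the score reaches 40%?
start, target, yearly_gain = 0.34, 0.40, 0.02

years = math.log(target / start) / math.log(1 + yearly_gain)
print(f"{years:.1f} years")  # ~8.2, i.e. 9 full years before 40% is reached
```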

So, maybe they aren’t stupid when it comes to Gen-AI. Maybe they are just evil because it’s their biggest opportunity to hook you for life since McKinsey convinced you that you should outsource for “labour arbitrage” and “currency exchange” (and not materials / products you can’t get / make at home) and other bullsh!t arguments that no society in the history of the world EVER outsourced for. (EVER!) Because if you install this bullcr@p and get to the point of “sunk cost”, you will continue to sink money into it. And they know it.

(Yet another reason you should be very, very careful about selecting a Big X for something that is NOT their forte.)

Remember that AI, and Gen-AI in particular, is a fallacy.

The Gen AI Fallacy

For going on 7 (seven) decades, AI cult members have been telling us that if they just had more computing power, they’d solve the problem of AI. For going on 7 (seven) decades, they haven’t.

They won’t as long as we don’t fundamentally understand intelligence, the brain, or what is needed to make a computer brain.

Computing will continue to get exponentially more powerful, but it’s not just a matter of more powerful computing. The first AI program had a single core to run on. Today’s AI programs have 10,000-core super clusters. The first AI programmer had only his salary and elbow grease to code and train the model. Today’s AI companies have hundreds of employees and billions in funding, and have spent $200M to train a single model … which told us we should all eat one rock per day upon release to the public. (Which shouldn’t be unexpected, as the number of cores we have today powering a single model is still less than the number of neurons in a pond snail.)

Similarly, the “models” will get “better”, relatively speaking (just like deep neural nets got better over time), but if they are not 100% reliable, they can never be used in critical applications, especially when you can’t even reliably predict confidence. (Or, even worse, you can’t even have confidence the result won’t be 100% fabrication.)

When the focus was narrow machine learning and focussed applications, and we accepted the limitations we had, progress was slow, but it was there, it was steady, and the capabilities and solutions improved yearly.

Now the average “enterprise” solution is decreasing in quality and application, which is going to erase decades of building trust in the cloud and reliable AI.

And that’s the fallacy. Adding more cores and more data just accelerates the capacity for error, not improvement.

Even a smart Google Engineer said so. (Source)

opstream: taking the orchestration dream direct!

INTRODUCTION

opstream was founded in 2021 to bring enterprise-level orchestration, previously only available to IT, to Procurement in a manner that supported all of Procurement’s needs, regardless of what those needs were. In the beginning, this meant allowing Procurement to not only create end-to-end workflows from request through requisition through reaping, but also create workflows out into logistics tracking, inventory management, and supply chain visibility, as required. Workflows that incorporated not just forms, but processes, intelligence, and, most importantly, team collaboration.

In other words, at an initial analysis, not much different from most of the other intake-to-orchestration platforms that were popping up faster than the moles in whack-a-mole, especially since every platform was, and still is, claiming full process support, limitless integration, and endless collaboration while bringing intelligence to your Procurement function. (There was one big difference, and we’ll get to that later.)

However, unlike many founders who came from Procurement and assumed they knew everything that was needed, or who came from enterprise IT orchestration solutions and thought they knew everything Procurement needed, the founders of opstream admitted on day one that they didn’t know Procurement, interviewed over 400 professionals during development, and realized that the one thing their orchestration platform had to do when it came to Procurement intake and orchestration, if the platform was to be valuable to their target customers, was support direct. And they were 100% right. Of the 700+ solutions out there in S2P+ (see the Sourcing Innovation MegaMap), less than 1/20th address direct in any significant capacity, and none of the current intake-to-orchestrate platforms were designed to support direct from the ground up on day one.

So what does opstream do?

SOLUTION SUMMARY

As with any other intake-to-orchestrate platform, the platform has two parts, the user-based “intake” and the admin-based “orchestrate”.

We’ll start with the primary components of the admin-based orchestrate solution.

Intake Editor

The intake editor manages the intake schemas that define the intake workflows, which are organized by request type. In opstream, an intake workflow will contain a workflow process for the requester, the approver(s), the vendor, and the opstream automation engine, as well as a section to define the extent to which the approvers can impact the workflow.

The workflow builder is form-based and allows the builder to create as many steps as needed, with as many questions of any type as needed, using pre-built blocks that accelerate the process and any and all available data sources for request creation and validation. Any of these sources, from all integrated systems, can also be used in conditional logic validations. This logic can determine whether a step or a question is shown, what values are accepted, or whether it influences a later question on the form.

Building a workflow is as easy as building an RFX, as the user is just selecting from a set of basic elements such as a text block, numeric field, date/time object, multiple choice, data source, logic block, etc.

The data source element allows a user to select any object definition in a data source, select an associated column, and restrict values to entries in that column. The form will then dynamically update and adjust as the underlying data source changes.
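
For illustration only (this is not opstream’s actual schema or API), a minimal sketch of a data-source-backed, conditional intake step might look something like this:

```python
# Hypothetical sketch of a conditional, data-source-backed intake step --
# illustrative only, not opstream's actual schema or API.
from dataclasses import dataclass
from typing import Callable

@dataclass
class Question:
    key: str
    label: str
    # Values are restricted to entries pulled from an integrated system
    # (e.g. the vendor master in the ERP) at render time, so the form
    # adjusts automatically when the underlying data source changes.
    allowed_values: Callable[[], list[str]] = lambda: []
    # Only show the question when the condition on prior answers holds.
    show_if: Callable[[dict], bool] = lambda answers: True

def vendor_master() -> list[str]:
    return ["Acme Corp", "Globex", "Initech"]   # stand-in for an ERP lookup

step = [
    Question("category", "Spend category",
             allowed_values=lambda: ["IT Hardware", "Software", "Services"]),
    Question("vendor", "Preferred vendor",
             allowed_values=vendor_master,
             show_if=lambda a: a.get("category") == "IT Hardware"),
]

answers = {"category": "IT Hardware"}
visible = [q for q in step if q.show_if(answers)]
print([q.label for q in visible])          # both questions are shown
print(step[1].allowed_values())            # values come from the live lookup
```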

In addition to having workflows that adjust as related systems and data sources change, the platform also has one other unique capability when it comes to building workflows for Procurement requests: it understands multiple item types. These include inventory item, non-inventory item, and service, which are understood by many platforms; other charge, a capability for capturing non-PO spend that only a few deep Procurement platforms understand; and, most importantly, assembly/bill of materials, which is an option the doctor hasn’t seen yet (and which enables true direct support).

As long as the organization has an ERP/MRP or similar system that defines a bill of materials, the opstream platform can model that bill of materials and allow the administrators to build workflows where the manufacturing department can request orders, or reorders, against part, or all, of the bill of materials.
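
As a purely hypothetical sketch (not opstream’s data model), requesting a reorder against part of a bill of materials pulled from an ERP/MRP could look like this:

```python
# Hypothetical sketch of a reorder request against part of a bill of
# materials -- illustrative only; the BOM and part names are made up.
bill_of_materials = {                    # assembly -> {part: qty per unit}
    "controller-board": {"mcu": 1, "resistor-10k": 24, "connector-usb-c": 2},
}

def reorder_request(assembly: str, units: int, parts: list[str] | None = None) -> dict:
    """Build line items for all (or a subset of) parts in the assembly."""
    bom = bill_of_materials[assembly]
    selected = parts if parts is not None else list(bom)
    return {part: bom[part] * units for part in selected}

# Manufacturing requests 500 more boards, but only needs the connectors restocked.
print(reorder_request("controller-board", 500, parts=["connector-usb-c"]))
# {'connector-usb-c': 1000}
```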

In addition, if the organization orders a lot of products that need to be customized, such as computer/server builds, 3D printer assemblies, or fleet vehicles, the admins can define special assembly / configurator workflows that can allow the user to specify every option they need on a product or service when they make the request.

The approval workflows can be as sparse or as detailed as the request process, and can have as few or as many checks as desired. This can include verifications against budgets, policies, and data in any integrated system. As with any good procurement system, approvals can be sequential or parallel, restricted to users or opened to teams, and can be short-circuited by super-approvers.
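
A minimal sketch, with hypothetical roles and rules (not opstream’s engine), of how sequential steps, parallel steps, and a super-approver short circuit might fit together:

```python
# Hypothetical sketch of sequential vs. parallel approval steps with a
# super-approver short circuit -- illustrative only.
from dataclasses import dataclass

@dataclass
class ApprovalStep:
    approvers: list        # a single user or a whole team
    parallel: bool = False # parallel: any one approval clears the step

SUPER_APPROVERS = {"cpo"}

def run_approvals(steps: list, decisions: dict) -> bool:
    for step in steps:
        # A super-approver's yes short-circuits all remaining steps entirely.
        if any(decisions.get(a, False) for a in SUPER_APPROVERS & set(step.approvers)):
            return True
        votes = [decisions.get(a, False) for a in step.approvers]
        cleared = any(votes) if step.parallel else all(votes)
        if not cleared:
            return False
    return True

steps = [
    ApprovalStep(["budget-owner"]),                      # sequential check
    ApprovalStep(["legal", "security"], parallel=True),  # either may approve
    ApprovalStep(["cpo"]),                               # super-approver
]
print(run_approvals(steps, {"budget-owner": True, "security": True, "cpo": True}))
```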

In addition, workflows can also be set up for vendors to receive requests from the buyer, provide information, and execute their parts of the workflow, including providing integration information for their systems for automatic e-document receipt, transmission, update, and verification.

Finally, the automation workflow can be set up to automate the creation and distribution of complete requisitions for approval, complete purchase orders for deliveries, the receipt and acknowledgement of vendor order acknowledgements, advance shipping notices, and invoices, and the auto-transmission of ok-to-pay and payment verifications.

But it doesn’t have to stop there. One big differentiator of the opstream platform, because it was built to be an enterprise integration platform at its core, is that — as we’ve already hinted in our discussion of how, unlike pretty much every other intake/orchestrate platform, it supports assembly/bill of materials out of the box by integrating with ERPs/MRPs — it doesn’t have to stop at the “pay” in source-to-pay. It can pull in the logistics management/monitoring system to track shipments and inventory en route. It can be integrated with the inventory management system to track current inventory, help a Procurement organization manage requisitions against inventory, and guide buyers when inventory needs to be replenished. It can also integrate with quality management and service tracking systems to track the expected quality and lifespan of the products that come from inventory and warn buyers if quality issues, or the number of service issues, are increasing at the time of requisition or reorder.

Data Source Manager

opstream comes with hundreds of systems integrated out of the box, but it’s trivial for opstream to add more platforms as needed (as long as those platforms have an open API), as the opstream platform has built-in data model discovery capabilities and support for the standard web connection protocols. This means that adding a new data source is simply a matter of specifying the connection strings and parameters; the source will be integrated and the public data model auto-discovered. The admin can then configure exactly what is available for use in the opstream solution, who can see/use what, and the sync interval.
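
A hypothetical sketch (not opstream’s actual configuration format or API) of what registering a new data source by connection parameters and auto-discovering its schema might look like:

```python
# Hypothetical sketch of registering a new data source and auto-discovering
# its data model -- illustrative only; names and fields are placeholders.
import json

new_source = {
    "name": "acme-wims",
    "protocol": "rest",                       # standard web connection protocol
    "base_url": "https://wims.example.com/api/v2",
    "auth": {"type": "oauth2", "client_id": "example-client-id"},
    "sync_interval_minutes": 60,
}

def discover_schema(source: dict) -> dict:
    """Stand-in for built-in data model discovery: in practice this would walk
    the API's published schema and return the exposed objects and columns."""
    return {"objects": {"inventory_item": ["sku", "on_hand", "warehouse"],
                        "shipment": ["id", "status", "eta"]}}

schema = discover_schema(new_source)
# The admin would then choose what is visible, to whom, and on what sync interval.
print(json.dumps(schema, indent=2))
```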

Now we’ll discuss the primary components of the buyer-based intake solution.

Requests

The request screen centralizes all of the requests a user has access to, which include, but are not limited to, requests created by, assigned to, archived by, and departmentally associated with the user. Requests can be filtered by creator, assignee, status, request type, category, time left, date, and other key fields defined by the end-user organization.

Creating a new request is simple. The user simply selects a request type from a predefined list and steps through the workflow. The workflows can be built very intelligently such that whenever the user selects an option, all other options are filtered accordingly. If the user selects a product that can only be supplied by three vendors, only those vendors will be available in the requested-vendor drop-down. Alternatively, if a vendor is selected first, only the products the vendor offers will be available for selection. Products can be limited to budget ranges, vendors to preferred status, and so on. Every piece of information is used to determine what is, and is not, needed and to make the process as simple as possible for the user.

If the vendor or product is not preferred, and there is a preferred vendor or product, the workflow can be coded to proactively alert the user before the request is made. The buyer can also define any quotes, certifications, surveys, or other documentation required from the supplier before the PO is cut. (And then, once the requisition is approved, the vendor work stream will kick off.) And, once the vendor work stream is complete, the final approvals can be made and the system will automatically send the purchase order and push the right documentation into the right systems.
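
For illustration, a toy sketch of the cross-filtering described above (with made-up catalogue data, not opstream’s implementation): picking a product narrows the vendor list, and picking a vendor narrows the products.

```python
# Hypothetical sketch of option cross-filtering on a product/vendor catalogue.
catalog = [  # (product, vendor) pairs pulled from integrated systems
    ("laptop-14", "Acme Corp"), ("laptop-14", "Globex"), ("laptop-14", "Initech"),
    ("monitor-27", "Globex"),   ("docking-station", "Initech"),
]

def vendors_for(product: str) -> set:
    return {v for p, v in catalog if p == product}

def products_for(vendor: str) -> set:
    return {p for p, v in catalog if v == vendor}

print(vendors_for("laptop-14"))     # only the three vendors that supply it
print(products_for("Globex"))       # only the products Globex offers
```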

Vendors

Vendors provides the user with central access to all loaded organizational vendors and provides a quick summary of onboarding/active status, preferred status, number of associated products, number of associated requests, and total spend. Additional summary fields can be added as required by the buying organization.

Documents

The Documents component acts as a central document repository for all relevant vendor, product, and Procurement-related information from request through receipt, vendor onboarding through offboarding, and product identification through end-of-life retirement of the final unit. Documents have categories; associated requests, vendors, and/or products; status information; dates of validity; and other metadata relevant to the organization. Documents can be categorized into any categorization scheme the buying organization wants, which can include compliance, insurance, NDAs, contracts, security reports, product specifications, product certifications, sustainability reports, and so on.

Analytics

The analytics component presents a slew of ready-made dashboards that summarize the key process, spend, risk, compliance, and supply chain inventory metrics that aren’t available in any individual platform. Right now, there is no DIY capability to build dashboards (all have to be created by opstream), but opstream can create new custom dashboards very quickly during rollout, and you can get cross-platform insights (a sketch of one such cross-system join follows the list) that can include, but are not limited to:

  • process time from purchase order (e-Pro) to carrier pickup to warehouse arrival (TMS) to distribution time to the retail outlets (WIMS)
  • contract price (CMS) to ePro (invoice) to payment (AP) as well as logistics cost (invoice) to tax (AP) to build total cost pictures for key products relative to negotiated unit prices
  • risk pictures on a supplier that include financial (D&B), sustainability (Ecovadis), quality (QMS), failure rate (customer support), geolocation (supply chain risk), geopolitical risk (supply chain risk), transportation risk (OTD from the TMS), etc.
  • compliance pictures that pull data from the insurer, regulatory agencies, internal compliance department, and third party auditor
  • supply chain inventory metrics that include contractual commitment (CLM), orders (ePro), fulfillments (inventory management), current inventory (inventory management), commitments (ERP/MRP), etc.
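
For illustration, here is a toy sketch of the kind of cross-system join behind the first dashboard above; the systems, fields, and timestamps are all made up, not opstream’s data model.

```python
# Hypothetical sketch of stitching PO, carrier, and warehouse timestamps from
# three systems (e-Pro, TMS, WIMS) into one process-time picture.
from datetime import datetime

epro = {"PO-1001": {"po_issued": datetime(2024, 5, 1, 9, 0)}}
tms  = {"PO-1001": {"carrier_pickup": datetime(2024, 5, 2, 14, 0),
                    "warehouse_arrival": datetime(2024, 5, 6, 10, 0)}}
wims = {"PO-1001": {"retail_distribution": datetime(2024, 5, 8, 8, 0)}}

def process_times(po: str) -> dict:
    events = {**epro[po], **tms[po], **wims[po]}
    ordered = ["po_issued", "carrier_pickup", "warehouse_arrival", "retail_distribution"]
    # Hours elapsed between each consecutive stage of the flow.
    return {f"{a} -> {b}": (events[b] - events[a]).total_seconds() / 3600
            for a, b in zip(ordered, ordered[1:])}

print(process_times("PO-1001"))
```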

In addition, since all data is available through the built-in data bus, if the user wants to build her own dashboards, she can push all of the data into a (spend) analytics application to do her own analysis. And with opstream‘s ability to embed third-party analytics apps (Power BI for now, more coming), the user can even see those analytics inside the opstream platform.

This is the second main differentiator of opstream that a user will notice. The founders realized that not only is data key, so is integrated analytics, and they built a foundation to enable it.

Which leads us to the third and final differentiator, the one you don’t see: the data model. The data model is automatically discovered and built for each organization as their systems are integrated. Beyond a few core entities and core identifiers, upon which the data model is automatically built and generated, opstream doesn’t fix a rigid data model that all pieces of data need to map to (or get left out of the system). This ensures that an organization always has full access to all of its integrated data upon which to do cross-platform analytics on process, spend, inventory, and risk.
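
A hypothetical sketch (not opstream’s implementation) of a data model that fixes only a few core entities and identifiers, and attaches whatever fields each integrated source exposes rather than forcing them into a rigid schema:

```python
# Hypothetical sketch of a flexible, per-organization data model built around
# a few fixed core entities -- illustrative only.
CORE_ENTITIES = {"vendor": "vendor_id", "product": "sku", "request": "request_id"}

org_model = {entity: {} for entity in CORE_ENTITIES}

def register_discovered_fields(entity: str, source: str, fields: list) -> None:
    """Attach whatever fields an integrated system exposes, keyed by source,
    instead of dropping anything that doesn't fit a predefined model."""
    org_model[entity].setdefault(source, set()).update(fields)

register_discovered_fields("vendor", "erp", ["payment_terms", "duns_number"])
register_discovered_fields("vendor", "ecovadis", ["sustainability_score"])
register_discovered_fields("product", "qms", ["defect_rate", "warranty_months"])

print(org_model["vendor"])
# e.g. {'erp': {'payment_terms', 'duns_number'}, 'ecovadis': {'sustainability_score'}}
```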

CONCLUSION

opstream understands there is no value in intake or orchestration on its own, and that for the platform to be relevant to Procurement, it has to do more than just connect indirect S2P systems together. As a result, the company has built in support for direct, dynamic data model discovery, and integration with the end-to-end enterprise systems that power the supply chain, allowing an organization to go beyond simple S2P and identify value not identifiable in S2P systems alone. opstream should definitely be on your shortlist if you are looking for an integration/orchestration platform (to connect your last-generation systems to next-generation systems through the cloud) that will allow you to increase overall system value (vs. just increasing overall system cost).

Challenging the Data Foundation ROI Paradigm

Creactives SpA recently published a great article Challenging the ROI Paradigm: Is Calculating ROI on Data Foundation a Valid Measure, which was made even greater by the fact that they are technically a Data Foundation company!

In a nutshell, Creactives is claiming that trying to calculate direct ROI on investments in data quality itself as a standalone business case is absurd. And they are totally right. As they say, the ROI should be calculated based on the total investment in data foundation and the analytics it powers.

The explanation they give cuts straight to the point.

It is as if we demand an ROI from the construction of an industrial shed that ensures the protection of business production but is obviously not directly income-generating. ROI should be calculated based on the total investment, that is, the production machines and the shed.

In other words, there’s no ROI on Clean Data or on Analytics on their own.

And they are entirely correct — and this is true whether you are providing a data foundation for spend analysis, supplier discovery and management, or compliance. If you are not actually doing something with that data that benefits from better data and better foundations, then the ROI of the data foundation is ZERO.

Creactives is helping to bring to light three fallacies that the doctor sees all the time in this space. (This is very brave of them, considering that they are the first data foundation company to admit that their value is zero unless embedded in a process that will require other solutions.)

Fallacy #1. A data cleansing/enrichment solution on its own delivers ROI.

Fallacy #2. You need totally cleansed data before you can deploy a solution.

Fallacy #3. Conversely, you can get ROI from an analytics solution on whatever data you have.

And all of these are, as stated, false!

ROI is generated from analytics on cleansed and enriched data. And that holds true regardless of the type of analytics being performed (spend, process, compliance, risk, discovery, etc.).

And that’s okay, because this is a situation where the ROI from both together is often exponential, and considerably more than the sum of its parts. Especially since analytics on bad data sometimes delivers a negative return! What the analytics companies don’t tell you is that the quality of the result is fully dependent on the quality, and completeness, of the input. Garbage in, garbage out. (Unless, of course, you are using AI, in which case, especially if Gen-AI is any part of that equation, it’s garbage in, hazardous waste out.)

So compute the return on both. (And it’s easy to partition the ROI by investment. If the data foundation is 60% of the investment, it is responsible for 60% of the return, and its share of the ROI is simply 0.6 × Return/Investment.)
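
A quick worked example of that allocation, with made-up numbers and assuming ROI here simply means return divided by investment:

```python
# Hypothetical numbers to illustrate the proportional allocation above.
investment = 1_000_000      # total spend: data foundation + analytics
total_return = 3_000_000    # value generated by the combined solution
df_share = 0.6              # data foundation is 60% of the investment

df_return = df_share * total_return   # 60% of the return attributed to it
df_roi = df_return / investment       # the "0.6 x Return/Investment" figure
print(df_roi)                         # 1.8 on these assumed numbers
```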

Then find additional analytics-based applications that you can run on the clean data, increase the ROI exponentially (while decreasing the data foundation’s share of the overall cost equation), and watch the value of the total solution package soar!

Solution Smash-Up! PROPHETic Vision or Magic 8-Ball!

A few weeks ago, in an article on LinkedIn, THE PROPHET, who noted he is often asked (as part of his strategy and M&A work) which disparate providers and/or solutions might work well together, said that the answer(s) always depend on hard dollars and common sense.

He noted that there were questions that could be asked to help make the determination between any two specific providers and/or solutions, which included:

  • From a TAM perspective, will it increase the TAM beyond 1+1=2?
  • Does it add additional ideal customer profiles or elevate the solutions to the C-Suite?
  • Does it open up additional GTM strategies and channels?

… but also noted that you can go beyond just payments with AP (traditionally Treasury and Accounting), and provided five examples of solution smash-ups that were a bit more “radical”. In a nutshell, with only minor paraphrasing, these were:

1. Intake Management, “light” e-Pro, and GPO.

This makes perfect sense — there’s a reason intake (inside e-Pro) pre-dates stand-alone intake solutions (Zycus launched iRequest back in 2015, almost nine years ago), and that’s because intake and e-Pro go well together; adding in the GPO allows the organization to take advantage of better prices for regular purchases.

2. Contract Management and Price Compliance.

The whole point of contracts is to lock in commitments, which are useless if not realized. Integrating contract management into a price monitoring solution, be it part of e-Pro or AP or payments, is a great choice.

3. Third-Party Risk and Working Capital Management.

Before a cash outlay, or an agreement thereto, it’s a good idea to understand the risk.

4. Spend Analytics and BOM/Part-Level Management.

Well, this already exists in some specialists — mainly in electronics (think Levadata and SupplyFrame), but other players are popping up in other verticals as well. (Sievo and Scalue do a great job of doing direct material or part analysis; and Scalue’s material categorization is great for direct management.)

5. Solve Supplier Supervision Sheol

A few companies are starting to make good progress here on “on-boarding, 3PRM, cyber, GRC, and ESG in one place”. Think Brooklyn Solutions, for example.

So, 3 for 5 on new ideas for solution smash-ups.

The real question is: what solutions could we smash up that, on an initial analysis, shouldn’t increase the TAM, elevate the sale, or open up obvious new GTM strategies … because that’s the smash-up no one will see coming, the one where we won’t see twenty new entrants next year (of which ten will ultimately fail), and the one that will create the new unicorn. And for this, we’ll need to extend Source-to-Pay further into the enterprise.

Here are three smash-ups that might seem strange on the surface, but if you look deep, and innovate, you can see how they might just be among the next break-out solutions.

A. Payroll, Benefits, CLM, SOW, and Sourcing Optimization

Manage all people-related spend in one application to balance employees vs. contractors vs. services firms, weighing cost vs. risk (of knowledge walking out the door, resources not being available, etc.).

B. WIMS, Distributor Marketplace, and central e-Procurement Catalog

Optimize inventory balance between the local office/warehouse/retail outlet, the central warehouse, and distributors, and guide the buyer to the right inventory at the right time, auto-replenishing as needed.

C. MRP, Assembly Line Control, Quality Control, and Order Management

Continuously monitor materials coming in, materials used, and defect rates, and intelligently re-order against an existing contract as needed.

Of course, if you want to be the next magical unicorn, you’ll have to get even more radical. Anyone have an idea for a solution smash-up that makes almost no sense on the surface but, if you get radical, could revolutionize the space? (If so, and you need a prescription to help flesh it out, you know who to call.)