Category Archives: SaaS

Enterprises have a Data Problem. And they will until they accept they need to do E-MDM, and it will cost them!

This was originally published on April 29, 2024. It is being reposted because MDM is becoming more essential by the day, especially since AI doesn't work without good, clean data.

insideBIGDATA recently published an article on The Impact of Data Analytics Integration Mismatch on Business Technology Advancements, which did a rather good job of highlighting all of the problems with bad integrations (which happen every day [and just result in you contributing to the half a TRILLION dollars that will be wasted on SaaS spend this year and the one TRILLION that will be wasted on IT services]), and an okay job of advising you how to prevent them. But the problem is much larger than the article lets on, and we need to discuss that.

But first, let's summarize the major impacts outlined in the article (which you should click through to and read before continuing with this article):

  • Higher Operational Expenses
  • Poor Business Outcomes
  • Delayed Decision Making
  • Competitive Disadvantages
  • Missed Business Opportunities

And then add the following critical impacts (which are by no means a complete list) when your supplier, product, and supply chain data isn't up to snuff:

  • Fines for failing to comply with filings and appropriate trade restrictions
  • Product seizures when products violate certain regulations (like RoHS, WEEE, etc.)
  • Lost Funds and Liabilities when incomplete/compromised data results in payments to the wrong/fraudulent entities
  • Massive disruption risks when you don’t get notifications of major supply chain incidents when the right locations and suppliers are not being monitored (multiple tiers down in your supply chain)
  • Massive lawsuits when data isn’t properly encrypted and secured and personal data gets compromised in a cyberattack

You need good data. You need secure data. You need actionable data. And you won’t have any of that without the right integration.

The article says to ensure good integration you should:

  • mitigate low-quality data before integration (since cleansing and enrichment might not even be possible)
  • adopt uniformity and standardized data formats and structures across systems
  • phase out outdated technology

which is all fine and dandy, but misses the core of the problem:

Data is bad (often very, very bad) because organizations don't have an enterprise data management strategy. That's the first step. Furthermore, this E-MDM strategy needs to define:

  1. the master schema with all of the core data objects (records) that need to be shared organization-wide
  2. the common data format (for ids, names, keys, etc.) that every system will need to map to
  3. the master data encoding standard

With a properly defined schema, there is less need to adopt uniform data formats and structures across the enterprise systems (which will not always be possible if an organization needs to maintain outdated technology, either because a former manager entered into a 10-year agreement just to be rid of the problem or because it would be too expensive to migrate to another system at the present time) or to phase out outdated technology (which, if it's the ERP or AP, will likely not be possible), since the organization just needs to ensure that all data exchanges are in the common data format and use the master data encoding standard.
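To make that concrete, here's a minimal sketch (in Python, with hypothetical field names and formats, since your E-MDM strategy defines the real ones) of a master-schema supplier record and a mapping from a system-local ERP record into the common exchange format:

```python
# A minimal sketch (hypothetical field names/formats) of mapping a local ERP
# record to a master-schema supplier record defined by an E-MDM strategy.
from dataclasses import dataclass

@dataclass(frozen=True)
class MasterSupplier:
    supplier_id: str   # common id format, e.g. "SUP-000123" (illustrative)
    legal_name: str    # canonical name: trimmed, single-spaced, title case
    country: str       # ISO 3166-1 alpha-2 code
    # ... every other core attribute the master schema mandates

def normalize_name(raw: str) -> str:
    """Apply the (illustrative) common name format: collapse whitespace, title case."""
    return " ".join(raw.split()).title()

def from_erp_record(rec: dict) -> MasterSupplier:
    """Map one system-local record to the master schema.
    Every system keeps its own internal format; only the *exchange* must conform."""
    return MasterSupplier(
        supplier_id=f"SUP-{int(rec['vendor_no']):06d}",
        legal_name=normalize_name(rec["vendor_name"]),
        country=rec["country_iso2"].upper(),
    )

# Example: a messy local ERP row exchanged in the common format
print(from_erp_record({"vendor_no": "123", "vendor_name": "  acme  WIDGETS ", "country_iso2": "us"}))
```

The point is that the ERP keeps whatever internal format it wants; only the data exchange has to conform to the master schema, common data format, and encoding standard.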

Moreover, once you have the E-MDM strategy, it's easy to flesh out the HR-MDM, Supplier/SupplyChain-MDM, and Finance-MDM strategies and get them right.

As THE PROPHET has said, data will be your best friend in procurement and supply chain in 2024 if you give it a chance.

Or, you can cover your eyes and ears and sing the same old tune that you've been singing since your organization acquired its first computer and built its first "database":

Well …
I have a little data
I store it on my drive
And when it’s old and flawed
The data I’ll archive

Oh, data, data, data
I store it on my drive
And when it’s old and flawed
The data I’ll archive

It has nonstandard fields
The records short and lank
When I try to read it
The blocks all come back blank

I have a little data
I store it on my drive
And when it’s old and flawed
The data I’ll archive

My data is so ancient
Drive sectors start to rot
I try to read my data
The effort comes to naught

Oh, data, data, data
I store it on my drive
And when it’s old and flawed
The data I’ll archive

666+ S2P+ Solutions … But Key Problems Are Still Not Addressed!

You’ve seen the Mega Map and the 666 solution logos on it.

You've heard the doctor and THE REVELATOR say repeatedly that another massive purge is coming to our space over the next 18 to 24 months (which will be the greatest since 2009-2011, when hundreds of companies were acquired, merged, or went insolvent), and that it's already starting (with a few notable insolvencies, at least as far as the doctor is concerned, already occurring).

You’ve heard us say that there isn’t room for this many companies because even if you account for market size and vertical, we still only need so many solutions of the following varieties:

  • Sourcing
  • Supplier Management
  • Spend Analysis
  • Contract Management
  • e-Procurement
  • Invoice to Pay (I2P) / Accounts Payable (AP)

And even when you consider the wide variety of needs across all possible size-vertical-region combinations, two to three dozen solutions in any category are more than enough to handle all of the complexities, even when you take the most varied companies into account, but we now have over a hundred options in some of these categories. Only the strong, sorry, the smart, will survive … and only if they have enough money to do so (and enough control to make smart decisions; i.e., if they are controlled by greedy investors who double and triple prices and force them out of their target market, they will be the next casualty).

But even with all these solutions, core needs are not met. The reason being: in today’s business environment that is seeing a return to protectionism, sanctions, and border closings; a continual rise in natural disasters (thanks to global warming that will once again reign unchecked under administrations coming into power in multiple “first world” countries); and a continual disruption in logistics (due to epidemics, pandemics, reduced capacity, Panamanian droughts, and Houthis in the Red Sea), solutions are needed that go beyond siloed Procurement.

Back in 2022, THE PROPHET first tried to get the message out there with his proclamation that alt-suites would rise. (They still haven't, but we do need new types of cross-functional applications.) He also made five predictions. They varied in terms of usefulness and vision (in the doctor's view, two in particular are desperately needed, although one of these needs to be broader than defined; one is nothing more than an enhanced dashboard across various S2P applications and needs to be rethought; and two aren't quite right [but contain ideas that can be built on]).

But THE PROPHET was right in that we need to rethink Procurement Technology in some organizations, who needs to contribute to Procurement, and how Procurement Process fits into overall operational processes. The solutions that worked for the last 20 years aren’t always enough anymore, and it’s not just a question of “intake” (which is not new despite what the providers will have you believe, see our prior posts on the subject) or “orchestration” (which is just a fancy term for SaaS middleware).

Here are three solutions that are needed now more than ever:

1. Design for Supply (DFS)

THE PROPHET was right on the money here. Not only is 80% of the cost locked in during design, but so is 80% of the risk. You not only need cost control, but you need supply assurance. This means that R&D needs to work with Procurement during design to ensure the products can be sourced affordably at low risk, and that Procurement needs to work with Supply Chain / Logistics to make sure the products can be reliably sourced in a timely manner (and the organization won’t have to stock months and months of inventory). Product design and development organizations need integrated DFS solutions that span R&D, Procurement, and Supply Chain.

2. Supply Chain Sourcing (SCS)

In the world of Direct, when organizations need to source for BOMs (Bill of Materials), they need to do it Supply Chain Aware. Under pressure, Procurement will always search for the lowest cost — but what if that is from a supplier in an unstable region; that is not part of the current, optimized, supply network; that can’t offer timely and secure delivery? Sourcing needs to be supply chain aware. And Supply Chain needs to be aware of what Sourcing is looking at so they can do network planning if the current supply network is not sufficient.

In fact, it would be even better if the DFS and SCS solutions were hosted on the same underlying platform.

3. Risk 360

This was the second platform where THE PROPHET was almost right on the money with his Assess-to-Monitor alt-suite. Risk is everywhere: inside and outside the organization, inside and outside your partners' organizations, inside and outside your suppliers' organizations, and everywhere your physical, financial, and digital supply chains touch. Supplier risk, supply chain risk, cybersecurity risk, personnel risk, etc. can't all be separate solutions. They need to be one integrated platform that constantly monitors, assesses, and protects your organization.

There is, and will continue to be, a need for new solution types in S2P+, but these would be a great start!

It's Not AI (First, Led, Powered, etc.) or Autonomous. It Is a Solution with Augmented Intelligence!

By now you know our stance on Gen-AI (and how it should be relegated to the rubbish heap from which it came) because it’s not about “AI”, it’s about outcome. And outcome requires a real, predictable, usable solution that helps Human Intelligence (HI!) make the right decision. Such a solution is one that uses tried and true algorithms that support tried and true processes that provide a human with the insight needed to make the right decision at the time, every time a decision needs to be made.

This requires a solution that walks the human user through the process, step by step, and presents them with the information required to make a decision as to whether to progress to another step, what the next step is, and any conditions that need to be put on that next step. This requires a solution that automatically runs all of the typically relevant analysis, on all of the available data, and presents the insight, along with any typical decisions (as [a] default recommendation[s]) made on any similar situations that can be found in the organizational history.

Automation should only occur in situations the organization has defined as acceptable according to well-defined, human-reviewed, and verified rules. Not default vendor rules or unverified probabilities or unverified random computations from a random algorithm. A good solution is one that walks a user through the process, often allowing each step to be completed with a single choice or click. It's not one that makes the choice for the user, which may or may not be the right one, but one that helps the user make the right choice. It might seem like a subtle difference, but it is a very important one.
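To make the distinction concrete, here's a minimal sketch (in Python, with a hypothetical rule and thresholds, not any vendor's actual logic) of an organization-defined automation gate: the system auto-executes only when every human-reviewed rule passes, and otherwise routes the decision, with its supporting analysis, to a human:

```python
# A minimal sketch (hypothetical rules/thresholds) of an organization-defined
# automation gate: auto-execute only if every human-verified rule passes,
# otherwise route the decision, with its analysis, to a human.
from dataclasses import dataclass, field

@dataclass
class SourcingEvent:
    category: str
    current_price: float
    trailing_12m_avg_price: float
    notes: list[str] = field(default_factory=list)

def within_price_band(e: SourcingEvent, tolerance: float = 0.10) -> bool:
    """Organization-defined rule: market price within +/-10% of trailing average."""
    return abs(e.current_price - e.trailing_12m_avg_price) <= tolerance * e.trailing_12m_avg_price

RULES = [within_price_band]  # each rule is written, reviewed, and verified by humans

def decide(e: SourcingEvent) -> str:
    failed = [r.__name__ for r in RULES if not r(e)]
    if failed:
        e.notes.append(f"Escalated to human review; failed rules: {failed}")
        return "HUMAN_REVIEW"
    return "AUTO_EXECUTE"

# A RAM re-auction three days after a plant fire: price up ~50%, so it escalates.
print(decide(SourcingEvent("RAM", current_price=15.0, trailing_12m_avg_price=10.0)))
```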

Even though an AI-powered autonomous solution might seem to make the right decision over 90% (or 95%) of the time, that doesn't mean it actually does. If a decision looks right, it might be a good decision, but that doesn't mean it's a good decision for the organization at the time, or the best decision that can be made. Only human review, at the time, can determine that. A good solution runs all the analysis it can, summarizes the results, and lets a human verify the data behind any recommendation made by the system.

To better understand the subtlety, consider a situation where the organization lets the system automatically re-auction all regularly purchased products and commodities for manufacturing or MRO where the price is typically constant over time, using a lowest-bidder-takes-all e-Auction that results in the auto-generation and auto e-Signature of a one-year contract. Now, most of the time this is probably going to work okay, but imagine you let it run on full auto-pilot and in the e-Auction queue is your regular RAM contract, which expired three days after a major RAM plant fire (the kind that happens about once a decade if you trace back through the last forty years), and prices have just skyrocketed about 50%. Prices which would drop back down as soon as the plant comes back online in three months. Locking in a full-year contract would result in excessive cost overruns on the items for almost nine months longer than necessary, instead of just three months or so. A human would know to buy the bare minimum on the spot market at the inflated rates and wait until the market stabilized before running an e-Auction to lock in the next contract. But a system told to just re-auction and re-order at every contract expiration would do just that. It wouldn't know that the current market rates are temporary, why they are temporary, or how to change course. This is just one example where over-automation and AI will lead to failure without Human Intervention.
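To put illustrative numbers on that (hypothetical price and volume only):

```python
# Illustrative numbers only (hypothetical price and volume) for the RAM example:
# lock in 12 months at the spiked price vs. buy the bare minimum on the spot
# market for 3 months and re-auction at the recovered price.
base_price, spike = 10.00, 1.50   # $/unit and the +50% spike multiplier
monthly_volume = 10_000           # units/month

locked_in = base_price * spike * monthly_volume * 12
wait_it_out = base_price * spike * monthly_volume * 3 + base_price * monthly_volume * 9

print(f"12-month lock-in at spiked price: ${locked_in:,.0f}")
print(f"3 months spot + 9 months at base: ${wait_it_out:,.0f}")
print(f"Overrun from auto-contracting:    ${locked_in - wait_it_out:,.0f}")  # ~33% more spend
```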

A good system presents the user with the products/commodities that are typically automatically auctioned, the history of costs, the current market costs, the recommendation for auto-sourcing and term, the expected results, and whether the recommendation is for the auction to auto-award and contract or, when the auction is complete, pause and include a human in the loop to make a final decision. A well designed system minimizes the work and input required by a human, eliminating all the tactical data analysis and e-paperwork, making it easy to make the right strategic decision without a lot of effort. Technology isn’t about trying to replace human intelligence (which it can’t), but about eliminating unnecessary drudgery or computation (“thunking”) that humans are not good at (or don’t have the time for), so that humans can focus on strategic decisions and value add.

That’s why the right answer is always a solution with augmented intelligence. Not autonomous AI solutions.

You Should Never Build Your Own ProcureTech Solution! Ever!

Integrate your own custom suite to suit your processes, maybe, but never build from scratch. (And we should not have to be talking about this again after just publishing on the subject two weeks ago, but too many conversations are indicating that we still need to shout this loud and clear!)

For some reason, this comes up every decade, usually after a hype cycle has peaked, marketers have switched from focusing on solutions to sound bites, a suite of providers have released products that don't meet customer needs, the implementation failure rate has edged back up to the 80%+ range, and customers have gotten absolutely, positively fed up with the whole situation.

Customers, fed up with the valueless hype, marketing sound bites, high failure rate, and utter lack of solutions from the vendors targeting them on a daily basis, start to think that the right solution is to build their own.

Sourcing Innovation tackled this subject in depth back in 2015 when it wrote a 4-part series on why you should NOT build your own e-Sourcing solution, followed by an explanation of why you should not build your own Contract Management and e-Procurement platform. (links here)

That's why we are both repeating and elaborating on last Friday's Rant on why A Company Should Never Build Its Own Enterprise Software Systems.

Not only do we have the situation where:

  • the company is not an expert in building software products
  • the company is not an expert in best practices across all of its processes
  • by the time a custom solution is developed, it’s out of date
  • it’s not about the product, it’s about the process you should be working toward and, most importantly,
  • it’s about the data that drives the process!

But we have the situation where, as highlighted in THE REVELATOR's article:

1. Developing your own is NOT being an early adopter! (Which is what many companies considering build-your-own think they are.)

Early adopter means someone who adopts leading edge technology from a third party, not someone trying to fast track their digitization effort with custom built tech. This is just high risk with little chance of reward for all the reasons mentioned in all of our prior articles.

2. They think Gen-AI will fix their data problem and allow them to develop their own!

If you've read anything on Gen-AI on this blog, you know that's the last thing it will do. For Gen-AI to have any chance of working at all, it needs a huge amount of good, clean data. Otherwise, it's garbage in, hazardous waste out. No technology has ever needed such large amounts of near-perfect data to have even an abysmal chance of working, and the fact that the marketing madness has convinced many CPOs that Gen-AI can fix a data problem is downright terrifying!

3. They obviously think that the initial quote will be close to the final cost.

Nowhere are cost overruns more extreme than in custom development by a non-software organization that contracts a Big X with poor specifications that look easy, and that, due to lack of manpower, sends the C-Team (if you are lucky) because it's just another instance of system X (when it's not).

To be honest, in this situation, if the cost ends up being only 3X to get something usable (but still not what you wanted), given the high technology failure rates, that would be amazing.

We know it's hard to find appropriate solutions given all the noise out there and the overabundance of vendors that all look the same, sound the same, and go all in on useless Gen-AI (it just takes one glance at the Mega Map to figure that out), but that doesn't mean there aren't vendors out there appropriate for you. Vendors that put solutions, not tech, first; that built affordable tech that works (and didn't take too much money from investors who then insisted on quadrupling the price); and that will work in an ecosystem with other vendors to solve your problems.

You just have to look hard. Real hard. Probably harder than you've ever had to look before. (Expect to eliminate 6 out of every 7 vendors you look at for short-list consideration, and probably go through 20 to find 3.) But trust us, when you find the right vendor, it will be worth it. The solution will work, will configure to your liking, will be extremely usable for the problems your team faces every day, and will be one where the provider will grow with you for the decade to come.

Good things come to those who wait to find the right vendor. (Even if they have to crawl through multiple pig sties to do so.)

ketteQ: An Adaptive Supply Chain Planning Solution Founded in the Modern Age

As per yesterday's post, any supply chain planning solution developed before 2010 isn't necessarily built on a modern multi-tenant cloud-ready SaaS stack (as such a stack didn't exist then, and the solution would have had to be partially or fully re-platformed to become modern multi-tenant cloud-ready SaaS). Any solution built after that was much more likely to be built on a modern multi-tenant cloud-ready SaaS stack. Not guaranteed, but more likely.

ketteQ's Adaptive Supply Chain Planning Solution is one of these solutions that was built in the modern age on a fully modern multi-tenant cloud-native SaaS stack, and one that has some advantages you won't find in most of the competition. I was able to get an early view of the latest product, which was released last week. Founded in 2018, ketteQ was built from the ground up to embody all of the lessons learned from the founders' 100+ successful supply chain planning solution implementations across industries and systems, and the wisdom gained from building two prior supply chain companies, with the goal of addressing all of the issues they encountered with previous-generation solutions. The modern architecture was purpose-built to fully utilize the transformational power of optimization and machine learning. It was a tall feat, and while the platform is still a work in progress (they admit they currently have only three mature core modules on par with their peers in depth and breadth [although all modules inherit the advantages of their modern stack and solver architecture]), it is one they have pulled off: they can also address a number of other areas with their other, newer modules and integrations to third-party systems (particularly for order management, production scheduling, and transportation management), and they can address End-to-End (E2E) supply chain planning with native Integrated Business Planning (IBP) across demand, inventory, and supply (their core modules), along with modules for Service Parts Planning and S&OP Planning.

In addition to this solid IBP core, they also have capabilities across cost & price management, asset management, fulfillment & allocation, work order management, and service parts delivery. And all of this can be accessed and controlled through a central control tower.

And most importantly, the entire solution is cloud native, designed to leverage horizontal scalability and connectivity, and built for scale. The solution is enabled by a single data model that can be stored in an easily accessible open SQL database, in a contemporary architecture that supports all of its solutions. The platform is extendable to support multiple models and multiple scenarios per model, and includes a new, highly scalable solver that can perform thousands of heuristic tests and apply a genetic algorithm with machine learning, testing all demand ranges against all supply options to find a solution that minimizes cost / maximizes margin against potential demand changes and fill rates.
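ketteQ hasn't published its solver internals, so purely as an illustration of the general technique (a genetic algorithm searching supply options against a range of demand scenarios), here's a minimal Python sketch with hypothetical suppliers, capacities, and penalties:

```python
# Purely illustrative (not ketteQ's actual solver): a genetic algorithm that
# searches supplier allocations against a range of demand scenarios,
# penalizing unmet demand (i.e., a low fill rate).
import random

SUPPLIERS = [  # (unit_cost, capacity) - hypothetical data
    (10.0, 5000), (11.5, 8000), (13.0, 12000),
]
DEMAND_SCENARIOS = [6000, 9000, 12000, 15000]  # sampled demand range
SHORTAGE_PENALTY = 50.0                         # cost per unit of unmet demand

def fitness(alloc):
    """Average total cost across demand scenarios for a fractional allocation."""
    total = 0.0
    for demand in DEMAND_SCENARIOS:
        supplied = cost = 0.0
        for frac, (unit_cost, cap) in zip(alloc, SUPPLIERS):
            qty = min(frac * demand, cap)   # allocation capped by capacity
            supplied += qty
            cost += qty * unit_cost
        cost += max(demand - supplied, 0) * SHORTAGE_PENALTY
        total += cost
    return total / len(DEMAND_SCENARIOS)

def random_alloc():
    w = [random.random() for _ in SUPPLIERS]
    s = sum(w)
    return [x / s for x in w]

def evolve(generations=200, pop_size=40, mutation=0.1):
    pop = [random_alloc() for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=fitness)
        survivors = pop[: pop_size // 2]            # selection
        children = []
        while len(children) < pop_size - len(survivors):
            a, b = random.sample(survivors, 2)
            cut = random.randrange(1, len(SUPPLIERS))
            child = a[:cut] + b[cut:]               # one-point crossover
            if random.random() < mutation:          # mutation
                child[random.randrange(len(child))] *= random.uniform(0.5, 1.5)
            s = sum(child)
            children.append([x / s for x in child])
        pop = survivors + children
    return min(pop, key=fitness)

best = evolve()
print("best allocation:", [round(x, 2) for x in best], "avg cost:", round(fitness(best)))
```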

Of course, the ketteQ platform comes with a whole repertoire of applied Optimization/ML/Genetic/Heuristic models for demand planning, inventory planning, and supply planning, as well as S&OP. In addition, because of its extensible architecture, instead of manually running single scenarios at a time, it can run up to thousands of scenarios for multiple models simultaneously, and present the results that best meet the goal or the best trade-off between multiple goals.

ketteQ does all of this in a platform that, compared to older-generation solutions, is:

  • fast(er) to deploy — the engine was built for configuration, their scalable data model and data architecture make it easy to transform and integrate data, and they can customize the UX quickly as well
  • easy to use — every screen is configured precisely to efficiently support the task at hand, and the UX can be deployed standalone or as a Salesforce front end
  • cost-effective — since the platform was built from the ground up to be a true multi-tenant solution using a centralized, extensible, data architecture, each instance can spin off multiple models, which can spin off multiple scenarios, each of which only requires the additional processing requirement for that scenario instance and only the data required by that scenario; and as more computing power is required, it supports automatic horizontal scaling in the cloud.
  • better performing — since it can run more scenarios in more models using modern multi-pass algorithms that combine traditional machine learning with genetic algorithms and multi-pass heuristics that go broad and deep at the same time to find solutions that can withstand perturbations while maximizing the defined goals using whatever weighting the customer desires (cost, delivery time, carbon footprint, etc.)
  • more insightful — the package includes a full suite of analytics built on Python that are easily configured, extended, and integrated with AI engines (including Gen-AI if you so desire), which allows data scientists to add their own favorite forecasting, optimization, analytics, and AI algorithms; in addition, it can easily be configured to run and display best-fit forecasts at any level of hierarchy and automatically pull in and correlate external indicators as well
  • more automated — the platform can be configured to automatically run through thousands of scenarios up and down the demand, supply, and inventory forecasts on demand as data changes, so the platform always has the best recommendation on the most recent data; these scenarios can include multiple sourcing, logistics, and even bill-of-material options; and they can be consolidated into meta-scenarios for end-to-end integrated S&OP across demand, supply, and inventory
  • seamless Salesforce integration — takes you from customer demand all the way down to supply chain availability; seamless collaboration workflow with Salesforce forecast, pipeline, and order objects in the Salesforce front end
  • AWS native — for full leverage of horizontal scalability and serverless computing, multi-tenant optimization and analytics, and single-tenant customer data. Moreover, the solution is also available on the AWS Marketplace.

In this coverage, we are going to primarily focus on demand and supply (planning) as that is the most relevant from a sourcing perspective. Both of these heavily depend on the platform’s forecasting ability. So we’ll start there.

Forecasting

In the ketteQ platform, forecasts, which power demand and supply planning,

  • can be by day, week, month, or other time period of interest
  • can be global, regional, local, at any level of the (geo) hierarchy you want
  • can be category, product line, and individual product
  • can be business unit, customer, channel
  • can be computed using sales data/forecasts, finance data, marketing data/forecasts, baselines, and consensus
  • can use a plethora of models (including, but not limited to Arima[Multivariate], Average, Croston, DES, ExtraTrees, Lasso[variants], etc.), as well as user defined models in Python
  • can be configured to select the best-fit algorithm automatically based on historical data: just POS data, POS data augmented with economic indicators, external data (where there is insufficient POS data), etc.

These models, like all models in the platform, can be set up using a very flexible and responsive hierarchy approach, with each model automatically pulling in the model above it and then altering it as necessary (simply by modifying constraints, goals, data [sources], etc.). In the creation of models, restore points can be defined at any level before new data or new scenarios are run so the analyst can backtrack at any time.
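ketteQ hasn't disclosed its selection logic, so purely as an illustration of best-fit selection, here's a minimal Python sketch (hypothetical data, toy models) that backtests candidate models on a holdout window and picks the one with the lowest error:

```python
# Illustrative only (not ketteQ's actual logic): pick the best-fit forecasting
# model by backtesting candidates on a holdout window and comparing MAE.
history = [112, 118, 121, 130, 128, 135, 141, 139, 148, 152, 158, 163]  # hypothetical monthly POS units
train, holdout = history[:-3], history[-3:]

def naive(train, horizon):                 # repeat the last observation
    return [train[-1]] * horizon

def moving_average(train, horizon, k=3):   # mean of the last k observations
    return [sum(train[-k:]) / k] * horizon

def drift(train, horizon):                 # extrapolate the average trend
    slope = (train[-1] - train[0]) / (len(train) - 1)
    return [train[-1] + slope * (h + 1) for h in range(horizon)]

def mae(actual, forecast):                 # mean absolute error
    return sum(abs(a - f) for a, f in zip(actual, forecast)) / len(actual)

candidates = {"naive": naive, "moving_average": moving_average, "drift": drift}
scores = {name: mae(holdout, f(train, len(holdout))) for name, f in candidates.items()}
best = min(scores, key=scores.get)
print(scores)
print(f"best fit: {best}; next-period forecast: {candidates[best](history, 1)[0]:.1f}")
```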

Demand Planning

The demand planning module in ketteQ can compute demand plans that take into account:

  • market intelligence input to refine the forecast (which can include thousands of indicators across 196 countries from Trading Economics as well as your own data feeds) (and which can include, or not, correlation factors for correlation analysis)
  • demand sensing across business units, channels, customers, and any other data sources that are available to be integrated into the platform
  • priorities across channels, customers, divisions, and departments
  • multiple “what if” scenarios (simultaneously), as defined by the user
  • consensus demand forecasts across multiple forecasts and accepted what-ifs

The module can then display demand (plans) in units or value across actuals, sales forecasts, finance forecasts, marketing forecasts, baseline(s), and consensus.
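As a toy illustration of the correlation analysis mentioned above (hypothetical data, not ketteQ's implementation), here is a simple Pearson correlation between an external indicator and demand over aligned periods:

```python
# Toy illustration (hypothetical data): Pearson correlation between an
# external indicator and demand, the kind of signal a correlation factor encodes.
import statistics

demand    = [100, 104, 110, 118, 115, 125, 131, 129]   # units per period
indicator = [ 52,  54,  57,  61,  60,  65,  68,  67]   # e.g., a PMI-style index

def pearson(x, y):
    mx, my = statistics.mean(x), statistics.mean(y)
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = sum((a - mx) ** 2 for a in x) ** 0.5
    sy = sum((b - my) ** 2 for b in y) ** 0.5
    return cov / (sx * sy)

print(f"correlation: {pearson(indicator, demand):.3f}")  # near 1.0 -> useful signal
```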

In addition to this demand planning capability and all of the standard capabilities you would expect from a demand planning solution, the platform also allows you to:

  • Prioritize demand for planning and fulfillment
  • Track demand plan metrics
  • Consolidate market demand plans
  • Handle NPI & transition planning
  • Define user-specific workflows

Supply Planning

The reciprocal of the demand planning module, the supply planning module in ketteQ leverages what they call the PolymatiQ solver. (See their latest whitepaper at this link.)

Their capabilities for product and material planning include the ability to:

  • compute plans by the day, week, month, or any other time frame of interest
  • do so globally, regionally, locally, or at any level of the hierarchy you want
  • and do so for all regional, local, or any other subset of suppliers of interest, as well as view by customer-focused dimensions such as channel, business unit and customer
  • use the current demand forecast and modifications, taking into account current and projected supply availability, safety stock, inventory levels, forecasted consumption rates, expected defect rates, rotatable pools, and current supplier commitments, among other variables
  • run scenarios that optimize for cost and service
  • coordinate raw and pack material requirements for each facility
  • support collaboration with suppliers and manufacturing
  • manage sourcing options and alternates (source/routes) for make, buy, repair and transfers

Moreover, supply plans, like demand plans, can be plotted over time based on any factors or factor pair of interest, such as supply by time frame, sourcing cost vs fill rate, etc.
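As a generic illustration of the core computation behind such a plan (standard time-phased netting logic, not ketteQ's PolymatiQ solver), here's a minimal Python sketch:

```python
# Generic time-phased netting illustration (standard MRP-style logic, not
# PolymatiQ): net each period's demand against projected inventory, safety
# stock, and scheduled receipts to get planned order quantities.
def supply_plan(demand, on_hand, safety_stock, scheduled_receipts):
    plan, projected = [], on_hand
    for period, d in enumerate(demand):
        projected += scheduled_receipts.get(period, 0)
        net = d + safety_stock - projected   # requirement to stay above safety stock
        order = max(net, 0)
        plan.append(order)
        projected = projected + order - d
    return plan

demand = [400, 450, 500, 480]                # hypothetical units per period
print(supply_plan(demand, on_hand=600, safety_stock=100, scheduled_receipts={1: 300}))
# -> [0, 50, 500, 480]: projected inventory never dips below safety stock
```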

In addition, the supply planning module for distribution requirements can:

  • develop daily deployment plans
  • develop time-phased fulfillment and allocation plans
  • manage exceptions and risks
  • conduct what-if scenario analysis
  • execute short-term plans
  • track obsolescence and perform aging analysis/tracking

Inventory Planning

We did not see or review the inventory planning module in depth, even though it is one of their three core modules, so all we can tell you is that it has most of the standard functionality one would expect and, given the founders' heritage in the service parts planning world, you know it can handle complex multi-echelon / multi-item planning. Capabilities include the ability to:

  • manage raw, pack and finished goods inventory
  • set and manage dynamic safety stock, EOQ (economic order quantity), and ROP (reorder point) levels and policies (see the sketch after this list)
  • ensure inventory balance and execution, with support for ASL (authorized stocking list), time-phased, and trigger planning by segment
  • support parametric optimization for cost and service balancing
  • minimize supply chain losses through better inventory management
  • optimize service levels relative to goals
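For readers less familiar with those levers, here are the textbook formulas in a minimal Python sketch (illustrative parameters, not ketteQ's implementation): EOQ = sqrt(2DS/H), safety stock = z * sigma_d * sqrt(L), and ROP = dL + safety stock.

```python
# Textbook formulas only (illustrative parameters, not ketteQ's implementation):
# EOQ = sqrt(2*D*S/H), safety stock = z * sigma_d * sqrt(L), ROP = d*L + SS.
import math

D = 12_000      # annual demand (units)
S = 75.0        # ordering cost per order ($)
H = 2.50        # holding cost per unit per year ($)
d = D / 250     # average daily demand (250 working days)
L = 10          # replenishment lead time (days)
sigma_d = 8.0   # std dev of daily demand (units)
z = 1.65        # service factor for ~95% service level

eoq = math.sqrt(2 * D * S / H)
safety_stock = z * sigma_d * math.sqrt(L)
rop = d * L + safety_stock

print(f"EOQ: {eoq:.0f} units, safety stock: {safety_stock:.0f} units, ROP: {rop:.0f} units")
```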

Salesforce: IBP

As we noted, the ketteQ platform supports native Salesforce integration, and you can do full IBP through the custom front-end built in Salesforce CRM, which allows you to seamlessly jump back and forth between your CRM and SCM, following the funnel from customer order to factory supply and back again.

The Salesforce front-end, which is very extensive, supports the typical seven-step IBP process:

  1. Demand Plan
  2. Demand Review
  3. Supply Plan
  4. Pre IBP Review
  5. Executive IBP Review
  6. Operational Plan
  7. Finalization

… and allows it to be done easily in Salesforce design style, with walk-through tab-based processes and sub-tabs to go from summary to detail to related information. Moreover, the UI can be configured to only include relevant widgets, etc.

In addition, users can easily select an IBP Cycle; drill into orders and track order status; define custom alerts; subscribe to plans, updates, and related reports; follow sales processes including the identification and tracking of opportunities; jump into their purchase orders (on the supply side); track assets; manage programs; and access control tower functionality.

As a result of the integration with Salesforce objects, including Pipeline and Orders, the solution helps bridge the gap between sales and supply chain organizations, enabling executive-driven process change. As an advanced supply chain solution on the Salesforce AppExchange, it offers the broad base of Salesforce customers on the Manufacturing Cloud a slew of unique integration possibilities.
And, of course, if you don’t have Salesforce, you still have all this functionality (and more) in the ketteQ front-end.

Finally, the platform can do much more as it also has modules, as we noted, for service parts planning, service parts delivery, sales and operations planning, cost and price management, fulfillment & allocation, asset management, clinical demand management, and a control tower. It is a fundamentally modern approach to planning that is worth exploring for companies that are challenged to adapt in today’s disruptive supply chain environment. For a deeper dive into these modules and capabilities, check out their website or reach out to them for a demo. This is a recommendation for ANY mid-sized or larger manufacturing (related) organization looking for a truly modern supply chain planning solution.