Daily Archives: December 6, 2023

Source-to-Pay+ Part 8: Analytics / Control Center

In Part 1 we noted that Risk Management goes well beyond Supplier Risk and the primitive Supplier “Risk” Management applications bundled into many S2P suites. Then, in Part 2, we noted that there are risks in every supply chain entity, in the people and materials used, and in the locales they operate in. In Part 3 we moved on to an overview of Corporate Risk, in Part 4 we took on Third Party Risk (in Part 4A and Part 4B), in Part 5 we laid the foundation for Supply Chain Risk (Generic), in Part 6 we addressed the first major supply chain risk, in-transport, and in Part 7 the second major supply chain risk, lack of multi-tier visibility.

In almost every article to date, we’ve highlighted that a key aspect of every risk management system is good analytics, and, in particular, a good control centre to manage the data, the analytics, and the insights gained from the analytics (as well as the plans created around those insights).

Graph (Analytics) Support

Standard analytics based on numeric data is not enough. As we have illustrated throughout this series, risk is more than numbers, roll-ups of numbers, and trends on numbers. Risk is relationships, risk is connections, risk is propagation, risk is feedback. You have to be able to track the impacts across chains that span entities, geographies, and time.

The risk application must natively support graphs, graph algorithms, and graph analytics. It must be able to count the number of impacted nodes up and down a BoM, multiple BoMs, a chain, and multiple chains. From this, it must be able to calculate the impact of a delay, a shortage, or a catastrophic failure based on BoM requirements, production times, costs, and margins.
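The counting-and-costing logic described above can be sketched as a simple graph traversal. This is an illustrative sketch, not any vendor's implementation: the edge list, the `impacted_nodes` and `delay_impact` helpers, and the per-node margin map are all hypothetical, and a real system would use a graph database or graph analytics library.

```python
from collections import defaultdict, deque

def impacted_nodes(edges, failed):
    """Walk a BoM graph upward from a failed node and collect every
    assembly that directly or transitively depends on it."""
    # edges: (component, parent_assembly) pairs — the component feeds the parent
    parents = defaultdict(list)
    for component, assembly in edges:
        parents[component].append(assembly)
    seen, queue = set(), deque([failed])
    while queue:
        node = queue.popleft()
        for parent in parents[node]:
            if parent not in seen:
                seen.add(parent)
                queue.append(parent)
    return seen

def delay_impact(impacted, margin_per_day):
    """Rough impact of a one-day delay: sum the margin at risk for each
    impacted node (margin_per_day is a hypothetical node -> cost map)."""
    return sum(margin_per_day.get(n, 0) for n in impacted)

bom = [("resin", "casing"), ("casing", "unit"), ("chip", "unit")]
hit = impacted_nodes(bom, "resin")   # {'casing', 'unit'}
```

The same traversal runs downward (supplier to finished good) or across multiple BoMs by merging their edge lists, which is why native graph support matters more here than tabular roll-ups.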

Multi-level Metrics and Trend Analysis

Even though graph analytics is key for supply chain risk analysis, good old-fashioned metrics and KPIs are still essential for analyzing risk potential at a point in time and over time, based on changes (and comparison to past trends that have led to risk and failure). For example, increasing delivery times across shipments, decreasing raw material supplies into a source supplier that provides a refined version of that raw material, and increasing failure rates in key components all indicate increased risk.

The application must support the definition of metrics based on arbitrary formulas, roll-ups, and drill-downs. It should also support basic trend analysis, allowing for comparison between time periods, similar trends, and historical trends of interest. It should also be capable of projecting a trend an arbitrary time period into the future, based upon the current trend progression and its most likely continuation as indicated by correlation with similar and historical trends.
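The simplest form of the projection described above is a least-squares line fit to a metric's history, extrapolated forward. A minimal sketch, assuming evenly spaced periods; the `project_trend` name is hypothetical, and a real application would also weigh correlated historical trends rather than extrapolate naively.

```python
def project_trend(history, periods_ahead):
    """Fit a least-squares line to a metric's history (one value per
    period) and extrapolate it periods_ahead periods into the future."""
    n = len(history)
    xs = range(n)
    mean_x = sum(xs) / n
    mean_y = sum(history) / n
    # Ordinary least-squares slope and intercept
    slope = sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, history)) \
            / sum((x - mean_x) ** 2 for x in xs)
    intercept = mean_y - slope * mean_x
    return [intercept + slope * (n - 1 + k) for k in range(1, periods_ahead + 1)]

project_trend([10, 12, 14, 16], 2)   # → [18.0, 20.0]
```

Comparing the projected values against thresholds (or against how a similar metric behaved before a past failure) is what turns a trend line into an early-warning signal.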

Real-time Data Monitoring & Automation

The application needs to integrate with third-party data feeds, get (near) real-time updates, update all of the metrics the data relates to, monitor the changes against alerts, update the trends, and determine whether any updates indicate trends of interest, significance, or concern. All of this needs to happen automatically.

The application must support an open API, support standard data formats, be aware of standard data records used in direct supply chains, integrate with third-party data feeds for all types of supply chain (risk) data out of the box, and be able to normalize all of this data into a standard data store (warehouse, lake, lakehouse, etc.). It must support rules-based alerts, integrations, monitors, and workflows to allow for appropriate automation support.
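A rules-based alert layer over normalized metrics can be sketched as below. The `AlertRule` shape (metric name, predicate, severity) and the `evaluate` function are illustrative assumptions; a production control center would route fired alerts into workflows rather than return a list.

```python
from dataclasses import dataclass
from typing import Callable

@dataclass
class AlertRule:
    # Hypothetical rule shape: which metric to watch, when it fires, how bad it is
    metric: str
    predicate: Callable[[float], bool]
    severity: str

def evaluate(rules, metrics):
    """Run every rule against the latest normalized metric values and
    return the (metric, severity) pairs that fire."""
    return [(r.metric, r.severity)
            for r in rules
            if r.metric in metrics and r.predicate(metrics[r.metric])]

rules = [AlertRule("avg_delivery_days", lambda v: v > 14, "warning"),
         AlertRule("component_failure_rate", lambda v: v > 0.02, "critical")]
evaluate(rules, {"avg_delivery_days": 17, "component_failure_rate": 0.01})
# → [('avg_delivery_days', 'warning')]
```

Because the rules are data, not code, users can add or tune monitors without redeploying the application, which is what makes the automation "appropriate" rather than hard-wired.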

Mitigation Plans

The platform must support the definition of mitigation plans, with individual actions, objectives, and impacts. Mitigation plans should support multiple stages; actions should support detailed definitions and expected outcomes; objectives should support a metric-based definition; and impacts should support detailed cost definitions.

It should be easy to instantiate a plan when a risk event is detected or defined by a user; track updates in real time as new data arrives or users define new data; track the impact of a recovery action (e.g., whether it decreases the time to recovery); auto-generate progress reports on a regular basis; and roll up all of the impacts and recoveries for the users who need them. It should also support the creation of what-if scenarios to calculate the potential impact of a proposed action in a given timeframe, with cost vs. impact vs. margin/profit improvement calculations to help an organization determine whether the action is worth taking, especially when the associated chance of success is limited.
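The core of that what-if calculation is an expected-value comparison: the impact the action would avoid, discounted by its chance of success, against its cost. A minimal sketch with a hypothetical function name, assuming cost and impact are in the same currency.

```python
def action_expected_value(cost, impact_avoided, p_success):
    """Expected net benefit of a mitigation action: the impact avoided,
    weighted by the probability the action succeeds, minus the action's
    cost. Positive means the action is worth considering."""
    return p_success * impact_avoided - cost

# Example: a $50K expedite that has a 25% chance of avoiding a $400K stockout
action_expected_value(cost=50_000, impact_avoided=400_000, p_success=0.25)
# → 50000.0
```

Running this across a range of `p_success` values shows how sensitive the decision is to the success estimate, which is exactly where a limited chance of success should give an organization pause.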

Surveys

The platform also needs to support the creation of surveys that can be distributed to multiple parties up and down the chain to collect data for analysis purposes.

The surveys must be capable of collecting numeric, type-valued, and open-valued data, as required.
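The three answer types above map naturally onto a typed question definition with per-type validation. A hypothetical sketch: the `Question` class and its `kind` labels ("numeric", "choice" for type-valued, "open" for open-valued) are illustrative, not a real survey module's API.

```python
from dataclasses import dataclass, field

@dataclass
class Question:
    """A survey question that accepts one of three answer types:
    numeric, type-valued (pick from a list), or open-valued (free text)."""
    prompt: str
    kind: str                      # "numeric" | "choice" | "open"
    choices: list = field(default_factory=list)

    def validate(self, answer):
        if self.kind == "numeric":
            return isinstance(answer, (int, float))
        if self.kind == "choice":
            return answer in self.choices
        return isinstance(answer, str)   # open-valued

q = Question("On-time delivery rate (%)", "numeric")
q.validate(97.5)   # → True
```

Validating at collection time keeps the downstream metrics and trend analysis from being polluted by free-text answers where numbers were expected.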