Category Archives: Spend Analysis

You Don’t Need Gen-AI to Revolutionize Procurement and Supply Chain Management — Classic Analytics, Optimization, and Machine Learning that You Have Been Ignoring for Two Decades Will Do Just Fine!

Open Gen-AI technology may be about as reliable as a career politician managing your Nigerian bank account, but somehow it's won the PR war (since there is no longer any requirement to speak the truth or state actual facts in sales and marketing in most "first" world countries [where they believe Alternative Math is a real thing … and that's why they can't balance their budgets, FYI]), as every Big X, Mid-Sized Consultancy, and the majority of software vendors push Open Gen-AI as the greatest revolution in technology since the abacus. the doctor shouldn't be surprised, given that most of the turkeys in their rafters can't even do basic math* (yet profess to deeply understand this technology) and thus believe the hype (and downplay the serious risks, which we summarized in this article, where we didn't even mention the quality of the results when you unexpectedly get a result that doesn't exhibit any of the six major issues).

The Power of Real Spend Analysis

If you have a real Spend Analysis tool, like Spendata (The Spend Analysis Power Tool), simple data exploration will find you a 10% or greater savings opportunity in just a few days (well, maybe a few weeks, but that's still a matter of days, not months). It's one of only two technologies that has been demonstrated, when properly deployed and used, to identify returns of 10% or more, year after year after year, since the mid 2000s (when the technology wasn't nearly as good as it is today), and it can be used by any Procurement or Finance Analyst who has a basic understanding of their data.

When you have a tool that will let you analyze data around any dimension of interest — supplier, category, product — restrict it to any subset of interest — timeframe, geographic location, off-contract spend — and roll up, compare against, and drill down by variance — the opportunities you will find will be considerable. Even in the best-sourced top spend categories, you'll usually find 2% to 3%; in the mid-spend, likely 5% or more; in the tail, likely 15% or more … and that's before you identify unexpected opportunities by division (that isn't adhering to the new contracts), geography (where a new local supplier can slash transportation costs), product line (where subtle shifts in pricing — and yes, real spend analysis can also handle sales and pricing data — lead to unexpected sales increases and greater savings when you bump your orders to the next discount level), and even in warranty costs (when you identify that a certain supplier location is continually delivering low-quality goods compared to its peers). A minimal sketch of this kind of exploration follows.
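The slice, roll up, and compare workflow described above can be approximated in a few lines of pandas. This is only an illustrative sketch: the tiny data set and every column name (supplier, category, on_contract, unit_price, amount) are invented for the example, not Spendata's schema or API.

```python
import pandas as pd

# Hypothetical AP extract: one row per invoice line (all names illustrative).
df = pd.DataFrame({
    "supplier":    ["ACME", "ACME", "Initech", "Globex", "Globex", "Initech"],
    "category":    ["MRO",  "MRO",  "MRO",     "IT",     "IT",     "IT"],
    "on_contract": [True,   False,  False,     True,     True,     False],
    "unit_price":  [10.0,   12.5,   14.0,      900.0,    880.0,    1100.0],
    "amount":      [1000,   625,    1400,      9000,     4400,     2200],
})

# Restrict to a subset of interest: off-contract spend only.
off = df[~df["on_contract"]]

# Roll up: off-contract spend by category and supplier.
print(off.groupby(["category", "supplier"])["amount"].sum())

# Drill down by variance: the price spread between the cheapest and
# priciest supplier in each category flags consolidation opportunities.
spread = (df.groupby(["category", "supplier"])["unit_price"].mean()
            .groupby("category").agg(["min", "max"]))
spread["gap_pct"] = (spread["max"] - spread["min"]) / spread["max"] * 100
print(spread.sort_values("gap_pct", ascending=False))
```

On a real transaction file the same half-dozen lines surface exactly the division, geography, and product-line outliers described above; only the grouping columns change.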

And that’s just the Procurement spend … it can also handle the supply chain spend, logistics spend, warranty spend, utility and HR spend — and while you can’t control the HR spend, you can get a handle on your average cost by position by location and possibly restructure your hubs during expansion time to where resources are lower cost! Savings, savings, savings … you’ll find them ’round the clock … savings, savings, savings … analytics rocks!

The Power of Strategic Sourcing Decision Optimization

Decision optimization has been around in the Procurement space for almost 25 years, but it still has less than 10% penetration! This is utterly abysmal. It's not only the only other technology that has been generating returns of 10% or more, in good times and bad, for any leading organization that consistently uses it, but the only technology that the doctor has seen consistently generate 20% to 30% savings opportunities on large, complex, multi-national categories that just can't be solved with an RFQ and a spreadsheet, no matter how hard you try. (An expert consultant will still claim they can do it with the old college try, if you pay their top analyst's salary for a few months … and at, say, 5K a day, there goes three times any savings they identify.)

Examples where the doctor has repeatedly seen stellar results include:

  • national service provider contract optimization across national, regional, and local providers where rates, expected utilization, and all-in costs for remote resources are considered. With just an RFX solution, the usual approach is to go to all the relevant Big X and Mid-Sized Bodyshops and get their rate cards by role by location, with both a base rate (expenses picked up by the org) and an all-in rate; calculate the expected local overhead rate by location; then, for each provider, role, and location, determine whether the all-in rate or the base rate plus overhead is cheaper, and select that as the final bid for analysis; then mark the lowest bid for each role-location pair, determine the three top providers, and distribute the award between those three "top" providers in the lowest-cost fashion. In big companies using a lot of contract labour, this leaves millions on the table because 1) sometimes the cheapest three overall will actually be the providers with middle-of-the-road bids across the board and 2) for some areas/roles, regional, and definitely local, providers will often be cheaper — but since the complexity is beyond manageable, this isn't done, even though the doctor has seen multiple real-world events generate 30% to 40% savings, since optimization can handle hundreds of suppliers and tens of thousands of bids and find the perfect mix (even while limiting the number of global providers and the number of providers who can service a location); a toy model of this kind of award problem appears just below the list
  • global mailer / catalog production —
    paper won't go away, and when you have to balance inks, papers, printing, distribution, and mailing — it's not always local production, or one country in a region, that minimizes costs; it's a very complex joint sourcing AND logistics distribution problem that must be optimized as a whole … and the real-world model gets dizzying fast unless you use optimization, which will find 10% or more in savings beyond your current best efforts
  • build-to-order assembly — don’t just leave that to the contract manufacturer, when you can simultaneously analyze the entire BoM and supply chain, which can easily dwarf the above two models if you have 50 or more items, as savings will just appear when you do so

… and yet, because it's "math", it doesn't get used, even though you don't have to do the math — the platform does!
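To make the service-provider example above concrete, here is a minimal mixed-integer sketch in Python with PuLP. All of the bid data, rates, demands, and the two-supplier cap are invented for illustration; a real event would have hundreds of suppliers and tens of thousands of bids, but the shape of the model is the same.

```python
# pip install pulp
import pulp

# Hypothetical bid data: (supplier, role, location) -> all-in hourly rate.
bids = {
    ("BigX1", "dev", "NYC"): 145, ("BigX2", "dev", "NYC"): 150,
    ("Local1", "dev", "NYC"): 120, ("BigX1", "qa", "NYC"): 110,
    ("Local1", "qa", "NYC"): 95,   ("BigX2", "qa", "NYC"): 105,
}
demand = {("dev", "NYC"): 10_000, ("qa", "NYC"): 4_000}  # hours needed
suppliers = sorted({s for s, _, _ in bids})
MAX_SUPPLIERS = 2  # cap on the number of awarded providers

prob = pulp.LpProblem("service_award", pulp.LpMinimize)
x = {k: pulp.LpVariable(f"hrs_{'_'.join(k)}", lowBound=0) for k in bids}
used = {s: pulp.LpVariable(f"use_{s}", cat="Binary") for s in suppliers}

# Objective: total cost across all awarded hours.
prob += pulp.lpSum(bids[k] * x[k] for k in bids)

# Cover every role-location demand exactly.
for (role, loc), hrs in demand.items():
    prob += pulp.lpSum(x[k] for k in bids if k[1:] == (role, loc)) == hrs

# Hours flow only through suppliers we turn on; cap the supplier count.
big_m = sum(demand.values())
for s in suppliers:
    prob += pulp.lpSum(x[k] for k in bids if k[0] == s) <= big_m * used[s]
prob += pulp.lpSum(used.values()) <= MAX_SUPPLIERS

prob.solve(pulp.PULP_CBC_CMD(msg=False))
for k, v in x.items():
    if v.value() > 0:
        print(k, v.value())
```

With the cap in place, the solver may award a role to a slightly pricier provider when that keeps the overall mix cheapest, which is exactly the cross-bid trade-off that a spreadsheet ranking of lowest bids misses.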

Curve Fitting Trend Analysis

Dozens (and dozens) of "AI" models have been developed over the past few years to provide you with "predictive" forecasts, insights, and analytics, but guess what? Not a SINGLE model has outdone classical curve-fitting trend analysis — and NOT a single model ever will. (This is because all these fancy-schmancy black-box solutions do is attempt to identify the record/transaction "fingerprint" that contains the most relevant data and then attempt to identify the "curve" or "line" to fit it to, all at once, which means their upper bound is a classical model that uses the right data and fits the right curve from the beginning, without wasting an entire power plant's worth of energy running data centers while the algorithm repeatedly guesses random fingerprints and models until one seems to work well.)

And the reality is that these standard techniques (which have been refined since the 60s and 70s), which now run blindingly fast on large data sets thanks to today's computing, can achieve 95% to 98% accuracy in some domains, with no misfires. A 95% accurate forecast on inventory, sales, etc. is pretty damn good and minimizes the buffer stock, and lead time, you need. Detailed, fine-tuned correlation analysis can accurately predict the impact of sales and industry events. And so on.
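Classical curve fitting really is this simple. The sketch below, with made-up monthly demand numbers and an assumed trend-plus-seasonality curve shape, fits the candidate model by least squares and projects the next quarter; scipy.optimize.curve_fit does all the work.

```python
import numpy as np
from scipy.optimize import curve_fit

# Hypothetical monthly demand history (units shipped), 24 months.
months = np.arange(24)
rng = np.random.default_rng(0)
demand = (1000 + 12 * months
          + 80 * np.sin(2 * np.pi * months / 12)
          + rng.normal(0, 15, 24))

# Candidate model chosen up front: linear trend plus annual seasonality.
def model(t, base, slope, amp, phase):
    return base + slope * t + amp * np.sin(2 * np.pi * t / 12 + phase)

params, _ = curve_fit(model, months, demand, p0=[1000, 10, 50, 0])

# Forecast the next quarter from the fitted curve.
future = np.arange(24, 27)
print(np.round(model(future, *params)))
```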

Going one step further, there exists a host of clustering techniques that can identify emergent trends in outlier behaviour as well as pockets of customers or demand. And so on. But chances are you aren’t using any of these techniques.
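For instance, density-based clustering flags the transactions that do not belong to any pocket of normal behaviour. A tiny sketch with scikit-learn, on an invented feature matrix:

```python
import numpy as np
from sklearn.preprocessing import StandardScaler
from sklearn.cluster import DBSCAN

# Hypothetical feature matrix: one row per transaction
# (unit price, order quantity, lead time in days).
X = np.array([[10.0, 500, 14], [10.2, 480, 15], [9.9, 510, 13],
              [10.1, 495, 14], [22.5, 40, 45], [10.0, 505, 14]])

# Scale features, then cluster; label -1 marks outliers, other labels
# mark dense pockets of similar behaviour worth a closer look.
labels = DBSCAN(eps=0.8, min_samples=2).fit_predict(
    StandardScaler().fit_transform(X))
print(labels)  # the fifth row should stand out as -1 (an outlier)
```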

So given that most of you haven't adopted any of this technology that has proven to be reliable, effective, and extremely valuable, why on earth would you want to adopt an unproven technology that hallucinates daily, might tell off your sensitive employees with hate speech, and might even leak your data? It makes ZERO sense!

While we admit that someday semi-private LLMs will be an appropriate solution for certain areas of your business where a large amount of textual analysis is required on a regular basis, even these are still iffy today and can't always be trusted. And the doctor doesn't care how slick that chatbot is, because if you have to spend days learning how to expertly craft a prompt just to get a single result, you might as well just learn to code and use a classic open-source neural net library — you'll get better, more reliable results faster.

Keep an eye on the tech if you like, but nothing stops you from using the tech that works. Let your peers be the test pilots. You really don’t want to be in the cockpit when it crashes.

* And if you don't understand why a deep understanding of university-level mathematics, preferably at the graduate level, is important, then you shouldn't be touching the turkey who touches the Gen-AI solution with a 10-foot pole!

Spendata: The Power Tool for the Power Spend Analyst — Now Usable By Apprentices as Well!

We haven't covered Spendata much on Sourcing Innovation (SI), as it was only founded in 2015. the doctor did a deep dive review on Spend Matters in 2018 when it launched (Part I and Part II, ContentHub subscription required), as well as a brief update here on SI where we said Don't Throw Away that Old Spend Cube, Spendata Will Recover It For You!. the doctor did pen a 2020 follow-up on Spend Matters on how Spendata was Rewriting Spend Analysis from the Ground Up, and that was the last major coverage. And even though the media has been a bit quiet, Spendata has been working as diligently on platform improvement over the last four years as it did over the first four, and just released Version 2.2 (with a few new enhancements in the queue that it will roll out later this year). (Unlike some players, which like to tack on a whole new version number after each minor update or mini-module inclusion, Spendata only does a major version update when it does considerable revamping and expansion, recognizing that most vendors only rewrite their solution from the ground up to be better, faster, and more powerful once a decade, and that every other release is just an iteration of, and incremental improvement on, the last one.)

So what’s new in Spendata V 2.2? A fair amount, but before we get to that, let’s quickly catch you up (and refer you to the linked articles above for a deep dive).

Spendata was built upon a post-modern view of spend analysis where a practitioner should be able to take immediate action on any data she can get her hands on, whenever she can get her hands on it, and derive whatever insights she can for process (or spend) improvement. You never have perfect data, and waiting until Duey, Clutterbuck, and Howell1 get all your records in order just to run your first report (when you have a dozen different systems to integrate data from, multiple data formats to map, millions of records to classify, cleanse, and enrich, and third-party data feeds to integrate) will take many months, if not a year. During that year, while you quest for the mythical perfect cube, you will continue to lose 5% to process waste, abuse, and fraud, and 3% to 15% (or more) across spend categories where you don't have good management, but where you could stem the flow simply by identifying the categories and putting a few simple rules or processes in place.

And you can identify some of these opportunities simply by analyzing one system, one category, and one set of suppliers. And then moving on to the next one. In the process, Spendata automatically creates and maintains the underlying schema as you slowly build up the dimensions; the mapping, cleansing, and categorization rules; and the basic reports and metrics you need to monitor spend and processes. And maybe you can only do 60% to 80% piecemeal, but during that "piecemeal year" you can identify over half of your process and cost savings opportunities and start saving now, versus waiting a year to even start the effort. When it comes to spend (related) data analysis, no adage is more true than "don't put off until tomorrow what you can do today" with Spendata, because, especially when you start, you don't need complete or perfect data … you'd be amazed how much insight you can get with 90% of a system or category mapped, and then, if the data is inconclusive, you can keep drilling and mapping until you get into the 95% to 98% accuracy range.

Spendata was also designed from the ground up to run locally and entirely in the browser, because no one wants to wait for an overburdened server across a slow internet connection, and to do so in real time … and by that we mean do real analysis in real time. Spendata can process millions of records a minute in the browser, which allows for real-time data loads, cube definitions, category re-mappings, dynamically derived dimensions, roll-ups, and drill-downs on any well-defined data set of interest. (Since most analysis should be department-level, category-level, regional, etc., and over a relevant time span, which should not include every transaction for the last 10 years, because beyond a few years it's only the quarter-over-quarter or year-over-year totals that stay relevant, most data sets for meaningful analysis, even for large companies, are under a few million transactions.) The goal was to overcome the limitations of the first two generations of spend analysis solutions, where the user was limited to drilling around in, and deriving summaries of, fixed (R)OLAP cubes, and instead allow a user to define the segmentations they want, the way they want, on existing or newly loaded (or enriched, federated) data in real time. Analysis is NOT a fixed report; it is the ability to look at data in various ways until you uncover an inefficiency or an opportunity. (Nor is it simply throwing a suite of AI tools against a data set — these tools can discover patterns and outliers, but still require a human to judge whether a process improvement can be made or a better contract secured.)

Spendata was built as a third generation spend analysis solution where

  • data can be loaded and processed at any point of the analysis
  • the schema is developed and modified on the fly
  • derived dimensions can be created instantly based on any combination of raw and previously defined derived dimensions
  • additional datasets from internal or external sources can be loaded as their own cubes, which can then be federated and (jointly) drilled for additional insight
  • new dimensions can be built and mapped across these federations that allow for meaningful linkages (such as commodities to cost drivers, savings results to contracts and purchasing projects, opportunities by size, complexity, or ABC analysis, etc.)
  • all existing objects — dimensions, dashboards, views (think dynamic reports that update with the data), and even workspaces can be cloned for easy experimentation
  • filters, which can define views, are their own objects, can be managed as their own objects, and can be, through Spendata‘s novel filter coin implementation, dragged between objects (and even used for easy multi-dimensional mapping)
  • all derivations are defined by rules and formula, and are automatically rederived when any of the underlying data changes
  • cubes can be defined as instances of other cubes, and automatically update when the source cube updates
  • infinite scrolling crosstabs with easy Excel workbook generation on any view and data subset for those who insist on looking at the data old school (as well as "walk downs" from a high-level "view" to a low-level drill-down that demonstrate precisely how an insight was found)
  • functional widgets which are not just static or semi-dynamic reporting views, but programmable containers that can dynamically inject data into pre-defined analysis and dimension derivations that a user can use to generate what-if scenarios and custom views with a few quick clicks of the mouse
  • offline spend analysis is also available, in the browser (cached) or on Electron.js (where the latter is preferred for Enterprise data analysis clients)

Furthermore, with reference to all of the above, analyst changes to the workspace, including new datasets, new dashboards and views, new dimensions, and so on, are preserved across refresh. This is Spendata's "inheritance" capability, which allows individual analysts to create their own analyses and have them automatically updated with new data, without losing their work …

… and this was all in the initial release. (Which, FYI, no other vendor has yet caught up to. NONE of them have full inheritance or Spendata‘s security model. And this was the foundation for all of the advanced features Spendata has been building since its release six years ago.)

After that, as per our updates in 2018 and 2020, Spendata extended their platform with:

  • Unparalleled Security — as the Spendata server is designed to download ONLY the application to the browser, or Spendata‘s demo cubes and knowledge bases, it has no access to your enterprise data;
  • Cube subclassing & auto-rationalization — power users can securely setup derived cubes and sub-cubes off of the organizational master data cubes for the different types of organizational analysis that are required, and each of these sub-cubes can make changes to the default schema/taxonomy, mappings, and (derived) dimensions, and all auto-update when the master cube, or any parent cube in the hierarchy, is updated
  • AI-Based Mapping Rule Identification from Cube Reverse Engineering — Spendata can analyze your current cube (or even a report of vendor by commodity from your old consultant) and derive the rules that were used for mapping, which you can accept, edit, or reject. We all know black-box mapping doesn't work (no matter how much retraining you do, every "fix" suddenly causes an older transaction to be misclassified), but generating the right rules, which can be human-understood and human-maintained, guarantees 100% correct classification 100% of the time (a toy illustration of rule extraction follows this list)
  • API access to all functions, including creating and building workspaces, adding datasets, building dimensions, filtering, and data export. All Spendata functions are scriptable and automatable (as opposed to BI tools with limited or nonexistent API support for key functions around building, distributing, and maintaining cubes).
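Spendata's actual reverse-engineering method isn't public, but the general idea of recovering human-readable rules from already-classified data can be illustrated with open tools: fit a shallow decision tree to the classified transactions and dump its branches as if-then rules an analyst can audit and edit. Everything below (the sample descriptions, categories, and depth limit) is invented for the sketch.

```python
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.tree import DecisionTreeClassifier, export_text

# Hypothetical pre-classified transactions: description -> category.
descriptions = ["laptop docking station", "toner cartridge",
                "laptop 14 inch", "copier toner", "usb-c dock"]
categories = ["IT", "Office", "IT", "Office", "IT"]

# Bag-of-words features, then a deliberately shallow tree.
vec = CountVectorizer()
X = vec.fit_transform(descriptions)
tree = DecisionTreeClassifier(max_depth=3, random_state=0).fit(X, categories)

# Dump the branches as if-then rules a human can read and maintain.
print(export_text(tree, feature_names=vec.get_feature_names_out().tolist()))
```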

However, as we noted in our introduction, even though this put Spendata leagues beyond the competition (as we still haven’t seen another solution with this level of security; cube subclassing with full inheritance; dynamic workspace, cube, and view creation; etc.), they didn’t stop there. In the rest of this article, we’ll discuss what’s new from the viewpoint of Spendata Competitors:

Spendata Competitors: 7 Things I Hate About You

Cue the Miley Cyrus, because if competitors weren't scared of Spendata before, if they understand ANY of this, they'll be scared now (as Spendata is a literal wrecking ball in analytic power). Spendata is now incredibly close to negating entire product lines of not just its competitors, but some of the biggest software enterprises on the planet, and 3.0 may trigger a seismic shift in how people define entire classes of applications. But that's a post for a later day (though it should cue you up for the post that will follow this one on just precisely what Spendata 2.2 really is and can do for you). For now, we're just going to discuss seven (7) of the most significant enhancements since our last coverage of Spendata.

Dynamic Mapping

Filters can now be used for mapping — and as these filters update, the mapping updates dynamically. You can reclassify on the fly, in real time, in a derived cube using any filter coin, including one dragged out of a drill-down in a view. Analysis is now a truly continuous process, as you never have to go back and change a rule, reload data, and rebuild a cube to make a correction or see what happens under a reclassification.

View-Based Measures

Integrate any rolled-up result back into the base cube, on the base transactions, as a derived dimension. While this could be done using scripts in earlier versions, it required sophisticated coding skills. Now, it's almost as easy as a drag-and-drop of a filter coin.
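The nearest open-tooling analogue to a view-based measure is a groupby-transform in pandas: roll a measure up, then push it back down onto every base transaction as a new column. The column names here are, of course, made up for the example.

```python
import pandas as pd

df = pd.DataFrame({
    "supplier": ["A", "A", "B", "B", "B"],
    "amount":   [100, 250, 75, 300, 125],
})

# Roll up total spend per supplier, then push it back down onto
# every base transaction as a derived column.
df["supplier_total"] = df.groupby("supplier")["amount"].transform("sum")

# A derived measure on the derived dimension: each line's share of
# its supplier's total spend.
df["pct_of_supplier"] = df["amount"] / df["supplier_total"] * 100
print(df)
```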

Hierarchical Dashboard Menus

Not only can you organize your dashboards in menus, submenus, and sub-sub menus as needed, but you can easily bookmark drill-downs and add them under a hierarchical menu. This makes it super easy to create point-based walkthroughs that tell a story, and then output them all into a workbook using Spendata's capability to output any view, dashboard, or entire workspace as desired.

Search via Excel

While Spendata eliminates the need for Excel for data analysis, the reality is that Excel is where most organizational data is (unfortunately) stored, how most data is submitted by vendors to Procurement, and where most Procurement professionals are the most comfortable. Thus, in the latest version of Spendata, you can drag and drop groups of cells from Excel into Spendata, and if you drag and drop them into the search field, it auto-creates a RegEx "OR" that maintains the inputs exactly and finds all matches in the cube you are searching against.
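Building that kind of exact-match alternation is a one-liner in most languages; the point is escaping regex metacharacters so pasted values like "Global Widgets (EU)" match literally rather than as patterns. A hypothetical Python sketch:

```python
import re

# Hypothetical values pasted from a column of Excel cells.
cells = ["ACME Corp.", "Global Widgets (EU)", "3M"]

# Escape metacharacters so each value matches literally, then OR them.
pattern = re.compile("|".join(re.escape(c) for c in cells))

for row in ["Invoice from ACME Corp. #123", "PO for Initech"]:
    print(bool(pattern.search(row)), row)
```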

Perfect Star Schema Output

Even though Spendata can do everything any BI tool on the market can do, the reality is that many executives are used to their pretty PowerBI graphs and charts and want to see their (mostly static) reports in PowerBI. So, in order to appease the consultancies that have to support these executives, who are (at least) a generation behind on analytics, Spendata added the ability to output an entire workspace to a perfect star schema (where all keys are unique and numeric), one so good that many users see PowerBI speed up by a factor of almost 10. (As any analyst forced to use PowerBI will tell you, when you give PowerBI data that is NOT in a perfect star schema, it may not even be able to load the data, and its ability to work with non-numeric keys at a speed faster than you remember on an 8088 is nonexistent.)
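For the unfamiliar, a star schema splits a flat table into a fact table of numeric keys plus measures, with one dimension table per attribute. A hypothetical pandas sketch of the transformation:

```python
import pandas as pd

flat = pd.DataFrame({
    "supplier": ["ACME", "Initech", "ACME"],
    "category": ["MRO", "IT", "MRO"],
    "amount":   [100.0, 250.0, 75.0],
})

# One dimension table per attribute, each with a unique numeric key.
dims = {}
for col in ["supplier", "category"]:
    dim = (flat[[col]].drop_duplicates().reset_index(drop=True)
                      .rename_axis(f"{col}_key").reset_index())
    dims[col] = dim
    # Swap the text value in the fact table for the numeric key.
    flat = flat.merge(dim, on=col).drop(columns=col)

fact = flat  # numeric keys + measures only
print(fact)
print(dims["supplier"])
```

Joins on small numeric surrogate keys are what column-store engines like PowerBI's are optimized for, which is where the reported speed-up comes from.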

Power Tags

You might be thinking "tags, so what?". And if you are equating tags with a hashtag or a dynamically defined user attribute, then we understand. However, Spendata has completely redefined what a tag is and what you can do with it. The best way to understand a Spendata tag is as a Microsoft Excel cell on steroids. It can be a label. It can be a replica of a value in any view (that dynamically updates if the field in the view updates). It can be a button that links to another dashboard (or a bookmark to any drill-down filtered view in that dashboard). Or all of the above. Or, in the next Spendata release, a value that forms the foundation for new derivations and measures in the workspace, just as you can reference an arbitrary cell in an Excel function. In fact, using tags, together with the seventh new capability of Spendata, you can already build the kind of sophisticated on-the-fly what-if analyses that many providers have to custom-build in their core solutions (taking weeks, if not months, to do so), and usually do it in hours (at most).

Embedded Applications

In the latest version of Spendata, you can embed custom applications into your workspace. These applications can contain custom scripts, functions, views, dashboards, and even entire datasets that can be used to instantly augment the workspace with new analytic capability, and if the appropriate core columns exist, even automatically federate data across the application datasets and the native workspace.

Need a custom set of preconfigured views and segments for that ABC analysis? No sweat, just import the ABC Analysis application. Need to do a price variance analysis across products and geographies, along with category summaries? No problem. Just import the Price Variance and Category Analysis application. Need to identify opportunities for renegotiation post M&A, cost reduction through supply base consolidation, and new potential tail-spend suppliers? No problem, just import the M&A Analysis app into the workspace for the company under consideration and let it do a company A vs. B comparison by supplier, category, and product; generate the views where consolidation would more than double supplier spend, or where more than 100K could be saved by switching a product from a current supplier to a lower-cost supplier; and identify opportunities for bringing on new tail-spend suppliers based upon potential cost reductions. All with one click. Not sure just what the applications can do? Start with the demo workspaces and apps, define your needs, and if the apps don't exist in the Spendata library, a partner can quickly configure a custom app for you.

And this is just the beginning of what you can do with Spendata. Because Spendata is NOT a Spend Analysis tool. That's just something it happens to do better than any other analysis tool on the market (in the hands of an analyst willing to truly understand what it does and how to use it — although with apps, drag-and-drop, and easy formula definition through wizardly pop-ups, it's really not hard to learn how to do more with Spendata than with any other analysis tool).

But more on this in our next article. For The Times They Are a-Changin’.

1 Duey, Clutterbuck, and Howell keeps Dewey, Cheatem, and Howe on retainer … it’s the only way they can make sure you pay the inflated invoices if you ever wake up and realize how much you’ve been fleeced for …

One of these things is not like the other — it’s the right choice!

Note the Sourcing Innovation Editorial Disclaimers, and note that this is a very opinionated rant! Your mileage will vary! (And it's not about any firm in particular.)

Three bids for that spend analytics project from the three leading Big X firms come in at 1 Million. One bid for that spend analytics project from a specialized niche consultancy you pulled out of the hat for bid diversity comes in at 250 Thousand. Which one is right?

Those of you who only partially paid attention to the education Sesame Street was trying to impart upon you when you were growing up will simply remember the "one of these things is not like the other" song and think that the bids from the Big X firms are right and the niche consultancy's is wrong because it's different, and therefore must be thrown out because it's too low, when, in fact, it's just as likely that it's the three bids from the Big X firms that are wrong and the bid from the niche consultancy that's right.

Those of us who paid attention knew that Sesame Street was trying to show us how to detect underlying similarities so we could properly cluster objects for further analysis. What we should have learned is that the Big X bids were all the same, built on the same assumptions, and can be compared with each other. The outlier bid needed further investigation — a further investigation that can only be undertaken against an appropriately sized sample set of bids from other specialized niche consultancies. And without that sample set of bids, you can't properly evaluate the lower bid, which, the doctor can tell you, is just as likely to be closer to correct than the possibly wildly overpriced Big X bids. (Newer firms often have newer tech and methods — and if these are the right methods and tech for your problem … )

As per our recent post, if you want to get analytics and AI right, most of these guys don't have the breadth and depth of expertise they claim to have (as most don't have the educational background to know just how broad, deep, and advanced AI and analytics can get, especially when you dig deep into the math and computer science and all of the various models and their strengths and weaknesses, and are instead trained on what is essentially marketing content from AI and analytics providers). In the group that sells to you, there will be a leader who is a true expert (and worth his or her weight in platinum), a few handpicked lieutenants who are above average and run the projects, and a rafter of juniors straight out of private college with more training in how to dress, talk, and follow orders than in actual analytics … with no guarantee they have any real university-level mathematics beyond basic analysis in operations research (and thus a knowledge of what analytics is and isn't and can and can't do). And unless you know what you need, and why, you can't judge the response. (Furthermore, you can't expect them to figure out your problem and goals with only partial information!)

While there was a time big analytics projects were (multi) million dollar projects, that was twenty years ago, when Spend Analysis 1.0 was still hitting the market; when there were limited tools for data integration, mapping, cleansing, and enrichment; and when there weren't a lot of statistics on average savings opportunities across internal and external spend categories. Now we have mature Spend Analysis 3.0 technologies (some taking steps toward Spend Analysis 4.0); advanced technologies for automatic data integration, mapping, cleansing, and even enrichment; deep databases on projects and results by vertical and industry size; extensive libraries for out-of-the-box analytics across categories and potential opportunities; and a whole toolkit for spend analysis that didn't exist two decades ago. This new toolkit, built by best-of-breed vendors and used, and sometimes [co-]owned, by best-of-breed niche consultancies (that don't try to do everything, and definitely don't pretend they can), allows modern spend analysis projects to be done ten times as efficiently and effectively in the hands of a master — a master who isn't necessarily on your project if you hire a Big X or Mid-Sized Consultancy without doing your homework, vetting the proposal, and vetting the people. [See when should you be using Big X.]

In contrast, a dedicated niche consultancy should have all these tools, and only have masters on the project who do these projects day in and day out. Compare that to the bigger consultancies that don't specialize in these projects, which will field a team of juniors using the manual playbook from the early 2000s, with one lieutenant to guide them. That's often why their project bids are five times as much — and why you should be inviting multiple niche best-of-breed consultancies to bid on your project as well as multiple Big X consultancies (including those that are truly focusing on analytics and AI; you can identify some of these by their recent acquisitions in the area), and focusing just as much on the six-figure bids, looking for the one that provides the best value, as on the seven-figure Big X bids. (And, FYI, if you invite enough Big X firms, you might find some come in at six figures and not seven, because they have acquired the newer tech, took the time to understand your request, and figured out how they could get you the same value for less cost, leaving you funds for the follow-on project where you should consider the Big X!)

(This is also the case for implementations. The Big X always have a rafter on the bench to assign to any project you give them, but there's no guarantee any of them have ever implemented the system you chose before, or, if they did, no guarantee they've ever connected it to the systems you need to connect to. You need specialists if you want a new system implemented as cost-effectively as possible, especially if it's a narrowly focused specialist application and not a big enterprise application the Big X always implement. At the end of the day, it's worth paying those specialists 500 or more an hour, because getting a system up in 2 months at 40K is considerably better than a small team of juniors taking 4 months at 200 an hour for a total cost of 80K. But again, mileage will vary — if the solution you select is a Big X partner, then the Big X will be best. If it's a solution they've never heard of, you will need to evaluate multiple bids from multiple parties.)

Remember, where any group of vendors on the same page are concerned, All of us is as dumb as One of us!

Don't fall for the Collectivism MindF6ck! of believing that if multiple parties agree on something, that's the right answer! the doctor does NOT want to say it again, but since a month still does not go by without him hearing about niche consultancies being thrown out for "being too cheap" or "obviously not understanding the problem" (which means the enterprise throwing them out is too uninformed to recognize that the Big X bids could just as likely be the outliers, because it isn't inviting enough expert consultancies to the table), apparently he has to keep writing (and screaming) this truth. (the doctor isn't saying that you can't get a million dollars of value from some of these consultancies, just that you won't by giving them a project they are not suited for; again, see when should you use Big X to identify when that million dollar project will generate a five million ROI — it's people doing these projects at the end of the day, and where are those people?)

Remember, most of these firms got big in management, or accounting and tax, or marketing and sales consulting, not technology consulting. The only reason these big consultancies started offering these services is the amount of money flowing into technology, money which they want; but while the best of the best of the best in the more traditional accounting, management, and marketing fields flocked to them, the best of the best in technology flocked to startups and c00l big tech firms. Now, some of these firms doubled down, went and recruited those people, built small teams, learned, bought tech companies to expand the team, and now have great offerings in a number of areas. But we have tens of thousands of tech companies for a reason: not everyone can build every type of technology, and not everyone can be an expert in every type of technology. So while they will have expertise in some areas, they just can't have expertise in all areas. No one can. Find the best provider for you. Sometimes it will be Big X. Sometimes Mid-Market. Sometimes Niche. It all depends on your problem at hand.

And yes, sometimes the niche vendor will be wrong and woefully undersize the project or your needs. But as per the above, if you don't give them a chance, and deep dive into their bid, how will you know?

 

Did you ever try eating a mitten? the doctor bets some of those clients did! (He feels you're not all there if you think glorified reporting projects should still cost One Million Dollars by default, and you might actually try to eat your mittens! [Joking, but you get the point.] Deep analytics projects that require the most advanced tech, especially AI tech, will cost a lot, but standard spend analysis, sales analysis, etc., where we have been iterating on and improving the technology for two decades, should not.)

Source-to-Pay+ Part 8: Analytics / Control Center

In Part 1 we noted that Risk Management went much beyond Supplier Risk, and the primitive Supplier “Risk” Management application that is bundled in many S2P suites. Then, in Part 2, we noted that there are risks in every supply chain entity; with the people and materials used; and with the locales they operate in. In Part 3 we moved onto an overview of Corporate Risk, in Part 4 we took on Third Party Risk (in Part 4A and Part 4B), in Part 5 we laid the foundation for Supply Chain Risk (Generic), in Part 6 we addressed the first major supply chain risk: in-transport, followed by the second major supply chain risk: lack of multi-tier visibility in Part 7.

In almost every article to date, we’ve highlighted that a key aspect of every risk management system is good analytics, and, in particular, a good control centre to manage the data, the analytics, and the insights gained from the analytics (as well as the plans created around those insights).

Graph (Analytics) Support

Standard analytics based on numeric data is not enough. As we have illustrated throughout this series, risk is more than numbers, roll-ups of numbers, and trends on numbers. Risk is relationships, risk is connections, risk is propagation, risk is feedback. You have to be able to track the impacts across chains that span entities, geography, and time.

The risk application must natively support graphs, graph algorithms, and graph analytics. It must be able to count the number of impacted nodes up and down a BoM, multiple BoMs, a chain, and multiple chains. From this, it must be able to calculate the impact of a delay, a shortage, or a catastrophic failure based on BoM requirements, production times, costs, and margins.
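As a toy illustration of the propagation count this implies, here is a sketch with networkx on an invented two-product BoM; a real platform would also weight each node by volumes, costs, and margins.

```python
import networkx as nx

# Hypothetical multi-tier BoM: edges point from a component to
# the assembly that consumes it.
bom = nx.DiGraph([
    ("resin", "housing"), ("housing", "pump"),
    ("motor", "pump"), ("pump", "dialysis_machine"),
    ("motor", "compressor"), ("compressor", "chiller"),
])

# A failure at "motor" propagates to everything reachable downstream.
impacted = nx.descendants(bom, "motor")
print(len(impacted), sorted(impacted))
# -> 4 ['chiller', 'compressor', 'dialysis_machine', 'pump']
```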

Multi-level Metrics and Trend Analysis

Even though graph analytics is key for supply chain risk analysis, good old-fashioned metrics and KPIs are still key for analyzing risk potential at a point in time, and over time based on changes (and comparison to past trends that have led to risk and failure). For example, an increase in delivery times on every shipment, decreasing raw material supplies going into a source supplier that provides a refined version of that raw material, increasing failure rates in key components, etc. all indicate increased risk.

The application must support the definition of metrics based on arbitrary formulas, roll-ups, and drill-downs. It should also support basic trend analysis, allowing for comparison between time periods, similar trends, and historical trends of interest. It should also be capable of projecting the trend for an arbitrary time period into the future based upon the current trend progression and the most likely continuation based upon correlation with similar and historical trends.
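Even the projection requirement can be grounded in something as simple as a least-squares trend line; the KPI history below is invented for the sketch.

```python
import numpy as np

# Hypothetical KPI history: average delivery delay (days) per month.
delay = np.array([2.1, 2.3, 2.2, 2.6, 2.9, 3.1])
t = np.arange(len(delay))

# Fit a line and project three periods ahead; a sustained upward
# slope is the trend-of-concern signal described above.
slope, intercept = np.polyfit(t, delay, 1)
future = slope * np.arange(len(delay), len(delay) + 3) + intercept
print(round(slope, 2), np.round(future, 2))
```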

Real-time Data Monitoring & Automation

The application needs to integrate with third-party data feeds, get (near) real-time updates, update all of the metrics the data relates to, monitor the changes against alerts, update the trends, and determine if any updates indicate trends of interest, significance, or concern. This all needs to happen automatically.

The application must support an open API, support standard data formats, be aware of standard data records used in direct supply chain, integrate with third party data feeds for all types of supply chain (risk) data out of the box, and be able to normalize all of this data into a standard data store (warehouse, lake, lakehouse, etc.). It must support rules-based alerts, integrations, monitors, and workflows to allow for appropriate automation support.

Mitigation Plans

The platform must support the definition of mitigation plans, with individual actions, objectives, and impacts. Mitigation plans should support multiple stages, actions should support detailed definitions and expected outcomes, objectives should support a metric-based definition, and impacts should support detailed cost definitions.

It should be easy to instantiate an instance of a plan when a risk event is detected or defined by a user, track updates in real time as new data comes in or users define new data, track the impact of a recovery action (if it decreases the time to recovery, etc.), and auto-generate progress reports on a regular basis, as well as roll up all of the impacts, and recoveries, for users who need it. It should also support the creation of what-if scenarios to calculate the potential impacts of a potential action (in a given timeframe), and allow for cost vs impact vs margin/profit improvement calculations to help an organization determine if the action could be worth it, especially if the associated chance of success is limited.

Surveys

The platform also needs to support the creation of surveys that can be distributed to multiple parties up and down the chain to collect data for analysis purposes.

The surveys must be capable of collecting numeric, type-valued, and open-valued data, as required.

There are NO Perils of Big Data in Procurement!

First of all, no organization has enough data, and those that come close don’t have big data.

Secondly, the more data you have, the better.

Third, if you think you have too much data, you’re not getting it!

So where's this rant coming from? The rant-inducing headline du jour. CIO Review recently published an article on The Perils of Big Data in Procurement, which is complete nonsense, as there are no perils to having more data (because there's never enough), unless it's bad data (but the assumption in the article was that all the data was correct); there are just perils in terms of how that data is presented and accessed.

The perils in terms of how that data is presented and accessed can be significant, but that’s not due to having big data, that is due to poor system design — and that’s a different issue!

According to the article, buyers and procurement managers … have available a huge and unprecedented amount of data … [and] start to measure everything in order to manage it, and, with this approach, several data lakes are created, feeding various dashboards, scorecards, reports, and metrics as procurement professionals try to understand spend analysis, price trends, market fluctuations, volume, cost savings, negotiation performance, and other essential factors. And this is true.

It goes on to say that it is very easy for a person to get lost in the sea of numbers and details and miss the big picture entirely because you don't know which data is crucial and would give you critical insights. And if that wasn't enough, it goes on to compare this to someone who enters the hospital with a broken leg but has everything else checked. WTF?

This is so dumb it makes you angry!

  1. If a person gets lost in the sea of details and numbers it’s because they don’t know what they should be looking for and how they should be looking for it, not because there’s too much data.
  2. If they don’t know what is crucial, it’s because they don’t know enough about the project they are doing to identify what’s critical and what’s not.
  3. What health practitioner is going to be so stupid as to not see a broken leg in triage? Come on now! And what Procurement practitioner would check every dashboard but one, at random, and then not check the last remaining dashboard? (And that's what the article is implying with its ridiculous statement.)

In other words, the headline, and claim, is bullcr@p. Don’t blame a mountain of data for a lack of capability in your people, poor vendor technology choices (that bury you in useless dashboards), and your unwillingness to train your talent in modern technology and best practices so they can do their job properly.

And while the author is completely right in that you need to

  • understand what matters
  • start with a top-down view
  • have people who are good at interpreting the data

the article still misses the point in that, for any application you buy and any project you wish to undertake, you need to

  • define what’s relevant up front
  • find a solution that is configured/configurable to show that up front
  • make sure the data is easy to interpret, is accompanied with written guidance, and that your talent is trained on how to properly interpret the data and
  • if the goal is opportunity finding, the solution needs to identify and present the top opportunities across all of the analysis done, with deep supporting dashboards buried under the high level summary dashboard

More data is always better, especially if you want to use machine learning. In other words, it's not the data, it's the application, or the people — so don't blame the data for your organization's shortcomings.