Category Archives: Spend Analysis

4 Smart Technologies Modernizing Sourcing Strategy — Not Just Doctor Approved!

IBM recently published a great article on 4 smart technologies modernizing sourcing strategies, and it was noteworthy for two reasons. One, they are all technologies that will genuinely improve your sourcing. We’ll explain why.

Automation

Business Process Automation (BPA) — and its narrower cousin, Robotic Process Automation (RPA) — can optimize sourcing workflows as well as procurement workflows. With good categorization, demand forecasting, inventory management, price intelligence, templates, strategies, situational analyses (that qualitatively and quantitatively define when a strategy should be applied), and workflow, you can automate sourcing just as much as you can automate Procurement. You can eliminate all of the tactical work and focus solely on strategic analysis and decision making.

Blockchain

If you need to record information in a manner that can be publicly accessed and verified, such as to ensure that records for traceability can be independently verified, or to publicly record ownership, blockchain is a great technology as it’s ultra-secure. In Sourcing and Procurement, it can be used to track orders, payments, accounts, and more across global supply chains and multiple private and public parties.

Analytics

In addition to providing an organization with deep insights into their spend and (process level) performance, analytics engines and their “big data brains” provide real-time sourcing flexibility and visibility to enhance order management, inventory management, and logistics management. With proper intelligence, sourcing teams can understand and act on changes in the increasingly complex supply chain — as they happen.

AI

When deep data and analytics are paired with AI, the deep insights can improve forecasts, help identify risk, and provide suggestions for management.

And this brings us to the next great aspect of the article. Not once did it mention Gen-AI. Not once. As the doctor has been stating over and over, the classic analytics, optimization and machine learning you have been ignoring for almost two decades will do wonders for your supply chain. (Blockchain is not always necessary, but will help in the right situation.)

You Don’t Need Gen-AI to Revolutionize Procurement and Supply Chain Management — Classic Analytics, Optimization, and Machine Learning that You Have Been Ignoring for Two Decades Will Do Just Fine!

Open Gen-AI technology may be about as reliable as a career politician managing your Nigerian bank account, but somehow it’s won the PR war (since there is no longer any requirement to speak the truth or state actual facts in sales and marketing in most “first” world countries [where they believe Alternative Math is a real thing … and that’s why they can’t balance their budgets, FYI]) as every Big X is pushing Open Gen-AI as the greatest revolution in technology since the abacus. the doctor shouldn’t be surprised, given that most of the turkeys on their rafters can’t even do basic math* (yet profess to deeply understand this technology) and thus believe the hype (and downplay the serious risks, which we summarized in this article, where we didn’t even mention the quality of the results when you unexpectedly get a result that doesn’t exhibit any of the six major issues).

The Power of Real Spend Analysis

If you have a real Spend Analysis tool, like Spendata (The Spend Analysis Power Tool), simple data exploration will find you a 10% or more savings opportunity in just a few days (well, maybe a few weeks, but that’s still just a matter of days). It’s one of only two technologies that has been demonstrated, when properly deployed and used, to identify returns of 10% or more, year after year after year, since the mid 2000s (when the technology wasn’t nearly as good as it is today), and it can be used by any Procurement or Finance Analyst that has a basic understanding of their data.

When you have a tool that will let you analyze data around any dimension of interest — supplier, category, product — restrict it to any subset of interest — timeframe, geographic location, off-contract spend — and roll-up, compare against, and drill down by variance — the opportunities you will find will be considerable. Even in the best sourced top spend categories, you’ll usually find 2% to 3%, in the mid-spend likely 5% or more, in the tail, likely 15% or more … and that’s before you identify unexpected opportunities by division (who aren’t adhering to the new contracts), geography (where a new local supplier can slash transportation costs), product line (where subtle shifts in pricing — and yes, real spend analysis can also handle sales and pricing data — lead to unexpected sales increases and greater savings when you bump your orders to the next discount level), and even in warranty costs (when you identify that a certain supplier location is continually delivering low quality goods compared to its peers).
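The slice-and-drill described above is, at heart, grouped aggregation over transaction records along any dimension, restricted to any subset. A minimal pure-Python sketch of the idea (the field names and figures are invented for illustration, not any vendor’s API):

```python
from collections import defaultdict

# Hypothetical transaction records: (supplier, category, region, amount)
transactions = [
    ("Acme",  "Office Supplies", "US", 1200.0),
    ("Acme",  "Office Supplies", "EU",  800.0),
    ("Bolt",  "Freight",         "US", 4500.0),
    ("Bolt",  "Freight",         "EU", 3100.0),
    ("Crest", "Office Supplies", "US",  950.0),
]

IDX = {"supplier": 0, "category": 1, "region": 2}

def rollup(rows, *dims):
    """Aggregate spend along any combination of dimensions."""
    totals = defaultdict(float)
    for row in rows:
        key = tuple(row[IDX[d]] for d in dims)
        totals[key] += row[3]
    return dict(totals)

def drill(rows, **filters):
    """Restrict to any subset of interest, e.g. region='US'."""
    return [r for r in rows
            if all(r[IDX[k]] == v for k, v in filters.items())]

by_category = rollup(transactions, "category")            # roll up
us_by_supplier = rollup(drill(transactions, region="US"), # drill, then roll up
                        "supplier")
# by_category == {("Office Supplies",): 2950.0, ("Freight",): 7600.0}
```

A real tool adds derived dimensions, federation, and real-time recomputation on top, but the core operations are exactly this composable.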

And that’s just the Procurement spend … it can also handle the supply chain spend, logistics spend, warranty spend, utility and HR spend — and while you can’t control the HR spend, you can get a handle on your average cost by position by location and possibly restructure your hubs during expansion time to where resources are lower cost! Savings, savings, savings … you’ll find them ’round the clock … savings, savings, savings … analytics rocks!

The Power of Strategic Sourcing Decision Optimization

Decision optimization has been around in the Procurement space for almost 25 years, but it still has less than 10% penetration! This is utterly abysmal. It’s not only the only other technology that has been generating returns of 10% or more, in good times and bad, for any leading organization that consistently uses it, but the only technology that the doctor has seen that has consistently generated 20% to 30% savings opportunities on large multi-national complex categories that just can’t be solved with RFQ and a spreadsheet, no matter how hard you try. (But a Big X will still claim they can, with the old college try, if you pay their top analyst’s salary for a few months … and at 5K a day, there goes three times any savings they identify.)

Examples where the doctor has repeatedly seen stellar results include:

  • national service provider contract optimization across national, regional, and local providers, where rates, expected utilization, and all-in costs for remote resources are considered. With just an RFX solution, the usual approach is to go to all the relevant Big X bodyshops and get their rate cards by role by location, with base rate (expenses picked up by the org) and all-in rate; calculate the expected local overhead rate by location; then, for each Big X, role, and location, determine whether the Big X all-in rate or the Big X base rate plus their overhead is cheaper, and select that as the final bid for analysis; then mark the lowest bid for each role-location and determine the three top providers; and then distribute the award between the three “top” providers in the lowest-cost fashion. In big companies using a lot of contract labour, this leaves millions on the table because 1) sometimes the cheapest three will actually be the providers with the middle-of-the-road bids across the board and 2) for some areas and roles, regional, and definitely local, providers will often be cheaper — but since the complexity is beyond manageable by hand, this isn’t done, even though the doctor has seen multiple real-world events generate 30% to 40% savings, since optimization can handle hundreds of suppliers and tens of thousands of bids and find the perfect mix (even while limiting the number of global providers and the number of providers who can service a location)
  • global mailer / catalog production —
    paper won’t go away, and when you have to balance inks, papers, printing, distribution, and mailing — it’s not always local, or one country in a region, that minimizes costs; it’s a very complex sourcing AND logistics distribution problem that optimizes costs … and the real-world model gets dizzying fast unless you use optimization, which will find 10% or more in savings beyond your current best efforts
  • build-to-order assembly — don’t just leave that to the contract manufacturer, when you can simultaneously analyze the entire BoM and supply chain, which can easily dwarf the above two models if you have 50 or more items, as savings will just appear when you do so

… and yet, because it’s “math”, it doesn’t get used, even though you don’t have to do the math — the platform does!
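At toy scale, the “math” is just minimizing total award cost subject to side constraints, such as a cap on the number of suppliers. A brute-force sketch with invented rates (real events at the scale described above need a proper MILP solver, not enumeration):

```python
from itertools import combinations

# Hypothetical hourly rates per (role, location) for three providers
rates = {
    ("dev", "US"): {"A": 100, "B": 95,  "C": 120},
    ("dev", "EU"): {"A": 90,  "B": 110, "C": 85},
    ("qa",  "US"): {"A": 70,  "B": 60,  "C": 75},
    ("qa",  "EU"): {"A": 65,  "B": 80,  "C": 55},
}
HOURS = 1000       # demand per role-location
MAX_SUPPLIERS = 2  # business constraint: limit the supply base

def award_cost(suppliers):
    # Each role-location is awarded to the cheapest supplier in the allowed set
    return sum(min(cell[s] for s in suppliers) * HOURS
               for cell in rates.values())

# Enumerate every admissible supplier mix and keep the cheapest
best = min(
    (subset for r in range(1, MAX_SUPPLIERS + 1)
            for subset in combinations("ABC", r)),
    key=award_cost,
)
# best == ("B", "C"); award_cost(best) == 295_000
```

Note that no single supplier wins here: the optimal mix splits the award, which is exactly the kind of answer a lowest-bid-per-line heuristic with a manual top-three cut can miss.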

Curve Fitting Trend Analysis

Dozens (and dozens) of “AI” models have been developed over the past few years to provide you with “predictive” forecasts, insights, and analytics, but guess what? Not a SINGLE model has outdone classical curve-fitting trend analysis — and NOT a single model ever will. (This is because all these fancy-schmancy black-box solutions do is attempt to identify the record/transaction “fingerprint” that contains the most relevant data and then attempt to identify the “curve” or “line” to fit to it, all at once, which means the upper bound is a classical model that uses the right data and fits the right curve from the beginning, without wasting an entire power plant’s worth of energy on data centers as the algorithm repeatedly guesses random fingerprints and models until one seems to work well.)

And the reality is that these standard techniques (which have been refined since the 60s and 70s), which now run blindingly fast on large data sets thanks to today’s computing, can achieve 95% to 98% accuracy in some domains, with no misfires. A 95% accurate forecast on inventory, sales, etc. is pretty damn good and minimizes the buffer stock, and lead time, you need. Detailed, fine-tuned correlation analysis can accurately predict the impact of sales and industry events. And so on.
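For reference, classical linear trend fitting is a closed-form least-squares calculation, no black box required. A sketch on made-up monthly demand data:

```python
def fit_trend(y):
    """Ordinary least-squares fit of y = a + b*t over t = 0..n-1."""
    n = len(y)
    t_mean = (n - 1) / 2
    y_mean = sum(y) / n
    ss_t = sum((t - t_mean) ** 2 for t in range(n))
    ss_ty = sum((t - t_mean) * (yi - y_mean) for t, yi in enumerate(y))
    b = ss_ty / ss_t          # slope: trend per period
    a = y_mean - b * t_mean   # intercept
    return a, b

# Hypothetical monthly demand with a clear upward trend
demand = [100, 112, 119, 131, 140, 152]
a, b = fit_trend(demand)
forecast_next = a + b * len(demand)  # one-period-ahead forecast
```

Swapping in an exponential or seasonal curve is the same idea with a different functional form; the point is that the fit is deterministic, auditable, and instant.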

Going one step further, there exists a host of clustering techniques that can identify emergent trends in outlier behaviour as well as pockets of customers or demand. And so on. But chances are you aren’t using any of these techniques.
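Clustering need not be exotic either. A minimal one-dimensional k-means in pure Python (toy per-order spend figures, invented for illustration) is enough to surface a pocket of outlier behaviour:

```python
def kmeans_1d(points, k=2, iters=20):
    """Tiny 1-D k-means: split points into k clusters by nearest centroid."""
    centroids = [min(points), max(points)][:k]  # simple spread-out init
    clusters = [[] for _ in range(k)]
    for _ in range(iters):
        clusters = [[] for _ in range(k)]
        for p in points:
            nearest = min(range(k), key=lambda i: abs(p - centroids[i]))
            clusters[nearest].append(p)
        new = [sum(c) / len(c) if c else centroids[i]
               for i, c in enumerate(clusters)]
        if new == centroids:  # converged
            break
        centroids = new
    return centroids, clusters

# Hypothetical per-order spend: a normal pocket and an outlier pocket
spend = [102, 98, 110, 95, 990, 1010, 975]
centroids, clusters = kmeans_1d(spend)
# clusters[0] holds the ~100 orders, clusters[1] the ~1000 outliers
```

Real emergent-trend detection runs this over many dimensions and time windows, but the mechanism is the same: group, inspect the small cluster, ask why it exists.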

So given that most of you haven’t adopted any of this technology that has proven to be reliable, effective, and extremely valuable, why on earth would you want to adopt an unproven technology that hallucinates daily, might tell off your sensitive employees with hate speech, and could even leak your data? It makes ZERO sense!

While we admit that someday semi-private LLMs will be an appropriate solution for certain areas of your business where a large amount of textual analysis is required on a regular basis, even these are still iffy today and can’t always be trusted. And the doctor doesn’t care how slick that chatbot is, because if you have to spend days learning how to expertly craft a prompt just to get a single result, you might as well just learn to code and use a classic open-source Neural Net library — you’ll get better, more reliable results, faster.

Keep an eye on the tech if you like, but nothing stops you from using the tech that works. Let your peers be the test pilots. You really don’t want to be in the cockpit when it crashes.

* And if you don’t understand why a deep understanding of university-level mathematics, preferably at the graduate level, is important, then you shouldn’t be touching the turkey who touches the Gen-AI solution with a 10-foot pole!

Spendata: The Power Tool for the Power Spend Analyst — Now Usable By Apprentices as Well!

We haven’t covered Spendata much on Sourcing Innovation (SI), as it was only founded in 2015 and the doctor did a deep dive review on Spend Matters in 2018 when it launched (Part I and Part II, ContentHub subscription required), as well as a brief update here on SI where we said Don’t Throw Away that Old Spend Cube, Spendata Will Recover It For You!. the doctor did pen a 2020 follow up on Spend Matters on how Spendata was Rewriting Spend Analysis from the Ground Up, and that was the last major coverage. And even though the media has been a bit quiet, Spendata has been diligently working as hard on platform improvement over the last four years as they were the first four years and just released Version 2.2 (with a few new enhancements in the queue that they will roll out later this year). (Unlike some players which like to tack on a whole new version number after each minor update, or mini-module inclusion, Spendata only does a major version update when they do considerable revamping and expansion, recognizing that the reality is that most vendors only rewrite their solution from the ground up to be better, faster, and more powerful once a decade, and every other release is just an iteration of, and incremental improvement on, the last one.)

So what’s new in Spendata V 2.2? A fair amount, but before we get to that, let’s quickly catch you up (and refer you to the linked articles above for a deep dive).

Spendata was built upon a post-modern view of spend analysis where a practitioner should be able to take immediate action on any data she can get her hands on whenever she can get her hands on it and derive whatever insights she can get for process (or spend) improvement. You never have perfect data, and waiting until Duey, Clutterbuck, and Howell1 get all your records in order to even run your first report when you have a dozen different systems to integrate data from, multiple data formats to map, millions of records to classify, cleanse and enrich, and third party data feeds to integrate will take many months, if not a year, and during that year where you quest for the mythical perfect cube you will continue to lose 5% due to process waste, abuse, and fraud, and 3% to 15% (or more) across spend categories where you don’t have good management but could stem the flow simply by identifying them and putting in place a few simple rules or processes. And you can identify some of these opportunities simply by analyzing one system, one category, and one set of suppliers. And then moving on to the next one. And, in the process, Spendata automatically creates and maintains the underlying schema as you slowly build up the dimensions, the mapping, cleansing, and categorization rules, and the basic reports and metrics you need to monitor spend and processes. And maybe you can only do 60% to 80% piecemeal, but during that “piecemeal year”, you can identify over half of your process and cost savings opportunities and start saving now, versus waiting a year to even start the effort. 
When it comes to spend (related) data analysis, no adage is more true than “don’t put off until tomorrow what you can do today” with Spendata, because, especially when you start, you don’t need complete or perfect data … you’d be amazed how much insight you can get with 90% in a system or category; and then, if the data is inconclusive, keep drilling and mapping until you get into the 95% to 98% accuracy range.

Spendata was also designed from the ground up to run locally and entirely in the browser, because no one wants to wait for an overburdened server across a slow internet connection, and do so in real time … and by that we mean do real analysis in real time. Spendata can process millions of records a minute in the browser, which allows for real time data loads, cube definitions, category re-mappings, dynamically derived dimensions, roll-ups, and drill downs in real-time on any well-defined data set of interest. (Since most analysis should be department level, category level, regional, etc., and over a relevant time span, that should not include every transaction for the last 10 years because beyond a few years, it’s only the quarter over quarter or year over year totals that become relevant, most relevant data sets for meaningful analysis even for large companies are under a few million transactions.) The goal was to overcome the limitations of the first two generations of spend analysis solutions where the user was limited to drilling around in, and deriving summaries of, fixed (R)OLAP cubes and instead allow a user to define the segmentations they wanted, the way they wanted, on existing or newly loaded (or enriched federated data) in real time. Analysis is NOT a fixed report, it is the ability to look at data in various ways until you uncover an inefficiency or an opportunity. (Nor is it simply throwing a suite of AI tools against a data set — these tools can discover patterns and outliers, but still require a human to judge whether a process improvement can be made or a better contract secured.)

Spendata was built as a third generation spend analysis solution where

  • data can be loaded and processed at any point of the analysis
  • the schema is developed and modified on the fly
  • derived dimensions can be created instantly based on any combination of raw and previously defined derived dimensions
  • additional datasets from internal or external sources can be loaded as their own cubes, which can then be federated and (jointly) drilled for additional insight
  • new dimensions can be built and mapped across these federations that allow for meaningful linkages (such as commodities to cost drivers, savings results to contracts and purchasing projects, opportunities by size, complexity, or ABC analysis, etc.)
  • all existing objects — dimensions, dashboards, views (think dynamic reports that update with the data), and even workspaces — can be cloned for easy experimentation
  • filters, which can define views, are first-class objects that can be managed in their own right and can be, through Spendata‘s novel filter coin implementation, dragged between objects (and even used for easy multi-dimensional mapping)
  • all derivations are defined by rules and formula, and are automatically rederived when any of the underlying data changes
  • cubes can be defined as instances of other cubes, and automatically update when the source cube updates
  • infinite scrolling crosstabs with easy Excel workbook generation on any view and data subset for those who insist on looking at the data old school (as well as “walk downs” from a high-level “view” to a low-level drill-down that demonstrate precisely how an insight was found)
  • functional widgets which are not just static or semi-dynamic reporting views, but programmable containers that can dynamically inject data into pre-defined analysis and dimension derivations that a user can use to generate what-if scenarios and custom views with a few quick clicks of the mouse
  • offline spend analysis is also available, in the browser (cached) or on Electron.js (where the latter is preferred for Enterprise data analysis clients)

Furthermore, with reference to all of the above, analyst changes to the workspace, including new datasets, new dashboards and views, new dimensions, and so on are preserved across refresh, which is Spendata’s “inheritance” capability that allows individual analysts to create their own analyses and have them automatically updated with new data, without losing their work …

… and this was all in the initial release. (Which, FYI, no other vendor has yet caught up to. NONE of them have full inheritance or Spendata‘s security model. And this was the foundation for all of the advanced features Spendata has been building since its release six years ago.)

After that, as per our updates in 2018 and 2020, Spendata extended their platform with:

  • Unparalleled Security — as the Spendata server is designed to download ONLY the application to the browser, or Spendata‘s demo cubes and knowledge bases, it has no access to your enterprise data;
  • Cube subclassing & auto-rationalization — power users can securely setup derived cubes and sub-cubes off of the organizational master data cubes for the different types of organizational analysis that are required, and each of these sub-cubes can make changes to the default schema/taxonomy, mappings, and (derived) dimensions, and all auto-update when the master cube, or any parent cube in the hierarchy, is updated
  • AI-Based Mapping Rule Identification from Cube Reverse Engineering — Spendata can analyze your current cube (or even a report of vendor by commodity from your old consultant) and derive the rules that were used for mapping, which you can accept, edit, or reject — we all know black box mapping doesn’t work (no matter how much retraining you do, as every “fix” all of a sudden causes an older transaction to be misclassified); but generating the right rules that can be human understood and human maintained guarantees 100% correct classification 100% of the time
  • API access to all functions, including creating and building workspaces, adding datasets, building dimensions, filtering, and data export. All Spendata functions are scriptable and automatable (as opposed to BI tools with limited or nonexistent API support for key functions around building, distributing, and maintaining cubes).
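The appeal of human-readable rules over black-box mapping is easy to see in miniature. A hypothetical sketch of rule-based classification (this is an illustration of the general technique, not Spendata’s actual rule engine):

```python
import re

# Human-readable, human-maintainable mapping rules: pattern -> category.
# Each rule can be inspected, edited, or rejected by an analyst.
rules = [
    (re.compile(r"staples|office depot", re.I), "Office Supplies"),
    (re.compile(r"fedex|\bups\b|dhl", re.I),    "Freight"),
]

def classify(description):
    """First matching rule wins; unmatched rows are flagged, not guessed at."""
    for pattern, category in rules:
        if pattern.search(description):
            return category
    return "Unmapped"  # a human writes a new rule rather than retraining a model

assert classify("STAPLES INC #4421") == "Office Supplies"
assert classify("FedEx Ground 8-week") == "Freight"
assert classify("Flux Capacitors Ltd") == "Unmapped"
```

Because every mapping decision traces to an explicit rule, a “fix” is a rule edit with a predictable effect, rather than a retraining cycle that can silently reclassify older transactions.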

However, as we noted in our introduction, even though this put Spendata leagues beyond the competition (as we still haven’t seen another solution with this level of security; cube subclassing with full inheritance; dynamic workspace, cube, and view creation; etc.), they didn’t stop there. In the rest of this article, we’ll discuss what’s new from the viewpoint of Spendata Competitors:

Spendata Competitors: 7 Things I Hate About You

Cue the Miley Cyrus, because if competitors weren’t scared of Spendata before, if they understand ANY of this, they’ll be scared now (as Spendata is a literal wrecking ball in analytic power). Spendata is now incredibly close to negating entire product lines of not just its competitors, but some of the biggest software enterprises on the planet, and 3.0 may trigger a seismic shift in how people define entire classes of applications. But that’s a post for a later day (though it should cue you up for the post that will follow this one, on just precisely what Spendata 2.2 really is and can do for you). For now, we’re just going to discuss seven (7) of the most significant enhancements since our last coverage of Spendata.

Dynamic Mapping

Filters can now be used for mapping — and as these filters update, the mapping updates dynamically. Real-time reclassify on the fly in a derived cube using any filter coin, including one dragged out of a drill down in a view. Analysis is now a truly continuous process as you never have to go back and change a rule, reload data, and rebuild a cube to make a correction or see what happens under a reclassification.

View-Based Measures

Integrate any rolled up result back into the base cube on the base transactions as a derived dimension. While this could be done using scripts in earlier versions, it required sophisticated coding skills. Now, it’s almost as easy as a drag-and-drop of a filter coin.

Hierarchical Dashboard Menus

Not only can you organize your dashboards in menus and submenus and sub-sub menus as needed, but you can easily bookmark drill downs and add them under a hierarchical menu — which makes it super easy to create point-based walkthroughs that tell a story — and then output them all into a workbook using Spendata‘s capability to output any view, dashboard, or entire workspace as desired.

Search via Excel

While Spendata eliminates the need for Excel for data analysis, the reality is that Excel is where most organizational data is (unfortunately) stored, how most data is submitted by vendors to Procurement, and where most Procurement professionals are most comfortable. Thus, in the latest version of Spendata, you can drag and drop groups of cells from Excel into Spendata, and if you drop them into the search field, it auto-creates a RegEx “OR” that maintains the inputs exactly and finds all matches in the cube you are searching against.
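The mechanics of that search trick are simple: escape each pasted cell value so it matches literally, then join the values into a single alternation. A sketch of the idea (generic regex construction, not Spendata’s internals):

```python
import re

def cells_to_pattern(cells):
    """Turn pasted cell values into a regex OR that matches each one exactly."""
    return re.compile("|".join(re.escape(c.strip()) for c in cells), re.I)

# Values as they might be dragged out of an Excel column
pasted = ["Acme Corp. (US)", "Bolt+Sons", "Crest Ltd"]
pattern = cells_to_pattern(pasted)

suppliers = ["ACME CORP. (US)", "Bolt+Sons GmbH", "Dyno Inc"]
matches = [s for s in suppliers if pattern.search(s)]
# matches == ["ACME CORP. (US)", "Bolt+Sons GmbH"]
```

The escaping step is what “maintains the inputs exactly”: characters like `.`, `(`, and `+` are treated as literals rather than regex operators.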

Perfect Star Schema Output

Even though Spendata can do everything any BI tool on the market can do, the reality is that many executives are used to their pretty PowerBI graphs and charts and want to see their (mostly static) reports in PowerBI. So, in order to appease the consultancies that had to support these executives that are (at least) a generation behind on analytics, they encoded the ability to output an entire workspace to a perfect star schema (where all keys are unique and numeric) that is so good that many users see a PowerBI speed-up by a factor of almost 10. (As any analyst forced to use PowerBI will tell you, when you give PowerBI data that is NOT in a perfect star schema, it may not even be able to load the data, and its ability to work with non-numeric keys at a speed faster than you remember on an 8088 is nonexistent.)
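Star-schema output just means factoring repeated text values into dimension tables with numeric surrogate keys, leaving a compact numeric fact table. A minimal sketch of the transformation (invented data, not Spendata’s exporter):

```python
def to_star(rows, dim_cols, measure_col):
    """Split flat rows into dimension tables (unique numeric keys) + a fact table."""
    dims = {c: {} for c in dim_cols}   # per-dimension: value -> surrogate key
    facts = []
    for row in rows:
        keys = []
        for c in dim_cols:
            table = dims[c]
            # Assign the next integer key the first time a value is seen
            keys.append(table.setdefault(row[c], len(table) + 1))
        facts.append((*keys, row[measure_col]))
    return dims, facts

flat = [
    {"supplier": "Acme", "category": "MRO",     "amount": 1200.0},
    {"supplier": "Bolt", "category": "Freight", "amount": 4500.0},
    {"supplier": "Acme", "category": "Freight", "amount":  300.0},
]
dims, facts = to_star(flat, ["supplier", "category"], "amount")
# dims["supplier"] == {"Acme": 1, "Bolt": 2}
# facts == [(1, 1, 1200.0), (2, 2, 4500.0), (1, 2, 300.0)]
```

The speed-up in downstream BI tools comes precisely from this shape: joins on small integer keys instead of repeated string comparisons across millions of rows.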

Power Tags

You might be thinking “tags, so what?”. And if you are equating tags with a hashtag or a dynamically defined user attribute, then we understand. However, Spendata has completely redefined what a tag is and what you can do with it. The best way to understand it is as a Microsoft Excel cell on steroids. It can be a label. It can be a replica of a value in any view (that dynamically updates if the field in the view updates). It can be a button that links to another dashboard (or a bookmark to any drill-down filtered view in that dashboard). Or all of this. Or, in the next Spendata release, a value that forms the foundation for new derivations and measures in the workspace, just like you can reference a random cell in an Excel function. In fact, using tags, you can already build very sophisticated what-if analyses on-the-fly that many providers have to custom build in their core solutions (and take weeks, if not months, to do so) using the seventh new capability of Spendata, and usually do it in hours (at most).

Embedded Applications

In the latest version of Spendata, you can embed custom applications into your workspace. These applications can contain custom scripts, functions, views, dashboards, and even entire datasets that can be used to instantly augment the workspace with new analytic capability, and if the appropriate core columns exist, even automatically federate data across the application datasets and the native workspace.

Need a custom set of preconfigured views and segments for that ABC Analysis? No sweat, just import the ABC Analysis application. Need to do a price variance analysis across products and geographies, along with category summaries? No problem. Just import the Price Variance and Category Analysis application. Need to identify opportunities for renegotiation post M&A, cost reduction through supply base consolidation, and new potential tail spend suppliers? No problem, just import the M&A Analysis app into the workspace for the company under consideration and let it do a company A vs B comparison by supplier, category, and product; generate the views where consolidation would more than double supplier spend, or where switching a product from a current supplier to a lower-cost supplier would save more than 100K; and surface opportunities for bringing on new tail spend suppliers based upon potential cost reductions. All with one click. Not sure just what the applications can do? Start with the demo workspaces and apps, define your needs, and if the apps don’t exist in the Spendata library, a partner can quickly configure a custom app for you.

And this is just the beginning of what you can do with Spendata. Because Spendata is NOT a Spend Analysis tool. That’s just something it happens to do better than any other analysis tool on the market (in the hands of an analyst willing to truly understand what it does and how to use it — although with apps, drag-and-drop, and easy formula definition through wizardly pop-ups, it’s really not hard to learn how to do more with Spendata than any other analysis tool).

But more on this in our next article. For The Times They Are a-Changin’.

1 Duey, Clutterbuck, and Howell keeps Dewey, Cheatem, and Howe on retainer … it’s the only way they can make sure you pay the inflated invoices if you ever wake up and realize how much you’ve been fleeced for …

Scalue Wants to Scale Up Your Strategic Procurement with Strategic Spend Analysis

Scalue is a spend analysis company founded in 2018 in Düsseldorf, Germany by Procurement veterans with two decades of experience, to help companies identify various immediate areas of potential savings, improve their overall purchasing processes, and, most importantly, get started quickly (as many large organizations can take between 6 and 24 months just to get their data foundation in order if they take the traditional route and start with a consultancy partner that begins with a data cleansing, classification, and enrichment project before building the first cube and starting the opportunity analysis).

Scalue was built to be ready to use the minute that you loaded the starting data set from either

  • a set of flat files or Excel workbook (which are auto-mapped if you use their data model and/or standard field names) or
  • the ERP/MRP (AP) (for which they have a library of pre-built integrations to the majority of the major ERP systems; they may need minor customizations, but those are usually quick to accomplish, and if you need something custom, they do have certified ERP integration partners).

Of course, how ready it is will depend on how good your classification is in the raw data you import. For the majority of companies just starting on their data foundations and/or spend analysis, chances are their data classification is very poor. Fortunately, classification in Scalue is quite easy and can be done by supplier, material group, material, GL coding (if available), invoice (line), or a combination thereof. Scalue typically begins an engagement with a working session to help and guide the users in creating/updating their categorization and doing the initial spend mappings.

Updates are on your schedule. Most customers prefer monthly (so they can share and do consistent analysis), but with a direct (ERP/MRP/AP) system integration you can retrieve updates weekly, daily, or even hourly; without an integration, updates can happen as often as you upload an incremental file.

While Scalue has experimented with multiple AI technologies for classification, they do not apply any of them across the board, and instead use specific techniques for specific use cases (such as initial classification rule creation, where all AI-generated mapping rules can be deleted or overridden), because they have found, as the doctor knows all too well:

  • classification accuracy in direct, especially when dealing with a multi-national enterprise that sources in different countries that use different SKUs and coded product descriptions for the same product and do so in multiple languages, is poor. Maybe 90%, but you really don’t want 10% of your data misclassified, especially if it relates to high spend transactions
  • classification consistency with a black box is poor; retraining from corrected classifications will fix some classifications, but others that were right will now be wrong
  • while it can sometimes produce starting rules, you can’t always trust the confidence scores and still need to verify all the rules manually
  • a good mapping process will get a spend analysis / data management team to fairly high accuracy (90 to 95% +) in just a couple of days at most (even in large organizations) and the rules are 100% accurate and reliable

When the average direct buyer enters Scalue, the first thing they see is the Cockpit Dashboard in the Management module which helps them understand their spend and drill in to find immediate opportunities. The cockpit, like any good entry dashboard, summarizes spend volumes, suppliers, spend by material group, supplier, and measure for a time period and allows a user to drill in by any (pre)defined dimension or measure in each of these drill downs, and order the drill downs in any order they like. Each drill down brings up a dedicated report screen, where the user can not only drill, but select a dimension/measure subset as well. When a user identifies a high spend (sub) area that they want to address, they can kick off an initiative, which we will discuss later.

Scalue promises a return of 1% on total addressable spend in your first 12 months and has consistently delivered across its customer base for the last few years. This might sound low compared to the numbers quoted by indirect spend analysis providers, but one should remember the following:

  • while the average return on an indirect category that is strategically sourced during non-inflationary times will be 5% to 10% with a modern sourcing solution and good insight,
  • and the average return in the tail spend will be 10% to 15%,
  • the average return in a direct category is usually in the 3% range (as it’s much more carefully evaluated and managed by a company that needs to invest millions in materials and goods to serve its customers)
  • and contracts are usually three (3) to five (5) years.

Thus, if you save 1% a year despite much of the spend being locked in by existing 3+ year contracts, with an average ceiling of 3% on savings from (re)sourcing, that’s actually quite good. And for a company buying 500M in direct, that’s 5M straight to the bottom line in the first year on direct spend alone!
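
The arithmetic is easy to sanity-check. The spend figure and rates below are the post's own example numbers; the 25% "resourceable share" is an illustrative assumption (consistent with three-to-five-year contract cycles), not a figure from Scalue.

```python
# Sanity check of the savings math above. Spend and rates are the post's
# example numbers; the resourceable share is an illustrative assumption.
addressable_spend = 500_000_000   # 500M in direct spend
first_year_rate = 0.01            # Scalue's promised 1% first-year return
savings = addressable_spend * first_year_rate
print(f"First-year savings: {savings:,.0f}")   # First-year savings: 5,000,000

# With roughly 1/4 of contracts up for (re)sourcing in a given year and a
# ~3% ceiling on direct savings, ~0.75% is the natural blended expectation,
# which makes a delivered 1% look good rather than low.
resourceable_share = 0.25         # illustrative assumption
direct_ceiling = 0.03
print(f"Blended ceiling: {resourceable_share * direct_ceiling:.2%}")  # 0.75%
```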

The next step for most users is either the ABC Analysis or the Business Development overview. The Business Development screen summarizes purchasing volume, regions of origin, and ABC analysis (by country by default), which can also help an analyst dive in to find immediate opportunities (by focussing in on high-volume categories where the spend trend is going up, categories that are being sourced from too many regions and present a consolidation opportunity [without increasing risk], and high-spend material groups that might be going unmanaged).

The ABC analysis, which the user can drill into from the Business Development tab (or jump straight to), allows the user to see the material group or supplier split by the top 80%, next 15%, and final 5% (or 70/20/10, or however they want to define the A, B, C spend ranges under their interpretation of the Pareto Principle) and drill into just the spend that matters most.
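
An ABC split like this takes only a few lines; the thresholds below are the 80/15/5 default described above, and the supplier figures are made up for illustration.

```python
# Minimal sketch of an ABC (Pareto) split over supplier spend.
# Thresholds default to 80/15/5 but are configurable, as described above.
def abc_split(spend_by_item: dict, a: float = 0.80, b: float = 0.95) -> dict:
    """Label items A/B/C by cumulative share of total spend,
    largest spend first (A = top 80%, B = next 15%, C = final 5%)."""
    total = sum(spend_by_item.values())
    labels, cumulative = {}, 0.0
    for item, spend in sorted(spend_by_item.items(), key=lambda kv: -kv[1]):
        cumulative += spend
        share = cumulative / total
        labels[item] = "A" if share <= a else "B" if share <= b else "C"
    return labels

# Illustrative supplier spend (total 1M):
spend = {"Supplier1": 500_000, "Supplier2": 300_000, "Supplier3": 120_000,
         "Supplier4": 50_000, "Supplier5": 30_000}
print(abc_split(spend))
# {'Supplier1': 'A', 'Supplier2': 'A', 'Supplier3': 'B',
#  'Supplier4': 'C', 'Supplier5': 'C'}
```

Passing different `a`/`b` values gives the 70/20/10 (or any other) interpretation of the Pareto split.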

In summary, users usually start with these three Management Dashboards:

  • Cockpit – an overall spend overview: volume, cumulative cost variance, top suppliers, top measures, etc.
  • Business Development – looks at price spreads across all of your products to find immediate opportunities (by consolidating to lower cost parts where possible)
  • ABC Analysis – the standard ABC analysis that groups material groups into high spend (A), medium spend (B), and low spend (C) using the modified 80/20 rule as 80/15/5 (which can be adjusted as desired by the customer); this allows an analyst to focus on the high-spend / critical material groups first and then see what groups or suppliers are out of control in the tail

Once the initial exploration is done, most analysts will move to the Structure dashboards:

  • Invoice Compliance – how much spend is billed not using contract rates
  • Contract Compliance – how much spend is off-contract that should be on contract
  • Payment Terms Optimization – looks at payment terms and early payment (cash) discounts across suppliers and helps you optimize payment terms and time-frames
  • Delivery Time & Performance – average delivery time, on-time, late, by supplier
  • Terms & Conditions – where they can analyze payment terms and delivery terms across a supplier (cluster) or material (group); keeping on top of this is very important if your suppliers provide early payment (cash) discounts or charge interest for late payments (and hold your critical orders until invoices past due are paid); includes a portfolio view of associated value with each material group-term pairing
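
Since two of the dashboards above revolve around early payment (cash) discounts, it is worth recalling the standard trade-credit annualization used to judge whether a discount is worth taking; the "2/10 net 30" terms below are illustrative, not from Scalue.

```python
# Annualized value of an early-payment (cash) discount, using the standard
# trade-credit formula. The "2/10 net 30" terms below are illustrative.
def annualized_discount_rate(discount: float, discount_days: int,
                             net_days: int) -> float:
    """Effective annual rate earned by paying on discount_days instead of
    net_days, in exchange for the discount."""
    return (discount / (1 - discount)) * (365 / (net_days - discount_days))

# 2% discount if paid in 10 days, otherwise full amount due in 30 days:
rate = annualized_discount_rate(0.02, 10, 30)
print(f"{rate:.1%}")  # 37.2% -- why such terms are usually worth taking
```

A 37%+ annualized return is far above most companies' cost of capital, which is why a Payment Terms Optimization dashboard that surfaces untaken discounts pays for itself quickly.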

From there, they will usually progress into the Control Dashboards:

  • KPI Dashboard – a customizable dashboard that centralizes your KPIs of interest
  • Material Cost Variance – summarizes the cost variances across materials and material groups
  • Report Builder – allows an end user to build a report on any set of dimensions and/or measures in the system

Once they have completed their analysis, the users will probably want to set up initiatives (projects) to (re)capture savings and hit the 1% reduction on total spend Scalue can deliver within the first 12 months. To do this, they will move over to the Action Hub:

  • Tracking – the main dashboard that provides an overview of all (open) initiatives including [savings] type, forecast, and captured to date
  • Approvals – the approvals dashboard where an admin can accept, and lock, dates, forecasts, entered amounts (to date), etc.
  • P&L Savings – the savings against the P&L by month for a given time-frame
  • KPIs – allows for a deep cross-initiative analysis that computes averages and statistics across initiatives, categories, manufacturing groups, and other KPIs of interest by time periods of interest
  • Admin – allows for customization of initiative management — the admin can define the phases, the employees who can edit initiatives, the priority classifications, the savings types, the project statuses, and other dimensions upon which the KPIs will be based

As with all spend analysis systems, the end-user administrator can set up and maintain the category tree, material groups, and supplier and invoice categorizations completely self-serve, and inspect them at any time through the Data Health module:

  • Material Clusters – group material groups by product lines, related uses, or another common denominator you want to be able to do analysis by; see the allocation by spend or volume, and drill into the percentages
  • Supplier Clusters – group suppliers by parent company, region, or another common denominator you want to be able to do analysis by; see the allocation by spend or volume, and drill into the percentages
  • Category Tree – define the category tree and the overarching material groups
  • Material Group Categorization – dive into a material group and map materials
  • Supplier Categorization – classify suppliers by material category
  • Material Categorization – supports product/SKU level mappings
  • Invoice Categorization – define line level overrides where needed

Once a company has mastered the basics and taken full advantage of the standard dashboard and analysis that deliver almost immediate payback, they can do the more advanced portfolio analysis that allows them to analyze portfolios by buying power, supply risk, ESG/CSR, etc. as long as they have the data to do so (and have defined the appropriate supplier/material group clusters). Scalue can pull in the data from the ERP if it exists, and if it doesn’t, it can build (or the end user can build) questionnaires to collect the data from organizational users. This advanced analysis is accomplished in the Strategy Hub:

  • Questionnaire – used to collect baseline data to provide a foundation for portfolio analysis
    (around ESG/CSR, Organizational Buying Power, Supply Risk, etc.)
  • Supplier Portfolio Specifications – used to collect specific data for portfolio segregation by supplier (cluster)
  • Supplier Portfolio Analysis – analyze the spend by the desired portfolio breakdown, dashboard is customizable on implementation
  • Material Portfolio Specifications – used to collect specific data for portfolio segregation by material group (cluster)
  • Material Group Portfolio Analysis – analyze the spend by the desired portfolio breakdown, dashboard is customizable on implementation
  • Combined Portfolio Analysis – see the portfolio analysis by supplier (cluster) and material group (cluster), dashboard is customizable on implementation
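
A portfolio analysis by buying power and supply risk is essentially a Kraljic-style segmentation, which can be sketched as follows; the quadrant names, scores, and threshold are illustrative, not Scalue's actual model.

```python
# Sketch of a Kraljic-style 2x2 portfolio segmentation by buying power
# (profit impact) and supply risk. Scores (0..1) and the threshold are
# illustrative assumptions, not Scalue's actual model.
def segment(buying_power: float, supply_risk: float,
            threshold: float = 0.5) -> str:
    """Place a material group (cluster) into a portfolio quadrant."""
    if buying_power >= threshold:
        return "Strategic" if supply_risk >= threshold else "Leverage"
    return "Bottleneck" if supply_risk >= threshold else "Non-critical"

print(segment(0.9, 0.8))  # Strategic
print(segment(0.9, 0.2))  # Leverage
print(segment(0.2, 0.8))  # Bottleneck
print(segment(0.2, 0.2))  # Non-critical
```

The questionnaires described above are what supply the risk and buying-power scores when they can't be derived from ERP data.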

Finally, for ERP customers that have multiple years of data in their ERP, they also have a ProcessView module:

  • Dashboard – a summary of the process analysis which focuses on process discovery, lead time analysis, and a breakdown by lead time cluster
  • Statistics – summarizes statistics related to the different steps of your process around average time in the step, which can be broken down by supplier, material (group), user, etc.
  • Query Builder – build queries to answer questions not answered in the dashboard
  • Modeler – adjust the process model as required
  • Impact – the statistics and KPIs are converted into easily understandable process descriptions based on intelligent models to aid the analyst in interpreting key figures and estimating the initial impact
  • Comparison – the results of a process comparison across suppliers and product (groups) to highlight why one supplier or product is better (or worse) than another

As with any good platform, you can drill into any data set on any dimension, reorder the dimensions, and drill right down to the individual transactions. Also, since they have a number of implementation partners certified on the major ERP systems (SAP, Microsoft, etc.), you can have it implemented quite quickly, and the partners can work with you to get your data properly classified in a very short time frame as well. You can also export all of your data at any time.

When it comes to user administration, an admin user can grant other organizational users access rights down to the record level (and restrict which modules, dashboards, and menu items they see) and define new measures for report building.

The platform is very usable, but to ensure that all of their users can make the most of it, they have an extensive online education library in German and English on their Training Site. These courses go beyond platform basics and even include courses on negotiation, supplier consolidation, strategic category assessment, and so on.

If you’re doing a lot of direct spend and looking for a best of breed spend analysis solution, it’s one to include on the short list.

One of these things is not like the other — it’s the right choice!

Three bids for that spend analytics project from the three leading Big X firms come in at 1 Million. One bid for that spend analytics project from a specialized niche consultancy you pulled out of the hat for bid diversity comes in at 250 Thousand. Which one is right? Those of you who only partially paid attention to the education Sesame Street was trying to impart upon you when you were growing up will simply remember the “one of these things is not like the other” song and think that any of the bids from the Big X firms is right and the niche consultancy’s is wrong because it’s different, and therefore must be thrown out for being too low. When, in fact, it’s the three bids from the Big X firms that are wrong and the bid from the niche consultancy that is right.

Those of us who paid attention knew that Sesame Street was trying to show us how to detect underlying similarities so we could properly cluster objects for further analysis. What we should have learned is that the Big X bids were all the same, built on the same assumptions, and could be compared equally, and that the outlier bid needed further investigation: a further investigation that can only be undertaken against an appropriately sized sample set of bids from other specialized niche consultancies. And without that sample set of bids, you can’t properly evaluate the lower bid, which, the doctor can tell you, is likely closer to correct than the wildly overpriced Big X bids.
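
The clustering point can be made concrete: a naive outlier test on the original four bids flags the niche bid, but once you collect a proper sample of niche bids, it is the Big X cluster that gets flagged. The million/250K figures are the post's example; the extra niche bids are illustrative.

```python
# Why one low bid is not automatically "the" outlier: the answer depends
# entirely on the sample you test it against. Figures follow the post's
# example; the additional niche-consultancy bids are illustrative.
from statistics import median

def flag_outliers(bids: list, k: float = 2.0) -> list:
    """Flag bids more than k median-absolute-deviations from the median."""
    m = median(bids)
    mad = median(abs(b - m) for b in bids) or 1  # avoid divide-by-zero
    return [b for b in bids if abs(b - m) / mad > k]

# Three identical Big X bids + one niche bid: the niche bid is "the outlier"
print(flag_outliers([1_000_000, 1_000_000, 1_000_000, 250_000]))
# [250000]

# Add a proper sample of niche bids: now the Big X cluster is the outlier
print(flag_outliers([1_000_000, 1_000_000, 1_000_000,
                     250_000, 300_000, 225_000, 275_000]))
# [1000000, 1000000, 1000000]
```

Same statistic, same threshold; only the sample changed. That is exactly why you can't judge the lone niche bid without inviting more niche consultancies to the table.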

As per our recent post on don’t hire a F6ckw@d from a Big X if you want to get analytics and AI right, most of these guys don’t have the breadth of expertise they claim to have. In the group that sells you, there will be a leader who is a true expert (and worth his or her weight in platinum), a few handpicked lieutenants who are above average and run the projects, and a rafter of turkeys straight out of private college with more training in how to dress, talk, and follow orders than training in actual analytics … and no guarantee they even have any real university level mathematics (and thus a knowledge of what analytics is and isn’t and can and can’t do).

While there was a time when big analytics projects were million dollar projects, that was twenty years ago, when Spend Analysis 1.0 was still hitting the market; when there were limited tools for data integration, mapping, cleansing, and enrichment; and when there weren’t a lot of statistics on average savings opportunities across internal and external spend categories. Now we have mature Spend Analysis 3.0 technologies (some taking steps towards Spend Analysis 4.0); advanced technologies for automatic data integration, mapping, cleansing, and even enrichment; deep databases on projects and results by vertical and industry size; extensive libraries of out-of-the-box analytics across categories and potential opportunities; and a whole toolkit for spend analysis that didn’t exist two decades ago. This new toolkit, built by best of breed vendors and used, and sometimes [co-]owned, by these best of breed niche consultancies (that don’t try to do everything, and definitely don’t pretend they can), allows modern spend analysis projects to be done ten times as efficiently and effectively in the hands of a master, and that master isn’t on your project if you hire a Big X. A niche consultancy will have all these tools, and will only have masters on the project, people who do these projects day in and day out; the Big X will have a team of juniors using the manual playbook from the early 2000s, and one lieutenant to guide them. That’s why their project bids are four or five times as much, and why you should be inviting multiple niche best-of-breed consultancies to bid on your project and focussing in on their six figure bids for the one that provides the best value, not the seven figure Big X bids.

(This is also the case for implementations. The Big X always have a rafter on the bench to assign to any project you give them, but there’s no guarantee any of them have ever implemented the system you chose before, or, if they did, no guarantee they’ve ever connected it to the systems you need to connect to. You need specialists if you want that big new system implemented as cost effectively as possible, even if you’re paying those specialists 500 or more an hour, because getting a system up in 2 months at 40K is considerably better than a small team of turkeys taking 4 months at 250 an hour and a total cost of 100K.)

Remember, where the Big X are concerned, All of us is as dumb as One of us! Don’t fall for the Big X Collectivism MindF6ck! the doctor does NOT want to say it again, but since a month still does not go by without him hearing about niche consultancies being thrown out for “being too cheap” (which means the enterprise throwing them out is too uninformed to recognize that the Big X bids are the outliers, because it isn’t inviting enough expert consultancies to the table), apparently he has to keep writing (and screaming) this truth. (the doctor isn’t saying that you can’t get a million dollars of value from some of these consultancies, just that you won’t by giving them these types of projects, which they are not suited for and don’t have the expertise in. Remember, most of these firms got big in management, accounting and tax, or marketing and sales consulting, not technology consulting. The only reason these big consultancies are offering these services is the amount of money flowing into technology, money which they want; but while the best of the best of the best in the more traditional accounting, management, and marketing fields flocked to them, the best of the best in technology flocked to startups and c00l big tech firms. So they just don’t have the talent in tech.)

Did you ever try eating a mitten? the doctor bets they did! (He feels you’re not all there if you think glorified reporting projects still cost One Million Dollars, and might actually try to eat your mittens!)