Category Archives: Best Practices

Enterprises have a Data Problem. And they will until they accept they need to do E-MDM, and it will cost them!

insideBIGDATA recently published an article on The Impact of Data Analytics Integration Mismatch on Business Technology Advancements which did a rather good job of highlighting all of the problems with bad integrations (which happen every day, especially if you hire a f6ckw@d from a Big X [as that will just result in you contributing to the half a TRILLION dollars that will be wasted on SaaS Spend this year and the one TRILLION that will be wasted on IT Services]), and an okay job of advising you how to prevent them. But the problem is much larger than the article lets on, and we need to discuss that.

But first, let’s summarize the major impacts outlined in the article (which you should click to and read before continuing on in this article):

  • Higher Operational Expenses
  • Poor Business Outcomes
  • Delayed Decision Making
  • Competitive Disadvantages
  • Missed Business Opportunities

And then add the following critical impacts (which is not a complete list by any stretch of the imagination) when your supplier, product, and supply chain data isn’t up to snuff:

  • Fines for failing to comply with filings and appropriate trade restrictions
  • Product seizures when products violate certain regulations (like RoHS, WEEE, etc.)
  • Lost Funds and Liabilities when incomplete/compromised data results in payments to the wrong/fraudulent entities
  • Massive disruption risks when you don’t get notifications of major supply chain incidents because the right locations and suppliers (multiple tiers down in your supply chain) are not being monitored
  • Massive lawsuits when data isn’t properly encrypted and secured and personal data gets compromised in a cyberattack

You need good data. You need secure data. You need actionable data. And you won’t have any of that without the right integration.

The article says to ensure good integration you should:

  • mitigate low-quality data before integration (since cleansing and enrichment might not even be possible)
  • adopt uniformity and standardized data formats and structures across systems
  • phase out outdated technology

which is all fine and dandy, but misses the core of the problem:

Data is bad (often very, very bad) because organizations don’t have an enterprise master data management (E-MDM) strategy. That’s the first step. Furthermore, this E-MDM strategy needs to define:

  1. the master schema with all of the core data objects (records) that need to be shared organization-wide
  2. the common data format (for ids, names, keys, etc.) (that every system will need to map to)
  3. the master data encoding standard

With a properly defined schema, there is less of a need to adopt uniformity of data formats and structures across the enterprise systems (which will not always be possible if an organization needs to maintain outdated technology, either because a former manager entered into a 10 year agreement just to be rid of the problem or because it would be too expensive to migrate to another system at the present time) or to phase out outdated technology (which, if it’s the ERP or AP, will likely not be possible), since the organization just needs to ensure that all data exchanges are in the common data format and use the master data encoding standard.
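To make this concrete, here is a minimal, purely illustrative Python sketch of the idea (the master schema, the field names, and the legacy ERP mapping are all hypothetical, not any standard): each system keeps its own internal format, but every exchange is translated into the common data format defined by the E-MDM strategy.

```python
from dataclasses import dataclass
from typing import Optional

# Hypothetical master (canonical) supplier record from the E-MDM master schema.
# Every system keeps its own internal format, but every *exchange* uses this one.
@dataclass
class MasterSupplier:
    supplier_id: str        # common id format, e.g. "SUP-000123"
    legal_name: str         # common name format: registered legal name, UTF-8
    country: str            # ISO 3166-1 alpha-2 country code
    duns: Optional[str]     # shared external key, if known

# Hypothetical mapping from one legacy ERP's field names to the master schema.
LEGACY_ERP_FIELD_MAP = {
    "VENDOR_NO": "supplier_id",
    "VENDOR_NAME1": "legal_name",
    "LAND1": "country",
    "DUNS_NR": "duns",
}

def to_master(legacy_row: dict) -> MasterSupplier:
    """Translate a legacy ERP row into the common data format for exchange."""
    mapped = {master: legacy_row.get(legacy) for legacy, master in LEGACY_ERP_FIELD_MAP.items()}
    mapped["supplier_id"] = f"SUP-{int(mapped['supplier_id']):06d}"  # normalize the id format
    return MasterSupplier(**mapped)

print(to_master({"VENDOR_NO": "123", "VENDOR_NAME1": "Acme GmbH", "LAND1": "DE", "DUNS_NR": None}))
```

The point is not the particular fields, it is that every system only has to map to (and from) one agreed format, instead of every pair of systems needing its own custom integration.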

Moreover, once you have the E-MDM strategy, it’s easy to flesh out the HR-MDM, Supplier/SupplyChain-MDM, and Finance-MDM strategies and get them right.

As THE PROPHET has said, data will be your best friend in procurement and supply chain in 2024 if you give it a chance.

Or, you can cover your eyes and ears and sing the same old tune that you’ve been singing since your organization acquired its first computer and built its first “database”:

Well …
I have a little data
I store it on my drive
And when it’s old and flawed
The data I’ll archive

Oh, data, data, data
I store it on my drive
And when it’s old and flawed
The data I’ll archive

It has nonstandard fields
The records short and lank
When I try to read it
The blocks all come back blank

I have a little data
I store it on my drive
And when it’s old and flawed
The data I’ll archive

My data is so ancient
Drive sectors start to rot
I try to read my data
The effort comes to naught

Oh, data, data, data
I store it on my drive
And when it’s old and flawed
The data I’ll archive

Beware of Magical Thinking In Your Procurement!

Back in 2017 (yes, that was 7 years ago, but the subject is still relevant), the doctor penned a post asking if there was magical thinking in your procurement noting that:

the Procurement Department that is getting the worst deal is the one that hallucinates the most — and needs to — in order to keep their worldview intact

And, furthermore, it was these Procurement departments that were most against modernizing their processes or platforms because their worldview requires them to believe that the antiquated processes and (severely) outdated platforms they are (still) using are just fine. (And they don’t recognize that their Procurement departments still run on the island of misfit toys principle — staffed with people who are nearing retirement, related to the boss, or technologically averse, and who have been doing it this way for far too long.)

the doctor also noted that the easiest way to identify these organizations was by their telltale arguments of:

  • our processes are just fine, we just need more people
  • our platform is just fine, we just need more people
  • it’s not worth the cost, and it will slow us down

which were soon augmented with the additional telltale arguments of:

  • the problem isn’t with us, it’s with logistics / risk management / compliance / support
  • the problem isn’t with us, it’s the suppliers who aren’t holding up their end of the contract
  • our needs are just too unique and there’s nothing out there that will close the gaps

as supply chains started to crumble under disruptions. Because, if you just gave them more time, money, and people, everything would work out fine with a little pixie dust.

But we know there’s no silver bullet, and the only answer is to implement the best technology, with the best processes, so you can identify the biggest risks, plan mitigations, detect when they have occurred, respond quickly, and, the rest of the time, deal with exceptions and not standard operating procedures that can be entirely automated.

And, in the late 2010s, that was the extent of the magical thinking theorem. But now, thanks to the Gen-AI garbage marketing overload, and the addition of tail-end Millennials (who replaced those put out to the Procurement pasture when they called it quits during COVID or when companies tried to force their return to the office), we have a new corollary to the Magical Thinking Theorem:

the Procurement department getting the worst deal is also the one that thinks the only way to solve their problem and get the best deal is to adopt and implement Gen-AI as fast as possible

because the Millennials, who grew up glued to their smartphones, and always received instant gratification via Google and Apple, believe there is an app-for-everything and that a natural language Gen-AI app combines the best of both worlds and will solve all their problems.

Their thinking is not only as magical as the last generation’s thinking (that more time, money, and people can solve anything), but more dangerous (because their answer is to just turn their problems over to the artificial idiocy machine and blindly accept whatever comes out of it, no matter how hallucinatory or ridiculous the answer is).

the doctor said it before and he’ll say it again. There’s no room for magical thinking in Procurement. Just like alchemy needed to be replaced with science, magical thinking needs to be replaced with realist thinking, and random unpredictable Gen-AI replaced with proven deterministic procedural (rules-based) solutions that use tried and true mathematical techniques. (Because, the classic analytics, optimization, and machine learning that you have been ignoring for two decades will do just fine.)

Spendata: The Power Tool for the Power Spend Analyst — Now Usable By Apprentices as Well!

We haven’t covered Spendata much on Sourcing Innovation (SI), as it was only founded in 2015 and the doctor did a deep dive review on Spend Matters in 2018 when it launched (Part I and Part II, ContentHub subscription required), as well as a brief update here on SI where we said Don’t Throw Away that Old Spend Cube, Spendata Will Recover It For You!. the doctor did pen a 2020 follow up on Spend Matters on how Spendata was Rewriting Spend Analysis from the Ground Up, and that was the last major coverage. And even though the media has been a bit quiet, Spendata has been diligently working as hard on platform improvement over the last four years as they were in the first four years and just released Version 2.2 (with a few new enhancements in the queue that they will roll out later this year). (Unlike some players that like to tack on a whole new version number after each minor update, or mini-module inclusion, Spendata only does a major version update when they do considerable revamping and expansion, recognizing that the reality is that most vendors only rewrite their solution from the ground up to be better, faster, and more powerful once a decade, and every other release is just an iteration of, and incremental improvement on, the last one.)

So what’s new in Spendata V 2.2? A fair amount, but before we get to that, let’s quickly catch you up (and refer you to the linked articles above for a deep dive).

Spendata was built upon a post-modern view of spend analysis where a practitioner should be able to take immediate action on any data she can get her hands on whenever she can get her hands on it and derive whatever insights she can get for process (or spend) improvement. You never have perfect data, and waiting until Duey, Clutterbuck, and Howell1 get all your records in order to even run your first report when you have a dozen different systems to integrate data from, multiple data formats to map, millions of records to classify, cleanse and enrich, and third party data feeds to integrate will take many months, if not a year, and during that year when you quest for the mythical perfect cube you will continue to lose 5% due to process waste, abuse, and fraud, and 3% to 15% (or more) across spend categories where you don’t have good management but could stem the flow simply by identifying them and putting in place a few simple rules or processes. And you can identify some of these opportunities simply by analyzing one system, one category, and one set of suppliers. And then moving on to the next one. And, in the process, Spendata automatically creates and maintains the underlying schema as you slowly build up the dimensions, the mapping, cleansing, and categorization rules, and the basic reports and metrics you need to monitor spend and processes. And maybe you can only do 60% to 80% piecemeal, but during that “piecemeal year”, you can identify over half of your process and cost savings opportunities and start saving now, versus waiting a year to even start the effort. When it comes to spend (related) data analysis, no adage is more true than “don’t put off until tomorrow what you can do today” with Spendata, because, and especially when you start, you don’t need complete or perfect data … you’d be amazed how much insight you can get with 90% in a system or category, and then if the data is inconclusive, keep drilling and mapping until you get into the 95% to 98% accuracy range.

Spendata was also designed from the ground up to run locally and entirely in the browser, because no one wants to wait for an overburdened server across a slow internet connection, and do so in real time … and by that we mean do real analysis in real time. Spendata can process millions of records a minute in the browser, which allows for real time data loads, cube definitions, category re-mappings, dynamically derived dimensions, roll-ups, and drill downs in real-time on any well-defined data set of interest. (Since most analysis should be department level, category level, regional, etc., and over a relevant time span, that should not include every transaction for the last 10 years because beyond a few years, it’s only the quarter over quarter or year over year totals that become relevant, most relevant data sets for meaningful analysis even for large companies are under a few million transactions.) The goal was to overcome the limitations of the first two generations of spend analysis solutions where the user was limited to drilling around in, and deriving summaries of, fixed (R)OLAP cubes and instead allow a user to define the segmentations they wanted, the way they wanted, on existing or newly loaded (or enriched federated data) in real time. Analysis is NOT a fixed report, it is the ability to look at data in various ways until you uncover an inefficiency or an opportunity. (Nor is it simply throwing a suite of AI tools against a data set — these tools can discover patterns and outliers, but still require a human to judge whether a process improvement can be made or a better contract secured.)

Spendata was built as a third generation spend analysis solution where

  • data can be loaded and processed at any point of the analysis
  • the schema is developed and modified on the fly
  • derived dimensions can be created instantly based on any combination of raw and previously defined derived dimensions
  • additional datasets from internal or external sources can be loaded as their own cubes, which can then be federated and (jointly) drilled for additional insight
  • new dimensions can be built and mapped across these federations that allow for meaningful linkages (such as commodities to cost drivers, savings results to contracts and purchasing projects, opportunities by size, complexity, or ABC analysis, etc.)
  • all existing objects — dimensions, dashboards, views (think dynamic reports that update with the data), and even workspaces can be cloned for easy experimentation
  • filters, which can define views, are their own objects that can be managed independently and can be, through Spendata‘s novel filter coin implementation, dragged between objects (and even used for easy multi-dimensional mapping)
  • all derivations are defined by rules and formula, and are automatically rederived when any of the underlying data changes
  • cubes can be defined as instances of other cubes, and automatically update when the source cube updates
  • infinite scrolling crosstabs with easy Excel workbook generation on any view and data subset for those who insist on looking at the data old school (as well as “walk downs” from a high-level “view” to a low-level drill-down that demonstrate precisely how an insight was found)
  • functional widgets which are not just static or semi-dynamic reporting views, but programmable containers that can dynamically inject data into pre-defined analysis and dimension derivations that a user can use to generate what-if scenarios and custom views with a few quick clicks of the mouse
  • offline spend analysis is also available, in the browser (cached) or on Electron.js (where the latter is preferred for Enterprise data analysis clients)

Furthermore, with reference to all of the above, analyst changes to the workspace, including new datasets, new dashboards and views, new dimensions, and so on, are preserved across refresh, which is Spendata’s “inheritance” capability that allows individual analysts to create their own analyses and have them automatically updated with new data, without losing their work …

… and this was all in the initial release. (Which, FYI, no other vendor has yet caught up to. NONE of them have full inheritance or Spendata‘s security model. And this was the foundation for all of the advanced features Spendata has been building since its release six years ago.)

After that, as per our updates in 2018 and 2020, Spendata extended their platform with:

  • Unparalleled Security — as the Spendata server is designed to download ONLY the application (or Spendata‘s demo cubes and knowledge bases) to the browser, it has no access to your enterprise data;
  • Cube subclassing & auto-rationalization — power users can securely set up derived cubes and sub-cubes off of the organizational master data cubes for the different types of organizational analysis that are required, and each of these sub-cubes can make changes to the default schema/taxonomy, mappings, and (derived) dimensions, and all auto-update when the master cube, or any parent cube in the hierarchy, is updated
  • AI-Based Mapping Rule Identification from Cube Reverse Engineering — Spendata can analyze your current cube (or even a report of vendor by commodity from your old consultant) and derive the rules that were used for mapping, which you can accept, edit, or reject — we all know black box mapping doesn’t work (no matter how much retraining you do, as every “fix” all of a sudden causes an older transaction to be misclassified); but generating the right rules that can be understood and maintained by humans guarantees 100% correct classification 100% of the time (a generic sketch of the rule-derivation idea follows this list)
  • API access to all functions, including creating and building workspaces, adding datasets, building dimensions, filtering, and data export. All Spendata functions are scriptable and automatable (as opposed to BI tools with limited or nonexistent API support for key functions around building, distributing, and maintaining cubes).
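To illustrate the general rule-derivation idea only (this is emphatically not Spendata’s actual reverse-engineering algorithm, just a toy Python sketch with made-up data): given transactions that already carry a category, you can propose a human-readable vendor-to-category rule wherever one category overwhelmingly dominates a vendor’s transactions, and leave everything else for an analyst to review.

```python
from collections import Counter, defaultdict

# Toy illustration only -- NOT Spendata's actual reverse-engineering algorithm.
# Given transactions that already carry a category (from an old cube or a
# vendor-by-commodity report), propose human-readable "vendor -> category"
# rules wherever one category dominates a vendor's transactions.
def propose_vendor_rules(transactions, dominance=0.95):
    by_vendor = defaultdict(Counter)
    for t in transactions:
        by_vendor[t["vendor"]][t["category"]] += 1
    rules = {}
    for vendor, counts in by_vendor.items():
        category, hits = counts.most_common(1)[0]
        if hits / sum(counts.values()) >= dominance:
            rules[vendor] = category   # a rule an analyst can read, accept, edit, or reject
    return rules

sample = [
    {"vendor": "Acme Office", "category": "Office Supplies"},
    {"vendor": "Acme Office", "category": "Office Supplies"},
    {"vendor": "Skyway Freight", "category": "Logistics"},
]
print(propose_vendor_rules(sample))
# {'Acme Office': 'Office Supplies', 'Skyway Freight': 'Logistics'}
```

The rules that fall out of an exercise like this are transparent, so when one is wrong, a human can see why and fix it, which is the whole argument against black box mapping.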

However, as we noted in our introduction, even though this put Spendata leagues beyond the competition (as we still haven’t seen another solution with this level of security; cube subclassing with full inheritance; dynamic workspace, cube, and view creation; etc.), they didn’t stop there. In the rest of this article, we’ll discuss what’s new from the viewpoint of Spendata Competitors:

Spendata Competitors: 7 Things I Hate About You

Cue the Miley Cyrus, because if competitors weren’t scared of Spendata before, if they understand ANY of this, they’ll be scared now (as Spendata is a literal wrecking ball in analytic power). Spendata is now incredibly close to negating entire product lines of not just its competitors, but some of the biggest software enterprises on the planet, and 3.0 may trigger a seismic shift in how people define entire classes of applications. But that’s a post for a later day (but should cue you up for the post that will follow this one on just precisely what Spendata 2.2 really is and can do for you). For now, we’re just going to discuss seven (7) of the most significant enhancements since our last coverage of Spendata.

Dynamic Mapping

Filters can now be used for mapping — and as these filters update, the mapping updates dynamically. You can reclassify on the fly, in real time, in a derived cube using any filter coin, including one dragged out of a drill down in a view. Analysis is now a truly continuous process as you never have to go back and change a rule, reload data, and rebuild a cube to make a correction or see what happens under a reclassification.
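In generic dataframe terms (and only as an analogy; Spendata’s filter coins are their own in-browser implementation, and these column names are made up), a filter is a reusable predicate and a dynamic mapping is a classification that is re-derived against that predicate whenever the data or the filter changes:

```python
import pandas as pd

# Hypothetical transactions; the column names are made up for illustration.
tx = pd.DataFrame({
    "vendor": ["Acme Office", "Skyway Freight", "Acme Office"],
    "description": ["toner", "LTL shipment", "standing desk"],
    "amount": [120.0, 950.0, 480.0],
})

def furniture_filter(df):
    # A "filter" here is just a reusable predicate over the transactions.
    return df["description"].str.contains("desk|chair", case=False)

def remap(df, flt, category, column="category"):
    # Re-derive the mapping from scratch: anything the filter matches is reclassified.
    df = df.copy()
    df.loc[flt(df), column] = category
    return df

tx["category"] = "Office Supplies"              # starting classification
tx = remap(tx, furniture_filter, "Furniture")   # re-run whenever data or filter changes
print(tx)
```

The key property being illustrated is that the classification is a function of the filter, so updating the filter (or the data) updates the mapping without editing rules or rebuilding anything by hand.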

View-Based Measures

Integrate any rolled up result back into the base cube on the base transactions as a derived dimension. While this could be done using scripts in earlier versions, it required sophisticated coding skills. Now, it’s almost as easy as a drag-and-drop of a filter coin.
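The equivalent operation in generic dataframe terms (again just an analogy with hypothetical columns, not Spendata’s drag-and-drop mechanics) is a roll-up that gets written back onto every base transaction as a derived column, which can then drive further derivations:

```python
import pandas as pd

tx = pd.DataFrame({
    "category": ["MRO", "MRO", "IT", "IT", "IT"],
    "amount":   [100.0, 300.0, 250.0, 250.0, 500.0],
})

# Roll spend up by category, then push the rolled-up result back onto
# every base transaction as a derived column (a "view-based measure").
tx["category_total"] = tx.groupby("category")["amount"].transform("sum")
tx["share_of_category"] = tx["amount"] / tx["category_total"]
print(tx)
```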

Hierarchical Dashboard Menus

Not only can you organize your dashboards in menus and submenus and sub-sub menus as needed, but you can easily bookmark drill downs and add them under a hierarchical menu — which makes it super easy to create point-based walkthroughs that tell a story — and then output them all into a workbook using Spendata‘s capability to output any view, dashboard, or entire workspace as desired.

Search via Excel

While Spendata eliminates the need for Excel for Data Analysis, the reality is that Excel is where most organizational data is (unfortunately) stored, how most data is submitted by vendors to Procurement, and where most Procurement Professionals are the most comfortable. Thus, in the latest version of Spendata, you can drag and drop groups of cells from Excel into Spendata, and if you drop them into the search field, it auto-creates a RegEx “OR” that matches the inputs exactly and finds all matches in the cube you are searching against.
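The behaviour described is, conceptually, just escaping each pasted cell so it matches literally and joining the results with the regex alternation operator; a small Python sketch of that idea (not Spendata’s code):

```python
import re

# Values as they might arrive from a block of Excel cells (tab/newline separated).
pasted = "ACME GMBH\nO'Brien & Sons\n3M (Canada)"

# Escape each cell so it is matched literally, then join with "|" (the regex OR).
terms = [re.escape(cell.strip()) for cell in re.split(r"[\t\n]+", pasted) if cell.strip()]
pattern = re.compile("|".join(terms), flags=re.IGNORECASE)

vendors = ["Acme GmbH", "O'Brien & Sons Ltd", "Grainger", "3M (Canada) Inc"]
print([v for v in vendors if pattern.search(v)])
# ['Acme GmbH', "O'Brien & Sons Ltd", '3M (Canada) Inc']
```

Escaping matters: without it, a pasted value like "3M (Canada)" would be treated as a regex group instead of literal text.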

Perfect Star Schema Output

Even though Spendata can do everything any BI tool on the market can do, the reality is that many executives are used to their pretty PowerBI graphs and charts and want to see their (mostly static) reports in PowerBI. So, in order to appease the consultancies that had to support these executives who are (at least) a generation behind on analytics, Spendata encoded the ability to output an entire workspace to a perfect star schema (where all keys are unique and numeric) that is so good that many users see a PowerBI speed-up by a factor of almost 10. (As any analyst forced to use PowerBI will tell you, when you give PowerBI any data that is NOT in a perfect star schema, it may not even be able to load the data, and its ability to work with non-numeric keys at any speed faster than you remember from an 8088 is nonexistent.)
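For readers unfamiliar with the term, a “perfect star schema” is one central fact table whose foreign keys are all unique numeric surrogate keys into small dimension tables. A hypothetical pandas sketch of the flat-table-to-star transformation (Spendata generates this for you on export; this is only to show the shape of the output):

```python
import pandas as pd

flat = pd.DataFrame({
    "vendor":   ["Acme GmbH", "Grainger", "Acme GmbH"],
    "category": ["Office", "MRO", "Office"],
    "amount":   [120.0, 950.0, 480.0],
})

def make_dimension(df, column, key):
    # Build a dimension table with a numeric surrogate key and attach that key to the fact table.
    codes, uniques = pd.factorize(df[column])
    dim = pd.DataFrame({key: range(len(uniques)), column: uniques})
    df = df.drop(columns=[column]).assign(**{key: codes})
    return df, dim

fact, dim_vendor = make_dimension(flat, "vendor", "vendor_key")
fact, dim_category = make_dimension(fact, "category", "category_key")
print(fact)          # numeric keys only: the join shape BI tools handle fastest
print(dim_vendor)
print(dim_category)
```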

Power Tags

You might be thinking “tags, so what“. And if you are equating tags with a hashtag or a dynamically defined user attribute, then we understand. However, Spendata has completely redefined what a tag is and what you can do with it. The best way to understand it is as a Microsoft Excel cell on steroids. It can be a label. It can be a replica of a value in any view (that dynamically updates if the field in the view updates). It can be a button that links to another dashboard (or a bookmark to any drill-down filtered view in that dashboard). Or all of this. Or, in the next Spendata release, a value that forms the foundation for new derivations and measures in the workspace, just like you can reference a random cell in an Excel function. In fact, using tags, you can already build very sophisticated what-if analyses on-the-fly that many providers have to custom build in their core solutions (and take weeks, if not months, to do so) using the seventh new capability of Spendata, and usually do it in hours (at most).

Embedded Applications

In the latest version of Spendata, you can embed custom applications into your workspace. These applications can contain custom scripts, functions, views, dashboards, and even entire datasets that can be used to instantly augment the workspace with new analytic capability, and if the appropriate core columns exist, even automatically federate data across the application datasets and the native workspace.

Need a custom set of preconfigured views and segments for that ABC Analysis? No sweat, just import the ABC Analysis application. Need to do a price variance analysis across products and geographies, along with category summaries? No problem. Just import the Price Variance and Category Analysis application. Need to identify opportunities for renegotiation post M&A, cost reduction through supply base consolidation, and new potential tail spend suppliers? No problem, just import the M&A Analysis app into the workspace for the company under consideration and let it do a company A vs B comparison by supplier, category, and product; generate the views where consolidation would more than double supplier spend or where switching a product from a current supplier to a lower cost supplier would save more than 100K; and identify opportunities for bringing on new tail spend suppliers based upon potential cost reductions. All with one click. Not sure just what the applications can do? Start with the demo workspaces and apps, define your needs, and if the apps don’t exist in the Spendata library, a partner can quickly configure a custom app for you.
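As one concrete example of what such a preconfigured app wraps up, an ABC analysis is, at its core, a cumulative-spend segmentation; a bare-bones pandas version with made-up numbers (the 80%/95% breakpoints are a common convention, not Spendata’s fixed defaults):

```python
import pandas as pd

spend = pd.DataFrame({
    "supplier": ["S1", "S2", "S3", "S4", "S5"],
    "amount":   [500_000, 250_000, 150_000, 70_000, 30_000],
})

# Sort by spend, compute each supplier's cumulative share of total spend.
spend = spend.sort_values("amount", ascending=False)
spend["cum_share"] = spend["amount"].cumsum() / spend["amount"].sum()

# Classic convention: A = first ~80% of spend, B = next ~15%, C = the tail.
spend["class"] = pd.cut(spend["cum_share"], bins=[0, 0.80, 0.95, 1.0], labels=["A", "B", "C"])
print(spend)
```

The value of the app version is that the views, segments, and drill-downs around this segmentation come preconfigured, instead of every analyst rebuilding them by hand.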

And this is just the beginning of what you can do with Spendata. Because Spendata is NOT a Spend Analysis tool. That’s just something it happens to do better than any other analysis tool on the market (in the hands of an analyst willing to truly understand what it does and how to use it — although with apps, drag-and-drop, and easy formula definition through wizardly pop-ups, it’s really not hard to learn how to do more with Spendata than any other analysis tool).

But more on this in our next article. For The Times They Are a-Changin’.

1 Duey, Clutterbuck, and Howell keeps Dewey, Cheatem, and Howe on retainer … it’s the only way they can make sure you pay the inflated invoices if you ever wake up and realize how much you’ve been fleeced for …

The Power of Optimization-Backed Sourcing is in the Right Sourcing Mix Across Scales of Size and Service

the doctor has been pushing optimization-backed sourcing since Sourcing Innovation started in 2006. There are a number of reasons for this:

  • there is only one other technology that has repeatedly demonstrated savings of 10% or more
  • it’s the only technology that can accurately model total cost of ownership with complex cost discounts and structures
  • it’s the only technology that can minimize costs while adhering to carbon, risk, or other requirements
  • it’s one of only two technologies that can analyze cost / risk, cost / carbon, or other cost / x tradeoffs accurately

However, the real power of optimization-backed sourcing is how it can not only give you the right product mix, but the right mix across scales. This is especially true when doing sourcing events for national or international distribution or utilization. Without optimization, most companies can only deal with suppliers who can handle international distribution or utilization. This generally rules out regional suppliers and always rules out local suppliers, some of whom might be the best suppliers of goods or services to the region or locality. While one may be tempted to think local suppliers are irrelevant because they will struggle to deliver the economy of scale of a regional supplier and will definitely never reach the economy of scale of a national (or international) supplier, unit cost is just one component of the total lifecycle cost of a product or service. There’s transportation cost, tariffs, taxes, intermediate storage, and final storage (of which more will be required since you will need to make larger orders to account for longer distribution timelines), among other costs. So, in some instances, local and regional will be the overall lowest cost, and keeping them out of the mix increases costs (and sometimes increases carbon and risk as well).
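To make the mix-across-scales point concrete, here is a deliberately tiny, purely illustrative model (hypothetical numbers, written with the open-source PuLP library rather than any commercial sourcing optimizer): the national supplier has the lower unit price, but once freight into each region is added, the local supplier wins its region on total landed cost, and the optimal award is a mix.

```python
from pulp import LpProblem, LpMinimize, LpVariable, lpSum, value

regions = ["East", "West"]
suppliers = ["National", "LocalWest"]
demand = {"East": 1000, "West": 800}

# Hypothetical landed cost per unit = unit price + freight into the region.
cost = {
    ("National", "East"): 10.0 + 1.0,
    ("National", "West"): 10.0 + 4.0,   # long haul makes the 'cheap' supplier expensive here
    ("LocalWest", "West"): 11.5 + 0.5,
}
capacity = {"National": 2000, "LocalWest": 900}

prob = LpProblem("supplier_mix", LpMinimize)
x = {k: LpVariable(f"x_{k[0]}_{k[1]}", lowBound=0) for k in cost}

prob += lpSum(cost[k] * x[k] for k in cost)                       # minimize total landed cost
for r in regions:
    prob += lpSum(x[k] for k in cost if k[1] == r) == demand[r]   # meet regional demand
for s in suppliers:
    prob += lpSum(x[k] for k in cost if k[0] == s) <= capacity[s] # respect supplier capacity

prob.solve()
print({k: value(v) for k, v in x.items()})  # LocalWest takes the West region on landed cost
```

Scale this toy up to hundreds of items, dozens of suppliers across local, regional, and national tiers, and real freight, tariff, and carbon terms, and you have exactly the kind of model that cannot be evaluated in a spreadsheet but is routine for an optimization-backed sourcing platform.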

When it comes to services, the right multi-level mix can lead to savings of 30% or more in an initial event. the doctor has seen this many times over his career (consulting for many of the strategic sourcing decision optimization startups) because while the big international players can get competitive on hourly rates where they have a lot of resources with a skill set, when it comes to services, there are all-in costs to consider, which include travel to the client site and local accommodations. The thing with national and international services providers is that they tend to cluster all of their resources with a certain skill set in a handful of major locations. So their core IT resources (developers, architects, DBAs, etc.) will be in San Francisco and New York, their core Management consultants will be in Chicago and Atlanta, their core Finance Pros in Salt Lake City and Denver, etc. So if you need IT in Jefferson City, Missouri, Management in Winner, South Dakota, or accounting in Des Moines, Iowa, you’re flying someone in, putting them up at the highest star hotel you have, and possibly doubling the cost compared to a standard day rate.

However, simple product mix and services scenarios are not the only scenarios optimization-backed sourcing can handle. As per this article over on IndianRetailer.com, retailers need to back away from global sourcing and embrace regional (and even local) strategies for cost management, supply stability, and resilience. They are only going to be able to figure that out with optimization that can help them identify the right mix to balance cost and supply assurance, and when you need to do that across hundreds, if not thousands, of products, you can’t do that with an RFX solution and Microsoft Excel.

Furthermore, when you need to minimize costs when a price is fixed, like the price of oil or airline fuel, you need to optimize every related decision, like where to refuel, what service providers to contract with, how to transport it, etc. When it can cost up to $40,000 to fuel a 737 for a single flight (when prices are high), and you operate almost 7,000 flights per day with planes ranging from a Gulfstream that costs about $10,000 to refuel to a Boeing 747 that, in hard times, can cost almost $400,000 to refuel, you can be spending $60 Million a day on fuel as your fleet burns 10 Million gallons. Storing those 10 Million gallons, transporting those 10 Million gallons, and using that fuel to fuel 7,000 planes takes a lot of manpower and equipment, all of which has an associated cost. Hundreds of thousands of dollars of associated costs per day (on the low end), and tens of millions per year. Shaving off just 3% would save over a million dollars, easy. (Maybe two million.) However, the complexity of this logistics and distribution model is beyond what any sourcing professional can handle with traditional tools, but easy with an optimization-backed platform that can model an entire flight schedule, all of the local costs for storage and fueling, all of the distribution costs from the fuel depots, and so on. (This is something that Coupa is currently supporting with its CSO solution, which has saved at least one airline millions of dollars. Reach out to Ian Milligan for more information if this intrigues you or if you want to know how this model could be generalized to support global fleet management operations of any kind.)
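A quick back-of-the-envelope check of those numbers, using rough figures implied by the paragraph above (roughly $6/gallon fuel and roughly $100K/day of associated handling costs, which are assumptions for illustration, not reported data):

```python
# Back-of-the-envelope check using the rough figures above (assumptions, not data).
gallons_per_day = 10_000_000
price_per_gallon = 6.0                         # implied by ~$60M/day over 10M gallons
fuel_spend_per_day = gallons_per_day * price_per_gallon
print(f"fuel spend/day: ${fuel_spend_per_day:,.0f}")           # ~$60,000,000

handling_cost_per_day = 100_000                # "hundreds of thousands ... on the low end"
handling_cost_per_year = handling_cost_per_day * 365
savings_at_3_percent = handling_cost_per_year * 0.03
print(f"handling/year: ${handling_cost_per_year:,.0f}, 3% saving: ${savings_at_3_percent:,.0f}")
# ~$36.5M/year in handling, so 3% is ~$1.1M: 'over a million dollars, easy'
```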

In other words, Optimization-Backed Sourcing is going to become critical in your highly strategic / high spend categories as costs continue to rise, supply continues to be uncertain, carbon needs to be accounted for, and risks need to be managed.

A Truly Great Article on Transforming Legacy Procurement

If you’re a new occasional reader, you might think that one of the doctor‘s primary goals is to just rip big analyst firms and publications apart when they publish ridiculous results (based on ridiculous surveys) or ill-conceived articles with little to no good Procurement content (if we’re lucky), or wrong content (if we’re not) that, as far as the doctor is concerned, would have been just as good if they had unleashed an intern with no knowledge of procurement on Chat-GPT (and you all know what the doctor thinks of that!).

However, that’s just because, as Procurement is hitting the limelight (as a result of all the supply chain disasters we’ve been facing that they have been expected to deal with), coverage has increased significantly (to capitalize on the hot topic), and most of it is, frankly, NOT that good. However, every now and again there is a truly tremendous article published under the radar, and when the doctor finds one of those, he’s very happy to bring your attention to it. Especially when it’s written by a practitioner who obviously gets it.

In her article on From Tactical to Strategic: Transforming Legacy Procurement, the author reminds us that the majority of large scale transformations fail, that a major challenge for older companies is that they have no comprehensive view into global spend, that e-Procurement systems offer many fixes, but also that if they are not optimized for your specific business needs, you could be missing out on opportunities for better supplier partnerships and cost leadership.

This does not mean that you should build your own (overly) customized system, or insist that the systems support your current processes (before determining if those processes are better than the processes supported out-of-the-box by the new systems that have been developed based on typical best practices of the industries the vendor serves), but that the solution has to be appropriate to your industry and support some customization where you need it for specific products, services, or processes that make your business unique (but only those — don’t reinvent the wheel where you’re the same as everyone else).

The author then goes on to outline a three-phase approach to identifying, selecting, implementing, and, most importantly, maximizing adoption of the platform — which is an ultimate key to success.

the doctor highly recommends you read this article on going From Tactical to Strategic: Transforming Legacy Procurement.