Monthly Archives: March 2024

Will AI Make Us Irrelevant?

Short Answer: No. But Improper Use Will Make Us “Redundant”.

James Meads asks “Will AI in Procurement make us all irrelevant?”

So I will answer. No, it won’t! But it will make those companies who dive off the deep-end on Gen-AI irrelevant as their supply chains crumble with no real human intelligence there to save them when the next crisis hits. (See the myriad of rants here on Sourcing Innovation on just how over-hyped Open Gen-AI technology is and what you actually need to solve your problems.) Also, if we’re lucky, they will take a few providers with no actual platform capability (or Procurement value) down with them. (We need them to get out of the way for those platforms that have been offering real, deterministic, math-based, tried-and-true analytics, optimization, and machine learning solutions [for up to two decades] as there are many companies that need those solutions today.)

While custom-trained closed LLMs can seemingly do a lot of the work for us, they are NOT intelligent, they don’t know good from bad, they don’t know right from wrong, and they definitely don’t know critical from irrelevant. Thus, even though they can put together an NDA or RFP in seconds, it doesn’t mean it’s “fully functional”, that it protects you from all the risks, or that it captures all your requirements. Only an expert human can verify that. [And it doesn’t matter how good your “prompting” (which is all you can give it!) is, it can still fail, with a reasonably high probability to boot! There’s a reason that Tonkean, an intake automation/enterprise orchestration solution provider, ALWAYS does pre-validation on inputs and post-validation on outputs before showing you anything when it incorporates LLM technology: they know just how often it fails, and if the response doesn’t closely resemble something expected with very high probability, they won’t even show it to you.]

“AI”, or, more accurately, rules-based automation, will replace humans who are just doing tactical data processing, but it cannot replace humans who can do real strategic analysis, interpretation, and problem solving. Unfortunately for Procurement, given that 80%+ of the time is tactical data processing and fire-fighting, this will cause companies to think they can eliminate 80% of the Procurement team, even though the reality is that the Procurement team isn’t even addressing 20% of spend strategically in any given year, meaning that they should be augmenting the Procurement team with every useful technology they can find to try and get that spend coverage above 80%!

And if you want to know what companies are truly offering valuable “AI” (where the best you will get is Augmented Intelligence, level 2 on the 4 tier scale, as there is no such thing as Artificial Intelligence and many companies still don’t even offer Assisted Intelligence, level 1, and instead disguise their Artificial Idiocy in slick marketing), talk to an analyst who CAN do the math AND the programming.

First published on LinkedIn.

Even Forbes is Falling for the Gen-AI Garbage!

This recent article in Forbes on the Supply Chain Shift to Intelligent Technology is what inspired last week’s and this week’s rant because, while supply chains should be shifting to intelligent technology, the situations in which that is Gen-AI are still extremely rare (to the point that a blue moon is much more common). But what really got the doctor’s goat is the ridiculous claims as to what Gen-AI can do. Claims which are simultaneously maddening and saddening because, if they just left out Gen-AI, then everything they claimed is not only doable, but doable with fantastic results.

Of the first three claims, Gen-AI can only be used to solve one — and only partially.

Procurement and Regulatory Compliance
This is one example where a Closed Private Gen-AI LLM is half the battle — it can process, summarize, and highlight key areas of hundred page texts faster and better than prior NLP tech. But it can’t tell you if your current contracts, processes, efforts, or plans will meet the requirements. Not even close. In fact, no AI can — the best AI can just indicate the presence or absence of data, processes, or tech that are most likely to be relevant and then an intelligent human needs to make the decision, possibly only after obtaining appropriate expert Legal advice.
Manufacturing Efficiency
streamline production workflows? optimize processes? reduce errors? No, Hell No, and even the Joker wouldn’t make that joke! You want streamlining? You first have to do a deep process cycle time analysis, compare it to whatever benchmarks you can get, identify the inefficiencies, identify potential processes and tech for improvement, and implement them. Optimize processes? Detailed step by step analysis, identification of opportunities, expert process redesign, training, implementation, and monitoring. Reduce errors? No! People and tech do the processes, not Gen-AI — implement better monitoring, rules, and safeguards.
Virtual Supply Collaboration
A super-charged chatbot on steroids is NOT a virtual assistant. Now, properly sandwiched between classical AI and rules-based intelligence it can deal with 80% of routine inquiries, but not on its own, and it’s arguable if it’s even worth it when a well designed app can get the user to the info they need 10 times faster with just a couple of clicks. Supply chain communicating? People HATE getting a “robot” on a support line as much as you do, to the point some of us start screaming profanities at it if we don’t get a real operator within 10 seconds. Based on this, do you really think your supplier wants to talk to a dumb bot that has NO authority to make a decision (or, at least, should NEVER have the authority — though the doctor is sure someone’s going to be dumb enough to give the bot the authority … let’s just hope they can live with the inevitable consequences)?

And maybe if the article had stopped there the doctor would let it pass, but, first of all, it went on to state the following for “AI”, without clarifying that Gen-AI doesn’t fit in the process, leading us to conclude that, since the first part of the article is about Gen-AI, this part is too, and thus is totally wrong when it claims that:

  • “AI” understands dirty data
    • with about 70% accuracy where it counts IF you’re lucky; that’s about how accurate it is at identifying a supplier from your ERP/AP transaction records; an admin assistant will get about 98% accuracy by comparison
  • it can “confirm” inventories
    • all it can do is regurgitate what’s in the inventory system — that’s not confirmation!
  • it can identify duplicate materials
    • first it has to identify two records that are actually duplicates; and how likely do you think this is with a supplier mapping accuracy of 70%?
  • it can identify materials to be shared among facilities
    • well, okay, it can identify materials that are used across facilities and could be located in a central location — but how useful is that? it’s not, because, first of all, YOU ALREADY KNOW THIS, and, second, IT CAN’T DO SUPPLY CHAIN OPTIMIZATION — THAT’S WHAT A SUPPLY CHAIN OPTIMIZATION SOLUTION IS FOR! OPTIMIZATION!!! We’ll break it down syllabically for you so you know what to ask for. OP – TUH – MY – ZAY – SHUN!
  • it can recommend ideal storage locations
    • again, NO! This requires solving a very sophisticated optimization model it doesn’t have the data for, doesn’t know how to build, and definitely doesn’t know how to solve.
  • it can revamp outdated stocking policies
    • well, only the solution of a proper Inventory OPTIMIZATION Model that identifies the appropriate locations and safety stock levels can identify how these should be revamped
  • it can recommend order patterns by consumption and lead time
    • that’s classical curve fitting and trend projection (see the sketch below)
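(To be clear about what that means in practice, here is a minimal sketch of classical trend projection and a reorder-point calculation in Python; the consumption history, lead time, and safety-stock factor are all made-up illustrative numbers, not a recommendation.)

```python
import numpy as np

# Illustrative monthly consumption history for one SKU (hypothetical numbers)
months = np.arange(1, 13)
consumption = np.array([410, 425, 440, 438, 455, 470, 468, 485, 490, 505, 512, 520])

# Classical trend projection: fit a straight line (degree-1 polynomial) to the history
slope, intercept = np.polyfit(months, consumption, 1)
next_month = 13
forecast = slope * next_month + intercept

# Simple reorder-point logic using lead time and a safety-stock buffer (illustrative only)
lead_time_months = 0.5                                 # assumed supplier lead time
daily_demand = forecast / 30                           # rough daily rate from the monthly forecast
residuals = consumption - (slope * months + intercept)
safety_stock = 1.65 * np.std(residuals)                # roughly a 95% service buffer on fit error
reorder_point = daily_demand * (lead_time_months * 30) + safety_stock

print(f"Forecast for month {next_month}: {forecast:.0f} units")
print(f"Reorder when on-hand inventory falls below {reorder_point:.0f} units")
```

Swap the straight line for an exponential or seasonal curve as the data dictates; the point is that the fit is deterministic, explainable, and runs in milliseconds on a laptop.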

And, secondly, as the doctor just explained, most of what they were saying AI could do CAN’T be done with AI, and instead can only be done with analytics, optimization, and advanced mathematical models! (You know, the advanced tech (that works) that you’ve been ignoring for over two decades!)

The Gen-AI garbage is getting out of control. It’s time to stop putting up with it and start pushing back against any provider who’s trying to sell you this miracle cure silicon snake oil and show them the door. There are real solutions that work, and have worked, for two decades that will revolutionize your supply chain. You don’t need false promises and tech that isn’t ready for prime time.

Somedays the doctor just wishes he was the Scarecrow. Only someone without a brain can deal with this constant level of Gen-AI bullsh!t and not be stressed about the deluge of misinformation being spread on a daily basis! But then again, without a brain, he might be fooled by the slick salespeople that Gen-AI could give him one, instead of remembering the wise words of the True Scarecrow.

The Supply Chain is Full of Hidden Risks

A recent article in the Supply Chain Management Review by Avetta provided Insights for Procurement Leaders on tackling hidden risks in the supply chain. As per the article, supply chains are full of:

  • Geographic Vulnerabilities
  • Cybersecurity Threats
  • Ethical and Compliance Issues
  • Financial Instability
  • Environmental Recklessness

… and all of this poses a major risk to your supply chain. Avetta‘s baker’s dozen of recommendations are to:

  • conduct due diligence on all levels of suppliers
  • identify alternate sources
  • monitor geographical developments
  • prioritize cybersecurity measures
  • conduct regular risk assessments
  • foster a culture of cyber awareness
  • establish clear codes of conduct
  • regularly audit supply chain partners
  • prioritize transparency and accountability
  • conduct rigorous financial due diligence
  • monitor key financial indicators
  • prioritize sustainability initiatives
  • establish robust contingency plans

And these are all good, but most of the risk results from one thing:

  • lack of timely, accurate data on
    • the physical supply chain (people, plants, product, vehicles, etc.)
    • the financial supply chain (the financial state of suppliers, contractors, employees, etc.)
    • the information supply chain (completeness, accuracy, security, etc.)

This says that if you really want to tackle the hidden risks, you need to start with the following as you can’t tackle anything you can’t identify:

  • supply chain visibility — map every entity in your supply chain
  • external risk monitoring — whenever a geographical, political, environmental, or cyber disruption happens anywhere, and is reported, you need to detect that, identify all entities that may be affected, confirm which entities in your supply chain are affected, and take an appropriate mitigating action
  • cyber network monitoring — you need to monitor your entire network, every server, every client (desktop, laptop, tablet, AND cell phone), every router, every API end point, and every wire … your weakest link is your effective security
  • cross-system and account financial monitoring — money disappears when there are holes for it to fall into; holes exist when you have disconnected P-Card, e-Procurement, and AP systems, especially across divisions, and you aren’t correlating balances between transfers, bank accounts, and investments on at least a daily basis (a minimal sketch of that daily reconciliation follows this list)
  • activity monitoring — all waste, loss, and fraud is the result of a bad actor, whether or not the bad acting was intentional (hint: if the loss is significant, it usually is intentional; incompetence often only results in minor loss); but you can’t monitor everyone, even if you wholly operate in a jurisdiction where doing so is legal; but, when everything is digitized, you can monitor every action, whether or not it is in accordance with policy, flag everything that isn’t, and escalate any actions that are against policy that should be investigated
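As a concrete, deliberately simplified illustration of that daily correlation, here is a minimal Python sketch that reconciles the outflows recorded in disconnected P-Card, e-Procurement, and AP extracts against the actual movement in the bank balance and flags any unexplained gap. The file names, column names, and tolerance are hypothetical placeholders, not a prescription.

```python
import pandas as pd

TOLERANCE = 100.00  # flag any unexplained daily gap larger than this (hypothetical threshold)

# Hypothetical daily extracts: each has a 'date' column and an 'amount' column (outflows positive)
pcard = pd.read_csv("pcard_transactions.csv", parse_dates=["date"])
eproc = pd.read_csv("eproc_invoices.csv", parse_dates=["date"])
ap = pd.read_csv("ap_payments.csv", parse_dates=["date"])
bank = pd.read_csv("bank_balances.csv", parse_dates=["date"])  # columns: date, closing_balance

# Total recorded outflows per day across the three disconnected systems
recorded = (
    pd.concat([pcard, eproc, ap])
    .groupby("date")["amount"].sum()
    .rename("recorded_outflow")
)

# Actual daily cash movement implied by the bank's closing balances
bank = bank.sort_values("date").set_index("date")
bank["actual_outflow"] = -(bank["closing_balance"].diff())

# Any day where the two disagree by more than the tolerance deserves a look
recon = bank.join(recorded).fillna(0.0)
recon["gap"] = recon["actual_outflow"] - recon["recorded_outflow"]
flagged = recon[recon["gap"].abs() > TOLERANCE]
print(flagged[["recorded_outflow", "actual_outflow", "gap"]])
```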

As you detect issues and disruptions, you can start with standard mitigation actions, and as you identify patterns of commonality, you can identify additional contingency plans, which you should already have for every product or service that is critical to your operation.

Note that Sourcing Innovation has published a list of 55+ Supply Chain Risk Vendors that already have solutions that do a lot of this monitoring. There’s no excuse for your organization not to have at least an 80% solution in place today.

You Don’t Need Gen-AI to Revolutionize Procurement and Supply Chain Management — Classic Analytics, Optimization, and Machine Learning that You Have Been Ignoring for Two Decades Will Do Just Fine!

Open Gen-AI technology may be about as reliable as a career politician managing your Nigerian bank account, but somehow it’s won the PR war (since there is no longer any requirement to speak the truth or state actual facts in sales and marketing in most “first” world countries [where they believe Alternative Math is a real thing … and that’s why they can’t balance their budgets, FYI]) as every Big X, Mid-Sized Consultancy, and the majority of software vendors are pushing Open Gen-AI as the greatest revolution in technology since the abacus. the doctor shouldn’t be surprised, given that most of the turkeys on their rafters can’t even do basic math* (but yet profess to deeply understand this technology) and thus believe the hype (and downplay the serious risks, which we summarized in this article, where we didn’t even mention the quality of the results when you unexpectedly get a result that doesn’t exhibit any of the six major issues).

The Power of Real Spend Analysis

If you have a real Spend Analysis tool, like Spendata (The Spend Analysis Power Tool), simple data exploration will find you a 10% or more savings opportunity in just a few days (well, maybe a few weeks, but that’s still just a matter of days). It’s one of only two technologies that has been demonstrated, when properly deployed and used, to identify returns of 10% or more, year after year after year, since the mid 2000s (when the technology wasn’t nearly as good as it is today), and it can be used by any Procurement or Finance Analyst that has a basic understanding of their data.

When you have a tool that will let you analyze data around any dimension of interest — supplier, category, product — restrict it to any subset of interest — timeframe, geographic location, off-contract spend — and roll-up, compare against, and drill down by variance — the opportunities you will find will be considerable. Even in the best sourced top spend categories, you’ll usually find 2% to 3%, in the mid-spend likely 5% or more, in the tail, likely 15% or more … and that’s before you identify unexpected opportunities by division (who aren’t adhering to the new contracts), geography (where a new local supplier can slash transportation costs), product line (where subtle shifts in pricing — and yes, real spend analysis can also handle sales and pricing data — lead to unexpected sales increases and greater savings when you bump your orders to the next discount level), and even in warranty costs (when you identify that a certain supplier location is continually delivering low quality goods compared to its peers).
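To see how little machinery that exploration actually requires, here is a minimal pandas sketch (with hypothetical file and column names, and assuming the extract spans 2022 and 2023) that rolls spend up by category, compares year over year, and drills the biggest movers down to the suppliers driving the variance: the same roll-up / compare / drill-down loop a real spend analysis tool lets you do interactively and in real time.

```python
import pandas as pd

# Hypothetical transaction extract: supplier, category, region, invoice_date, amount
spend = pd.read_csv("ap_spend.csv", parse_dates=["invoice_date"])
spend["year"] = spend["invoice_date"].dt.year

# Roll up by category and year, then compute year-over-year variance
rollup = spend.pivot_table(index="category", columns="year",
                           values="amount", aggfunc="sum").fillna(0)
rollup["yoy_change"] = rollup[2023] - rollup[2022]   # assumes both years are in the extract
rollup["yoy_pct"] = rollup["yoy_change"] / rollup[2022].replace(0, float("nan"))

# Drill down into the biggest movers by supplier to see who is driving the variance
top_movers = rollup["yoy_change"].abs().nlargest(5).index
detail = (spend[spend["category"].isin(top_movers)]
          .groupby(["category", "supplier", "year"])["amount"].sum()
          .unstack("year", fill_value=0))

print(rollup.sort_values("yoy_change", ascending=False).head(10))
print(detail)
```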

And that’s just the Procurement spend … it can also handle the supply chain spend, logistics spend, warranty spend, utility and HR spend — and while you can’t control the HR spend, you can get a handle on your average cost by position by location and possibly restructure your hubs during expansion time to where resources are lower cost! Savings, savings, savings … you’ll find them ’round the clock … savings, savings, savings … analytics rocks!

The Power of Strategic Sourcing Decision Optimization

Decision optimization has been around in the Procurement space for almost 25 years, but it still has less than 10% penetration! This is utterly abysmal. It’s not only the only other technology that has been generating returns of 10% or more, in good times and bad, for any leading organization that consistently uses it, but the only technology that the doctor has seen that has consistently generated 20% to 30% savings opportunities on large multi-national complex categories that just can’t be solved with an RFQ and a spreadsheet, no matter how hard you try. (But an expert consultant will still claim they can with the old college try, if you pay their top analyst’s salary for a few months … and at, say, 5K a day, there goes three times any savings they identify.)

Examples where the doctor has repeatedly seen stellar results include:

  • national service provider contract optimization across national, regional, and local providers where rates, expected utilization, and all-in costs for remote resources are considered; with just an RFX solution, the usual approach is to go to all the relevant Big X and Mid-Sized Bodyshops and get their rate cards by role by location by base rate (with expenses picked up by the org) and all-in rate; calculate the expected local overhead rate by location; then, for each Big X / Mid-Sized provider – role – location, determine if the all-in rate or the base rate plus their overhead is cheaper and select that as the final bid for analysis; then mark the lowest bid for each role-location and determine the three top providers; then distribute the award between the three “top” providers in the lowest cost fashion; and, in big companies using a lot of contract labour, leave millions on the table because 1) sometimes the cheapest 3 will actually be the providers with the middle of the road bids across the board and 2) for some areas/roles, regional, and definitely local, providers will often be cheaper — but since the complexity is beyond manageable, this isn’t done, even though the doctor has seen multiple real-world events generate 30% to 40% savings, since optimization can handle hundreds of suppliers and tens of thousands of bids and find the perfect mix (even while limiting the number of global providers and the number of providers who can service a location); a toy version of this kind of model is sketched after these examples
  • global mailer / catalog production —
    paper won’t go away, and when you have to balance inks, papers, printing, distribution, and mailing — it’s not always local, or one country in a region, that minimizes costs; it’s a very complex sourcing AND logistics distribution problem that has to be optimized as a whole … and the real-world model gets dizzying fast unless you use optimization, which will find 10% or more savings beyond your current best efforts
  • build-to-order assembly — don’t just leave that to the contract manufacturer, when you can simultaneously analyze the entire BoM and supply chain, which can easily dwarf the above two models if you have 50 or more items, as savings will just appear when you do so
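(For the skeptics who hear “math” and tune out: a toy version of the service-provider award model described in the first example above fits in a few dozen lines using the open-source PuLP solver library. The rates, demand, and provider cap below are invented purely for illustration; a real model would add expected utilization, expenses, regional constraints, and far more bids.)

```python
import pulp

# Hypothetical all-in daily rates keyed by (provider, role, location)
rates = {
    ("GlobalCo",   "Developer", "Berlin"): 950, ("GlobalCo",   "Developer", "Austin"): 900,
    ("GlobalCo",   "Analyst",   "Berlin"): 700, ("GlobalCo",   "Analyst",   "Austin"): 680,
    ("RegionalAG", "Developer", "Berlin"): 820, ("RegionalAG", "Analyst",   "Berlin"): 610,
    ("LocalLLC",   "Developer", "Austin"): 780, ("LocalLLC",   "Analyst",   "Austin"): 590,
}
# Hypothetical demand in person-days per (role, location)
demand = {("Developer", "Berlin"): 400, ("Developer", "Austin"): 300,
          ("Analyst", "Berlin"): 250, ("Analyst", "Austin"): 200}
providers = sorted({p for p, _, _ in rates})
MAX_PROVIDERS = 2  # business rule: don't fragment the award across too many suppliers

model = pulp.LpProblem("service_award", pulp.LpMinimize)
award = pulp.LpVariable.dicts("days", rates, lowBound=0)      # days awarded per bid
use = pulp.LpVariable.dicts("use", providers, cat="Binary")   # is a provider selected at all?

# Objective: minimize total cost of the awarded days
model += pulp.lpSum(rates[k] * award[k] for k in rates)

# Cover every role/location demand exactly
for (role, loc), need in demand.items():
    model += pulp.lpSum(award[(p, r, l)] for (p, r, l) in rates
                        if r == role and l == loc) == need

# A provider can only receive volume if it is selected, and selections are capped
for p, r, l in rates:
    model += award[(p, r, l)] <= demand[(r, l)] * use[p]
model += pulp.lpSum(use[p] for p in providers) <= MAX_PROVIDERS

model.solve(pulp.PULP_CBC_CMD(msg=False))
print("Total cost:", pulp.value(model.objective))
for key, var in award.items():
    if var.value() and var.value() > 0:
        print(key, var.value())
```

The point is not this particular toy; it is that the solver, not the analyst, does the math, and the same formulation scales to hundreds of suppliers and tens of thousands of bids.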

… but yet, because it’s “math”, it doesn’t get used, even though you don’t have to do the math — the platform does!

Curve Fitting Trend Analysis

Dozens (and dozens) of “AI” models have been developed over the past few years to provide you with “predictive” forecasts, insights, and analytics, but guess what? Not a SINGLE model has outdone classical curve-fitting trend analysis — and NOT a single model ever will. (This is because all these fancy-smancy black box solutions do is attempt to identify the record/transaction “fingerprint” that contains the most relevant data and then attempt to identify the “curve” or “line” to fit it to all at once, which means the upper bound is a classical model that uses the right data and fits to the right curve from the beginning, without wasting an entire power plant’s worth of energy powering entire data centers as the algorithm repeatedly guesses random fingerprints and models until one seems to work well.)

And the reality is that these standard techniques (which have been refined since the 60s and 70s), which now run blindingly fast on large data sets thanks to today’s computing, can achieve 95% to 98% accuracy in some domains, with no misfires. A 95% accurate forecast on inventory, sales, etc. is pretty damn good and minimizes the buffer stock, and lead time, you need. Detailed, fine tuned, correlation analysis can accurately predict the impact of sales and industry events. And so on.

Going one step further, there exists a host of clustering techniques that can identify emergent trends in outlier behaviour as well as pockets of customers or demand. And so on. But chances are you aren’t using any of these techniques.
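For readers who want to see what that looks like without the buzzwords, here is a minimal scikit-learn sketch on made-up customer features: DBSCAN finds the dense “pockets” of similar behaviour and labels whatever fits no pocket as an outlier worth a human look. The features, parameters, and data are all illustrative assumptions.

```python
import numpy as np
from sklearn.preprocessing import StandardScaler
from sklearn.cluster import DBSCAN

rng = np.random.default_rng(42)

# Made-up feature matrix: one row per customer, columns = order frequency,
# average order value, and average days between order and payment
normal = rng.normal(loc=[12, 450, 30], scale=[2, 40, 5], size=(200, 3))
odd = rng.normal(loc=[2, 2500, 95], scale=[1, 300, 10], size=(8, 3))  # a handful of oddballs
X = np.vstack([normal, odd])

# Scale features so no single unit dominates the distance metric, then cluster
X_scaled = StandardScaler().fit_transform(X)
labels = DBSCAN(eps=0.7, min_samples=5).fit_predict(X_scaled)

# DBSCAN labels points that belong to no dense pocket as -1 (noise / outliers)
outliers = np.where(labels == -1)[0]
print(f"Found {labels.max() + 1} demand pockets and {len(outliers)} outliers")
print("Outlier row indices:", outliers)
```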

So given that most of you haven’t adopted any of this technology that has proven to be reliable, effective, and extremely valuable, why on earth would you want to adopt an unproven technology that hallucinates daily, might tell off your sensitive employees with hate speech, and even leak your data? It makes ZERO sense!

While we admit that someday semi-private LLMs will be an appropriate solution for certain areas of your business where large amounts of textual analysis are required on a regular basis, even these are still iffy today and can’t always be trusted. And the doctor doesn’t care how slick that chatbot is because if you have to spend days learning how to expertly craft a prompt just to get a single result, you might as well just learn to code and use a classic open source Neural Net library — you’ll get better, more reliable, results faster.

Keep an eye on the tech if you like, but nothing stops you from using the tech that works. Let your peers be the test pilots. You really don’t want to be in the cockpit when it crashes.

* And if you don’t understand why a deep understanding of university level mathematics, preferably at the graduate level, is important, then you shouldn’t be touching the turkey who touches the Gen-AI solution with a 10-foot pole!

Spendata: The Power Tool for the Power Spend Analyst — Now Usable By Apprentices as Well!

We haven’t covered Spendata much on Sourcing Innovation (SI), as it was only founded in 2015 and the doctor did a deep dive review on Spend Matters in 2018 when it launched (Part I and Part II, ContentHub subscription required), as well as a brief update here on SI where we said Don’t Throw Away that Old Spend Cube, Spendata Will Recover It For You!. the doctor did pen a 2020 follow up on Spend Matters on how Spendata was Rewriting Spend Analysis from the Ground Up, and that was the last major coverage. And even though the media has been a bit quiet, Spendata has been diligently working as hard on platform improvement over the last four years as they were the first four years and just released Version 2.2 (with a few new enhancements in the queue that they will roll out later this year). (Unlike some players which like to tack on a whole new version number after each minor update, or mini-module inclusion, Spendata only does a major version update when they do considerable revamping and expansion, recognizing that the reality is that most vendors only rewrite their solution from the ground up to be better, faster, and more powerful once a decade, and every other release is just an iteration of, and incremental improvement on, the last one.)

So what’s new in Spendata V 2.2? A fair amount, but before we get to that, let’s quickly catch you up (and refer you to the linked articles above for a deep dive).

Spendata was built upon a post-modern view of spend analysis where a practitioner should be able to take immediate action on any data she can get her hands on whenever she can get her hands on it and derive whatever insights she can get for process (or spend) improvement. You never have perfect data, and waiting until Duey, Clutterbuck, and Howell1 get all your records in order to even run your first report when you have a dozen different systems to integrate data from, multiple data formats to map, millions of records to classify, cleanse and enrich, and third party data feeds to integrate will take many months, if not a year, and during that year where you quest for the mythical perfect cube you will continue to lose 5% due to process waste, abuse, and fraud, and 3% to 15% (or more) across spend categories where you don’t have good management but could stem the flow simply by identifying them and putting in place a few simple rules or processes. And you can identify some of these opportunities simply by analyzing one system, one category, and one set of suppliers. And then moving on to the next one. And, in the process, Spendata automatically creates and maintains the underlying schema as you slowly build up the dimensions, the mapping, cleansing, and categorization rules, and the basic reports and metrics you need to monitor spend and processes. And maybe you can only do 60% to 80% piecemeal, but during that “piecemeal year”, you can identify over half of your process and cost savings opportunities and start saving now, versus waiting a year to even start the effort. When it comes to spend (related) data analysis, no adage is more true than “don’t put off until tomorrow what you can do today” with Spendata, because, and especially when you start, you don’t need complete or perfect data … you’d be amazed how much insight you can get with 90% in a system or category, and then if the data is inconclusive, keep drilling and mapping until you get into the 95% to 98% accuracy range.

Spendata was also designed from the ground up to run locally and entirely in the browser, because no one wants to wait for an overburdened server across a slow internet connection, and do so in real time … and by that we mean do real analysis in real time. Spendata can process millions of records a minute in the browser, which allows for real time data loads, cube definitions, category re-mappings, dynamically derived dimensions, roll-ups, and drill downs in real-time on any well-defined data set of interest. (Since most analysis should be department level, category level, regional, etc., and over a relevant time span, that should not include every transaction for the last 10 years because beyond a few years, it’s only the quarter over quarter or year over year totals that become relevant, most relevant data sets for meaningful analysis even for large companies are under a few million transactions.) The goal was to overcome the limitations of the first two generations of spend analysis solutions where the user was limited to drilling around in, and deriving summaries of, fixed (R)OLAP cubes and instead allow a user to define the segmentations they wanted, the way they wanted, on existing or newly loaded (or enriched federated data) in real time. Analysis is NOT a fixed report, it is the ability to look at data in various ways until you uncover an inefficiency or an opportunity. (Nor is it simply throwing a suite of AI tools against a data set — these tools can discover patterns and outliers, but still require a human to judge whether a process improvement can be made or a better contract secured.)

Spendata was built as a third generation spend analysis solution where

  • data can be loaded and processed at any point of the analysis
  • the schema is developed and modified on the fly
  • derived dimensions can be created instantly based on any combination of raw and previously defined derived dimensions
  • additional datasets from internal or external sources can be loaded as their own cubes, which can then be federated and (jointly) drilled for additional insight
  • new dimensions can be built and mapped across these federations that allow for meaningful linkages (such as commodities to cost drivers, savings results to contracts and purchasing projects, opportunities by size, complexity, or ABS analysis, etc.)
  • all existing objects — dimensions, dashboards, views (think dynamic reports that update with the data), and even workspaces can be cloned for easy experimentation
  • filters, which can define views, are their own objects, can be managed as their own objects, and can be, through Spendata‘s novel filter coin implementation, dragged between objects (and even used for easy multi-dimensional mapping)
  • all derivations are defined by rules and formula, and are automatically rederived when any of the underlying data changes
  • cubes can be defined as instances of other cubes, and automatically update when the source cube updates
  • infinite scrolling crosstabs with easy Excel workbook generation on any view and data subset for those who insist on looking at the data old school (as well as “walk downs” from a high-level “view” to a low-level drill-down that demonstrates precisely how an insight was found)
  • functional widgets which are not just static or semi-dynamic reporting views, but programmable containers that can dynamically inject data into pre-defined analysis and dimension derivations that a user can use to generate what-if scenarios and custom views with a few quick clicks of the mouse
  • offline spend analysis is also available, in the browser (cached) or on Electron.js (where the latter is preferred for Enterprise data analysis clients)

Furthermore, with reference to all of the above, analyst changes to the workspace, including new datasets, new dashboards and views, new dimensions, and so on are preserved across refresh, which is Spendata’s “inheritance” capability that allows individual analysts to create their own analyses and have them automatically updated with new data, without losing their work …

… and this was all in the initial release. (Which, FYI, no other vendor has yet caught up to. NONE of them have full inheritance or Spendata‘s security model. And this was the foundation for all of the advanced features Spendata has been building since its release six years ago.)

After that, as per our updates in 2018 and 2020, Spendata extended their platform with:

  • Unparalleled Security — as the Spendata server is designed to download ONLY the application to the browser, or Spendata‘s demo cubes and knowledge bases, it has no access to your enterprise data;
  • Cube subclassing & auto-rationalization — power users can securely setup derived cubes and sub-cubes off of the organizational master data cubes for the different types of organizational analysis that are required, and each of these sub-cubes can make changes to the default schema/taxonomy, mappings, and (derived) dimensions, and all auto-update when the master cube, or any parent cube in the hierarchy, is updated
  • AI-Based Mapping Rule Identification from Cube Reverse Engineering — Spendata can analyze your current cube (or even a report of vendor by commodity from your old consultant) and derive the rules that were used for mapping, which you can accept, edit, or reject — we all know black box mapping doesn’t work (no matter how much retraining you do, as every “fix” all of a sudden causes an older transaction to be misclassified); but generating the right rules that can be human understood and human maintained guarantees 100% correct classification 100% of the time
  • API access to all functions, including creating and building workspaces, adding datasets, building dimensions, filtering, and data export. All Spendata functions are scriptable and automatable (as opposed to BI tools with limited or nonexistent API support for key functions around building, distributing, and maintaining cubes).

However, as we noted in our introduction, even though this put Spendata leagues beyond the competition (as we still haven’t seen another solution with this level of security; cube subclassing with full inheritance; dynamic workspace, cube, and view creation; etc.), they didn’t stop there. In the rest of this article, we’ll discuss what’s new from the viewpoint of Spendata Competitors:

Spendata Competitors: 7 Things I Hate About You

Cue the Miley Cyrus, because if competitors weren’t scared of Spendata before, if they understand ANY of this, they’ll be scared now (as Spendata is a literal wrecking ball in analytic power). Spendata is now incredibly close to negating entire product lines of not just its competitors, but some of the biggest software enterprises on the planet, and 3.0 may trigger a seismic shift in how people define entire classes of applications. But that’s a post for a later day (but should cue you up for the post that will follow this one on just precisely what Spendata 2.2 really is and can do for you). For now, we’re just going to discuss seven (7) of the most significant enhancements since our last coverage of Spendata.

Dynamic Mapping

Filters can now be used for mapping — and as these filters update, the mapping updates dynamically. You can reclassify in real time, on the fly, in a derived cube using any filter coin, including one dragged out of a drill down in a view. Analysis is now a truly continuous process as you never have to go back and change a rule, reload data, and rebuild a cube to make a correction or see what happens under a reclassification.
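For readers who have never had this, a conceptual analogue in pandas (this is emphatically not Spendata’s API, just an illustration of the idea): a filter is a reusable predicate, and dynamic mapping means the category assignment is re-derived from the current predicates whenever the data or a predicate changes, instead of being baked into a static rule file that forces a reload and rebuild.

```python
import pandas as pd

spend = pd.DataFrame({
    "supplier": ["Acme Print", "Acme Print", "Initech", "Globex"],
    "description": ["catalog run", "mailer", "temp labour", "toner"],
    "amount": [12000, 8000, 25000, 300],
    "category": ["Unclassified"] * 4,
})

# A "filter" is just a reusable predicate over the transactions
filters = {
    "Print & Mail": lambda df: df["supplier"].str.contains("Print")
                               | df["description"].str.contains("mailer"),
    "Contingent Labour": lambda df: df["description"].str.contains("labour"),
}

def remap(df, filters):
    """Re-derive the category column from the current filters (order matters)."""
    out = df.copy()
    for category, predicate in filters.items():
        out.loc[predicate(out), "category"] = category
    return out

spend = remap(spend, filters)          # initial mapping
filters["Office Supplies"] = lambda df: df["description"].str.contains("toner")
spend = remap(spend, filters)          # a filter changed, so the mapping simply re-derives
print(spend)
```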

View-Based Measures

Integrate any rolled up result back into the base cube on the base transactions as a derived dimension. While this could be done using scripts in earlier versions, it required sophisticated coding skills. Now, it’s almost as easy as a drag-and-drop of a filter coin.
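In generic analytics terms (a pandas analogue for illustration, not Spendata’s implementation), a view-based measure is the familiar pattern of computing a roll-up and broadcasting it back onto every underlying transaction so it can be used as a derived dimension or in further calculations:

```python
import pandas as pd

spend = pd.DataFrame({
    "category": ["MRO", "MRO", "IT", "IT", "IT"],
    "supplier": ["A", "B", "C", "C", "D"],
    "amount": [100.0, 300.0, 50.0, 150.0, 200.0],
})

# "View": total spend per category (the rolled-up result), broadcast back onto
# each base transaction, then used to derive a share-of-category measure
spend["category_total"] = spend.groupby("category")["amount"].transform("sum")
spend["share_of_category"] = spend["amount"] / spend["category_total"]
print(spend)
```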

Hierarchical Dashboard Menus

Not only can you organize your dashboards in menus and submenus and sub-sub menus as needed, but you can easily bookmark drill downs and add them under a hierarchical menu — makes it super easy to create point-based walkthroughs that tell a story — and then output them all into a workbook using Spendata‘s capability to output any view, dashboard, or entire workspace as desired.

Search via Excel

While Spendata eliminates the need for Excel for Data Analysis, the reality is that is where most organizational data is (unfortunately) stored, how most data is submitted by vendors to Procurement, and where most Procurement Professionals are the most comfortable. Thus, in the latest version of Spendata, you can drag and drop groups of cells from Excel into Spendata and if you drag and drop them into the search field, it auto-creates a RegEx “OR” that maintains the inputs exactly and finds all matches in the cube you are searching against.
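Mechanically, the trick is simple, and worth knowing even outside Spendata (this is a generic sketch of the technique, not their code): escape each pasted cell value so punctuation is matched literally, join the pieces with the regex alternation operator, and search the target column.

```python
import re
import pandas as pd

# Values as they might arrive from a block of pasted Excel cells (hypothetical)
pasted = ["ACME Corp.", "Initech (EMEA)", "Globex, Inc."]

# Escape each value so punctuation is matched literally, then OR them together
pattern = "|".join(re.escape(v) for v in pasted)

suppliers = pd.Series(["ACME Corp.", "ACME Corporation", "Initech (EMEA)", "Hooli"])
matches = suppliers[suppliers.str.contains(pattern, regex=True)]
print(pattern)
print(matches)
```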

Perfect Star Schema Output

Even though Spendata can do everything any BI tool on the market can do, the reality is that many executives are used to their pretty PowerBI graphs and charts and want to see their (mostly static) reports in PowerBI. So, in order to appease the consultancies that had to support these executives that are (at least) a generation behind on analytics, they encoded the ability to output an entire workspace to a perfect star schema (where all keys are unique and numeric) that is so good that many users see a PowerBI speed up by a factor of almost 10. (As any analyst forced to use PowerBI will tell you, when you give PowerBI any data that is NOT in a perfect star schema, it may not even be able to load the data, and its ability to work with non-numeric keys at a speed faster than you remember on an 8088 is nonexistent.)
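For readers who have not had to feed PowerBI, a “perfect star schema” just means one dimension table per attribute with a unique numeric surrogate key, and a fact table that carries only those keys plus the measures. A minimal, generic pandas sketch of the transformation (not Spendata’s exporter) looks like this:

```python
import pandas as pd

flat = pd.DataFrame({
    "supplier": ["Acme", "Acme", "Globex"],
    "category": ["MRO", "IT", "IT"],
    "amount": [100.0, 250.0, 400.0],
})

def make_dimension(df, column, key_name):
    """Build a dimension table with a unique numeric surrogate key for each value."""
    dim = df[[column]].drop_duplicates().reset_index(drop=True)
    dim[key_name] = dim.index + 1  # numeric surrogate key
    return dim[[key_name, column]]

dim_supplier = make_dimension(flat, "supplier", "supplier_key")
dim_category = make_dimension(flat, "category", "category_key")

# The fact table keeps only numeric keys and measures
fact = (flat.merge(dim_supplier, on="supplier")
            .merge(dim_category, on="category")
            [["supplier_key", "category_key", "amount"]])

print(dim_supplier, dim_category, fact, sep="\n\n")
```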

Power Tags

You might be thinking “tags, so what”. And if you are equating tags with a hashtag or a dynamically defined user attribute, then we understand. However, Spendata has completely redefined what a tag is and what you can do with it. The best way to understand it is as a Microsoft Excel Cell on Steroids. It can be a label. It can be a replica of a value in any view (that dynamically updates if the field in the view updates). It can be a button that links to another dashboard (or a bookmark to any drill-down filtered view in that dashboard). Or all of this. Or, in the next Spendata release, a value that forms the foundation for new derivations and measures in the workspace, just like you can reference a random cell in an Excel function. In fact, using tags, you can already build very sophisticated what-if analyses on-the-fly that many providers have to custom build in their core solutions (and take weeks, if not months, to do so) using the seventh new capability of Spendata, and usually do it in hours (at most).

Embedded Applications

In the latest version of Spendata, you can embed custom applications into your workspace. These applications can contain custom scripts, functions, views, dashboards, and even entire datasets that can be used to instantly augment the workspace with new analytic capability, and if the appropriate core columns exist, even automatically federate data across the application datasets and the native workspace.

Need a custom set of preconfigured views and segments for that ABC Analysis? No sweat, just import the ABC Analysis application. Need to do a price variance analysis across products and geographies, along with category summaries? No problem. Just import the Price Variance and Category Analysis application. Need to identify opportunities for renegotiation post M&A, cost reduction through supply base consolidation, and new potential tail spend suppliers? No problem, just import the M&A Analysis app into the workspace for the company under consideration and let it do a company A vs B comparison by supplier, category, and product; generate the views where consolidation would more than double supplier spend, save more than 100K on switching a product from a current supplier to a lower cost supplier; and opportunities for bringing on new tail spend suppliers based upon potential cost reductions. All with one click. Not sure just what the applications can do? Start with the demo workspaces and apps, define your needs, and if the apps don’t exist in the Spendata library, a partner can quickly configure a custom app for you.
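As one example of what such an app computes under the hood, here is a minimal price-variance sketch in generic pandas (hypothetical columns and figures, not the packaged Spendata application): for every product, compare the volume-weighted price each region pays against the best price anyone pays and size the harmonization opportunity.

```python
import pandas as pd

po_lines = pd.DataFrame({
    "product": ["widget", "widget", "widget", "gadget", "gadget"],
    "region": ["NA", "EMEA", "APAC", "NA", "EMEA"],
    "unit_price": [10.0, 12.5, 11.0, 44.0, 40.0],
    "quantity": [5000, 3000, 2000, 800, 1200],
})

# Volume-weighted average price paid per product and region
paid = (po_lines.assign(spend=lambda d: d["unit_price"] * d["quantity"])
        .groupby(["product", "region"], as_index=False)
        .agg(spend=("spend", "sum"), quantity=("quantity", "sum")))
paid["avg_price"] = paid["spend"] / paid["quantity"]

# Best price anyone pays for the product, and what each region could save at that price
paid["best_price"] = paid.groupby("product")["avg_price"].transform("min")
paid["opportunity"] = (paid["avg_price"] - paid["best_price"]) * paid["quantity"]

print(paid.sort_values("opportunity", ascending=False))
```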

And this is just the beginning of what you can do with Spendata. Because Spendata is NOT a Spend Analysis tool. That’s just something it happens to do better than any other analysis tool on the market (in the hands of an analyst willing to truly understand what it does and how to use it — although with apps, drag-and-drop, and easy formula definition through wizardly pop-ups, it’s really not hard to learn how to do more with Spendata than any other analysis tool).

But more on this in our next article. For The Times They Are a-Changin’.

1 Duey, Clutterbuck, and Howell keeps Dewey, Cheatem, and Howe on retainer … it’s the only way they can make sure you pay the inflated invoices if you ever wake up and realize how much you’ve been fleeced for …