Category Archives: Spend Analysis

Dear SaaS Provider, Where’s Your Substance? Being SaaSy is No Longer Enough.

As per our January article, Half a Trillion Dollars will be Wasted on SaaS Spend This Year, and, as per a recent article over on The CFO, CFOs are wising up to the hidden bill attached to SaaS and cloud, which might just be growing faster than the US National Debt (on a per capita basis).

As the CFO article notes, per-employee SaaS subscriptions alone are now costing businesses $2,000 (or more) annually on average, and that’s including ALL employees from the Janitor (who shouldn’t be using any SaaS) to the CEO (who likely doesn’t use any SaaS either and just needs a locally installed PowerPoint license).

To put this in perspective, that means a small company of only 1,000 people is spending 2 MILLION on SaaS (and a mid-size company of 10,000 people is spending 20 MILLION), most of it consumer-grade, and likely a good portion of it through B2B Software Marketplaces because it's easier for AP. If the average salary is 100K with 30K base overhead, that spend is costing the organization the equivalent of 15 (or 150) people, or a 1.5% increase in workforce, which is substantial for an organization that needs people to grow.
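The arithmetic above can be sketched as a quick back-of-the-envelope check (all figures are the illustrative numbers from the text, not benchmarks):

```python
# Back-of-the-envelope SaaS spend math, using the illustrative figures above.
SAAS_PER_EMPLOYEE = 2_000   # average annual SaaS spend per employee
AVG_SALARY = 100_000        # assumed average salary
BASE_OVERHEAD = 30_000      # assumed base overhead per employee

def saas_cost_in_headcount(employees: int) -> float:
    """Return how many fully loaded employees the annual SaaS bill equals."""
    total_saas = employees * SAAS_PER_EMPLOYEE
    fully_loaded_cost = AVG_SALARY + BASE_OVERHEAD
    return total_saas / fully_loaded_cost

print(saas_cost_in_headcount(1_000))   # ~15 people for a 1,000-person company
print(saas_cost_in_headcount(10_000))  # ~150 people for a 10,000-person company
```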

And the worst part is that a very significant portion of this spend is overspend or unnecessary spend, with many SaaS auditors and SaaS management specialists finding 33% (or more) overspend as a result of duplicate tools, unused licenses, and sometimes outright zombie subscriptions that just need to be cancelled. Plus, poor management and provisioning leads to unnecessary surcharges that are almost as bad as unused licenses.

There's no excuse for it, and CFOs are not going to put up with it anymore. SaaS Audit and Management tools are going to become a lot more common, and once the zombie subscriptions, unused licenses, and cloud subscriptions are rightsized, and these companies realize they are still spending at least 1,500 per employee on SaaS and cloud, they are going to start grouping tools by function and analyzing value. If there are two tools that do lead management, workforce management, or catalog management, one is going to go. More specifically, the one providing the least value to the organization.

So, dear SaaS Provider, it’s important to ask:

  • What's your substance?
  • How do you provide more hard-dollar value for that substance than your peers?
  • How do you measure it and prove it to the customer?
  • … and how do you make sure you're not the vendor that is cancelled during the audit?

And, dear organization that hasn't done a SaaS audit recently, why haven't you? You're sitting on 30% overspend in a category which is likely, with most of the spend split between departments and hidden on P-Cards and expense reports, $2,000 per employee and growing daily. You need to do the audit, rightsize your SaaS, and then centralize SaaS management and SaaS acquisition policy. It's not a minor expense; it's a major, business-altering outlay.

Analytics Is NOT Reporting!

We've covered analytics, and spend analysis, a lot on this blog, but given the recent surge in articles on analytics, and the large number that miss the point, it seems we have to remind you again that Analytics is NOT Reporting. (Which, of course, would be clear if anyone bothered to pick up a dictionary anymore.)

As defined by the Oxford dictionary, analytics is the systematic computational analysis of data or statistics and a report is a written account of something that has been observed, heard, done, or investigated. In simple terms, analysis is what is done to identify useful information and reporting is the process of displaying that information in a fancy-shmancy graph. One is useful, one is, quite frankly, useless.

A key requirement of analysis is the ability to do arbitrary systematic computational analysis of data as needed to find the information that you need when you need it. Not just a small set of canned analyses on discrete data subsets that become completely and utterly useless once they are run the first time and you get the initial result, which will NEVER change if the analysis can't change.

Nor is analysis a random AI application that applies a random statistical algorithm to bubble up, filter out, or generate a random "insight" that may or may not be useful from a Procurement viewpoint. Sometimes an outlier is indicative of fraud or a data error, and sometimes an outlier is just an outlier. Maybe the average transaction value with a services firm is 15,000 for the weekly bill, which makes a 3,000 transaction an outlier, but it's not fraud if the company only needed a cyber-security expert for one day to test a key system. In that case, the "insight" is useless.

As per our recent post on a true enterprise analytics solution, real analysis requires the ability to explore a hunch and find the answer to any question that pops up when it pops up. To build whatever cube is needed, on whatever dimensions are required, that rolls up data using whatever metrics are required to produce whatever insights are needed to determine if an opportunity is there and if it is worth being pursued. Quickly and cost-effectively in real-time. If you have to wait for a refresh, or spend days doing offline computation in Excel to answer a question that might only save you 20K, you’re not going to do it. (Three days and 6K of your time from a company perspective is not worth a 20K saving if that time spent preparing for a negotiation on a 10M category can save an extra 0.5%, which would equate to 50K. But if you can dynamically build a cube and get an answer in 30 minutes, that 30 minutes is definitely worth it if your hunch is right and you save 20K.)
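The break-even reasoning in the parenthetical above can be made explicit with a quick check (all figures are the illustrative numbers from the text; the 2K/day analyst cost is an assumption implied by "three days and 6K"):

```python
# Is an ad-hoc analysis worth the effort? Figures from the example above.
def analysis_worth_it(analyst_cost_per_day: float, days: float,
                      expected_saving: float, opportunity_saving: float) -> bool:
    """True if the expected saving beats both the analyst's cost and the
    opportunity cost of the best alternative use of that time."""
    effort_cost = analyst_cost_per_day * days
    return expected_saving > effort_cost + opportunity_saving

# Three days (6K of time) chasing a 20K saving, when that time could instead
# earn an extra 0.5% on a 10M category (50K): not worth it.
print(analysis_worth_it(2_000, 3, 20_000, 0.005 * 10_000_000))  # False

# Thirty minutes in a dynamic cube (~125 of time, negligible opportunity cost):
# worth it if the hunch pays off.
print(analysis_worth_it(2_000, 30 / (60 * 8), 20_000, 0))       # True
```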

Analysis is the ability to ask “what if” and pursue the answer. Now! Not tomorrow, next week, or next month on the cube refresh, or when the provider’s personnel can build that new report for you. Now! At any time you should be able to ask What if we reclassify the categories so that the primary classification is based on primary material (“steel”) and not usage (“electrical equipment”); What if the savings analysis is done by sourcing strategy (RFX, auction, re-negotiation, etc.) instead of contract value; and What if the risk analysis is done by trade lane instead of supplier or category. Analysis is the process of asking a question, any question, and working the data to get the answer using whatever computations are required. It’s not a canned report.

Analytics is doing, not viewing. And the basics haven’t changed since SI started writing about it, or publishing guest posts by the Old Greybeard himself. (Analytics I, II, III, IV, V, and VI.)

Spendata: A True Enterprise Analytics Solution

As we indicated in our last article, while Spendata is the absolute best at spend analysis, it’s not just a spend analysis platform. It’s a general-purpose data analytics platform that can be used for much more than spend analysis.

The current end-state vision for business data analytics is a “data lake” database with a BI front end. The Big X consultancies (aided and abetted by your IT department, which is only too eager to implement another big system) will try to convince you of the data paradise you’ll have if you dump all of your business data into a data lake. Unfortunately, reality doesn’t support the vision, because organizational data is created only to the extent necessary, never verified, riddled with errors from day one, and left to decay over time as it’s never updated. The data lake is ultimately a data cesspool.

Pointing a BI tool at the (dirty) lake will spice up the data with bars, pies, waves, scatters, multi-coloured geometric shapes, and so on, but you won’t find much insight other than the realization that your data is, in fact, dirty. Worse, a published BI dashboard is like a spreadsheet you can’t modify. Try mapping new dimensions, creating new measures, adding new data, or performing even the simplest modification of an existing dimension or hierarchy, and you’ll understand why this author likes to point out that BI should actually stand for Bullsh!t Images, not Business Intelligence.

So how does a spend analysis platform like Spendata end up being a general-purpose data analytics tool? The answer is that the mechanisms and procedures associated with spend analysis and spend analysis databases, specifically data mapping and dimension derivation, can be taken to the next level — extended, generalized, and moved into real time. Once those key architectural steps are taken, the system can be further extended with view-based measures, shared cubes where custom modifications are retained across refreshes, and spreadsheet-like dependencies and recalculation at database scale.

The result is an analysis system that can be adapted not only to any of the common spend analysis problems, such as AP/PO analysis or commodity-specific cubes with item-level price × quantity data, but also to savings tracking and sourcing and implementation plans. Extending the system to domains beyond spend analysis is simple: just load different data.

The bottom line is that to do real data analysis, no matter what the domain, you need:

  • the ability to extend the schema at any time
  • the ability to add new derived dimensions at any time
  • the ability to change mappings at any time
  • the ability to build derivations, data views, and mappings that are dependent on other derivations, mappings, views, inputs, linked datasets, and so on, with real-time “recalc”
  • the ability to create new views and reports relevant to the question you have … without dumping the data to Excel
  • … and preserve all of the above on cube data refreshes
  • … in your own copy of the cube so you don’t have to wait for anyone to agree
  • … and get an answer today, not on the next refresh next month when you’ve forgotten why you even had the question in the first place
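To make the derived-dimension and "recalc" requirements concrete, here is a minimal, purely illustrative sketch in plain Python (a toy, not Spendata's actual engine): derived dimensions are declared once and recomputed on demand when a mapping changes.

```python
class Cube:
    """Toy cube: rows plus derived dimensions that recompute on demand.
    Purely illustrative; not Spendata's actual engine."""
    def __init__(self, rows):
        self.rows = rows          # list of dicts (the base schema)
        self.derivations = {}     # name -> function(row)

    def derive(self, name, fn):
        # Add (or redefine) a derived dimension at any time.
        self.derivations[name] = fn
        self.recalc()

    def recalc(self):
        # Spreadsheet-style "recalc": reapply every derivation to every row.
        for row in self.rows:
            for name, fn in self.derivations.items():
                row[name] = fn(row)

cube = Cube([{"supplier": "Acme Steel", "amount": 12_000},
             {"supplier": "acme steel inc", "amount": 8_000}])

# Map both name variants to one supplier family.
family_map = {"Acme Steel": "ACME", "acme steel inc": "ACME"}
cube.derive("family", lambda r: family_map.get(r["supplier"], "OTHER"))

# Change the mapping later and recalc; no rebuild, no refresh, no Excel dump.
family_map["acme steel inc"] = "UNMAPPED"
cube.recalc()
```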

You don't get any of that from a spend analysis solution, or a BI solution, or a database pointing at a data lake. You only get that in a modern data analysis solution, which supports all of the above, and more, for any kind of data. A data analysis system works equally well across all types of numeric or set-valued data, including, but not limited to, sales data, service data, warranty data, process data, and so on.

As Spendata is a real data analysis solution, it supports all of these analyses with a solution that’s easier and friendlier to use than the spreadsheet you use every day. Let’s walk through some examples so you can understand what a data analysis solution really can do.

SALES ANALYSIS

Spend data consists of numerical amounts that represent the price, tax, duty, shipping, etc. paid for items purchased. Sales data consists of numerical amounts that represent the price, tax, duty, shipping, etc. paid for items sold.

They are basically the inverse of each other. For every purchase, there is a sale. For every sale, there is a purchase. So, there's absolutely no reason that you shouldn't be able to apply the exact same analysis (possibly in reverse) to sales data as you apply to spend data. That is, IF you have a proper data analysis tool. The latter is the big IF, because if you're using a custom tool that needs to map all data to a schema with fixed semantics, it won't understand the data and you're SOL.

However, since Spendata is a general-purpose data analysis tool that builds and maintains its schema on the fly, it doesn't care if the dataset is spend data or sales data; it's still transactional data and it's happy to analyze away. If you need the handholding of a workflow-oriented UI, that can also be configured out of the box using Spendata's new "app" capability.

Here are three types of sales analysis that Spendata supports better than CRM/Sales Forecasting systems, and that can’t be done at all with a data lake and a BI tool.

Sales Discount Variation Analysis Over Time By Salesperson … and Client Type

You run a sales team. Are your different salespeople giving the same mix of discounts by product type to the same types of customers by customer size and average sales size?

Sounds easy, right? Can't you simply plot the product/price ratio by month by salesperson in a bubble chart (where sales volume correlates to bubble size) against the average trend line and calculate which salespeople are off the most (in the wrong direction)? Sure, but how do you handle client type? You could add a "color" dimension, but when the bubbles overlap and blur, can you see it visually? Not likely. And how do you remember that a low-sales-volume customer is a strategic partner, and so has a special deal? Theoretically, you could add another column to the table "Salesperson, Product/Price Ratio, Client Type, Over/Under Average", and that would work, as long as you could pre-compute the average discount by Product/Price Ratio and Client Type.

And then you realize that unless you group by category, you have entirely different products in the same product/price ratio and your multi-stage analysis is worthless, so you have to go back and start again, only to find out that the bubble chart is only pseudo-useful (as you can't really figure it out visually: is that shade of pink from the overlapping red and white bubbles Fuchsia, Bright, or Barbie, and what does it mean?) and you will have to focus on the fixed table to extract any value at all from the analysis.

But then you'll realize that you still need to see monthly variations in the chart, meaning you want the ability to drag a slider or change the month and have the bubble chart update. Uh-oh, you forgot to individually compute all the amounts by month or select the slider graph! Back to square one, doing it all over again by month. Then you notice some customers have long-term fixed prices on some products, which messes up the average discount on those products, as the prices for these customers are not changing over time. You redo the work for the third (or is it the fourth?) time, and then you realize that your definitions of client type "large, medium, and small" are slightly off, as a client that should be in large is in medium and two that should be in small were made medium. Aaarrrggghhh!!!

But with Spendata, you simply create or modify dimensions in the cube to segment the data (customer type, product groups, etc.). You leverage a dynamic view-based measure by customer type to set the average prices per time period (used to calculate the discount). You then use filters to define the time range of interest, another view with filters to click through the months over time, a derived view to see the performance by quarter, and another by year. If you change the definition of client type (which customers belong to which client type), which products have fixed prices for which customers, which SKUs are the same type, the time range of interest, etc., you simply remap them and the entire analysis auto-updates.
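The segment-average comparison at the heart of this analysis can be sketched in plain Python (hypothetical data and field names; in Spendata this would be a view-based measure, no code required):

```python
# Sketch of discount-variance analysis: compute each salesperson's average
# discount per (product group, client type) segment and compare it to the
# segment average. Data and field names are hypothetical.
from collections import defaultdict
from statistics import mean

sales = [
    {"rep": "Ann", "group": "Widgets", "client_type": "small", "discount": 0.05},
    {"rep": "Ann", "group": "Widgets", "client_type": "small", "discount": 0.07},
    {"rep": "Bob", "group": "Widgets", "client_type": "small", "discount": 0.15},
    {"rep": "Bob", "group": "Widgets", "client_type": "large", "discount": 0.20},
]

# Segment = (product group, client type); average discount per segment.
by_segment = defaultdict(list)
for s in sales:
    by_segment[(s["group"], s["client_type"])].append(s["discount"])
segment_avg = {k: mean(v) for k, v in by_segment.items()}

# Each rep's average discount per segment, minus the segment average.
by_rep_segment = defaultdict(list)
for s in sales:
    by_rep_segment[(s["rep"], s["group"], s["client_type"])].append(s["discount"])
variance = {k: mean(v) - segment_avg[(k[1], k[2])]
            for k, v in by_rep_segment.items()}
# Bob gives small Widgets clients ~6 points more discount than the segment average.
```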

This flexibility and power (with no wasted effort) gives you a very deep analysis capability NOT available in any other data analysis platform. For example, you can find out with a few clicks that your "best" salesperson in terms of giving the lowest average discount is actually costing you the most. It turns out he's not serving any large customers (who get good discounts) and has several fixed-price contracts (which mess up the average discounts). So the discounts he's giving his small clients, while less than what large customers get, are significantly more than what other salespeople provide to other small customers. This is something you'd never know without the power of Spendata, as your data consultant would give up on the variance analysis at the global level because the salesman's overall ratio looked good.

Post-Merger White-Space Analysis

White space sales analysis is looking for spaces in the market where you should be selling but are not. For example, if you sell to restaurants, you could look at your sales by geography, normalized by the number of establishments by type or the sales of the restaurants by type in that geography. In a merger, you could measure your penetration at each customer for each of the original companies. You can find white space by looking at each customer (or customer segment) and measuring revenue per customer employee across the two companies. Where is one more effective than the other?

You might think this is no big deal because this was theoretically done during the due diligence and the opportunity for overlap was deemed to be there, as well as the opportunity for whitespace, and whatever was done was good enough. The reality couldn’t be further from the truth.

If the whitespace analysis was done with a standard analytics tool, it has all the following problems:

  • matching vendors were missed due to different name entries and missing IDs
  • vendors were not familied by parent (within industry, geography, etc.)
  • the improperly merged vendors were only compared against a target file built by the consultants, which itself misses vendors
  • i.e. it's poor, but no worse than you'd do with a traditional analytics tool

But with Spendata, these problems would be at least minimized, if not eliminated because:

  • Spendata comes with auto-matching capability
  • … that can be used to enrich the suppliers with NAICS categorization (for example)
  • Spendata comes with auto-familying capability so parent-child relationships aren’t missed
  • Spendata can load all of the companies from a firmographic database with their NAICS codes in a separate cube …
  • … and then federation can be used to match the suppliers in use with the suppliers in the appropriate NAICS category for the white space analysis

It’s thus trivial to

  1. load up a cube with organization A's sales by supplier (which can be the output from a view on a transaction database), and run it through a view that embeds a normalization routine so that all records that actually correspond to the same supplier (or parent-child where only the parent is relevant) are grouped into one line
  2. load up a cube with organization B's sales by supplier and do the same … and now you have exact matches between supplier names
  3. load up the NAICS code database, which is a list of possible customers
  4. build a view that pulls in, for each supplier in the NAICS category of interest, Org A spend, Org B spend, and total spend
  5. create a filter to show only zero-spend suppliers: there's the whitespace, 100% complete. Now send your sales teams after these.
  6. create a filter to show where your sales are less than expected (e.g. compared to comparable customers, or to Org A or Org B). This is additional whitespace where upselling or further customer penetration is appropriate.
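The zero-spend step at the core of the walkthrough can be sketched with plain Python sets (hypothetical, already-normalized names; in Spendata the federation and views above do this without code):

```python
# White-space sketch: customers in the target NAICS segment that neither
# organization is selling to. All names and amounts are hypothetical.
naics_segment = {"Acme Diner", "Bistro Blue", "Cafe Rouge", "Deli Delta"}
org_a_sales = {"Acme Diner": 50_000, "Cafe Rouge": 12_000}
org_b_sales = {"Bistro Blue": 30_000, "Cafe Rouge": 4_000}

# Step 4: Org A spend + Org B spend = total spend per possible customer.
combined = {c: org_a_sales.get(c, 0) + org_b_sales.get(c, 0)
            for c in naics_segment}

# Step 5: zero-spend entries are pure white space.
white_space = {c for c, total in combined.items() if total == 0}
print(white_space)  # {'Deli Delta'}
```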

Bill Rate Analysis

A smart company doesn’t just analyze their (total) spend by service provider, they analyze by service role and against the service role average when different divisions/locations are contracting for the same service that should be fulfilled by a professional with roughly the same skills and same experience level. Why? Because if you’re paying, on average, 150/hr for an intermediate DBA across 80% of locations and 250/hr across the remaining 20%, you’re paying as much as 66% too much at those remaining locations, with the exception being San Francisco or New York where your service provider has to pay their locals a cost-of-living top-up just so they can afford to live there.
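The rate comparison above can be sketched in a few lines (hypothetical rates and locations; a real analysis would also normalize for cost-of-living outliers like San Francisco and New York before flagging anyone):

```python
# Flag locations paying well over the median rate for the same role.
# Rates and locations are hypothetical.
from statistics import median

rates = {  # (role, location) -> negotiated hourly rate
    ("Intermediate DBA", "Austin"): 150,
    ("Intermediate DBA", "Denver"): 150,
    ("Intermediate DBA", "Tampa"): 155,
    ("Intermediate DBA", "Boston"): 250,
}

role_median = median(rates.values())
# Overpayment ratio for any location more than 25% over the median.
overpaying = {loc: rate / role_median - 1
              for (_role, loc), rate in rates.items()
              if rate > role_median * 1.25}
print(overpaying)  # Boston is ~64% over the median rate for the role
```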

By the same token, a smart service company is analyzing what they are getting by role, location, and customer and trying to identify the customers that are (the most) profitable and those that are the least (or unprofitable when you take contract size or support requirements into account), so they can focus on those customers that are profitable, and, hopefully, keep them happy with their better talent (and not just the newest turkey on the rafter).

However, just like sales discount variation analysis over time by client type, this is tough, as it's essentially a variation of that analysis, except you are looking at services instead of products, roles instead of client types, and customers instead of sales reps … and then, for your problem clients, looking at which service reps are responsible … so after you do the base analysis (using dynamic view-based measures), you're creating new views with new measures and filters to group by service rep and filter to those too far beyond a threshold. In any other tool, it would be nigh impossible for even an expert analyst. In Spendata, it's a matter of minutes. Literally.

And this is just the tip of the iceberg in terms of what Spendata can do. In a future article, we’ll dive into a few more areas of analysis that require very specialized tools in different domains, but which can be done with ease in Spendata. Stay tuned!

The Sourcing Innovation Source-to-Pay+ Mega Map!

Now slightly less useless than every other logo map that clogs your feeds!

1. Every vendor verified to still be operating as of 4 days ago!
Compare that to the maps that often have vendors / solutions that haven’t been in business / operating as a standalone entity in months on the day of release! (Or “best-of” lists that sometimes have vendors that haven’t existed in 4 years! the doctor has seen both — this year!)

2. Every vendor logo is clickable!
the doctor doesn’t know about you, but he finds it incredibly useless when all you get is a strange symbol with no explanation or a font so small that you would need an electron microscope to read it. So, to fix that, every logo is clickable so you can go to the site and at least figure out who the vendor is.

3. Every vendor is mapped to the closest standard category/categories!
Furthermore, every category has the standard definitions used by Sourcing Innovation and Spend Matters!
the doctor can't make sense of random categories like "specialists" or "collaborative" or "innovative", despises it when maps follow the new-age analyst/consultancy award trend and give you labels you just can't use, and gets red in the face when two very distinct categories (like e-Sourcing and Marketplaces, or Expenses and AP) are merged into one. Now, the doctor will also readily admit that this means that not all vendors in a category are necessarily comparable on an apples-to-apples basis, but that was never the case anyway, as most solutions in a category break down into subcategories and, for example, in Supplier Management (SXM) alone, you have a CORNED QUIP mash of solutions that could be focused on just a small subset of the (at least) ten different (primary) capabilities. (See the link on the sidebar that takes you to a post that indexes 90+ Supplier Management vendors across 10 key capabilities.)

Secure Download the PDF!  (or, use HTTP) [HTML]
(5.3M; Note that the Free Adobe Reader might choke on it; Preview on Mac or a Pro PDF application on Windows will work just fine)

4 Smart Technologies Modernizing Sourcing Strategy — Not Just Doctor Approved!

IBM recently published an article on 4 smart technologies modernizing sourcing strategies that was great for two reasons. One, they are all technologies that will greatly improve your sourcing; we'll explain why below. (We'll get to reason two at the end.)

Automation

Business Process Automation (BPA, or RPA — Robotic Process Automation) can optimize sourcing workflows as well as procurement workflows. With good categorization, demand forecasting, inventory management, price intelligence, templates, strategies, situational analysis (that qualitatively and quantitatively define when a strategy should be applied), and workflow, you can automate sourcing just as much as you can automate Procurement. You can eliminate all of the tactical and focus solely on the strategic analysis and decision making.

Blockchain

If you need to record information in a manner that can be publicly accessed and verified, such as to ensure that records for traceability can be independently verified, or to publicly record ownership, blockchain is a great technology, as it's ultra-secure. In Sourcing and Procurement, it can be used to track orders, payments, accounts, and more across global supply chains and multiple private and public parties.

Analytics

In addition to providing an organization with deep insights into their spend and (process level) performance, analytics engines and their “big data brains” provide real-time sourcing flexibility and visibility to enhance order management, inventory management, and logistics management. With proper intelligence, sourcing teams can understand and act on changes in the increasingly complex supply chain — as they happen.

AI

When deep data and analytics are paired with AI, the deep insights can improve forecasts, help identify risk, and provide suggestions for management.

And this brings us to the second reason the article was great: not once did it mention Gen-AI. Not once. As the doctor has been stating over and over, the classic analytics, optimization, and machine learning you have been ignoring for almost two decades will do wonders for your supply chain. (Blockchain is not always necessary, but will help in the right situation.)