Category Archives: Spend Analysis

Spend Analysis IV: User-Defined Measures, Part 1

Today’s post is from Eric Strovink of BIQ.

A “measure” is a quantity that’s computed for you in an analysis dataset — for example, spend, count of transactions, and so on. There could be many measures in a dataset, such as multiple currencies, or entirely different quantities such as number of units.

Measures are derived from the data supplied, and rolled up (typically summed) to hierarchical totals. Sometimes, however, you want and need control over how (and when) the measure is calculated. Such measures are termed “user-defined” measures.

Let’s first dispense with the usual definition of user-defined measures — namely, eye candy that has no reality outside of the current display window. You can identify eye candy by looking for the little asterisk in the User Manual that says “certain operations” aren’t possible on a user-defined measure. That’s the tip-off that the tool isn’t really creating the measure at all — it’s just computing it on the fly, as needed, for the current display. In order to be truly useful, user-defined measures must have reality at all drillpoints (all “nodes”) in the dataset, at all times, so they can be used freely in analysis functions, just like “regular” measures. It’s no wonder that many “analysis” products avoid performing the millions of computations required to do this properly, preferring instead to do the handful of computations required to pass casual inspection during the sales process. You’ll discover once you dive into the product that its “user-defined” measures are useless; but by then it’s too late.

There are two kinds of user-defined measures:
(1) post-facto computations that are performed after transactional roll-up (the usual definition), and
(2) those that are performed during transactional roll-up, which we’ll consider here.


[Image: the Plan1 and Plan2 savings measures computed in the dataset (click to enlarge)]

In the above example there are two savings scenarios identified, “Plan1” and “Plan2”. Plan 1 is a 10% savings scenario, and Plan 2 is a 20% savings scenario. However, this savings plan is complex, because it is a real savings plan: it applies only to spend with certain vendors (24 of 30,000), and only in certain commodity categories. Thus, as you can see from the numbers, the savings are never a flat “10% or 20% of the total regardless of what the total might be” (and some totals aren’t reduced at all), because the reductions apply only to the transactions that match the vendor and commodity filter.

So how was this done? In order to compute accurate Plan1 and Plan2 amounts at every drillpoint (i.e. every line item in every dimension), the filter criteria must be applied to each transaction as it is being considered for roll-up. And, since the percentage is likely a dynamic parameter (able to be changed by the user in real time), and since the filter is likely also to be dynamic (“I would like to add (subtract) this vendor or commodity to (from) the filter”), the cube can’t be “pre-computed” as many OLAP systems do. In fact, the roll-up has to occur in real time, from scratch; and it has to involve decision-making at every transaction. Here is the fragment of decision-making code that computes the Plan1 measure:

    if (FastCompare(User.NewFamily.Filterset))
        addto($Plan1$, $TransMeasure.Amount$ * (100 - User.VendorSpendReduction1.Plan1SavingsPercent) / 100);
    else
        addto($Plan1$, $TransMeasure.Amount$);

Note that this fragment resembles a real program (because it is), and it could be arbitrarily complex (because it might need to be). However, it was built by a user (with aid from an integrated program development environment), and it is compiled (on the fly, in real time) by the system into custom p-code[1] that executes extremely quickly[2]. The result is two additional measures that are calculated without noticeable delay.
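For readers who want the mechanics, here is a minimal sketch in Python of how a during-roll-up measure like Plan1 can be accumulated at every drillpoint as each transaction is considered. All names and the data are hypothetical; this illustrates the idea only, not BIQ’s actual implementation.

```python
from collections import defaultdict

def roll_up_plan1(transactions, plan1_filter, savings_pct):
    """Accumulate Amount and Plan1 at every node on each transaction's
    hierarchy path, so the user-defined measure exists at all drillpoints."""
    totals = defaultdict(lambda: {"Amount": 0.0, "Plan1": 0.0})
    for txn in transactions:
        amount = txn["amount"]
        # The decision is made per transaction, mirroring the filter test
        # in the code fragment above
        if plan1_filter(txn):
            plan1 = amount * (100 - savings_pct) / 100
        else:
            plan1 = amount
        # Roll up to every node on the path, plus the grand total ()
        path = txn["path"]
        for depth in range(len(path) + 1):
            node = path[:depth]
            totals[node]["Amount"] += amount
            totals[node]["Plan1"] += plan1
    return totals

# Hypothetical transactions: only Acme spend is in the savings plan
txns = [
    {"amount": 100.0, "path": ("IT", "PCs"), "vendor": "Acme"},
    {"amount": 50.0,  "path": ("IT", "PCs"), "vendor": "Other"},
]
in_plan = lambda t: t["vendor"] == "Acme"
totals = roll_up_plan1(txns, in_plan, savings_pct=10)
```

Because the filter and the percentage are plain parameters, changing either one simply triggers a fresh roll-up, which is why the cube cannot be pre-computed.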

Although it might be too much to expect a non-technical user of a spend analysis system to produce a code fragment such as the above, the cube nevertheless can be delivered to that user with the Plan1 and Plan2 measures in place, allowing him to alter both the filter parameters (“User.NewFamily.Filterset”) and the savings percentages (“User.VendorSpendReduction1.Plan1SavingsPercent”) without having to understand or modify the code fragment in any way.

Next installment: User-Defined Measures, Part 2, in which I show how the “simple” case of post-facto user-defined measures can yield surprising and interesting results when combined with another critical concept, dynamic reference filters.

Previous Installment: Crosstabs Aren’t “Analysis”

[1] The p-code instructions in this case are designed to maximize performance while minimizing instruction count.
[2] BIQ executes 50M p-code instructions per second on an ordinary PC.


Rosslyn Analytics – Building an Analytics Visibility Platform The Masses Will Rally Around

In Part I, I discussed how impressed I was with Rosslyn Analytics’ spend analysis platform and their overall approach to the problem compared to many of the spend analysis providers in the space, and how I saw vision, clarity, and execution in their offering. In this post, I am going to discuss their current platform capabilities and overview a few of the enhancements coming over the next few months.

Spend Basics: As I mentioned in my last post, their platform integrates with over 30 standard ERP/MRP systems out of the box, and they can add additional systems with about a day of effort, thanks to a rules-based integration platform that lets them quickly bring new systems on-line. Their rules-based system allows data to be cleansed, normalized, and enriched in one step during extraction, which lets you get straight to the analytics. And they can automatically identify dimensions (categories). Like most platforms, their analytics are report (and dashboard) driven, but they have an ever-growing library of reports; you can view data on (and drill down into) any dimension hierarchy you choose; and they are building in the ability to add user-defined parameters and indexes to certain classes of reports in the next release.

Accounts Payable Support: Not only do they suck in invoice data, but they have rule-sets that automatically detect overpayments and generate reports that alert you to duplicate invoices, duplicate charges across invoices, duplicate payments, overpayments, payments to the wrong supplier, and other discrepancies. Some customers have recovered the cost of an annual license in a month with this feature alone. They also have similar tax reporting capabilities: their tax reports will detect overpayments, underpayments and, more importantly, missed payments that could cause import/export problems for you down the road.
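As a rough illustration of the kind of rule described above, the sketch below flags exact duplicate invoices in a small table. The field names and data are hypothetical, and a real rule-set would of course cover fuzzier matches (near-identical amounts, transposed invoice numbers, and so on).

```python
import pandas as pd

# Hypothetical invoice extract: two rows are an exact duplicate pair
inv = pd.DataFrame({
    "supplier":   ["Acme", "Acme", "Beta"],
    "invoice_no": ["A-100", "A-100", "B-200"],
    "amount":     [500.0, 500.0, 75.0],
})

# Flag every row that shares supplier, invoice number, and amount
# with another row (keep=False marks all members of a duplicate group)
dupes = inv[inv.duplicated(subset=["supplier", "invoice_no", "amount"],
                           keep=False)]
```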

Contracts Support: Integrated with their AP reporting / overpayment detection capabilities, their contracts module allows you to upload your contracts or integrate with an external contract repository. Either way, once you define the appropriate meta-data and triggers, the platform takes care of the rest.

Savings Analysis: Working with global partners, they have developed a savings module, complete with McKinsey-esque bubble charts, that identifies your top savings opportunities based upon your supply base, spend, and market indices. While the current version does not allow you to define your own customized savings plan, the next version will allow you to define your own indexes and support parameters that will allow you to tailor the opportunity reports to your company. And while this still won’t give you the control that user-defined measures will, for your average sourcing professional who’s still making seat-of-the-pants decisions based on total spend and instinct, it’s a great step in the right direction, as it’s the first step to truly bringing spend intelligence to the masses in your average organization.

Sustainability Support: While not yet available, Rosslyn is working hard to integrate a large number of external data feeds that go beyond standard price and risk indices and include carbon and sustainability data. Building on this data, and their ability to do trend analysis (which is currently built in for supplier and category data), they’re in the process of creating a sustainability tracking and reporting solution that will be on-line early next quarter.

All in all, it’s a great platform for any organization starting on its supply management journey or stuck in a rut because of the limited capabilities and reach of traditional on-premise platforms built with a single user in mind. And when you’re ready, you can use it to supercharge the performance of your power-analysts, as they will have a normalized, cleansed, and enriched data source to start with in the application of their stand-alone analyst tools.


There’s More to Cost than Cost Analysis

While it was exceptionally well written, I was a little disappointed with this article on “uncovering hidden costs” over on SupplyManagement.com.

The article made the acute observation that there is no fixed arithmetic formula linking the cost of producing the goods and services sold to us and the price charged for them: sellers charge what the market will bear, and to break down a supplier’s figures we need to know the proportion of each area of cost that the goods or services are likely to attract. It did a great job of describing the different types of analysis one could bring to bear on costs, but a poor job of actually indicating how to reduce the “hidden” costs once found. The advice boiled down to “collaboration is key” which, while correct, doesn’t help you answer the important questions that will arise during the collaboration.

While questions like:

  • Can we give our suppliers access to our contracts to reduce cost?
  • Does our supplier have contracts we could access to cut cost?

are theoretically easy to answer (but not always easy to answer in practice due to the poor state of contract management in many organizations)

questions like:

  • What is the thing made of and what is happening in the market for that material?
  • Where are the cost drivers in these goods or services?
  • How can you buy that material most effectively?

are a little harder to answer.

You need to understand how to perform market intelligence, you need to understand how the goods are assembled (using virtual modelling environments like those offered by Apriori) or the services delivered, and you need to understand what innovative new technologies or processes could be applied to reduce those costs. And that’s more than you’re going to get from a purchase-price, open-book, or total absorption cost analysis. You have to start with a true Total Cost of Ownership and then dive in on each cost.
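As a toy illustration of that last point, a total cost of ownership roll-up makes the purchase price just one line among several, each of which can then be analyzed on its own. The cost categories and figures below are entirely hypothetical:

```python
# Hypothetical TCO roll-up: purchase price is only one component
tco_components = {
    "purchase_price": 800.0,
    "freight":        40.0,
    "duties_taxes":   60.0,
    "maintenance":    120.0,
    "disposal":       25.0,
}

tco = sum(tco_components.values())
# The largest component tells you where to dive in first
biggest = max(tco_components, key=tco_components.get)
```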


Spend Analysis III: Crosstabs Aren’t “Analysis”

Today’s post is from Eric Strovink of BIQ.

Pivot tables are great, no question about it. They’re also a pain in the neck to build, so any tool that builds a crosstab automatically is helpful. But crosstabs are where many “spend analysis” systems declare victory and stop.

Pivot tables are most useful when built in large clusters. Hundreds of them, for example, booked automatically down some dimension of interest (like Cost Center by Vendor, booked by Commodity). They’re also best when they’re created dynamically, inserted into existing Excel models, with the raw data readily available for secondary and tertiary analyses.
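The “booked down a dimension” idea can be sketched with pandas: one Cost Center by Vendor pivot per Commodity, generated in a single pass. The data and column names are illustrative only.

```python
import pandas as pd

# Illustrative spend transactions
df = pd.DataFrame({
    "Commodity":  ["IT", "IT", "IT", "Office", "Office"],
    "CostCenter": ["CC1", "CC1", "CC2", "CC1", "CC2"],
    "Vendor":     ["Acme", "Beta", "Acme", "Gamma", "Gamma"],
    "Spend":      [100.0, 50.0, 25.0, 10.0, 5.0],
})

# "Book" one Cost Center x Vendor crosstab per Commodity
books = {
    commodity: grp.pivot_table(index="CostCenter", columns="Vendor",
                               values="Spend", aggfunc="sum", fill_value=0)
    for commodity, grp in df.groupby("Commodity")
}
```

With hundreds of commodities this yields hundreds of pivots automatically, and the underlying rows stay available for secondary analyses.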

It’s also useful to see a breakdown of all dimensions by a single dimension — i.e., hundreds of thousands, even millions, of pivot table cells calculated automatically on every drill. For example, here’s MWBE spend, broken down against each of 30,000 nodes, re-calculated on every drill.


[Image: MWBE spend broken down against each of 30,000 nodes (click image to enlarge)]

Not to belabor the point, but there’s a big difference between (1) dumping a single crosstab to an HTML page, and (2) inserting hundreds of pivot tables into an existing Excel model, or calculating 120,000 crosstab cells automatically on every drill. The former is interesting. The latter supports serious analysis.
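The single-dimension breakdown described above can be sketched in a few lines: total spend and MWBE spend computed at every node of a dimension in one pass, ready to be recomputed on each drill. Data and column names are illustrative.

```python
import pandas as pd

# Illustrative transactions with an MWBE flag
df = pd.DataFrame({
    "Commodity": ["IT", "IT", "Office"],
    "Vendor":    ["Acme", "Beta", "Acme"],
    "MWBE":      [True, False, True],
    "Spend":     [100.0, 50.0, 25.0],
})

# Total and MWBE spend at every Commodity node, then the share
total = df.groupby("Commodity")["Spend"].sum()
mwbe = (df[df["MWBE"]].groupby("Commodity")["Spend"].sum()
          .reindex(total.index, fill_value=0.0))
share = mwbe / total
```

Scaling this same pass to 30,000 nodes across all dimensions is what turns a display gimmick into an analysis capability.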

Are pivot tables the most useful way to present multidimensional data? Often, they aren’t. The Mitchell Madison Group’s Commodity Spend Report books top GL accounts, top Cost Centers, and top Vendors by Commodity, with a monthly split of spend, showing share of total category spend at each display line. Is a static report like this one “analysis?” No, of course not. But in this case the multi-page “report” isn’t static at all. It was built with a simple extract from the dataset, inserted directly into a user-defined Excel model. The output is trivially alterable by the user, putting analysis power directly into his or her hands. For example, with a simple tweak the report could just as easily be booked by Vendor, showing Commodity, Cost Center, and so on — or adapted to an entirely different purpose.
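The kind of extract behind such a report can be sketched as a top-N-by-category table with share of total category spend, which is then dropped into the user’s Excel model. The data here is illustrative, not the Mitchell Madison report itself.

```python
import pandas as pd

# Illustrative spend data
df = pd.DataFrame({
    "Commodity": ["IT", "IT", "IT", "Office", "Office"],
    "Vendor":    ["Acme", "Beta", "Gamma", "Delta", "Acme"],
    "Spend":     [100.0, 50.0, 10.0, 30.0, 20.0],
})

# Top 2 vendors per commodity...
spend = df.groupby(["Commodity", "Vendor"])["Spend"].sum()
top2 = (spend.groupby(level="Commodity", group_keys=False)
             .apply(lambda s: s.nlargest(2)))

# ...with each line's share of total category spend
cat_total = df.groupby("Commodity")["Spend"].sum()
share = top2 / cat_total.reindex(
    top2.index.get_level_values("Commodity")).to_numpy()
```

Rebooking the report by Vendor instead of Commodity is just a change to the grouping keys, which is the “simple tweak” the post refers to.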

What about matching externally-derived benchmarks to internal data? Is it useful to force-fit generic commodity benchmark data into an A/P dataset, as some spend analysis vendors try to do, and pretend that actionable information will result? Or is it more productive to load relevant and specific benchmark data into a flexible Excel model that you control, and into which you insert actuals from the dataset? The former approach impresses pie-in-the-sky analysts and bloggers. The latter approach produces concrete multi-page analyses, like this, that demonstrate how “best price” charged for an SKU, per contract, might not be “best price” after all (who ever heard of a PC whose price was flat for 12 months?)[1]


[Image: SKU price benchmark analysis (click image to enlarge)]

Next installment: User-Defined Measures

Previous Installment: Why Data Analysis is Avoided

[1] This example is based on disguised but directionally accurate data. A similar analysis on actual data identified hundreds of thousands of dollars in recoverable overcharges.


Rosslyn Analytics – Taking Analytics-Based Insights to the Masses

I have to say that I am quite impressed with Rosslyn Analytics’ spend analysis platform and their overall approach to the problem as compared to many of the spend analysis providers in the space, especially given the relative youth of the platform. And no, it has nothing to do with their UI, graphics, or other eye candy that, as you know, accounts for zero points of credit as far as I am concerned (despite the fact that some bloggers and analysts apparently go gaga for fancy graphics). Heck, I’m not even going to give them points for ease of use, because that’s a basic requirement of any modern supply management application.

So why am I impressed with Rosslyn? Vision. Clarity. Execution.

Rosslyn understands that sustainable results only emerge en masse when you enable the masses, that your platform has to evolve as needs evolve, that every organization is different, and that even if every organization weren’t different, you still couldn’t do everything yourself; and it shows in their platform and their delivery thereof.

Rosslyn believes that you don’t have spend visibility unless that visibility is available to, and understood by, everyone in the organization. As a result, they not only sell their platform using an unlimited access model, but designed the basics of their platform to be self-explanatory to the point that anyone — be it a procurement, finance, accounts payable, accounts receivable, or sales user — can load it up and intuitively find a report on the aspects of spend or organization performance relevant to them. Their vision is to provide the foundations of a platform that everyone can use to make more informed decisions.

Rosslyn also believes that you can’t make good decisions unless you have a complete set of relevant data. As a result, they have not only streamlined extract and upload for over 30 of the most common ERP and MRP systems, but they have also built a rules-based platform that allows them to integrate with new systems in under a day, on average. They are able to automatically extend your data with D&B data, other third party index data, and even your own proprietary indexes if you have them. And cleansing is built in, as it should be, because the point, as I have stressed over and over again, is analysis. As a result, you not only see an integrated view of your data, but you have the ability to augment it with non-spend data and give it context, because A/P and invoice data is just the beginning.

Rosslyn is committed to ensuring that each and every customer gets rapid results, and their execution speaks for itself. Not only are they able to get even the largest companies up, running, and fully operational in two to three weeks, with an average data refresh rate of less than 24 hours (maxing out at 72 hours, which compares very well with the industry average of 4-5 weeks for most of the spend visibility platforms on the market), but most of their customers see ROI in under 8 weeks. Furthermore, while most organizations start with only 10 to 20 users, they find that the number of users increases 10-fold within 3 months. In addition, they are constantly upgrading their fully cloud-based multi-tenant SaaS solution with new features, with major upgrades every quarter and minor upgrades every few weeks. Sometimes they add new reports and reporting capabilities “seemingly overnight” to meet the evolving needs of their user base.

And while the platform may not do everything you might want (but then again, what platform will?), you can take comfort in the facts that (1) it’s as good as the majority of the spend analysis platforms on the market and (2) Rosslyn understands that you can’t do everything. Thus, while some platforms are trying to broaden their footprint and do everything, Rosslyn is staying focussed on spend visibility and working with third-party e-Procurement platform providers to give you a complete solution. Furthermore, while some platforms make it nearly impossible to get your data out in an effort to lock you into their solution exclusively, Rosslyn makes it just as easy to get your data out of their platform as it is to get your data in. Recognizing that some users will always be more comfortable with Excel, that power analysts will always be trying to come up with new ways to analyze data that current platforms don’t yet address, and that some corporations have invested millions in proprietary data warehouses and business intelligence platforms, Rosslyn supports exports to a number of standard data formats (XLS, CSV, PDF, etc.) and supports full bi-directional integration with your data warehouses. (They can extract your data; cleanse, classify, and augment it with their rules-based classification engine; and push it back in automatically on a regular refresh cycle.)

In Part II, I will describe the built-in capabilities of the platform as it exists now, and some of the exciting developments you’ll see next quarter.
