Monthly Archives: January 2010

Spend Analysis IV: User-Defined Measures, Part 1

Today’s post is from Eric Strovink of BIQ.

A “measure” is a quantity that’s computed for you in an analysis dataset — for example, spend, count of transactions, and so on. There could be many measures in a dataset, such as multiple currencies, or entirely different quantities such as number of units.

Measures are derived from the data supplied, and rolled up (typically summed) to hierarchical totals. Sometimes, however, you want and need control over how (and when) the measure is calculated. Such measures are termed “user-defined” measures.

Let’s first dispense with the usual definition of user-defined measures — namely, eye candy that has no reality outside of the current display window. You can identify eye candy by looking for the little asterisk in the User Manual that says “certain operations” aren’t possible on a user-defined measure. That’s the tip-off that the tool isn’t really creating the measure at all — it’s just computing it on the fly, as needed, for the current display. In order to be truly useful, user-defined measures must have reality at all drillpoints (all “nodes”) in the dataset, at all times, so they can be used freely in analysis functions, just like “regular” measures. It’s no wonder that many “analysis” products avoid performing the millions of computations required to do this properly, preferring instead to do the handful of computations required to pass casual inspection during the sales process. You’ll discover once you dive into the product that its “user-defined” measures are useless; but by then it’s too late.

There are two kinds of user-defined measures:
(1) post-facto computations that are performed after transactional roll-up (the usual definition), and
(2) those that are performed during transactional roll-up, which we’ll consider here.


[Screenshot: the Plan1 and Plan2 savings measures computed at every drillpoint]

In the example above, two savings scenarios have been defined: “Plan1”, a 10% savings scenario, and “Plan2”, a 20% savings scenario. The plan is complex because it is a real savings plan: it applies only to spend with certain vendors (24 of 30,000), and only in certain commodity categories. As the numbers show, the savings are therefore never a flat 10% or 20% of any total (and some totals aren’t reduced at all), because the reduction touches only the transactions that match those vendors and categories.

So how was this done? To compute accurate Plan1 and Plan2 amounts at every drillpoint (i.e., every line item in every dimension), the filter criteria must be applied to each transaction as it is considered for roll-up. And since the percentage is likely a dynamic parameter (changeable by the user in real time), and the filter is likely dynamic as well (“add this vendor or commodity to the filter; remove that one”), the cube can’t be “pre-computed” the way many OLAP systems pre-compute theirs. The roll-up has to occur in real time, from scratch, and it has to involve decision-making at every transaction. Here is the fragment of decision-making code that computes the Plan1 measure:

if (FastCompare(User.NewFamily.Filterset))
    addto($Plan1$, $TransMeasure.Amount$ * (100 - User.VendorSpendReduction1.Plan1SavingsPercent) / 100);
else
    addto($Plan1$, $TransMeasure.Amount$);

Note that this fragment resembles a real program (because it is), and it could be arbitrarily complex (because it might need to be). However, it was built by a user (with aid from an integrated program development environment), and it is compiled (on the fly, in real time) by the system into custom p-code [1] that executes extremely quickly [2]. The result is two additional measures that are calculated without noticeable delay.

Although it might be too much to expect a non-technical user of a spend analysis system to produce a code fragment such as the one above, the cube can nevertheless be delivered to that user with the Plan1 and Plan2 measures in place, allowing him to alter both the filter parameters (“User.NewFamily.Filterset”) and the savings percentages (“User.VendorSpendReduction1.Plan1SavingsPercent”) without having to understand or modify the code fragment in any way.
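For readers who think in a general-purpose language, here is a minimal sketch of the same idea in Python. It is not BIQ’s engine or its p-code; the transaction fields, the rollup_path list, and the in_filter callback are assumptions made purely for illustration. What it mirrors is the essential point: the filter test and the percentage reduction happen per transaction, and the result accumulates at every node, so the user-defined measure has reality at every drillpoint.

from collections import defaultdict

def rollup_plan_measure(transactions, in_filter, savings_percent):
    """Roll up a Plan-style measure, deciding per transaction whether the reduction applies."""
    totals = defaultdict(float)
    for txn in transactions:
        amount = txn["amount"]
        if in_filter(txn):                      # per-transaction decision, as in the fragment above
            amount *= (100 - savings_percent) / 100
        for node in txn["rollup_path"]:         # accumulate at every drillpoint on the transaction's path
            totals[node] += amount
    return totals

# Hypothetical usage: only transactions matching the vendor/commodity filter are reduced by 10%.
plan1 = rollup_plan_measure(
    [{"amount": 1000.0, "vendor": "Acme", "commodity": "Office Supplies",
      "rollup_path": ["All", "Office Supplies", "Office Supplies > Acme"]}],
    in_filter=lambda t: t["vendor"] == "Acme" and t["commodity"] == "Office Supplies",
    savings_percent=10,
)

Changing the filter or the percentage and re-running the roll-up is exactly the kind of real-time recomputation described above; no pre-computed cube survives that.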

Next installment: User-Defined Measures, Part 2, in which I show how the “simple” case of post-facto user-defined measures can yield surprising and interesting results when combined with another critical concept, dynamic reference filters.

Previous Installment: Crosstabs Aren’t “Analysis”

[1] The p-code instructions in this case are designed to maximize performance while minimizing instruction count.
[2] BIQ executes 50M p-code instructions per second on an ordinary PC.


Are You Ready For The Next China?

A recent article over on the Harvard Business Review, “China Myths, China Facts”, reminded me how China is starting to change. The advice in my recent post on overcoming cultural differences in international trade with China (take the time to get to know the people you will be dealing with, because their behaviour may be nothing like the usual behaviour of the country in which they reside) is becoming truer by the day in the parts of China that deal regularly with the west.

According to the article, which isn’t entirely accurate (just ask our resident Global Trade expert), the following are myths:

  1. Collectivism
    According to the article, individualism is the reality. While this is becoming true of the emerging “New China”, especially in the urban middle class, the “Old China”, which still makes up the majority of the country, is still collectively oriented and, as the article points out, decisions, particularly in the business world, are still made in groups.
  2. Long-term Deliberation
    According to the article, real-time reaction is the reality. While the “New China” that has been dealing with the west for the last two decades (or so) has learned to “react” at western speeds, a lot of deliberation still goes on behind the scenes, and quick decisions are often the product of policies settled through long-term deliberation.
  3. Risk Aversion
    According to the article, risk tolerance is the norm. Well, this one is half true. The reality is that the Chinese are neither risk-averse nor risk-tolerant; they are what I’d call “risk-comfortable”. You have to remember that China is one of the oldest civilizations on the planet. They can trace their history back millennia, while we struggle to trace ours back a few centuries. They are more aware of risk, and more used to dealing with it, than we could ever imagine. It’s just another part of everyday life to them … one that sometimes comes and goes in waves. They know that some risks can never be completely mitigated and that others can never be predicted. As a result, they don’t feel the need to analyze something to the nth degree when they know nothing will be gained from the exercise. So they make a decision, execute, and accept what comes. We could learn a thing or two from them.

What’s happening is that, just like Japan transformed itself from the “Old Japan” to the “New Japan” over the last few decades, China is in the process of transforming itself from the “Current China” to the “Next China”. This will happen over the next few decades as it claws its way back to global supremacy. So, are you ready?


Rosslyn Analytics – Building an Analytics Visibility Platform The Masses Will Rally Around

In Part I, I discussed how impressed I was with Rosslyn Analytics‘ spend analysis platform and their overall approach to the problem compared to many of the spend analysis providers in the space, and how I saw vision, clarity, and execution in their offering. In this post, I am going to discuss their current platform capabilities and preview a few of the enhancements coming over the next few months.

Spend Basics: As I mentioned in my last post, they integrate with over 30 standard ERP/MRP systems out of the box and, thanks to a rules-based integration platform, can bring additional systems on-line with about a day of effort. Those rules allow data to be cleansed, normalized, and enriched in one step during extraction, which lets you get straight to the analytics, and the platform can automatically identify dimensions (categories). Like most platforms, their analytics are report (and dashboard) driven, but they have an ever-growing library of reports, you can view data on (and drill down into) any dimension hierarchy you choose, and the next release will add the ability to attach user-defined parameters and indexes to certain classes of reports.
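To make that concrete, here is a minimal sketch, in Python, of what a rules-based cleansing and normalization pass can look like. It is not Rosslyn’s rule engine; the rule format, field names, and categories are assumptions for illustration only.

import re

# Hypothetical rules: (pattern matched against the raw vendor string, canonical vendor, category)
RULES = [
    (re.compile(r"\bIBM\b|Int'?l Business Machines", re.I), "IBM", "IT Hardware"),
    (re.compile(r"\bFedEx\b|Federal Express", re.I), "FedEx", "Freight"),
]

def normalize(record):
    """Apply the first matching rule; unmatched records get a tidied name and land in 'Unclassified'."""
    for pattern, vendor, category in RULES:
        if pattern.search(record["raw_vendor"]):
            return {**record, "vendor": vendor, "category": category}
    return {**record, "vendor": record["raw_vendor"].strip().title(), "category": "Unclassified"}

cleansed = [normalize(r) for r in [{"raw_vendor": "FEDERAL EXPRESS CORP", "amount": 1250.00}]]

The appeal of this style of approach is that adding a new source system or fixing a misclassification is just another rule, not another ETL project.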

Accounts Payable Support: Not only do they suck in invoice data, but they have rule-sets that automatically detect overpayments and generate reports alerting you to duplicate invoices, duplicate charges across invoices, duplicate payments, overpayments, payments to the wrong supplier, and other discrepancies. Some customers have recovered the cost of an annual license in a month with this feature alone. They also have similar tax reporting capabilities: their tax reports will detect overpayments, underpayments, and, more importantly, missed payments that could cause import/export problems for you down the road.
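As a rough illustration of one class of check described above (and emphatically not Rosslyn’s actual rule-set), a duplicate-payment rule can be as simple as grouping invoices on a few key fields and flagging any group with more than one entry for review:

from collections import defaultdict

def find_duplicate_invoices(invoices):
    """Group by (supplier, invoice number, amount); any group with more than one entry is a candidate duplicate."""
    groups = defaultdict(list)
    for inv in invoices:
        key = (inv["supplier"], inv["invoice_no"], round(inv["amount"], 2))
        groups[key].append(inv)
    return [group for group in groups.values() if len(group) > 1]

# Hypothetical example: the second payment would be flagged for review, not automatically rejected.
candidates = find_duplicate_invoices([
    {"supplier": "Acme", "invoice_no": "INV-1001", "amount": 500.00, "paid_on": "2010-01-04"},
    {"supplier": "Acme", "invoice_no": "INV-1001", "amount": 500.00, "paid_on": "2010-01-18"},
])

Real rule-sets obviously go further (fuzzy supplier matching, near-identical amounts, charges duplicated across invoices), but the recoveries come from running checks like this across every invoice, every time.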

Contracts Support: Integrated with their AP reporting / overpayment detection capabilities, their contracts module allows you to upload your contracts or integrate with an external contract repository. Either way, once you define the appropriate meta-data and triggers, the platform takes care of the rest.

Savings Analysis: Working with global partners, they have developed a savings module, complete with McKinsey-esque bubble charts, that identifies your top savings opportunities based upon your supply base, spend, and market indices. While the current version does not allow you to define your own customized savings plan, the next version will let you define your own indexes and supporting parameters so you can tailor the opportunity reports to your company. And while this still won’t give you the control that user-defined measures will, for the average sourcing professional who is still making seat-of-the-pants decisions based on total spend and instinct, it’s a great step in the right direction, as it’s the first step toward truly bringing spend intelligence to the masses in the average organization.

Sustainability Support: While not yet available, Rosslyn is working hard to integrate a large number of external data feeds that go beyond standard price and risk indices and include carbon and sustainability data. Building on this data, and their ability to do trend analysis (which is currently built in for supplier and category data), they’re in the process of creating a sustainability tracking and reporting solution that will be on-line early next quarter.

All in all, it’s a great platform for any organization starting on its supply management journey or stuck in a rut because of the limited capabilities and reach of traditional on-premise platforms built with a single user in mind. And when you’re ready, you can use it to supercharge the performance of your power-analysts, since they will start from a normalized, cleansed, and enriched data source when they apply their stand-alone analysis tools.


There’s More to Cost than Cost Analysis

While it was exceptionally well written, I was a little disappointed with this article on “uncovering hidden costs” over on SupplyManagement.com.

The article made the acute observation that there is no fixed arithmetic formula connecting the cost of producing the goods and services sold to us and the price charged for them: sellers charge what the market will bear, and to break down a supplier’s figures we need to know the proportion of each area of cost that the goods or services are likely to attract. It did a great job of describing the different types of analysis one could bring to bear on costs, but a poor job of actually indicating how to reduce the “hidden” costs once found. The advice boiled down to “collaboration is key” which, while correct, doesn’t help you answer the important questions that will arise during the collaboration.

While questions like:

  • Can we give our suppliers access to our contracts to reduce cost?
  • Does our supplier have contracts we could access to cut cost?

are theoretically easy to answer (but not always easy to answer in practice, due to the poor state of contract management in many organizations),

questions like:

  • What is the thing made of and what is happening in the market for that material?
  • Where are the cost drivers in these goods or services?
  • How can you buy that material most effectively?

are a little harder to answer.

You need to understand how to perform market intelligence, you need to understand how the goods are assembled (using virtual modelling environments like those offered by Apriori) or the services delivered, and you need to understand what innovative new technologies or processes could be applied to reduce those costs. And that’s more than you’re going to get from a purchase-price, open-book, or total absorption cost analysis. You have to start with a true Total Cost of Ownership and then dive in on each cost.
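As a purely illustrative sketch (the cost buckets and numbers below are hypothetical, not from the article), the gap between a price-only view and a total cost of ownership view can be made concrete in a few lines of Python:

def total_cost_of_ownership(costs):
    """Sum every cost bucket, not just the purchase price."""
    return sum(costs.values())

unit_tco = total_cost_of_ownership({
    "purchase_price":       100.00,  # all a purchase-price analysis sees
    "freight_and_duty":      12.00,
    "inventory_carrying":     6.50,
    "quality_and_rework":     4.00,
    "maintenance_support":    9.00,
    "end_of_life_disposal":   2.50,
})  # 134.00 per unit, roughly a third more than the purchase price alone

Each of those buckets is a cost driver you can then dive into with market intelligence, modelling, or process and technology innovation.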


Spend Analysis III: Crosstabs Aren’t “Analysis”

Today’s post is from Eric Strovink of BIQ.

Pivot tables are great, no question about it. They’re also a pain in the neck to build, so any tool that builds a crosstab automatically is helpful. But crosstabs are where many “spend analysis” systems declare victory and stop.

Pivot tables are most useful when built in large clusters. Hundreds of them, for example, booked automatically down some dimension of interest (like Cost Center by Vendor, booked by Commodity). They’re also best when they’re created dynamically, inserted into existing Excel models, with the raw data readily available for secondary and tertiary analyses.

It’s also useful to see a breakdown of all dimensions by a single dimension — i.e., hundreds of thousands, even millions, of pivot table cells calculated automatically on every drill. For example, here’s MWBE spend, broken down against each of 30,000 nodes, re-calculated on every drill.


[Screenshot: MWBE spend broken down against each of 30,000 nodes]

Not to belabor the point, but there’s a big difference between (1) dumping a single crosstab to an HTML page, and (2) inserting hundreds of pivot tables into an existing Excel model, or calculating 120,000 crosstab cells automatically on every drill. The former is interesting. The latter supports serious analysis.
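As a rough sketch of what booking pivot tables down a dimension means in practice, here is the idea in Python with pandas rather than any particular spend analysis tool; the columns and figures are invented for illustration:

import pandas as pd

df = pd.DataFrame({
    "Commodity":  ["IT", "IT", "Office", "Office"],
    "CostCenter": ["CC1", "CC2", "CC1", "CC2"],
    "Vendor":     ["Dell", "Dell", "Staples", "Staples"],
    "Spend":      [1000.0, 2500.0, 300.0, 450.0],
})

# One Cost Center x Vendor crosstab per Commodity; in a real dataset there could be
# hundreds of these, regenerated on every drill and pushed into an existing Excel model.
booked = {
    commodity: grp.pivot_table(index="CostCenter", columns="Vendor",
                               values="Spend", aggfunc="sum", fill_value=0)
    for commodity, grp in df.groupby("Commodity")
}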

Are pivot tables the most useful way to present multidimensional data? Often, they aren’t. The Mitchell Madison Group’s Commodity Spend Report books top GL accounts, top Cost Centers, and top Vendors by Commodity, with a monthly split of spend, showing share of total category spend at each display line. Is a static report like this one “analysis”? No, of course not. But in this case the multi-page “report” isn’t static at all. It was built with a simple extract from the dataset, inserted directly into a user-defined Excel model. The output is trivially alterable by the user, putting analysis power directly into his or her hands. For example, with a simple tweak the report could just as easily be booked by Vendor, showing Commodity, Cost Center, and so on, or adapted to an entirely different purpose.
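Here is a hedged sketch of the kind of extract that could feed such a report: top vendors per commodity, a monthly split, and each line’s share of category spend. The column names and numbers are assumptions, not the Mitchell Madison Group’s actual format.

import pandas as pd

txns = pd.DataFrame({
    "Commodity": ["IT", "IT", "IT", "Office"],
    "Vendor":    ["Dell", "HP", "Dell", "Staples"],
    "Month":     ["2010-01", "2010-01", "2010-02", "2010-01"],
    "Spend":     [1000.0, 400.0, 600.0, 300.0],
})

monthly = txns.pivot_table(index=["Commodity", "Vendor"], columns="Month",
                           values="Spend", aggfunc="sum", fill_value=0)
monthly["Total"] = monthly.sum(axis=1)
# Share of total category (commodity) spend on each display line
monthly["Share"] = monthly["Total"] / monthly.groupby(level="Commodity")["Total"].transform("sum")

Re-booking by Vendor instead of Commodity is a one-line change to the index, which is the point: the extract, and the model it feeds, are trivially alterable.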

What about matching externally-derived benchmarks to internal data? Is it useful to force-fit generic commodity benchmark data into an A/P dataset, as some spend analysis vendors try to do, and pretend that actionable information will result? Or is it more productive to load relevant and specific benchmark data into a flexible Excel model that you control, and into which you insert actuals from the dataset? The former approach impresses pie-in-the-sky analysts and bloggers. The latter approach produces concrete multi-page analyses, like this, that demonstrate how “best price” charged for an SKU, per contract, might not be “best price” after all (who ever heard of a PC whose price was flat for 12 months?) [1]


[Screenshot: multi-page benchmark analysis comparing contract “best price” for an SKU against market pricing]

Next installment: User-Defined Measures

Previous Installment: Why Data Analysis is Avoided

[1] This example is based on disguised but directionally accurate data. A similar analysis on actual data identified hundreds of thousands of dollars in recoverable overcharges.
