Category Archives: Problem Solving

Spend Analysis V: New Horizons (Part 1)

Today I’d like to welcome back Eric Strovink of BIQ who, as I indicated in part I of this series, is authoring the first part of this series on next generation spend analysis and why it is more than just basic spend visibility. Much, much more!

Many of the limitations of spend analysis derive from its underlying
technology. As I’ve discussed in previous installments, the extent to which spend
analysis can be made more useful to business users is often the
extent to which those limitations can be hidden or eliminated.
In essence: an analysis tool is useful to business analysts only
if business analysts actually use it. This means vendors must walk
a fine line between delivering technology to business users and
shielding them from it, without going so far that they create an
unnecessary vendor dependency.

In this installment and the next, we’ll look at a few advanced features that aren’t
necessarily available today, but that should be
possible to provide in the future without crossing that line.

Meta Aggregation

By definition, a spend transaction contains the “leaf” level of a
hierarchy only. Consider the following:

   HR Consulting
         Mercer
         Deloitte
   IT Consulting
         IBM
         Accenture
   Management Consulting
         KPMG
         CGI

Low level transactions typically contain “Mercer” or “CGI,” but
not “IT Consulting” or “HR Consulting,” because those intermediate
hierarchy positions (“nodes”) represent an artificial organization
imposed by the user, and have no reality at the transaction level.

Suppose, though, that I’d like to be able to treat intermediate nodes as though
they had reality inside the transaction set itself. Simple example: I’d
like to derive a new range dimension based on the top level of the above
dimension. I want to know which consolidated groupings fall in the $0-$100K range,
which fall in the $100K-$500K range, and so on. I don’t care about IBM or
KPMG any more; all I care about is aggregating my own groupings.

In mathematical terms, I’m asking for f(g(x)) — the ability to apply
dimension derivation to a previous aggregation step;
and, inductively and more generally, to do the same to the meta-aggregated dimension itself.
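
To make the f(g(x)) idea concrete, here is a minimal Python sketch using assumed data and field names of my own (not any vendor’s actual API): g() rolls leaf transactions up to the top-level nodes of the consulting dimension, and f() derives a range dimension from those rollup totals.

    # Minimal sketch of meta aggregation; all data and field names are hypothetical.
    transactions = [
        {"vendor": "Mercer",    "amount":  40_000},
        {"vendor": "Deloitte",  "amount":  45_000},
        {"vendor": "IBM",       "amount": 320_000},
        {"vendor": "Accenture", "amount": 250_000},
        {"vendor": "KPMG",      "amount":  60_000},
        {"vendor": "CGI",       "amount":  70_000},
    ]

    # The user-imposed hierarchy: leaf vendor -> top-level node.
    hierarchy = {
        "Mercer": "HR Consulting",          "Deloitte":  "HR Consulting",
        "IBM":    "IT Consulting",          "Accenture": "IT Consulting",
        "KPMG":   "Management Consulting",  "CGI":       "Management Consulting",
    }

    def g(txns):
        """Aggregate leaf transactions up to the chosen hierarchy level."""
        totals = {}
        for t in txns:
            node = hierarchy[t["vendor"]]
            totals[node] = totals.get(node, 0) + t["amount"]
        return totals

    def f(totals):
        """Derive a range dimension from the rollup totals."""
        def bucket(amount):
            if amount < 100_000:
                return "$0-$100K"
            if amount < 500_000:
                return "$100K-$500K"
            return "$500K+"
        return {node: bucket(total) for node, total in totals.items()}

    # f(g(x)); this must be recomputed whenever 'hierarchy' is edited.
    print(f(g(transactions)))
    # {'HR Consulting': '$0-$100K', 'IT Consulting': '$500K+',
    #  'Management Consulting': '$100K-$500K'}

In a real engine, of course, these buckets would have to be recomputed on the fly as the hierarchy is edited, which is the hard part.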

In OLAP implementation terms, I’m asking the engine to treat the
intermediate nodes from any dimension, at any hierarchy level, as virtual transaction columns
rather than as dimensional nodes.
The problem is, intermediate nodes aren’t static; they’re changing all the time. That means
a dimension derived on artificial rollup values must be
re-derived whenever the hierarchy of the source dimension is altered; and, since hierarchy
editing must be a real-time operation (as I have argued in this series and
elsewhere),
the dimension derivation must also be performed on-the-fly.

Tricky as this might be to implement, the logic is easy to specify from the business
user’s perspective. The user simply picks a previously-defined dimension and a
hierarchy level on which to base his new dimension, and he’s done.

Visual Crosstabs

The utility of Shneiderman diagrams (or “treemaps”) to display hierarchical information
is well known.

[Treemap example image]

The treemap is useful because it is visually intuitive; in this example, the relative sizes
of the rectangles represent the relative magnitude of spending. The colors indicate
relative change in spending; red is bad, green is good; lighter green is better. Inner
rectangles show the breakdown at the next level of the hierarchy.

Clicking inside one of the white-bordered rectangles
provides an expanded lower-level view of the hierarchy; clicking the
up-arrow button moves back up a level.

Now, suppose that rather than the inner rectangles showing a lower level of
the same hierarchy, instead they showed the breakdown of spending
within another dimension entirely — i.e., a “visual crosstab.” The visual crosstab
would not only show magnitudes, but trends as well.
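
As a rough illustration of the data behind such a view, here is a sketch using pandas with made-up columns of my own; it crosstabs an outer dimension (commodity) against an inner one (cost center), producing the two numbers a treemap needs for each cell: a magnitude for rectangle size and a trend for color.

    import pandas as pd

    # Hypothetical transactions: (commodity, cost_center, period, amount).
    df = pd.DataFrame([
        ("IT Consulting", "Operations", "2005", 200_000),
        ("IT Consulting", "Operations", "2006", 260_000),
        ("IT Consulting", "Marketing",  "2005", 120_000),
        ("IT Consulting", "Marketing",  "2006", 100_000),
        ("HR Consulting", "Operations", "2005",  90_000),
        ("HR Consulting", "Operations", "2006",  95_000),
    ], columns=["commodity", "cost_center", "period", "amount"])

    # Outer dimension = commodity, inner dimension = cost_center.
    current = df[df.period == "2006"].groupby(["commodity", "cost_center"])["amount"].sum()
    prior   = df[df.period == "2005"].groupby(["commodity", "cost_center"])["amount"].sum()

    # One row per outer/inner cell: size the rectangle by 'magnitude', color it by 'trend'.
    cells = pd.DataFrame({"magnitude": current, "trend": (current - prior) / prior})
    print(cells)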

Unlike with meta aggregation, where the user interface is simple and
the implementation complex, here the user interface is complex and the
implementation fairly simple. The utility of the visual crosstab will
depend strongly on the user
interface — for example, how does the user change the resolution of
the outer dimension to a different hierarchy level? What might that do to
the level of the inner dimension? How might the user invert the view,
so that the inner dimension becomes the outer, and the outer becomes
the inner? And at a global level, how can the user be kept aware of what’s being
viewed, inverted, or clicked, so that the result still makes sense?

Spend Analysis III: Common Sense Cleansing

Today I’d like to welcome back Eric Strovink of BIQ who, as I indicated in part I of this series, is going to be authoring the first part of this series on next generation spend analysis and why it is more than just basic spend visibility. Much, much more!

Many observers would acknowledge that there’s not a lot of difference between
viewing cleansed spend data with SAP BW or Cognos or Business Objects,
and viewing cleansed spend data with a custom data warehouse from a
spend analysis vendor. They’re all OLAP data warehouses; they all
have competent data viewers; they all provide visibility into
multidimensional data. What has historically differentiated spend
analysis from BI systems is the cleansing process itself, along with
the decoupling of data dimensions from the accounting system (which
the BI view leaves coupled).

Because it’s hard to distinguish one data warehouse from another, cleansing has
become an important differentiator for many spend analysis
vendors. The vendor has typically developed a viewpoint as to
the relative merits of manual labor/offshore resources, automated
tools, custom databases, and so on, and sells its SA product and
services around that viewpoint. Unfortunately, all the resulting
hype and focus on cleansing services, from both these vendors and the analysts
who follow them, have obscured a simple reality:
effective data cleansing methods have
been around for years, are well understood, and are easy to implement.

The basic concept, originated and refined by various consultants and
procurement professionals during the early to mid-1990s, is to build
commodity mapping rules for top vendors and top GL codes (top means
ordered top-down by spending) — in other words, to apply common sense 80-20
engineering principles to spend mapping. GL mapping catches the
“tail” of the spend distribution, albeit approximately; vendor
mapping ensures that the most important vendors are mapped correctly;
and a combination of GL and vendor mapping handles the case of
vendors who supply multiple commodities. If more accuracy is
needed, one simply maps more of the top GLs and vendors. Practitioners routinely
report mapping accuracies of 95% and above. More importantly, this
straightforward methodology enables sourcers to achieve good
visibility into a typical spend dataset very quickly, which in
turn allows them to focus their spend management efforts (and
further cleansing) on the most promising commodities.
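
A minimal sketch of that rule cascade, with hypothetical rule tables, GL codes, and field names of my own, might look like the following; the precedence (vendor-plus-GL first, then vendor, then GL) mirrors the logic described above.

    # Hypothetical mapping rules, built top-down by spend.
    vendor_gl_rules = {                  # vendors that supply multiple commodities
        ("IBM", "6420"): "IT Consulting",
        ("IBM", "6810"): "Hardware",
    }
    vendor_rules = {"Mercer": "HR Consulting", "Staples": "Office Supplies"}
    gl_rules     = {"6310": "Contract Labor", "7125": "Commercial Print"}

    def map_commodity(txn):
        """Apply rules in order of specificity: vendor+GL, then vendor, then GL."""
        key = (txn["vendor"], txn["gl_code"])
        if key in vendor_gl_rules:
            return vendor_gl_rules[key]
        if txn["vendor"] in vendor_rules:
            return vendor_rules[txn["vendor"]]
        return gl_rules.get(txn["gl_code"], "Unmapped")

    transactions = [
        {"vendor": "IBM",     "gl_code": "6420", "amount": 250_000},
        {"vendor": "Mercer",  "gl_code": "6200", "amount":  80_000},
        {"vendor": "Acme Co", "gl_code": "6310", "amount":  12_000},
        {"vendor": "Acme Co", "gl_code": "9999", "amount":   3_000},
    ]

    mapped = sum(t["amount"] for t in transactions if map_commodity(t) != "Unmapped")
    total  = sum(t["amount"] for t in transactions)
    print(f"Mapped {mapped / total:.0%} of spend by value")

If the mapped percentage comes out too low, the 80-20 response is simply to add rules for the next-largest unmapped vendors and GL codes and re-run.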

Is it necessary to map every vendor? Almost never, although third-party vendor mapping
services are readily available if you need them. And, as far
as vendor familying is concerned, grouping together multiple instances
of the same vendor clears up more than 95% of the problem.
Who-owns-whom familying using commercial databases seldom
provides additional insight; besides, inside buyers are usually well
aware of the few relationships that actually matter. For example,
you won’t get any savings from UTC by buying from Carrier and
from Otis Elevator. And it would be a mistake to group individual Hilton
hotels under a single corporate parent, since they are franchisees.
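
As a simple illustration of grouping multiple instances of the same vendor, here is a hedged sketch that normalizes vendor strings (uppercase, strip punctuation and common corporate suffixes) and groups on the result; the suffix list and vendor names are illustrative only.

    import re

    # Common corporate suffixes to drop when familying (illustrative, not exhaustive).
    SUFFIXES = {"INC", "INCORPORATED", "CORP", "CORPORATION", "LLC", "LLP", "LTD", "CO"}

    def family_key(vendor_name):
        """Normalize a vendor string so that variant spellings group together."""
        name = re.sub(r"[^A-Za-z0-9 ]", " ", vendor_name).upper()
        words = [w for w in name.split() if w not in SUFFIXES]
        return " ".join(words)

    vendors = ["Mercer Inc.", "MERCER", "Mercer, Inc", "Accenture LLP", "Accenture"]
    families = {}
    for v in vendors:
        families.setdefault(family_key(v), []).append(v)

    print(families)
    # {'MERCER': ['Mercer Inc.', 'MERCER', 'Mercer, Inc'],
    #  'ACCENTURE': ['Accenture LLP', 'Accenture']}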

[N.B. There are of course cases where insufficient data exist
to use classical mapping techniques. For example, if the dataset
is limited to line item descriptions, then phrase mapping is
required; if the dataset has vendor information only, then
vendor mapping is the only alternative. Commodity maps based
on insufficient data are inaccurate commodity maps, but they
are better than nothing.]

80-20 logic also applies to the overall spend mapping problem.
Consider a financial services firm with an indirect spend base.
Before even starting to look at the data, every veteran sourcer
knows where to start looking first for potential savings:
contract labor, commercial print, PCs and computing, and so
on. Here is a segment of the typical indirect spending
breakdown, originally published by The Mitchell Madison Group:

[Chart: typical indirect spending breakdown]

If you have limited resources, it can be counterproductive to start
mapping commodities that likely won’t produce savings, when good estimates
can often be made as to where the big hits are likely to be. If you can score some
successes now, there will be plenty
of time to extend the reach of the system later. If
there are sufficient resources to attack only a couple of
commodities, it makes sense to focus on those commodities alone, rather than to
attempt to map the entire commodity tree.

The bottom line is that data cleansing needn’t be a complex,
expensive, offline process. By applying common sense
to the cleansing problem, i.e. by attacking it incrementally
and intelligently over time, mapping rules can be developed, refined,
and applied when needed. In fact, whether you choose to have an initial
spend dataset created by outside resources, or you decide to create it yourself,
the conclusion is the same:
cleansing should be an online, ongoing process, guided
by feedback and insight gleaned directly (and incestuously)
from the powerful visibility tools of the spend analysis system
itself.
And, as a corollary, cleansing tools must be placed directly into the hands of
purchasing professionals so that they can create and refine
mappings on-the-fly, without any assistance from vendors or internal IT experts.

Next: Defining “Analysis”

Spend Analysis II: The Psychology of Analysis

Today I’d like to welcome back Eric Strovink of BIQ who, as I indicated in part I of this series, is going to be authoring the first part of this series on next generation spend analysis and why it is more than just basic spend visibility. Much, much more!

Data analysis that should be performed is often avoided, because
it carries too much risk for the stakeholder. Let’s consider two examples.

(1) Suppose I am an insurance company CPO with access to one or more
analysts; and that some number of analyst hours are available to me,
in order to investigate savings ideas that occur to me from time to time.

Now, suppose I begin wondering whether the company’s current policy of
auctioning off totaled vehicles is wise. I reason: what if we’re
actually losing money on some of these wrecks? I think: perhaps there
is a closed-form sheet I can provide to my adjusters that lists make/model/year
and gives them an auction/no auction decision; perhaps that sheet would save
the company money.

My problem is that I’m not entirely sure that this idea is worthwhile.
Perhaps the company makes money on almost every auction, and I will waste
the valuable time of one of my analysts by chasing phantom savings that
aren’t there. I must weigh not only the cost of the analyst’s time, but
also the lost opportunity cost of having that analyst chase a
low-probability idea rather than put the time to some immediately
useful purpose, such as prettying up a report that the CEO complained
about, or double-checking a number for the CFO.

I reason as follows: if I think it’s going to take longer than X hours
to determine whether this is a good idea or not, then I can’t chase the
idea. I don’t have the resources to do so, and perhaps I never will.

However, if I know that my analyst can load up a new spend dataset with
auction costs and revenues within minutes; and I know that a subsequent
slice/dice by make/model/year would be trivial; and I know
that a report of precisely the format I need could be produced without
significant effort; then the decision is a no-brainer. I make the decision
to analyze rather than the decision not to analyze.
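
For a sense of how trivial that slice-and-dice is once the data are loaded, here is a rough sketch with invented column names and numbers (not the author’s system or data) that produces the make/model/year decision sheet described above.

    import pandas as pd

    # Hypothetical auction records: revenue received vs. cost of auctioning the wreck.
    df = pd.DataFrame([
        ("Honda",  "Civic",   2003, 1800, 2100),
        ("Honda",  "Civic",   2003, 1700, 1900),
        ("Ford",   "F-150",   2001, 2600, 1200),
        ("Ford",   "F-150",   2001, 2400, 1300),
        ("Toyota", "Corolla", 2000, 1500, 1600),
    ], columns=["make", "model", "year", "auction_revenue", "auction_cost"])

    # Average net per make/model/year, plus a simple auction / no-auction decision.
    sheet = (df.assign(net=df.auction_revenue - df.auction_cost)
               .groupby(["make", "model", "year"])["net"].mean()
               .reset_index())
    sheet["decision"] = sheet["net"].apply(lambda n: "auction" if n > 0 else "no auction")
    print(sheet)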

(2) Suppose I am a CPO with a large A/P spend data warehouse available to
me, but the particular question I want answered is not supported by the
dimensions and hierarchies that it contains. Those dimensions and hierarchies
were built perhaps by the IT department, or perhaps by a spend analysis vendor,
or perhaps by a team of internal support people who are responsible for
maintaining the warehouse; and those dimensions and hierarchies were the
result of a number of committee decisions that will be difficult to alter.
Furthermore, the data warehouse is being used by hundreds of other people in
the organization — which means that I’ll need the permission of all those
potential users to change or add anything.

I reason as follows: I know it will take weeks, perhaps months to convince
my colleagues to change the dataset organization, even if they can be
convinced to do so; and once they are convinced, it will take even longer
for whoever controls the warehouse to implement the changes,
perhaps at high cost that I will need to justify; so is it really worthwhile
for me to pursue using the warehouse to answer my question?

I decide: probably not. Which means that my analyst will have to spend many
hours extracting raw transactions from the warehouse; re-organizing them
herself on her personal computer, using Access or other desktop tools; and
then creating the report that I need. As above, I reason as follows: if
I think it’s going to take longer than X hours to answer my question, then
I’ll live without the answer rather than risk wasting precious analyst cycles.

However, if I know that my analyst can tweak her private copy of the dataset,
adding dimensions and changing hierarchies in just a few minutes, and that
my answer will be available shortly thereafter, I make the decision to
analyze rather than the decision not to analyze.

A flexible and powerful spend analysis system can make a huge psychological
difference to an organization. It changes the analysis playing field
from “we just can’t afford to look into this” to “of course we should
look into this!”

Next installment: Common Sense Cleansing

Everyday Lessons from Operations Research

Although many of the presentations at the INFORMS Annual Meeting in Pittsburgh were very academically focused, Mark S. Daskin’s Presidential Address, Everyday Lessons from Operations Research, had a lot of great lessons for those of us out in the field. The following are Mark Daskin’s Top 13 lessons from operations research:

  • 13. Service Gets Worse as Utilization Increases
    Think about customer service lines.
  • 12. Performance Degrades as Variability Increases
    … but variability can be reduced through risk pooling. Think about global sourcing.
  • 11. Variability is necessary.
    Relationships cannot be determined without variability.
  • 10. Expect the unexpected in today’s world.
    There are 300 million people in the US, which means roughly 800,000 will be at least 3 standard deviations from the mean in any study you conduct. There are 1 billion people in China, which means at least 2.7 million will be 3 standard deviations from the mean. Globally, there are over 400,000 people more than 4 standard deviations from the mean (see the rough check after this list).
  • 9. If it is too good to be true, it probably is not.
    Learn from experience – and samples.
  • 8. Life is full of errors.
    Both Type I (false positive) and Type II (false negative) – and there is no free lunch. As you decrease one type of error, the other type of error increases – unless you want to pay for more data.
  • 7. A good decision can result in a bad outcome.
    C’est la vie.
  • 6. If you are not using all you have, don’t pay for additional quantity.
    It’s not savings if the item perishes in inventory.
  • 5. You can never do better by adding a constraint.
    Adding a constraint can never improve the objective function – that’s optimization.
  • 4. Keep it Simple
  • 3. Think about problem formulation
    • What do you know?
    • What do you need to decide?
    • What do you need to achieve?
    • What inhibits you?
  • 2. Look for compromise solutions
    Sometimes optimal is not good enough – do a tradeoff and robustness analysis before accepting a solution since your optimal solution may be very susceptible to change.
  • 1. Data is not information.
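
For anyone who wants to verify the arithmetic in lesson 10, here is a quick sketch using SciPy; it assumes a normal distribution (real populations have heavier tails) and a mid-2000s world population of roughly 6.5 billion, both assumptions of mine.

    from scipy.stats import norm

    def tail_count(population, k):
        """People expected to be at least k standard deviations from the mean (two-tailed)."""
        return population * 2 * norm.sf(k)   # norm.sf(k) = P(Z >= k)

    print(round(tail_count(300e6, 3)))   # ~810,000 in the US beyond 3 sigma
    print(round(tail_count(1e9,   3)))   # ~2.7 million in China beyond 3 sigma
    print(round(tail_count(6.5e9, 4)))   # ~410,000 worldwide beyond 4 sigma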

Innovation Matters

Seventy-two percent of companies worldwide will increase spending on innovation in 2006, and 41 percent will increase spending significantly, according to a recent survey of senior management conducted by The Boston Consulting Group (BCG) as summarized in Innovation 2006 and Measuring Innovation 2006.

The study was based on responses from more than 1,000 senior executives from 63 countries and all major industries, so the results reflect a large, broad-based sample.

However, despite plans to raise spending, the study determined that nearly half of the companies are unhappy with their returns on innovation spending. According to James Andrew, Senior Vice President and report author, “These findings highlight the paradox we see all the time in practice. Innovation is such an important priority for companies, and although they continue to spend ever-increasing amounts on it, half of all companies remain unsatisfied with the returns they generate.” He further stated that “This is a critical issue because the costs are even greater than most companies realize. The costs include not only the money invested, but also the opportunity cost of not generating the growth and returns from innovation that are possible and that companies need to meet the demands of the stock market.”

This report highlights the need not only for innovation, but management of the innovation process. Contrary to popular belief, the process can be managed and there are techniques that can be used to jump start the process and significantly increase your chances of success.

For example, I described a couple of approaches to innovation in my Purchasing Innovation Series over at e-Sourcing Forum. If you have a clearly identified problem, a great approach is Teoriya Resheniya Izobretatelskikh Zadatch, the Theory of Inventive Problem Solving, or TRIZ for short. The foundation for invention on demand, it can be used to create completely new products that perform a given function or solve an existing problem. Another methodology is crowdsourcing: delegating tasks for which you lack the internal manpower or expertise to external entities, or to affiliations of networked people who have the expertise, access, or raw capabilities that you require.

Building on crowdsourcing, you can take advantage of companies, research networks, and laboratories that exist primarily to help you with your innovation needs. One example is InnoCentive, an exciting web-based community that matches top scientists to relevant R&D challenges facing leading companies from around the globe; it can help you with your chemical, biological, and other scientific problems. Another example is YourEncore, a service provider connecting the technology and product development opportunities of member companies with world-class talented individuals, whose motto is “People don’t retire anymore, they just go on to do other things.” Yet another is NineSigma, a company that enables organizations to amplify internal resources and capabilities by tapping a global network of innovators for new solutions, technologies, products, services and opportunities.

Some of these organizations will even help you turn your product idea into a prototype. One example is Nytric, a company that exists to create revolutionary new products using an advanced mode of thinking that spans initial idea to market conquest, and a business model that shares front-end risk to ensure a maximum mutual return on investment. Another example is Big Idea Group, an organization that brings together creative inventors and innovation-driven companies.

Furthermore, if you just need help automating and tracking the process, you could always start with BrightIdea.com’s On Demand Innovation Management Software, which gets you up and running immediately. Of course, this isn’t the only choice. You could also try Jenni’s Idea Management Software, Centric Software’s Product Intelligence software, or Imaginatik’s Idea Central Software.

In other words, there are companies, products, and methodologies out there to help you ensure that each and every innovation effort you undertake is a success because, simply put, innovation matters.