Monthly Archives: January 2007

Spend Analysis II: The Psychology of Analysis

Today I’d like to welcome back Eric Strovink of BIQ who, as I indicated in part I of this series, is authoring this series on next-generation spend analysis and why it is more than just basic spend visibility. Much, much more!

Data analysis that should be performed is often avoided, because
it carries too much risk for the stakeholder. Let’s consider two examples.

(1) Suppose I am an insurance company CPO with access to one or more
analysts; and that some number of analyst hours are available to me,
in order to investigate savings ideas that occur to me from time to time.

Now, suppose I begin wondering whether the company’s current policy of
auctioning off totaled vehicles is wise. I reason: what if we’re
actually losing money on some of these wrecks? I think: perhaps there
is a closed-form sheet I can provide to my adjusters that lists make/model/year
and gives them an auction/no auction decision; perhaps that sheet would save
the company money.

My problem is that I’m not entirely sure that this idea is worthwhile.
Perhaps the company makes money on almost every auction, and I will waste
the valuable time of one of my analysts by chasing phantom savings that
aren’t there. I must weigh not only the cost of the analyst’s time, but
also the lost opportunity cost associated with the analyst chasing a
low-probability idea — against using that analyst for some immediately
useful purpose, such as prettying up a report that the CEO complained
about, or double-checking a number for the CFO.

I reason as follows: if I think it’s going to take longer than X hours
to determine whether this is a good idea or not, then I can’t chase the
idea. I don’t have the resources to do so, and perhaps I never will.

However, if I know that my analyst can load up a new spend dataset with
auction costs and revenues within minutes; and I know that a subsequent
slice/dice by make/model/year would be trivial; and I know
that a report of precisely the format I need could be produced without
significant effort; then the decision is a no-brainer. I make the decision
to analyze rather than the decision not to analyze.
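The reasoning above can be sketched as a simple expected-value test. This is a hypothetical model, not anything from the original: the hour budget, savings estimate, success probability, and hourly rate below are all made-up figures for illustration.

```python
def worth_analyzing(expected_hours, hour_budget,
                    expected_savings, success_probability, hourly_cost):
    """Hypothetical model of the CPO's reasoning: chase the idea only if
    it fits the available analyst hours AND the probability-weighted
    savings exceed the cost of the analyst's time."""
    if expected_hours > hour_budget:
        return False  # "longer than X hours -- I can't chase the idea"
    return success_probability * expected_savings > expected_hours * hourly_cost

# All figures are assumptions for illustration only.
# With a slow tool, the investigation blows the hour budget:
slow_tool = worth_analyzing(40, 20, 50_000, 0.2, 100)
# With a fast tool, the same idea takes an hour and clears the bar:
fast_tool = worth_analyzing(1, 20, 50_000, 0.2, 100)
```

The point of the sketch is that shrinking `expected_hours` is what flips marginal ideas from "can't afford to look" to "of course we should look."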

(2) Suppose I am a CPO with a large A/P spend data warehouse available to
me, but the particular question I want answered is not supported by the
dimensions and hierarchies that it contains. Those dimensions and hierarchies
were built perhaps by the IT department, or perhaps by a spend analysis vendor,
or perhaps by a team of internal support people who are responsible for
maintaining the warehouse; and those dimensions and hierarchies were the
result of a number of committee decisions that will be difficult to alter.
Furthermore, the data warehouse is being used by hundreds of other people in
the organization — which means that I’ll need the permission of all those
potential users to change or add anything.

I reason as follows: I know it will take weeks, perhaps months to convince
my colleagues to change the dataset organization, even if they can be
convinced to do so; and once they are convinced, it will take even longer
for whoever it is that controls the warehouse to implement the changes,
perhaps at high cost that I will need to justify; so is it really worthwhile
for me to pursue using the warehouse to answer my question?

I decide: probably not. Which means that my analyst will have to spend many
hours extracting raw transactions from the warehouse; re-organizing them
herself on her personal computer, using Access or other desktop tools; and
then creating the report that I need. As above, I reason as follows: if
I think it’s going to take longer than X hours to answer my question, then
I’ll live without the answer rather than risk wasting precious analyst cycles.

However, if I know that my analyst can tweak her private copy of the dataset,
adding dimensions and changing hierarchies in just a few minutes, and that
my answer will be available shortly thereafter, I make the decision to
analyze rather than the decision not to analyze.

A flexible and powerful spend analysis system can make a huge psychological
difference to an organization. It changes the analysis playing field
from “we just can’t afford to look into this” to “of course we should
look into this!”

Next installment: Common Sense Cleansing

Missing the Point … or … The Right Way to Handle Freight

Last week I summarized my comments on how Sometimes 80% is enough here on Sourcing Innovation. I did this for multiple reasons – it seems that not everyone gets the point that, with regard to optimization, not only is 100% unattainable, but even striving for 100% is often ludicrous.

The reason for this is that you are never optimizing against actual data, but against estimated data. Remember, when you are sourcing, you are sourcing against forecasted needs, on forecasted schedules, with forecasted shipment levels associated with forecasted freight costs. Your demand will probably vary slightly, and may vary significantly; your schedules will need to be accelerated or decelerated when demand spikes or drops; your shipment sizes will vary with seasonal demand; and with freight surcharges the norm these days, your freight rates will never be locked in stone. Thus, even an “optimal” solution is not optimal.

Moreover, striving for an optimal solution instead of settling for a (very) near-optimal solution may actually decrease the quality of your solution. For example, let’s say your supplier gives you a significant discount (in the form of a rebate) of 10% if you buy 60,000 units, and your anticipated demand is precisely 60,000 units. Let’s say you award the supplier the business, but your forecast was over by 5% and you only buy 57,000 units. Let’s also say that the second-cheapest supplier’s bid was only 3% above the discounted price. In this situation, your search for the ultimate solution cost you roughly 7%!
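The rebate arithmetic can be checked in a few lines. The $1.00 list price is an assumption for illustration; the 10% rebate, 60,000-unit threshold, 5% forecast miss, and 3% runner-up gap are the figures from the example.

```python
def unit_cost(list_price, rebate, threshold, volume):
    """Effective unit cost under a volume rebate."""
    return list_price * (1 - rebate) if volume >= threshold else list_price

LIST_PRICE = 1.00                                        # assumed $1.00/unit
rebated = unit_cost(LIST_PRICE, 0.10, 60_000, 60_000)    # 0.90 if the forecast holds
runner_up = rebated * 1.03                               # runner-up bid: 3% above 0.90

# Demand comes in 5% under forecast: 57,000 units, so the rebate is lost.
actual = unit_cost(LIST_PRICE, 0.10, 60_000, 57_000)     # back to the full 1.00
overpaid = actual / runner_up - 1                        # roughly 7-8% above the runner-up
```

Chasing the "optimal" rebate left you paying full list price, while the runner-up would have cost only about 3% over the discounted price you never got.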

As another example, let’s say a certain carrier will beat every other carrier’s truckload rate by 10%, where the truckload rate applies if you fill 75% or more of the truck. Let’s also say that your expected shipment is 80% of a truckload, that a less-than-truckload shipment (say, 25% of a truckload) costs 20% more than the average shipping cost across your other carriers, and that your shipment size varies significantly by season and promotion (because you are in the food service industry, for example). One week you’ll ship 80%, the next week you’ll ship 60%, and the week after you’ll ship 120%. Chances are good that, in reality, you will miss the truckload threshold half the time; and paying 20% more half the time, while saving 10% the other half, leaves you about 5% worse off than the market average overall, not 10% better.
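Under one reading of these figures (half the weeks at the carrier’s 10% truckload discount, the other half paying a 20% less-than-truckload premium), the blended cost works out as follows. The 50/50 split and both rates are assumptions taken from the example, not data.

```python
def blended_premium(tl_discount, ltl_premium, share_below_threshold):
    """Average cost relative to the market when some weeks ship at the
    discounted truckload rate and the rest pay the less-than-truckload
    premium (hypothetical model; shares and rates are assumptions)."""
    truckload_weeks = (1 - share_below_threshold) * (1 - tl_discount)
    ltl_weeks = share_below_threshold * (1 + ltl_premium)
    return truckload_weeks + ltl_weeks - 1

# Half the weeks miss the 75% fill threshold:
premium = blended_premium(0.10, 0.20, 0.5)   # about +5% versus the market
```

Netting the discounted weeks against the premium weeks still leaves you above market, nowhere near the headline 10% savings.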

So this brings me back to the title of my post – the right way to handle freight. First of all, let’s note that when dealing with freight, you have one of five situations:

  • Freight is a small percentage of total spend, less than 20%
  • Freight is a moderate percentage of total spend, 20% to 40%
  • Freight is more or less equal to total spend, 40% to 60%
  • Freight is a large percentage of total spend, 60% to 80%
  • Freight is a majority percentage of total spend, greater than 80%
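For illustration only, the five bands can be captured in a small lookup. The band edges come from the list above; the handling of exact boundaries (20%, 40%, and so on) is a judgment call the text does not specify.

```python
def freight_case(freight_share):
    """Map freight's share of total spend (0.0-1.0) to the five cases."""
    if freight_share < 0.20:
        return 1   # small share: optimize the buy, approximate freight
    if freight_share < 0.40:
        return 2   # moderate: bundle with lighter-freight categories
    if freight_share < 0.60:
        return 3   # roughly equal: enterprise-wide freight optimization
    if freight_share < 0.80:
        return 4   # large: bundle with heavier-freight categories
    return 5       # majority: invert -- source the lanes themselves
```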

The first case is the most important case. Why? Because it is the case I find to be the most mishandled and misunderstood. I know for a fact that many corporations have thrown away millions, if not tens or hundreds of millions, of dollars because they believed that freight optimization needs to be perfect even in this case, and have put off acquiring a decision optimization solution in hopes that the perfect solution will come along soon.

This is the case where the “sometimes 80% is enough” rule comes into play. If someone provides you with an optimization solution that can handle your buy almost perfectly but can only handle freight at 80% accuracy, don’t dismiss it as imperfect and pass up an opportunity to save millions just because it’s not perfect in your eyes. Do the math! If freight is at most 20% of your spend, and the freight component is handled at least 80% accurately, then the solution computed by the optimizer will be at least 96% optimal. If freight is at most 10% of your spend, then the solution computed by the optimizer will be at least 98% optimal. If your non-optimization-assisted solution doesn’t even approach 90% of optimal, why would you pass up an opportunity to save an extra 6%-8%? After all, as per my arguments above, I’d argue you are never going to achieve more than 98% (on average) in reality anyway! So don’t look for perfection when evaluating optimization solutions – chances are you will not find it (even though some solutions might come quite close), as optimization is still a maturing and improving technology.
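The "do the math" step is just a weighted average: the worst-case overall optimality is the product share handled (nearly) perfectly plus the freight share handled at the stated accuracy. A minimal sketch:

```python
def worst_case_optimality(freight_share, freight_accuracy, product_accuracy=1.0):
    """Lower bound on overall solution quality: a weighted average of the
    product side (handled nearly perfectly) and the freight side
    (handled at the stated accuracy)."""
    return (1 - freight_share) * product_accuracy + freight_share * freight_accuracy

bound_20 = worst_case_optimality(0.20, 0.80)   # freight at 20% of spend -> 0.96
bound_10 = worst_case_optimality(0.10, 0.80)   # freight at 10% of spend -> 0.98
```

Even with freight modeled imperfectly, the bound stays well above what an unassisted manual award typically achieves.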

What about the other cases? The fifth case, where freight is the majority of your spend, is also easy – you simply invert the problem: source the freight lanes, and treat the product buy as the freight.

The middle cases are harder. In cases two and four, where freight is a moderate or large percentage of spend, the best way to proceed is to combine the category with similar categories that can, or in all likelihood will, be shipped on the same trucks or in the same lanes – preferably categories that, in the second case, have significantly lower freight and bump you back into the first case, or that, in the fourth case, have significantly higher freight and bump you up into the fifth case, since we already know how to handle those cases. Case three is the toughie – product cost and freight are almost equal. What do you focus on?

This is the case where you do enterprise-wide freight optimization. You optimize all of the product buys, amalgamate all the freight requirements, and then optimize the freight. Unless, of course, your spend is significant enough, your pocketbook deep enough, and your patience long enough to throw CombineNet’s top-end optimization platform at it. (It really depends on your organization size – if you are a large organization, the cost of CombineNet should be inconsequential, especially considering the potential savings. If you are a small organization, the expected savings are not likely to exceed the cost of the solution by much. If you are a mid-size organization, it depends on category size and characteristics.) On an ultra-high-end server, their platform can certainly handle most of the problems you can throw at it, but not all problems solve in less than a second … a large and complex enough problem will take minutes, hours, or even days regardless of how good your optimization platform is. (However, if it takes more than a few hours, chances are your model is not the right one.) Also, since their solution is not part of any suite where your data resides, there will be some integration time. (But that’s a small price to pay to save $$$!)

Show Me The Money! (Supply Chain Cost Reduction Opportunities)

Show Me The Money!

Sorry to disappoint you, but this isn’t a post about Cuba Gooding Jr., whom all of you action fans will remember as recurring minor character Billy Colton in MacGyver near the end of the series.

Instead, this is a post about how you can Show Me The Money by applying the proper technology at the proper places and proper times in your supply chain to save big, even with rising material costs, inflation, and the global talent war.

The reality is that unless you are best-in-class (and the harsh reality is that, by definition, the vast majority of you are not), your supply chain is hemorrhaging cash. And in all likelihood, lots of cash. Where, you ask? Everywhere!

Let’s take a simplified PC supply chain for example. Raw materials are mined and shipped to a processing plant where they are refined and shipped to base part manufacturers. These base parts (such as chips, wires, etc.) are then shipped to component manufacturers who produce circuit boards, hard drives, cables, etc. These base components are then shipped to an assembly plant where the PC is assembled. From the assembly plant it is shipped to a central distribution center where it is then shipped to either a regional distribution center, store, or your home, depending on the sophistication of the distribution center.

Furthermore, the specifics of your supply chain depend on who you choose to buy from, who your suppliers choose to buy from, who is chosen to handle your transportation requirements, and who you choose to sell to.

From this example, we derive the following fundamental sources of cost:

  • Labor (inc. raw material collection, processing, & subsequent part and component handling)
  • Parts (inc. design, component raw materials, & built in production operations)
  • Operations (inc. part production, handling, & overhead)
  • Transportation (inc. raw materials, parts, components, & finished product)
  • Buying (who you buy from, where, & when)
  • Selling (who you sell to, where, & when)

However, from a savings viewpoint, not all of these are equally important, since only some of these are really hemorrhaging cash, despite their absolute value on the cash flow statements.

  • Labor is more or less defined by market rates. Moreover, companies that pay more for more productive people often have a higher ROI per person than those that pay less.
  • Selling is marketing, materials, and labor. The first is generally not under your purview, and again the issue is not cost, but results; the second is covered by buying; and the third we just discussed.

This tells us that the fundamental sources of cost, and thus the fundamental sources of unnecessary cost, ripe for savings, have to do with:

  • Parts
  • Operations
  • Transportation
  • Buying

And those of you reading regularly will know what the answers are.

But back to the point – how do you Show Me The Money? You use these solutions to identify where you are hemorrhaging cash, tackle the issues head on, and stop the leak. And then you point to the big, fat improvement to the bottom line as your doing. And that’s how you Show Me The Money!

It’s also why I keep talking about companies like the following:

They may be small, they may be new, but they are trying to build a solution that will help you find those savings leaks that you are not likely to find on your own. So keep reading!

Spend Analysis I: The Value Curve

Today I’d like to welcome Eric Strovink of BIQ who, as I indicated in my There’s No Spend Analysis Without the Slice ‘N’ Dice post, is going to be authoring the first part of this series examining what is required for a true spend analysis system, spend analysis 2.0 if you are part of the 2.0 movement, as opposed to just a basic spend visibility system.

Spend Analysis has always suffered from what the late British humorist
Stephen Potter might have called the “So What Diathesis.” In other words,
now that you have your spending loaded and classified, what next? Well,
if you’ve never seen your purchasing data loaded into a spend analysis
system, you’re in for a treat, because you can find savings opportunities
just by drilling around. It’s often that easy — drill around; find
opportunities.

However, once the low-hanging fruit is harvested, which can take
anywhere from 6 to 12 months, the value of the spend analysis system
declines steeply — at which point Mr. Potter’s observation comes home
to roost. As illustrated below, there is a moment at which the cost of
the spend analysis system begins to exceed its ongoing value.

It is shortly after this time that (1) usage of the product drops to low
levels; (2) the rest of the organization begins to question the value of
the software; and (3) stakeholders come under pressure to justify continued
high expenditures.

That’s why it’s odd to hear people talk about “The Spending Cube” —
in capital letters — as though there were only one data cube ever
to be built. Actually, there are many different ways to look at spend,
and there’s lots of spend data that simply can’t be organized into a
single data cube anyway. How about a compliance cube, oriented around
invoice level data? A purchasing card cube, specific to p-card idiosyncrasies?
A T&E cube, built from travel agency data on “best price” versus
“actual price,” tracking employee travel and the reasons for the discrepancies?

In fact, it’s obvious to anyone who has worked with multiple datasets at
the A/P, PO, and invoice level that there are many, many different kinds
of data to analyze. Each dataset addresses more opportunity, and presents
another chance to apply a sophisticated analysis tool. Some of these
datasets aren’t “spending” datasets at all, but consist of demand-side
information — for example, cell phone or fleet vehicle usage records,
or operational data such as equipment recovery and maintenance logs.

If a spend analysis system makes it easy to load data and create new datasets,
which it should; and if the system supports as many datasets as you’d like,
as it ought; then there really isn’t any limit to how often the system can
be used, or to how many different kinds of data it can be applied. Which
means that a full-utilization spend analysis system value curve looks more
like this:

In other words, each use of the spend analysis system provides high
initial value, as well as residual value; but the system is used again
and again for new sets of data. The value of the spend analysis
software therefore remains high over time.
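The two value curves described above (and in the missing charts) can be made concrete with a toy decay model. It is entirely hypothetical: the initial monthly value, the six-month half-life, and the one-new-dataset-every-six-months cadence are illustrative assumptions, not figures from the text.

```python
def single_cube_value(month, initial=100.0, half_life=6.0):
    """Monthly value of one spend cube: high at first, decaying as the
    low-hanging fruit is harvested (hypothetical decay model)."""
    return initial * 0.5 ** (month / half_life)

def multi_dataset_value(month, initial=100.0, half_life=6.0, new_dataset_every=6):
    """Sum of value from every dataset loaded so far, assuming one new
    dataset is brought into the system every `new_dataset_every` months."""
    return sum(single_cube_value(month - start, initial, half_life)
               for start in range(0, month + 1, new_dataset_every))

year_two_single = single_cube_value(18)    # a fraction of the initial value
year_two_multi = multi_dataset_value(18)   # fresh datasets keep value high
```

In the single-cube model, value has decayed to a small fraction by month 18; in the multi-dataset model, each newly loaded dataset restarts the curve, so total value stays at or above its starting level.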

Next installment: The Psychology of Spend Analysis

Enterprise Manufacturing Intelligence

Informance International just released their Enterprise Manufacturing Intelligence solution for manufacturing companies eager to accelerate improvement initiatives, drive operating strategies, and obtain actionable insight into operational performance.

According to their press release, their EMI solution delivers the top-three critical capabilities required to drive better business decisions:

  • multi-site performance analysis
  • enterprise visibility of production financial performance
  • data aggregation from multiple plant facilities

The solution consists of two modules:

  • Informance Manufacturing Strategist
    • What if Scenario Analysis

Evaluate strategies and their impact on KPIs based on real-time data.

    • Bi-Directional Information Flow

      Allows for the development of strategies and day-to-day operating tactics.

    • Real-Time Performance Monitoring

      A solid foundation for closed-loop process improvement.

  • Informance Enterprise Alerts
    • Proactive Notifications

      Automatic warnings if the enterprise is in danger of missing a metric at any level – facility, asset, or resource.

    • Dashboard Monitoring

      Manage issues globally from a single access point.

According to Informance, this allows your enterprise to:

  • Unlock Capacity
  • Increase Productivity without additional Capital Investment
  • Reduce Inventory and Labor Costs
  • Decrease Working Capital

since it can now

  • accelerate, sustain, and benchmark operational performance initiatives such as lean manufacturing, Six Sigma, and TPM,
  • drive operating strategies at the executive level into execution tactics at the plant level, and
  • provide intelligence in the form of actionable insight from actual data.

So what is Enterprise Manufacturing Intelligence? According to Informance, it is a strategic decision support system that provides real-time visibility and a consolidated view into your entire manufacturing operations, with powerful analytics, exception-based alerting capabilities, and integration to enterprise systems, giving corporate decision makers control over all aspects of your manufacturing operations.

Whether or not you choose to define Enterprise Manufacturing Intelligence, or EMI, this way is up to you. What I can tell you is that these capabilities are important, since inefficient operations can cost you a lot of money. That’s why I’ve invited Sudy Bharadwaj, CMO & VP of Solutions Consulting, formerly of Aberdeen, to explain to us precisely what Informance EMI is and how it can help your manufacturing organization, or your contract manufacturer, increase productivity and save money.