
Sometimes 80% is enough … (when employing Decision Optimization)

Over on Spend Matters today, Jason put up a short post titled Why 80% is not enough.  Of course, I couldn’t leave this one alone, since sometimes 80% is enough.  The comments, which could form a post in themselves, are reprinted below.

Jason:

Obviously not an Optimization Post! Because, with optimization, often 80% is enough! Let’s say you’re operating at 90% efficiency. That means you have 10% to go. If you can close 80% of that gap, get to 98%, and do it affordably (increasing cash flow and profit in the process), that is downright phenomenal regardless of the industry you’re in!
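To make that gap-closing arithmetic concrete, here’s a quick back-of-the-envelope sketch in Python (just the illustrative numbers above, nothing more):

```python
# Closing 80% of the remaining efficiency gap.
current_efficiency = 0.90          # operating at 90% efficiency
gap = 1.0 - current_efficiency     # 10% left to go
fraction_closed = 0.80             # capture 80% of that gap

new_efficiency = current_efficiency + fraction_closed * gap
print(f"New efficiency: {new_efficiency:.0%}")   # -> 98%
```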

For another example, let’s say your primitive spreadsheet model will give you an award allocation that is 80% optimal. Let’s also say that your Platform Optimization Engine (POE), bundled with your cutting-edge e-sourcing suite, will give you a solution that’s 96% optimal (closing 80% of the remaining gap). Let’s say this event is for a 1M spend, that the amortized cost of the POE for this event is 20K, and that the cost of a one-time use of a Best-of-Breed (BoB) solution, guaranteed to achieve 100% of optimality, is 200K. The POE saves you 16% of 1M minus 20K, or 140K. However, BoB won’t save you anything, since the 20% of 1M, or 200K, “saved” by BoB is just enough to cover its cost, leaving you with a net savings of 0.
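And the POE-versus-BoB comparison, worked out the same way (a minimal sketch using only the made-up figures above, not real pricing for any vendor):

```python
spend = 1_000_000

# Fraction of spend recovered relative to a 100%-optimal award.
baseline_optimality = 0.80   # primitive spreadsheet model
poe_optimality = 0.96        # bundled Platform Optimization Engine
bob_optimality = 1.00        # one-time Best-of-Breed engagement

poe_cost = 20_000            # amortized cost of the POE for this event
bob_cost = 200_000           # one-time BoB cost

# Net savings versus the spreadsheet baseline = extra spend recovered - tool cost.
poe_net = (poe_optimality - baseline_optimality) * spend - poe_cost
bob_net = (bob_optimality - baseline_optimality) * spend - bob_cost

print(f"POE net savings: {poe_net:,.0f}")   # 140,000
print(f"BoB net savings: {bob_net:,.0f}")   # 0
```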

In other words, when you’re talking about Optimization, often 80% is enough.

For more insight into decision optimization (particularly as it relates to strategic sourcing), Part VII of my CombineNet Series went up over on Sourcing Innovation today.

Dan:

Precisely. I was referring to a single event, initiative, etc.

The reality is thus: Sometimes 80% is enough, and Sometimes 80% is not even close. It’s all relative.

If you’ve got 80% of spend under management, that’s a big win. (Most companies struggle to get 50%!) But if your spend is only 80% optimal – i.e. you’re leaving 20% on the table – that’s a huge loss.

The truth is the following:
* you need to get as much spend under management as possible
* you need to actively manage it (contract management, compliance management, spend analysis and maverick spend identification, invoice verification, etc.)
* you need to improve your operations constantly
* but you should be satisfied with improvements that close 80% of the gap – you’ll never be perfect at anything, but getting 80% closer to optimal from where you are today is a huge cost savings. Moreover, if you haven’t already, you’ll quickly find out that the 80/20 rule holds in technology too – emerging best of breed solutions that get you 80% closer are readily affordable (e.g. Procuri & Iasta vs. Emptoris & Ariba), but solutions that go beyond 80% improvement are very costly.

So take your 80% win in each category with emerging best-of-breed on-demand offerings, and wait for improvements to come along to get you 80% closer again in the next iteration. (And if they don’t – it’s on-demand – junk it and move to the next on-demand solution if that’s what it takes.)

Consider this: Let’s say I’m spending 500M on acquisitions, and it will cost roughly 10M in on-demand software, systems, and labor to reduce that to 430M, whereas it will cost 40M for high-end best-of-breed products and consulting armies to get that spend down to 400M. Which should I choose? The 10M (80%) solution, as it nets me just as much as the 40M (100%) solution: 60M. (500M − 430M − 10M = 500M − 400M − 40M = 60M)
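Spelled out, using the same hypothetical figures:

```python
spend = 500_000_000

# Option A: on-demand software, systems, and labor
on_demand_cost = 10_000_000
on_demand_result = 430_000_000

# Option B: high-end best-of-breed products plus consulting armies
bob_cost = 40_000_000
bob_result = 400_000_000

net_a = (spend - on_demand_result) - on_demand_cost   # 70M saved - 10M cost
net_b = (spend - bob_result) - bob_cost               # 100M saved - 40M cost

print(net_a == net_b)                 # True: both net 60M
print(f"{net_a:,} vs {net_b:,}")
```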

And that’s why I preach TVM – Total Value Management – based strategic sourcing efforts. Don’t just look at your inbound cost, or even your total cost of ownership to produce the product; look at your outbound costs as well. (You have to in turn ship your products to your clients, and poor quality will generate returns that eat away at your savings, etc.)
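A rough sketch of what a TVM-style comparison might look like (the cost categories and every number here are hypothetical, purely to illustrate looking past the inbound price):

```python
# Compare two bids on total value, not just unit price.
def total_value_cost(unit_price, volume, inbound_freight, outbound_freight,
                     expected_return_rate, cost_per_return):
    acquisition = unit_price * volume
    logistics = inbound_freight + outbound_freight
    quality = expected_return_rate * volume * cost_per_return
    return acquisition + logistics + quality

# Hypothetical bids: the cheaper unit price loses once outbound and quality costs count.
bid_low_price = total_value_cost(9.50, 100_000, 40_000, 120_000, 0.04, 25.0)
bid_high_quality = total_value_cost(10.00, 100_000, 40_000, 90_000, 0.01, 25.0)

print(f"Low-price bid:    {bid_low_price:,.0f}")     # 1,210,000
print(f"High-quality bid: {bid_high_quality:,.0f}")  # 1,155,000
```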

CombineNet VII: BoB’s Power Source

Yesterday I told you not only that CombineNet can support all of the basic cost and constraint categories required for true decision support, but also that the models it generates accurately represent all of the costs and constraints it supports, and that it can solve those models faster than its competitors the vast majority of the time. I also told you that today I’d highlight where this unique capability comes from.

Paul highlighted it in his recent post, Now that’s an Electric Engine …. CombineNet’s ClearBox uses sophisticated tree-search algorithms to find the optimal allocation. Furthermore, the algorithms are ‘anytime algorithms’: they provide better and better solutions as the search progresses. CombineNet has also invented a host of proprietary tree-search techniques, including new branching strategies, custom cutting-plane families, cutting-plane generation and selection techniques, and machine-learning methods for predicting which techniques will perform well on the instance at hand (for use in dynamically selecting a technique).
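To give a feel for what an “anytime” tree search looks like in principle, here is a toy branch-and-bound sketch for a single-award allocation (one winning supplier per item). This is a generic illustration of the anytime idea with made-up bids – not ClearBox, and nothing like its proprietary branching or cutting-plane techniques; in a real clearing problem, side constraints couple the items and make the search genuinely hard.

```python
import math

def anytime_branch_and_bound(prices, on_improvement=print):
    """prices[item][supplier] = bid; award each item to one supplier, minimize cost.
    Reports every improved incumbent as it is found (the 'anytime' behaviour)."""
    items = list(prices)
    best_cost, best_assign = math.inf, None

    # Lower bound on the cost of the items not yet assigned.
    def bound(idx):
        return sum(min(prices[i].values()) for i in items[idx:])

    def search(idx, cost, assign):
        nonlocal best_cost, best_assign
        if cost + bound(idx) >= best_cost:
            return                      # prune: cannot beat the incumbent
        if idx == len(items):
            best_cost, best_assign = cost, dict(assign)
            on_improvement(f"new incumbent: {best_cost} {best_assign}")
            return
        item = items[idx]
        # Branch on cheaper bids first so good incumbents appear early.
        for supplier, bid in sorted(prices[item].items(), key=lambda kv: kv[1]):
            assign[item] = supplier
            search(idx + 1, cost + bid, assign)
            del assign[item]

    search(0, 0, {})
    return best_cost, best_assign

# Made-up bids: three items, two suppliers.
bids = {"widgets": {"A": 120, "B": 100},
        "gadgets": {"A": 80,  "B": 95},
        "gizmos":  {"A": 60,  "B": 55}}
print(anytime_branch_and_bound(bids))
```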

Even though every model can be built and solved on an off-the-shelf optimizer, the reality is that we are dealing with NP-Complete problems, which means that worst-case solve time grows exponentially with problem size. This means that for any given solver and any given model class, there is a limit on the average model size that can be solved in any given time window. Although an efficient model formulation combined with an appropriately tuned off-the-shelf solver can handle a very decently sized problem, one must not ignore the fact that generic solvers use generic algorithms that are not always suited to the problem at hand. It is therefore likely that an appropriately designed custom algorithm could solve the model faster, if not significantly faster.
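For contrast, here is what the off-the-shelf route looks like: the same kind of award allocation expressed as a small integer program and handed to a generic MIP solver. The sketch below uses the open-source PuLP/CBC stack purely as a stand-in for “an off-the-shelf optimizer,” and the split-of-business constraint and bids are made up:

```python
from pulp import LpProblem, LpMinimize, LpVariable, lpSum, LpBinary, value

suppliers = ["A", "B"]
items = ["widgets", "gadgets", "gizmos"]
price = {("A", "widgets"): 120, ("B", "widgets"): 100,
         ("A", "gadgets"): 80,  ("B", "gadgets"): 95,
         ("A", "gizmos"):  60,  ("B", "gizmos"):  55}

prob = LpProblem("award_allocation", LpMinimize)
x = {(s, i): LpVariable(f"x_{s}_{i}", cat=LpBinary) for s in suppliers for i in items}

# Objective: total awarded cost.
prob += lpSum(price[s, i] * x[s, i] for s in suppliers for i in items)

# Each item is awarded to exactly one supplier.
for i in items:
    prob += lpSum(x[s, i] for s in suppliers) == 1

# A made-up business rule: supplier B may win at most two of the three items.
prob += lpSum(x["B", i] for i in items) <= 2

prob.solve()
print("total cost:", value(prob.objective))
for (s, i), var in x.items():
    if var.value() == 1:
        print(f"{i} -> {s}")
```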

What is not as obvious is the degree of difficulty often associated with developing these custom algorithms for strategic sourcing decision optimization. The nature of these problems is that it is very hard to determine, for any given solution strategy and any given model instance, whether it will solve the model faster or slower than another, similar strategy. That’s the fundamental nature of NP-Completeness: if we could reliably answer questions like that, we’d be well on the way to putting these problems in P.

As highlighted in the post, CombineNet began developing its algorithms in 1997, and it has 16 people working on them. The team has tested hundreds of techniques to find those that shorten solve times for Expressive Commerce clearing problems. Some of the techniques are specific to market clearing, and others apply to combinatorial optimization more broadly. And that’s where the strength of CombineNet lies – 10 years of research focused on determining how to solve, quickly and optimally, the combinatorial optimization problems that underlie strategic sourcing decision optimization. Everything else is just icing on the cake.