Category Archives: Decision Optimization

How Should You Define Procurement Success?

This question is a tough nut to crack. We hinted at the importance of defining it three years ago in our post that asked how do you define Procurement success, which noted that if you consider the art of the Strategic Sourcing Process, the Category Management Process, or the Contract Management Lifecycle, you see that not only do they all start out about the same at a high level, but that a key requirement of each step is an acceptable definition of success.

This means that if you want to be successful, you need a good definition of success. But what should it be?

If you ask the CFO, she will say it should be cost savings! Reduce the outflow!

If you ask the Chief Engineer, it should be the best quality and reliability money can buy!

If you ask the Production Chief, it should be rock solid supply availability.

If you ask the CMO, it should have the most unique gee-whiz features on the market for the biggest marketing splash.

If you ask the VP of Sales, it should be the product that comes with the most value-adds so they can command the greatest price.

And so on.

On SI, we have repeatedly said the definition of procurement success should always be the outcome that brings the most value to the organization, but this can be hard to define when there are a number of competing viewpoints on what value is.

However, we can define Value as the outcome that balances the tradeoffs between the goals of the respective stakeholders for maximum return, against an agreed-upon value scale that normalizes:

  • a dollar of savings (for the CFO) against
  • a reliability metric (for the Chief Engineer) against
  • an expected availability metric (for the Production Chief) against
  • a feature differential relative to the market average (for the CMO) against
  • a value-add differential (for the VP of Sales), etc.

Now, you might be wondering how you do that. The answer is simple: define an expected dollar value for each metric. It’s not as hard to do as you think (as long as you have the [big] data, the model, and the software to calculate it)!

The CFO metric is easy: a dollar of savings is a dollar of savings.

The reliability metric is not that much harder. A reliability of 90% vs. 93% during the warranty period carries an incremental cost equal to 3% of the units times the replacement cost (which is base product cost + processing cost if outside of supplier warranty, or processing cost + return cost if inside supplier warranty), and this cost can be amortized per unit.
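As a sanity check, that arithmetic can be sketched in a few lines. The 90% vs. 93% reliability figures are the ones from the paragraph above; the $50 replacement cost is a hypothetical value for illustration.

```python
# Sketch: convert a reliability difference into an amortized dollar value per unit.
# The replacement cost here is a hypothetical figure, not from the post.

def reliability_cost_per_unit(reliability_a: float, reliability_b: float,
                              replacement_cost: float) -> float:
    """Incremental per-unit cost of the less reliable option during warranty."""
    incremental_failure_rate = abs(reliability_b - reliability_a)  # e.g. 0.03
    # Expected extra replacements per unit purchased, costed at replacement cost.
    return incremental_failure_rate * replacement_cost

# 90% vs. 93% reliability, $50 replacement cost (base product + processing):
premium = reliability_cost_per_unit(0.90, 0.93, 50.0)
print(f"${premium:.2f} per unit")  # → $1.50 per unit
```

In other words, the 93% option is worth about $1.50 per unit more than the 90% option, and that number can sit on the same value scale as the CFO's savings dollar.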

The supply availability metric is more involved, but still easy to define. First you have to calculate an expected chance of disruption based on it. Once you do, the cost can be approximated as follows: (% chance of disruption × length of disruption in days × cost per day of disruption), amortized over the units. If there is a 10% chance of disruption per year, then you expect one every 10 years, for the estimated length of time, at the estimated cost per day, and you amortize that cost over each unit purchased each year. Not perfect, but a good approximation. To find the conversion from expected availability percentage to chance of disruption, you mine your data and extrapolate the multiplier. Easy peasy (with a modern cognitive or deep analytics platform).
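The same approximation in code, assuming the 10% annual disruption chance from above; the disruption length, daily cost, and annual volume are hypothetical figures:

```python
# Sketch: expected annual disruption cost, amortized per unit purchased.
# Only the 10% probability comes from the text; the rest are made-up inputs.

def disruption_cost_per_unit(p_disruption: float, disruption_days: float,
                             cost_per_day: float, annual_units: int) -> float:
    """Expected annual disruption cost spread over each unit bought per year."""
    expected_annual_cost = p_disruption * disruption_days * cost_per_day
    return expected_annual_cost / annual_units

# 10% chance per year, 5-day disruption, $20,000/day, 100,000 units/year:
print(disruption_cost_per_unit(0.10, 5, 20_000.0, 100_000))  # → 0.1 (ten cents/unit)
```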

The CMO metric is tricky. Just how much better is that gee-whiz feature? Probably not nearly as important as the CMO claims. To figure out an approximate dollar value per unit here, you will have to mine historical data and compare the incremental marketing value of the company’s “most differentiated” or “feature rich” products against its “least differentiated” or “feature poor” products, relative to the estimated market share each product obtained. If “feature rich” products typically command an extra 10% of market share, each unit is valued at a premium of 10%.

The value-add is easy — mine the historical data to extract the dollar value of each “value-add” available to the company.

Then, to find the optimal trade-off during a sourcing event, build a multi-objective optimization model that maximizes the overall value generated from these goals.
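A minimal sketch of what such a model reduces to once every stakeholder metric has been normalized to dollars per unit: total the normalized values per offer and take the maximum. All offers, prices, and figures below are hypothetical, and a real sourcing event would add business constraints and a proper solver rather than a simple maximum.

```python
# Sketch (hypothetical data): once each metric is a dollar value per unit,
# "maximum overall value" is a comparison on a single normalized scale.
from dataclasses import dataclass

@dataclass
class Offer:
    supplier: str
    unit_price: float          # CFO: lower price = savings
    reliability_cost: float    # Chief Engineer: amortized warranty cost/unit
    disruption_cost: float     # Production Chief: amortized disruption cost/unit
    feature_premium: float     # CMO: market-share premium as value/unit
    value_add: float           # VP of Sales: dollar value of value-adds/unit

    def total_value(self, baseline_price: float) -> float:
        savings = baseline_price - self.unit_price
        return (savings - self.reliability_cost - self.disruption_cost
                + self.feature_premium + self.value_add)

offers = [
    Offer("A", 9.50, 1.50, 0.10, 0.00, 0.25),  # cheaper but less reliable
    Offer("B", 9.80, 0.50, 0.05, 0.95, 0.10),  # pricier, feature-rich
]
best = max(offers, key=lambda o: o.total_value(baseline_price=10.00))
print(best.supplier)  # → B
```

Note how the cheapest offer loses: its reliability penalty outweighs the extra savings, which is exactly the tradeoff the value scale is meant to expose.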

In other words, what used to be downright impossible is now pretty straightforward with strategic sourcing decision optimization and cognitive sourcing.

Good Working Capital Management is More than Just Timing Payables and Receivables

A few years ago we ran a post on the essence of good working capital management. We noted that, at least from a basics point of view, all one really has to do is:

  • Get a grip on receivables.

    When are the customer payments for sales due? The reimbursements from suppliers for reaching volume tiers due? The tax rebates?

  • Get a clear picture on fixed payables.

    What is the average monthly payroll? Overhead? And projected supplier invoices?

  • Get a good estimate of average disruption costs.

If a receivable isn’t received on time, what’s the impact? Especially if it could affect a supplier payment schedule that needs to be maintained to ensure timely supply.

This is the foundation, but in today’s unstable and unpredictable business environment, it’s not enough. To truly maximize working capital, one has to maximize the value of that capital, which means knowing when to use it for internal costs, when for supplier payments, and when for investments. This means one also has to:

  • Understand the value of early supplier payments.

Not just the value of the early payment discount, but the overall value to the supplier. If the supplier doesn’t have to borrow at a cost of capital two or three times that of the buying organization, and then pass that cost on to the buyer in its overhead, that’s a big potential savings to the organization, even if the organization has to borrow to pay early.

  • Understand the organization’s cost of borrowing.

If the organization can borrow at a low interest rate of 3% or 4% a year in its home market, whereas a supplier can only borrow at a high interest rate of 12% to 20% in its market, the organization can save by borrowing. But you don’t borrow just to save on costs, you borrow to profit. If you can accelerate production and accelerate profitable sales, the cost of borrowing can be a pittance compared to the gain. And if you have good investment opportunities, those can also be a good reason to borrow.
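The rate arbitrage behind early payment can be made concrete. The 4% and 16% rates below sit inside the ranges quoted above; the invoice size and the 60-day window are hypothetical, and simple (non-compounding) interest is assumed:

```python
# Sketch: value of paying a supplier early with money borrowed at the buyer's
# rate, versus the supplier financing the gap at its own (much higher) rate
# and passing that cost back through its overhead.

def early_payment_arbitrage(invoice: float, days_early: int,
                            buyer_rate: float, supplier_rate: float) -> float:
    """Annualized rates; simple interest over the early-payment period."""
    period = days_early / 365
    buyer_financing_cost = invoice * buyer_rate * period
    supplier_financing_cost = invoice * supplier_rate * period
    return supplier_financing_cost - buyer_financing_cost

# $1M invoice paid 60 days early; buyer borrows at 4%, supplier at 16%:
print(round(early_payment_arbitrage(1_000_000, 60, 0.04, 0.16), 2))  # → 19726.03
```

Roughly $20K of financing cost per $1M invoice is removed from the supply chain, value that can be split between the parties as a discount.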

  • Understand the organization’s investment opportunities.

How much can be gained from accelerating production? From improving the process? From investing in R&D? From investing in subsidiaries?

Then, when you have all of this information, you do one more step:

  • Build a Working Capital Optimization Model

    and run it. Input all the receivables, payables, disruption costs, early payment opportunities, borrowing opportunities, and investment opportunities and let an optimization-backed cognitive system help you put a plan in place to not only manage working capital, but profit from it.
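The steps above can be sketched as a toy allocator. All opportunities, rates, and amounts are hypothetical, and a real working capital model would be time-phased, constraint-driven, and solved by an optimization engine rather than a greedy loop:

```python
# Sketch (hypothetical data): fund every opportunity whose expected return
# beats the cost of borrowing, spending cash first and borrowing the rest.

def plan_capital(cash: float, borrow_rate: float,
                 opportunities: list[tuple[str, float, float]]):
    """opportunities: (name, required outlay, expected annual return rate)."""
    funded, borrowed = [], 0.0
    for name, outlay, ret in sorted(opportunities, key=lambda o: -o[2]):
        # Skip anything that would need borrowing at a rate above its return.
        if ret <= borrow_rate and cash < outlay:
            continue
        if cash >= outlay:
            cash -= outlay
        else:
            borrowed += outlay - cash  # top up with borrowed funds
            cash = 0.0
        funded.append(name)
    return funded, borrowed

funded, borrowed = plan_capital(
    cash=500_000, borrow_rate=0.04,
    opportunities=[("early-pay discount", 300_000, 0.12),
                   ("R&D line upgrade", 400_000, 0.09),
                   ("low-yield deposit", 200_000, 0.02)])
print(funded, borrowed)  # → ['early-pay discount', 'R&D line upgrade'] 200000.0
```

The 2% deposit is skipped because borrowing at 4% to earn 2% destroys value, while the 9% R&D opportunity justifies borrowing the $200K shortfall.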

Category Management Savings Drying Up? Time to Cross-Optimize!

Leaders know that the best way to savings success, especially when the CFO and CEO demand savings today (even though this could sacrifice value tomorrow), is category management — a razor sharp focus on buying like products from like suppliers that allows for apples-to-apples comparison across products on key dimensions of price, quality, warranty, lead-time, etc. so that the best buy that meets the mandatory savings target can be made every time (and as much value preserved in the category as possible).

But Leaders also know that, just as the third auction in a row increases costs, good category management sees savings fall rapidly as the fat is quickly squeezed out of the margin and the waste quickly squeezed out of the production, delivery, and inventory processes as everything is optimized. This means that as soon as raw material costs go up, category costs go up, not down.

This can be problematic when (unrealistic) expectations are placed on the Procurement department year after year and savings need to be found even when, apparently, none exist. But here’s the thing: while savings no longer exist in the raw materials, or even the overhead, of production, they do exist in distribution and inventory, and they still exist in volume. But only in volume beyond what’s in the category.

This means that the only way to extract them is to increase the volume, which means that you need to simultaneously cross-source and cross-optimize across categories that can be shipped together from the same supply base. For example, while it might be logical to separate brass, bronze, and copper parts from a category management perspective, considering that some suppliers will likely supply parts across these categories (considering brass and bronze are alloys that contain copper), from a sourcing perspective it makes sense to source all three categories simultaneously. This way you can optimize logistics and negotiate additional volume discounts based on spend levels.
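To see why the combined volume matters, consider a hypothetical supplier discount schedule (all tiers and volumes below are illustrative): each metal category alone sits in the bottom tier, while the combined award clears the top one.

```python
# Sketch (hypothetical tiers/volumes): cross-sourcing related categories from
# a shared supply base unlocks discount tiers none of them reaches alone.

def tier_discount(volume: int, tiers: list[tuple[int, float]]) -> float:
    """tiers: (minimum volume, discount rate), sorted ascending by minimum."""
    discount = 0.0
    for minimum, d in tiers:
        if volume >= minimum:
            discount = d
    return discount

tiers = [(10_000, 0.02), (25_000, 0.05), (50_000, 0.08)]
volumes = {"brass": 12_000, "bronze": 18_000, "copper": 22_000}

separate = {c: tier_discount(v, tiers) for c, v in volumes.items()}
combined = tier_discount(sum(volumes.values()), tiers)
print(separate, combined)  # each category alone: 2%; combined 52,000 units: 8%
```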

This also works in CPG — a supplier may supply computer devices, audio devices, and home security devices — and while you may want to manage these separately, you want to source them simultaneously. And it will work across seemingly unrelated categories if you are buying from suppliers that are essentially distributors (like office supplies vendors, MRO vendors, etc.). All you need to do is find a set of categories where the majority of products come from the same supply base. How do you do this? Simple: use a modern spend analysis tool.

And how do you source multiple categories simultaneously and cross-optimize logistics, inventory, and discounts for the lowest overall total cost of ownership (while maintaining value)? Strategic sourcing decision optimization — the technology SI has been telling you to acquire for a decade. Which vendor? Whichever one suits your needs best. Coupa, Jaggaer ASO, Jaggaer Bravo, and Keelvar are all great. Determine is re-building the Iasta capability on the b-pack platform, and when complete, will join the A-list again … and BidMode is about to hit the scene. Just get one, so you’re not left behind.

Introducing LevaData. Possibly the first Cognitive Sourcing Solution for Direct Procurement.

Who is LevaData? LevaData is a new player in the optimization-backed direct material prescriptive analytics space, and, to be honest, probably the only player in that space. While Jaggaer has ASO and Pool4Tool, and its direct material sourcing is optimization-backed, and while it has VMI, it does not have advanced prescriptive analytics for selecting the vendors who will ultimately manage that inventory.

LevaData was formed back in 2014 to close the gaps that the founders saw in each of the other sourcing and supply management platforms they had been a part of over the previous two decades. They saw the need for a platform that provided visibility, analytics, insight, direction, optimization, and assistance — and that is what they set out to do.

So what is the LevaData platform? It is a sourcing platform for direct materials that integrates RFX, analytics, optimization, (should-)cost modelling, and prescriptive advice into a cohesive whole that helps a buyer buy better and which, to date, has reduced costs (considerably) for every single client.

For example, the first-year realized savings for a $5B server and network company that deployed the LevaData platform was $24M; for a $2.4B consumer electronics company, it was $18M; and for a $0.6B network customer, it was $8M. To date, they’ve delivered over $100M of savings across $50B of spend to their customer base, and they are just getting started. This is due to the combination of efficiency, responsiveness, and savings their platform generates. Specifically, about 60% of the value is direct material cost reduction and incremental savings, 30% is responsiveness and being able to take advantage of market conditions in real time, and 10% is improved operational efficiency.

The platform was built by supply chain pros for supply chain buyers. It comes with a suite of analytics reports, but unlike the majority of analytics platforms, the reports are fine-tuned to bill of materials, component, and commodity intelligence. The reports can provide deep insight not only into costs by product, but costs by component and/or raw material, and can roll up and down bills of materials and raw materials to create insights that go beyond simple product or supplier reports. Moreover, on top of these reports, the platform can create cost forecasts and amortization schedules, track rebates owed, and calculate KPIs.

In order to provide the buyer with market intelligence, the application imports data from multiple market feeds, creates benchmarks, compares those benchmarks to internal market data, automatically creates competitive reports, and calculates the foundation costs for should-cost models.

And it makes all the relevant data available within the RFX. When a user selects an RFX, it can identify suppliers, identify current market costs, use forecasts and anonymized community intelligence to calculate target costs, and then use optimization to determine what the award split would be, subject to business constraints, and identify the suppliers to negotiate with, the volumes to offer, and the target costs to strive for.

It’s a first-of-its-kind application, and while some components are still basic (there is no lane or logistics support in the optimization model), missing (there is no ad-hoc report builder), or incomplete (such as collaboration support between stakeholders and a strong supplier portal for collaboration), it appears to meet the minimal requirements we laid out yesterday and could just be the first real cognitive sourcing application on the market in the direct material space.

There are No Economies of Scale … Just Economic Production Quantities

As the public defender likes to point out on a regular basis over on Spend Matters UK / Europe, economies of scale is a procurement myth. The idea that the more you buy, the bigger the discount you can get, because the unit cost keeps diminishing, is a myth because, if it were true, you could buy a large enough quantity that the cost per unit would eventually approach zero.

But the reality is that there are always hard costs that cannot be reduced in the supply chain … particularly those components that involve human labour — product creation, product transportation, product component creation, product component transportation, raw material mining, raw material transportation, security guards for storage, etc. — and facility leases, utility costs, taxes, etc.

And there are always limits to “economies of scale” production lot sizes. If the line can only do 60 units per hour, then the line can only do 2400 in a normal (single-shift) workweek, 4800 in a double-shift workweek, 7200 in a triple-shift workweek, and maxes out at 10,080 a week running 24/7 … assuming no downtime (and most lines will require some maintenance). In this case, the major economies of scale are 2400, 4800, and 7200 — as this ensures that the labour cost (and facility costs) are spread over the maximum number of units.
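The line-capacity arithmetic above, assuming the standard 8-hour shift and 5-day week implicit in the example:

```python
# Sketch: weekly output of a 60 unit/hour line across one, two, or three
# 8-hour shifts, 5 days a week, plus the theoretical 24/7 ceiling.

RATE = 60  # units per hour, from the example above

def weekly_capacity(shifts: int, hours_per_shift: int = 8, days: int = 5) -> int:
    return RATE * hours_per_shift * shifts * days

print([weekly_capacity(s) for s in (1, 2, 3)], RATE * 24 * 7)
# → [2400, 4800, 7200] 10080
```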

In other words, there are economic production quantities (EPQ) where the price per unit is minimized, and this is the optimal economy of scale.

So if you really want to minimize your costs, you can start by minimizing your supplier and carrier costs, which can be done by appropriately distributing the award across suppliers in economic production quantities that allow them to give you larger discounts (and still retain a reasonable margin). So how do you do that? Considering that each supplier has a different EPQ, each carrier has a different EPQ, and this varies by product (and plant location), how can you possibly figure out how to split the award in such a way that suppliers can reduce their bids?

If you’re a regular reader of Sourcing Innovation, you know the answer. A decision optimization platform …
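To make the idea concrete, here is a tiny brute-force sketch of an EPQ-aware award split. The two suppliers, their EPQs, prices, and capacities are all hypothetical; a real decision optimization platform would solve a far larger mixed-integer model covering carriers, lanes, and constraints.

```python
# Sketch (hypothetical data): brute-force the cheapest two-supplier award
# split, where a supplier offers its discounted price only when the award
# is a multiple of its economic production quantity (EPQ).

DEMAND = 12_000

suppliers = {
    # name: (EPQ, base unit price, discounted price at EPQ multiples, capacity)
    "S1": (2_400, 10.00, 9.40, 9_000),
    "S2": (3_000, 10.20, 9.25, 9_000),
}

def cost(name: str, qty: int) -> float:
    epq, base, disc, cap = suppliers[name]
    if qty > cap:
        return float("inf")  # infeasible award
    price = disc if qty > 0 and qty % epq == 0 else base
    return qty * price

best = None
step = 600  # search granularity for this toy example
for q1 in range(0, DEMAND + 1, step):
    q2 = DEMAND - q1
    total = cost("S1", q1) + cost("S2", q2)
    if best is None or total < best[0]:
        best = (total, q1, q2)
print(best)  # → (113250.0, 3000, 9000): S2 at an exact EPQ multiple wins its discount
```

Note the optimum awards S2 exactly three EPQ lots, capturing its discount, while S1 absorbs the remainder at its base price; neither an even split nor a sole award is as cheap, which is precisely the structure only an optimizer finds at scale.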