Category Archives: Decision Optimization

The Power of Optimization-Backed Sourcing is in the Right Sourcing Mix Across Scales of Size and Service

the doctor has been pushing optimization-backed sourcing since Sourcing Innovation started in 2006. There are a number of reasons for this:

  • there is only one other technology that has repeatedly demonstrated savings of 10% or more
  • it’s the only technology that can accurately model total cost of ownership with complex cost discounts and structures
  • it’s the only technology that can minimize costs while adhering to carbon, risk, or other requirements
  • it’s one of only two technologies that can analyze cost / risk, cost / carbon, or other cost / x tradeoffs accurately

However, the real power of optimization-backed sourcing is that it can give you not only the right product mix, but the right mix across scales. This is especially relevant when running sourcing events for national or international distribution or utilization. Without optimization, most companies can only deal with suppliers who can handle international distribution or utilization. This generally rules out regional suppliers and always rules out local suppliers, some of whom might be the best suppliers of goods or services to their region or locality. One may be tempted to think local suppliers are irrelevant because they will struggle to deliver the economy of scale of a regional supplier and will definitely never reach the economy of scale of a national (or international) supplier, but unit cost is just one component of the total lifecycle cost of a product or service. There’s transportation cost, tariffs, taxes, intermediate storage, and final storage (of which more will be required, since you will need to place larger orders to account for longer distribution timelines), among other costs. So, in some instances, local and regional suppliers will be the overall lowest cost, and keeping them out of the mix increases costs (and sometimes increases carbon and risk as well).
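To make the total lifecycle cost point concrete, here is a minimal sketch (with entirely hypothetical figures) of how a landed-cost comparison can flip the ranking: the national supplier wins on unit price, but loses once freight, tariffs, and the extra safety stock implied by a longer pipeline are added in.

```python
# Illustrative total-landed-cost comparison (all figures hypothetical).

def total_landed_cost(unit_cost, units, freight_per_unit, tariff_rate,
                      storage_per_unit_month, months_of_stock):
    """Unit cost plus transportation, tariffs, and storage for the
    safety stock implied by the supplier's distribution timeline."""
    goods = unit_cost * units
    freight = freight_per_unit * units
    tariffs = goods * tariff_rate
    storage = storage_per_unit_month * units * months_of_stock
    return goods + freight + tariffs + storage

units = 10_000
# national supplier: cheaper unit, but long pipeline (freight, tariffs, 3 months of stock)
national = total_landed_cost(9.50, units, 1.20, 0.05, 0.15, 3)
# local supplier: pricier unit, but short pipeline (minimal freight, no tariffs, 1 month)
local = total_landed_cost(10.75, units, 0.20, 0.00, 0.15, 1)

print(f"national: {national:,.0f}  local: {local:,.0f}")
```

Under these assumed numbers the local supplier’s total landed cost (about 111,000) beats the national supplier’s (about 116,250) despite a 13% higher unit price.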

When it comes to services, the right multi-level mix can lead to savings of 30% or more in an initial event. the doctor has seen this many times over his career (consulting for many of the strategic sourcing decision optimization startups) because, while the big international players can get competitive on hourly rates where they have a lot of resources with a given skill set, when it comes to services there are all-in costs to consider, which include travel to the client site and local accommodations. The thing with national and international services providers is that they tend to cluster all of their resources with a certain skill set in a handful of major locations. So their core IT resources (developers, architects, DBAs, etc.) will be in San Francisco and New York, their core management consultants in Chicago and Atlanta, their core finance pros in Salt Lake City and Denver, etc. So if you need IT in Jefferson City, Missouri, management in Winner, South Dakota, or accounting in Des Moines, Iowa, you’re flying someone in, putting them up at the highest-star hotel in town, and possibly doubling the cost compared to a standard day rate.

However, simple product mix and services scenarios are not the only scenarios optimization-backed sourcing can handle. As per this article over on IndianRetailer.com, retailers need to back away from global sourcing and embrace regional (and even local) strategies for cost management, supply stability, and resilience. They are only going to be able to figure that out with optimization that can help them identify the right mix to balance cost and supply assurance, and when you need to do that across hundreds, if not thousands, of products, you can’t do that with an RFX solution and Microsoft Excel.

Furthermore, when you need to minimize costs when a price is fixed, like the price of oil or airline fuel, you need to optimize every related decision: where to refuel, what service providers to contract with, how to transport it, etc. When it can cost up to $40,000 to fuel a 737 for a single flight (when prices are high), and you operate almost 7,000 flights per day with planes ranging from a Gulfstream that costs about $10,000 to refuel to a Boeing 747 that, in hard times, can cost almost $400,000 to refuel, you can be spending $60 Million a day on fuel as your fleet burns 10 Million gallons. Storing those 10 Million gallons, transporting those 10 Million gallons, and using that fuel to fuel 7,000 planes takes a lot of manpower and equipment, all of which has an associated cost. Hundreds of thousands of dollars in associated costs per day (on the low end), and tens of millions per year. Shaving off just 3% would save over a million dollars easy. (Maybe two million.) However, the complexity of this logistics and distribution model is beyond what any sourcing professional can handle with traditional tools, but easy with an optimization-backed platform that can model an entire flight schedule, all of the local costs for storage and fueling, all of the distribution costs from the fuel depots, and so on. (This is something that Coupa is currently supporting with its CSO solution, which has saved at least one airline millions of dollars. Reach out to Ian Milligan for more information if this intrigues you or if you want to know how this model could be generalized to support global fleet management operations of any kind.)
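As a rough sanity check on the figures above (a back-of-envelope sketch only; the per-gallon price and the associated-cost figure are assumptions, not data from any airline):

```python
# Approximate back-of-envelope check of the fleet fuel figures above.
gallons_per_day = 10_000_000
price_per_gallon = 6.00            # assumed jet fuel price, USD/gallon
daily_fuel_spend = gallons_per_day * price_per_gallon
print(f"daily fuel spend: ${daily_fuel_spend / 1e6:.0f}M")

# The associated storage/transport/fueling costs sit on top of the fuel
# itself; at an assumed low-end $100k/day, a 3% efficiency gain is
# already a seven-figure annual saving:
associated_daily = 100_000
annual_savings = 0.03 * associated_daily * 365
print(f"3% of associated costs: ${annual_savings:,.0f}/year")
```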

In other words, Optimization-Backed Sourcing is going to become critical in your highly strategic / high spend categories as costs continue to rise, supply continues to be uncertain, carbon needs to be accounted for, and risks need to be managed.

COUPA: Centralized Optimization Underlies Procurement Adoption …

… or at least that’s what it SHOULD stand for. Why? Well, besides the fact that optimization is one of only two advanced sourcing & procurement technologies that have proven to deliver year-over-year cost avoidance (“savings”) of 10% or more (which becomes critical in an inflationary economy because, while there are no real savings to be had, negating the need for a 10% increase still allows your organization to maintain costs and outperform your competitors), it’s the only technology that can meet today’s sourcing needs!

COVID finally proved what the doctor and a select few other leading analysts and visionaries have been telling you for over a decade — that your supply chain was overextended and fraught with unnecessary risk and cost (and carbon), and that you needed to start near-sourcing/home-sourcing as soon as possible in order to mitigate risk. Plus, it’s also extremely difficult to comply with human rights acts (which mandate no forced or slave labour in the supply chain), such as the UK Modern Slavery Act, California Supply Chains Act, and the German Supply Chain Act if your supply chain is spread globally and has too many (unnecessary) tiers. (And, to top it off, now you have to track and manage your scope 1, 2, and 3 carbon in a supply chain you can barely manage.)

And, guess what, you can’t solve these problems just with:

  • supplier onboarding tools — you can’t just say “no China suppliers” when you’ve never used suppliers outside of China, the suppliers you have vetted can’t be counted on to deliver 100% of the inventory you need, or they are all clustered in the same province/state in one country
  • third party risk management — and just eliminate any supplier which has a risk score above a threshold, because sometimes that will eliminate all, or all but one, supplier
  • third party carbon calculators — because they are usually based on third party carbon emission data provided by research institutions that simply produce averages for a region / category of products (and might overestimate or underestimate the carbon produced by the entire supply base)
  • or even all three … because you will have to migrate out of China slowly, accept some risk, and work on reducing carbon over time

You can only solve these problems if you can balance all forms of risk vs cost vs carbon. And there’s only one tool that can do this: Strategic Sourcing Decision Optimization (SSDO). And when it comes to SSDO, Coupa has the most powerful platform. Built on TESS 6 — Trade Extensions Strategic Sourcing — which Coupa acquired in 2017, the Coupa Sourcing Optimization (CSO) platform is one of the few platforms in the world that can do this. Plus, it can be pre-configured out-of-the-box for your sourcing professionals with all of the required capabilities and data already integrated*. And it may be alone from this perspective (as the other leading optimization solutions are either integrated with smaller platforms or platforms with fewer partners). (*The purchase of additional services from Coupa or Partners may be required.)
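To illustrate what balancing risk vs cost vs carbon means mechanically, here is a deliberately tiny brute-force sketch (hypothetical supplier data; this is not how CSO solves, which uses real optimization engines): find the lowest-cost award split subject to caps on award-weighted risk and carbon.

```python
# Toy multi-criteria award split (grossly simplified, hypothetical numbers):
# minimize cost while capping award-weighted risk and carbon.
from itertools import product

# supplier: (unit cost, risk score 0-10, kg CO2e per unit)
suppliers = {"A": (10.0, 8, 2.0), "B": (11.0, 4, 3.0), "C": (12.5, 2, 1.5)}

best = None
steps = [i / 10 for i in range(11)]            # award shares in 10% steps
for a, b in product(steps, steps):
    c = round(1.0 - a - b, 10)
    if c < 0:
        continue
    shares = dict(zip(suppliers, (a, b, c)))
    cost   = sum(s * suppliers[k][0] for k, s in shares.items())
    risk   = sum(s * suppliers[k][1] for k, s in shares.items())
    carbon = sum(s * suppliers[k][2] for k, s in shares.items())
    if risk <= 5.0 and carbon <= 2.5:          # the business constraints
        if best is None or cost < best[0]:
            best = (cost, shares)

print(best)   # the lowest-cost split that satisfies both caps
```

Note that the unconstrained winner (supplier A, cheapest but riskiest) cannot take the whole award; the risk and carbon caps force a blended allocation, which is exactly the kind of answer a spreadsheet will never find across hundreds of items.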

So why is it one of the few platforms that can do this? We’ll get to that, but first we have to cover what the platform does, and more specifically, what’s new since our last major coverage in 2016 on SI (and in 2018 and 2019 on Spend Matters, where the doctor was part of the entire SM Analyst team that created the 3-part in-depth Coupa review, but, as previously noted, the site migration dropped co-authors for many articles).

As per previous articles over the past fifteen years, you already know that:

So now all we have to focus on are the recent improvements around:

  • “smart events” that can be templated and cross-linked from integrated scenario-aware help-guides
  • “Plain English” constraint creation (that allows average buyers & executives to create advanced scenarios)
  • fact-sheet auto-generation from spreadsheets, API calls, and other third-party data sources,
    including data identification, formula derivation, and auto-validation pre-import
  • bid insights
  • risk-aware functionality

“Smart Events”

Optimization events can be created from event templates, which can themselves be created from completed events. A template can be populated with as little, or as much, as the user wants … all the way from simply defining an RFX survey, a fact sheet, and a baseline scenario to a complete copy of the event with “last bid” pricing and definitions of every single scenario created by the buyer. Templates can be edited at any time and can define specific baseline pricing: the last price paid by Procurement, the last price in a pre-defined fact sheet that can sit above the event, and so on. The same goes for supplier lists: fixed supplier lists, all qualified suppliers that supply a product, all qualified suppliers in an area, or no suppliers (with the user pulling from recommendations), and so on. In addition to predefining a suite of scenarios that can be run once all the data is populated, the buyer can also define a suite of default reports to be run, and even emailed out, upon scenario completion. This is in addition to workflow automation that can step the buyer through the RFX, auto-respond to suppliers when responses are incomplete or unacceptable, flag spreadsheets or documents uploaded with hacked/cracked security, and so on. The Coupa philosophy is that optimization-backed events should be as easy as any other event in the system, and the system can be configured so they literally are.

Also, as indicated above, the help guides are smart. When you select a help article on how to do something, it takes you to the right place on the right screen while keeping you in the event. Some products have help guides that are pretty dumb and just take you to the main screen, not to the right field on the right sub-screen, if they even link into the product at all!

“Plain English” Constraint Creation

Even though the vast majority of constraints, mathematically, fall into three or four primary categories — capacity/allocation, risk mitigation, and qualitative — that isn’t obvious to the average buyer without an optimization, analytical, or mathematical background. So Coupa has spent a lot of time working with buyers, asking them what they want, listening to their answers and the terminology they use, and has created over 100 “plain English” constraint templates that break down into 10 primary categories (allocation, costs, discount, incumbent, numeric limitations, post-processing, redefinition, reject, scenario reference, and collection sheets), as well as a subset of the most commonly used constraints gathered into a “common constraints” collection. For example, the allocation category allows for definition “by selection sheet”, “volume”, “alternative cost”, “bid priority”, “fixed divisions”, “favoured/penalized bids”, “incumbent allocations maintained”, etc. When a buyer selects a constraint type, such as “divide allocations”, they will be asked to define the method (%, fixed amount), the division (by supplier, group, or geography), and any other conditions (low-risk suppliers if by geography). The definition forms are also smart and respond to each sequential choice appropriately.
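A hypothetical sketch of the idea behind such templates (class and field names are invented for illustration; they are not Coupa’s API): a buyer-friendly “divide allocations” form that expands into per-division share bounds a solver can enforce.

```python
# Invented illustration of a "plain English" constraint template
# expanding into formal per-division (min, max) share bounds.
from dataclasses import dataclass

@dataclass
class DivideAllocations:
    method: str        # "%" or "fixed amount"
    division_by: str   # "supplier", "group", or "geography"
    shares: dict       # target shares, e.g. {"EMEA": 0.4, "AMER": 0.6}

    def to_bounds(self, tolerance=0.05):
        """Expand the template into per-division (min, max) bounds
        that an optimization scenario can enforce."""
        return {div: (max(0.0, s - tolerance), min(1.0, s + tolerance))
                for div, s in self.shares.items()}

rule = DivideAllocations("%", "geography", {"EMEA": 0.4, "AMER": 0.6})
print(rule.to_bounds())
# each geography's awarded share must stay within +/- 5% of its target
```

The buyer only ever sees the three plain-language questions; the bounds are what the solver actually consumes.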

Fantastic Fact Sheets

Fact Sheets can be auto-generated from uploaded spreadsheets, as the platform will automatically detect the data elements (columns), types (text, math, fixed response set, calculation), mappings to internal system / RFX elements, and records — as well as detecting when rows / values are invalid, allowing the user to determine what to do when invalid rows/values are detected. If a match is not high-certainty, the fact-sheet processor will indicate that the user needs to define the mapping manually, and the user can, of course, override all of the default mappings — and even choose to load only part of the data. These spreadsheets can live in an event, or live above the event and be used by multiple events (so that company-defined currency conversions, freight quotes for the month, standard warehouse costs, etc. can be used across events).
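The kind of column detection described above can be sketched as follows (heavily simplified and purely illustrative; a real importer also learns from user overrides over time):

```python
# Minimal illustration of column-type inference for a fact-sheet importer.
def infer_column_type(values):
    """Classify a column as numeric, a fixed response set, or free text,
    and flag the row indices that don't fit the inferred type."""
    def is_number(v):
        try:
            float(v)
            return True
        except (TypeError, ValueError):
            return False

    numeric = [is_number(v) for v in values]
    if all(numeric):
        return "numeric", []
    if sum(numeric) > len(values) // 2:
        # mostly numeric: treat the stragglers as invalid rows to resolve
        return "numeric", [i for i, ok in enumerate(numeric) if not ok]
    distinct = set(values)
    if len(distinct) <= max(2, len(values) // 4):
        return "fixed response set", []
    return "text", []

print(infer_column_type(["12.5", "8.00", "n/a", "15"]))
# ('numeric', [2]): a mostly-numeric column with row 2 flagged for review
```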

But even better, Fact Sheets can be configured to automatically pull data in from other modules in the Coupa suite and from APIs the customer has access to, which means they pull in up-to-date information every time they are instantiated.

Bid Insights

Coupa is a big company with a lot of customers and a lot of data. A LOT of data! Not only in terms of the prices its customers are paying in their procurement of products and services, but in terms of what suppliers are bidding. This provides huge insight into current market pricing in commonly sourced categories, including, and especially, freight! Starting with freight, Coupa is rolling out new bid pricing insights where a user can select the source, the destination, the type (frozen/wet/dry/etc.), and the size, and get the quote range over the past month/quarter/year (e.g. for ocean freight, the user selects the source and destination country, the type, which defaults to container, and the container size/type combination).
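The aggregation behind such lane-level insights is conceptually simple; here is an illustrative sketch over synthetic bids (the real product, of course, draws on cross-customer bid history at scale):

```python
# Sketch of lane-level bid-insight aggregation over synthetic ocean bids.
from collections import defaultdict
from statistics import median

bids = [  # (origin, destination, container, rate_usd)
    ("Shanghai", "Rotterdam", "40HC", 2900),
    ("Shanghai", "Rotterdam", "40HC", 3150),
    ("Shanghai", "Rotterdam", "40HC", 2700),
    ("Shanghai", "Hamburg",   "40HC", 3300),
]

# group rates by (origin, destination, container) lane
by_lane = defaultdict(list)
for origin, dest, box, rate in bids:
    by_lane[(origin, dest, box)].append(rate)

for lane, rates in by_lane.items():
    print(lane, f"min={min(rates)} median={median(rates)} max={max(rates)}")
```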

Risk Aware Functionality

The Coupa approach to risk is that you should be risk-aware (to the extent the platform can make you risk aware) with every step you take, so risk data is available across the platform — and all of that risk data can be integrated into an optimization project and scenarios to reject, limit, or balance any risk of interest in the award recommendations.

And when you combine the new capabilities for

  • “smart” events
  • API-enabled fact sheets
  • risk-aware functionality

that’s how Coupa is the first platform that literally can, with some configuration and API integration, allow you to balance third-party risk, carbon, and cost simultaneously in your sourcing events — which is where you HAVE to manage risk, carbon, and cost if you want to have any impact at all on your indirect risk, carbon, and cost.

It’s not just 80% of cost that is locked in during design, it’s 80% of risk and carbon as well! And in indirect, you can’t do much about that. You can only do something about the next 20% of cost, risk and carbon that is locked in when you cut the contract. (And then, if you’re sourcing direct, before you finalize a design, you can run some optimization scenarios across design alternatives to gauge relative cost, carbon, and risk, and then select the best design for future sourcing.) So by allowing you to bring in all of the relevant data, you can finally get a grip on the risk and carbon associated with a potential award and balance appropriately.

In other words, this is the year for Optimization to take center stage in Coupa, and power the entire Source-to-Contract process. No other solution can balance these competing objectives. Thus, after 25 years, the time for sourcing optimization, which is still the best kept secret (and most powerful technology in S2P), has finally come! (And, it just might be the reason that more users in an organization adopt Coupa.)

Keelvar: Not satisfied with the hill, it’s trying to climb the mountain!

The last time we covered Keelvar on Sourcing Innovation was back in 2016, when we re-introduced Keelvar: An Optimization-Backed Sourcing Platform because it was The Little Engine that Could. (Its last deep dive on Spend Matters was also in 2016, in Jason Busch’s 3-part Vendor Analysis that the doctor consulted on, which can be found linked here in Part 1, Part 2, and Part 3: subscription required. With subscription, you can also check out the What Makes It Great Solution Map Analysis.)

Since our last update, Keelvar has made considerable progress in a number of areas, but of particular relevance are:

  1. total cost modelling
  2. constraint definition for its optimization
  3. workflow-based event automation
  4. usability

After a basic overview of the software, the above four improvements are what we are going to focus on in this article, as they are the most relevant to sourcing-based cost-savings identification.

Keelvar is an optimization-backed sourcing platform (for RFQs and Auctions) that can also support extensive sourcing automation, especially once a full-fledged sourcing event has been run and a template already exists (and approved suppliers have already been defined). We will start with a review of the sourcing platform.

The sourcing platform is designed to walk a user through a sourcing event step-by-step. Keelvar uses a 7-stage sourcing workflow that they break down as follows:

  1. Design: This is where the event is defined. In this stage you define the meta information (id, name, description, contacts, etc.), the schedule, the RFI, the bid sheet (as the application supports export to/import from Excel for Suppliers who can’t figure out how to use anything except Excel), the cost calculation per unit (for analysis, optimization, and reporting), and basic event settings, especially if using an auction.
  2. Invite: This is where you select suppliers for invitation.
  3. Publish: This is where you review the design and invite list and launch it.
  4. Bid: This is the bidding phase where suppliers place bids. The buyer can see bids as they come in, get reports on activity, and manage the event as needed (extend the deadline, answer questions, and distribute the responses to all suppliers).
  5. Evaluate: This is where the mathematical magic happens. In this step you define item/lot groups, bidder groups, and scenarios. (You need to define groups for risk mitigation and quality constraints, which are impossible to define in the platform otherwise.) Scenarios allow you to find the lowest cost options under different business rules, constraints, and goals.
  6. Analyze: This is where the user can apply detailed analytics across bids and scenarios to see the differences, gaps, supplier ranks, etc. in tabular or visual formats; do detailed analysis on the individual scenarios to understand what is driving the cost or the award; and even analyze the potential awards against RFI criteria submitted by the suppliers.
  7. Award: After doing the analysis and making their decision, this is where the buyer makes their award from either a solved scenario or a manual allocation.

So now that the basics are out of the way, let’s talk about total cost modelling. As per our summary above, that starts with the bid sheet. Either in the platform or, if you prefer, in Excel, you can define all of the cost components of interest (and even upload starting bid values from the current I2P/AP system and/or previous bid sheets). If you have an Excel sheet that breaks down the bid elements you want to collect, and the totals you want, in columnar format, with enough sample rows, you can just upload it and the platform will not only differentiate the raw data columns from the bidder columns and map your column names to internal, mandatory, defined columns (for items, lanes, etc.), but also differentiate purchaser input columns (such as destination city, country, service/product, etc.) from bidder columns (origin city, country, lane cost, unit cost, tariffs/taxes, etc.), differentiate raw columns from formulas, extract the formulas, and even determine default visibility to the bidder (who won’t see the formulas, especially if hidden offsets or weightings are used). The user can, of course, correct and override anything if needed, and for each sheet processed, the application learns the mappings (based on user overrides and corrections), so over time it has a high success rate on import. Once the columns are defined, editing the column roles (purchaser vs. bidder, visibility, mandatory vs. optional, etc.) is very easy – you can simply toggle them.

In addition, and this is a major improvement over the early days (when there was no quality control on the coal being used to power that little engine), all of the inputs can be associated with one or more validation rules that can require an input to be completed, to come from a valid set, to be the same as related bid values, and so on. Out-of-the-box rules exist for easily defining uniform values across a column for a lot (if all items must come from, or go to, the same [intermediate] location, for example) and for requiring complete coverage on a group of lots (critical if a supplier must bid all or nothing on an item, a set of related items, a sub-assembly of a BoM, etc.). If those don’t work, you can use advanced conditional logic on any (set of) column(s) to ensure specific conditional rules are met, especially if a value or answer is dependent on another column or value. The conditional rule generator uses the formula builder, which supports all standard numeric operators and numeric columns, as well as string-based matching and type/value-based operators for ensuring entries come from an appropriate set of values, possibly dependent on the non-numeric value defined in another column.
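Conceptually, such validation rules amount to predicates evaluated per bid row. A toy sketch (rule names and fields are invented), including a conditional rule of the “this entry is required only when that column has a particular value” variety:

```python
# Toy bid-row validation: each rule is a predicate; a row fails a rule
# when the predicate returns False. All names/fields are invented.
def validate_bid(bid, rules):
    """Return the list of rule names the bid row violates."""
    return [name for name, rule in rules.items() if not rule(bid)]

rules = {
    "unit_cost_positive": lambda b: b["unit_cost"] > 0,
    # conditional rule: a hazmat surcharge is required only for hazmat items
    "hazmat_needs_surcharge":
        lambda b: b["hazmat"] == "no" or b.get("hazmat_surcharge", 0) > 0,
    "origin_in_approved_set":
        lambda b: b["origin"] in {"CN", "VN", "MX", "US"},
}

bad = {"unit_cost": 4.2, "hazmat": "yes", "origin": "CN"}
print(validate_bid(bad, rules))   # ['hazmat_needs_surcharge']
```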

In other words, because all cost elements can be defined, because arbitrary formulas can be used to define costs, and because rules can be created to ensure all cost elements are valid, the platform truly supports total cost modelling (which is one of the four pillars of Strategic Sourcing Decision Optimization [SSDO]).

For easy reference, the other three pillars are:

  • solid mathematical foundations, which we know Keelvar has from previous coverage;
  • what-if capability, which has been there since the beginning as Keelvar has always supported multiple scenarios;
  • sophisticated constraint definition and analysis — which was lacking in the past and which we will cover next.

Moving onto constraint definition, Keelvar has made considerable improvements both in the definition of bidder and lot groups and the ability to define arbitrary limit constraints on arbitrary collections of bidders and lots/items. This allows it to address the four categories required for SSDO:

  • allocation: to define minimum, fixed, or maximum allocations for a supplier
  • capacity: to take into account supplier, lane, warehouse, or other capacity limits
  • risk mitigation/group-wise allocation: ensuring that the award is split across a group of suppliers to mitigate risk, that a supplier receives a minimum amount of a group of items to satisfy an existing contract, etc.
  • qualitative: to make sure a minimum, average, quality level, diversity goal (volume-wise) or other non-cost constraint is adhered to

Keelvar has always been great at capacity and allocation but, in the past, its ability to define risk mitigation/group-wise allocation was limited, and its support for qualitative constraints was almost non-existent. But with proper definition of bidder and (item) lot groups, and the ability to define constraints on any numeric dimension (not just cost), one can now define the majority of foreseeable instances of both of these constraint types. You can create bidder groups by geography, and ensure each geography gets a minimum or maximum allocation. (And even though you couldn’t define a 20/30/50 split directly, you know the cheapest supplier will get 50%, the most expensive 20%, and the middle one 30% by basic logic. If you wanted a 10/25/35/40 split, that would be a bit more difficult. But logic dictates the two cheapest get 40%, ensuring the two most expensive get 10%, if you insist each group get between 10% and 40%. A simple total-cost analysis tells you which group should be 40%, which group 35%, which group 25%, and which group 10%. And almost every other group-based allocation you would reasonably want to define would be straight-forward, or close to it with post-scenario analysis.)
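The 20/30/50 reasoning above is easy to verify by brute force (hypothetical unit costs): if the three shares must each be assigned to one of the three groups, minimizing total cost naturally gives the largest share to the cheapest group.

```python
# Brute-force check of the 20/30/50 split logic with hypothetical costs:
# assign each share to exactly one bidder group, minimizing total cost.
from itertools import permutations

unit_cost = {"cheap": 8.0, "middle": 9.0, "dear": 11.0}
splits = (0.2, 0.3, 0.5)

best = min(permutations(splits),
           key=lambda p: sum(s * c for s, c in zip(p, unit_cost.values())))
award = dict(zip(unit_cost, best))
print(award)   # {'cheap': 0.5, 'middle': 0.3, 'dear': 0.2}
```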

Quality constraints such as diversity (by volume), quality (by unit), or sustainably approved (by unit) are also very straight-forward to define. For diversity, simply group all the diverse suppliers and ensure they get a minimum percentage of the volume (by unit cost if that’s your metric) to meet your goals. For quality, if every supplier has an internal quality rating, for each quality level, you can define a maximum allocation that can be allowed for that group to ensure a minimum overall quality level. (And if there was hard data by unit by supplier, you’d just define a hidden column in the bid sheet and define a limit constraint on the quality instead of the cost.) For sustainably approved (by unit), you’d simply group all the sustainable suppliers (instead of the diverse ones) and ensure they received a minimum percentage.

In addition, since we last covered Keelvar, they have incorporated soft-constraint support and made the definition thereof super easy. In the application, you can define a constraint as available to be relaxed if the total cost savings exceeds a certain value. That’s as easy (peasy) as it gets.
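The soft-constraint logic reduces to a simple comparison between two solves; a sketch (the figures and function are invented for illustration):

```python
# Soft-constraint logic in miniature: solve once with the constraint,
# once without, and relax only if the savings beat the stated threshold.
def choose_scenario(constrained_cost, relaxed_cost, relax_threshold):
    """Return which solve to award from: keep the constraint unless
    relaxing it saves more than the threshold."""
    savings = constrained_cost - relaxed_cost
    if savings > relax_threshold:
        return ("relaxed", savings)
    return ("constrained", 0.0)

print(choose_scenario(1_050_000, 990_000, 50_000))   # ('relaxed', 60000)
```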

This takes us to workflow-based event automation. In the updated Keelvar platform, you can define a complete event workflow, and the platform will automate almost the entire event for you, handling everything until it’s time to allocate the award. Once you create an instance (which is as easy as selecting an event template to activate and defining just a few pieces of meta-data), it will auto-fill/update all of the remaining meta-data (based on the last run, if there was one), extract the current approved supplier list, automatically request approval from the category owner, publish the RFP (or launch the auction) on the predefined date, automatically send the invites out, collect (and validate) the bids using the predefined validation rules, run the predefined scenarios when the bidding closes, kick off the predefined analyses and reports on those scenarios and package them up for the event owner (which can include exports), and take the buyer right to the award screen for scenario and/or manual allocation. There, the user can make the award if ready, review an analysis, or jump back to a scenario, alter it slightly, re-run it, and then use the modified scenario for the award definition.

In terms of process definition, Keelvar has an integrated visual workflow editor where the user can compose the mandatory steps, conditional steps, and necessary approvals at each step (which could be the category owner, a manager if the estimated event value exceeds a threshold, etc.). Each step can link to an appropriate element which can be completely customized as needed.

However, the easiest way to define an event template, and the most effective way, is to instantiate one off of a completed RFP. The built-in logic and machine learning can automatically generate a complete workflow-driven template from an RFP. It can define rules for filling in all definition fields from a few key pieces of meta-data, define rules for identifying the (recommended) suppliers for future events (for one-click approval by the category owner), suggest publication dates and bidding timeframes, define all of the bid validation rules based on the bid sheets and defined rules, create default scenario definitions, (re)create default bid/scenario analysis and visualization reports as well as rules to auto-package and distribute exports to the event owner, and even identify the recommended scenario for award allocation.

Once the event template is automatically extracted from the completed event, a user can review it in its entirety and edit whatever they want. And then they know when they next instantiate it, it will run flawlessly. (It’s automation. Not automated. And that’s the way it should be.)

Finally, when it comes to usability, if it’s not immediately obvious, usability has been enhanced throughout the platform. But it’s easier to see it than to describe it. So if you want a modern optimization-backed sourcing platform, just get a demo and see it for yourself.

In closing, Keelvar is not just the last standing specialist optimization provider, they’re now one of the best. Let’s hope the next major enhancement tackles true Multi-Objective Strategic Sourcing Decision Optimization On Procurement Trends. (MOSS DO OPT!)

Logility “Starboard”: The Real-Time What-If Supply Chain Network Modeller that Every Sourcing Professional Should Have

Now, it’s true that this blog is focussed on Source-to-Pay, and it’s true that, as a result, we usually focus on Strategic Sourcing Decision Optimization and occasionally on logistics-focussed models and optimization solutions, as that’s what’s typically needed for a sourcing professional to make the optimum buy, but this time we’re going to make an exception.

Why is network modelling an exception (besides the fact that, as we told you yesterday when we said Don’t Overlook the Network, it has the absolute best return on investment across all supply chain applications)? Well, if you think about classic network modelling, it’s not something a sourcing professional would do because it’s typically up to logistics and supply chain to maintain the network infrastructure that gets the product from the suppliers to the ports and warehouses and then to the distribution centres, retail facilities, and end consumers in drop-ship models. It’s up to logistics to re-evaluate the supporting network infrastructure on a bi/tri-annual basis and determine if warehouses should be added, relocated, or deleted (on lease end); if ports should be changed (to reduce overall costs due to port fees or local carrier costs or rail vs truck options); if new carriers should be considered; and so on.

The reason that this is typically only done on a bi/tri-annual basis is because it has traditionally been an arduous endeavour where you have to

  • build a very detailed model of all
    • the supplier production facilities, ports, warehouses, distribution centers, manufacturing/assembly centers, and retail facilities
    • the lanes used
    • the modes used for each lane
    • the carriers used for each mode / lane combination
    • the LTL and FTL rates for each carrier
    • the drop-ship rates for direct-to-consumer
  • identify all of the products being purchased and
    • associate them with the appropriate suppliers
    • associate them with the appropriate lanes, modes, and carriers
    • associate them with the appropriate warehouses
    • associate them with the appropriate retail locations or drop-ship locations
  • collect all of the current rates, for every supplier-carrier-lane-mode option in use
  • then solve a current-state optimization problem to determine baseline costs, times to serve, carbon emissions, etc.
  • identify all of the potential port and warehouse locations you could (also) use
  • identify all of the new lanes that would create
  • identify all of the additional carriers that could be used
  • collect quotes for every lane-carrier-mode combination from the potential new options that might actually be used
  • then build an extended model that includes all options and feed in all of the data
  • then solve a full-state model to determine baseline costs, times to serve, carbon emissions, etc.
  • then determine the ranges for the number of ports, warehouses, distribution centers, carriers, time to serve, carbon emissions, etc. that are acceptable
  • solve a copy of the restricted full-state model to determine a new baseline cost
  • then create copies of the model and run analyses against different objectives until the model is acceptable and the costs (time to serve, emissions, etc.) are reduced significantly enough to justify a network transformation exercise

and this endeavour would typically take three to six months, as it would take weeks to build the baseline models, months to collect the data, and weeks to build, solve, and analyze the models and come up with a new state that improved all the measures of interest, as well as the implementation plan to make it happen.
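
To make the modelling steps above concrete, here is a minimal sketch of the current-state baseline solve as a transportation-style linear program (all suppliers, warehouses, capacities, and lane rates are hypothetical; a real model would span thousands of lanes, modes, and carriers):

```python
# Minimal current-state baseline: a transportation LP that assigns
# supplier -> warehouse flows at known lane rates (all data hypothetical).
from scipy.optimize import linprog

suppliers = {"S1": 500, "S2": 400}               # capacity (units)
warehouses = {"W1": 300, "W2": 350, "W3": 200}   # demand (units)
rate = {                                         # cost per unit per lane
    ("S1", "W1"): 4.0, ("S1", "W2"): 6.0, ("S1", "W3"): 9.0,
    ("S2", "W1"): 5.0, ("S2", "W2"): 3.0, ("S2", "W3"): 7.0,
}
lanes = list(rate)
c = [rate[lane] for lane in lanes]               # objective: total lane cost

# Supply constraints: flow out of each supplier <= its capacity.
A_ub = [[1 if lane[0] == s else 0 for lane in lanes] for s in suppliers]
b_ub = list(suppliers.values())
# Demand constraints: flow into each warehouse == its demand.
A_eq = [[1 if lane[1] == w else 0 for lane in lanes] for w in warehouses]
b_eq = list(warehouses.values())

res = linprog(c, A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=b_eq, bounds=(0, None))
print(f"baseline cost: {res.fun:.0f}")  # → baseline cost: 3950
```

The extended (full-state) model in the steps above is the same formulation with candidate ports, warehouses, and lanes added as extra columns, which is why the data collection dominates the timeline rather than the solve itself.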

But the problem with doing this bi/tri-annually is that you never know the impact of adding a new supplier or, more importantly, replacing a supplier of a significant product line or category where that supplier is in a completely different location, and possibly one that the last network design never took into account. Plus, the removal of a big supplier might cause a certain node (warehouse, distribution center, etc.) to be significantly under-utilized, resulting in unexpected overspend in certain parts of the distribution network.

But this knowledge is critically important to have before making a major sourcing decision that might change the supply base for a highly utilized product line or category, because the costs of the award will not be the expected costs. They will not be the unit and expected transportation costs used in the analysis the award decision was based on, but those costs plus the fixed and variable losses incurred from underutilizing one sub-set of the network and/or overutilizing another.

While this has always been the case, the belief was that nothing could traditionally be done about it. But if there were a tool that could

  • actively maintain the current network model
  • allow for copies to be created on the fly
  • allow for those copies to be easily modified, including
    • the addition or deletion of nodes (suppliers, ports, warehouses, distribution centers, retail locations, etc.)
    • the definition of new lanes
    • the addition of carriers and/or carrier modes
    • updated costs for every lane
  • solve those copies quickly and accurately

then a sourcing professional could have deep insight into whether their cost models and assumptions are correct, and logistics could update the network model, or at least the future state (if leases/contracts need to expire and new leases/contracts need to be signed), upon every award, keeping overall sourcing, logistics, and supply chain costs under control.
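
A tool meeting the requirements above essentially needs cheap, disposable copies of the live network model; a toy sketch of the copy-modify-recost loop (the model structure, node names, rates, and flows are all hypothetical stand-ins for a real network solver):

```python
import copy

# Toy network model: lanes keyed by (origin, destination) with a unit
# rate and a planned flow; total cost is the rate-weighted flow sum.
# All nodes, lanes, and numbers are hypothetical.
base = {
    ("SupplierA", "Port1"): {"rate": 2.0,  "flow": 400},
    ("Port1", "DC-East"):   {"rate": 1.5,  "flow": 400},
    ("SupplierB", "Port2"): {"rate": 2.5,  "flow": 300},
    ("Port2", "DC-East"):   {"rate": 1.25, "flow": 300},
}

def total_cost(model):
    return sum(lane["rate"] * lane["flow"] for lane in model.values())

# What-if copy: drop SupplierB and shift its volume onto SupplierA's lanes,
# leaving the baseline model untouched for comparison.
scenario = copy.deepcopy(base)
for lane in [("SupplierB", "Port2"), ("Port2", "DC-East")]:
    del scenario[lane]
for lane in [("SupplierA", "Port1"), ("Port1", "DC-East")]:
    scenario[lane]["flow"] += 300

print(f"baseline: {total_cost(base):.0f}, what-if: {total_cost(scenario):.0f}")
# → baseline: 2525, what-if: 2450
```

The point of the sketch is the workflow, not the arithmetic: the baseline stays live and authoritative while any number of what-if copies are created, mutated, costed, and thrown away per sourcing event.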

And this is what you can do with Logility Network Optimization, formerly Starboard Solutions (acquired in 2022), and that is exactly why we are making an exception and covering them.

With the Logility Network Optimization Solution (which really should be called Logility Starboard, for reasons that will soon become clear), a Sourcing Professional can:

  • instantly see a graphical view of the current global network
  • bring up reports that summarize all of the key data
  • drill into node / carrier / supplier / port / warehouse / distribution / product / combination costs
  • create a copy of the current network with all relevant data
  • and then create a what-if baseline scenario where they can
    • add whatever they wish (through simple pop-up interfaces they can add nodes and relationships),
    • remove whatever they wish (by simply clicking on a node or searching for the entity or relationship and deleting it), and, most importantly,
    • change whatever they want in the network design through a simple drag-and-drop mechanism
  • they can then specify any constraints and goals, run an optimization, and see the new costs, and, most importantly, extract lanes / costs / variables of interest to populate into the TCO (total cost of ownership) calculations in their sourcing events

Logility Network Optimization can do this because it integrates with third-party platforms and constantly extracts current market quotes and market rates for all major global lanes, and can, when you change a design, automatically bring in those market rates and costs as a baseline for any lane / (generic) carrier / mode / volume combination you don’t already have a quote for. This not only provides a baseline rate (which might get better with a volume promise, negotiation, or current quote), but a statistically accurate one (especially if you just go with a generic carrier rate). (And, if there is no quote for a lane, the platform is smart enough to build one up from lane segments, or tear one down from existing quotes, using statistically significant costs per distance and statistically significant base rates for just securing the transportation mode.)
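
The segment build-up described in the parenthetical above can be sketched as a cheapest-path composition over known segment quotes; a minimal illustration (all ports and rates are hypothetical, and a real implementation would also blend in per-distance statistical estimates):

```python
import heapq

# Known segment quotes (origin, destination) -> rate; all hypothetical.
segment_rates = {
    ("Shanghai", "Singapore"): 900,
    ("Singapore", "Rotterdam"): 1600,
    ("Shanghai", "Busan"): 400,
    ("Busan", "Rotterdam"): 2300,
}

def estimate_rate(origin, dest):
    """Build up a missing lane rate as the cheapest composition of quoted
    segments (Dijkstra over the quote graph)."""
    graph = {}
    for (a, b), r in segment_rates.items():
        graph.setdefault(a, []).append((b, r))
    heap, seen = [(0, origin)], set()
    while heap:
        cost, node = heapq.heappop(heap)
        if node == dest:
            return cost
        if node in seen:
            continue
        seen.add(node)
        for nxt, r in graph.get(node, []):
            heapq.heappush(heap, (cost + r, nxt))
    return None  # no composable quote exists

print(estimate_rate("Shanghai", "Rotterdam"))  # → 2500 (via Singapore)
```

Here the Shanghai–Singapore–Rotterdam composition (2500) beats the Busan routing (2700), giving a defensible baseline for a lane with no direct quote.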

Furthermore, because it is a true multi-tenant cloud solution that uses a distributed “serverless” model, it can decompose tasks such as data fetching, sub-model building, and even model solving (as many optimization models can be solved by solving sub-models on convex subspaces of the high-dimensional solution space) into subtasks that run in a massively parallel manner, which makes it fast. And it’s just as accurate as the traditional, prior-generation tools, at a speed that is breakneck in comparison, even when it uses statistically significant calculated data.
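
The fan-out/fan-in pattern described above can be sketched with a thread pool standing in for the serverless workers; the per-scenario “solve” here is a placeholder cost sum, not a real optimization, and all scenario data is hypothetical:

```python
from concurrent.futures import ThreadPoolExecutor

# Independent what-if scenarios (each a list of sub-model cost terms);
# because they share no state, they can be solved in parallel and the
# best one picked on completion. A serverless deployment would dispatch
# each solve to a remote worker instead of a local thread.
scenarios = {
    "baseline":      [800, 600, 750, 360],
    "drop-supplier": [1400, 1050],
    "new-port":      [700, 500, 900],
}

def solve(item):
    name, costs = item
    return name, sum(costs)  # placeholder for a real sub-model solve

with ThreadPoolExecutor(max_workers=3) as pool:
    results = dict(pool.map(solve, scenarios.items()))

best = min(results, key=results.get)
print(best, results[best])  # → new-port 2100
```

The speed-up comes purely from independence: wall-clock time approaches the slowest single sub-solve rather than the sum of all of them.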

Moreover, it’s very easy to define multiple constraints and weighted objectives. You can guarantee maximum times to serve / times to deliver (subject to minimums that cannot be improved upon) while balancing overall cost and carbon footprint (through a weighted objective). (It’s quite easy to define objectives in the platform, which has built-in pop-ups to solve for different goals — service time, emissions limit, cost, and best X, where X is a single dimension or a derived dimension that weights two or more other dimensions.) Or you can guarantee maximum times to serve and a fixed / x% carbon reduction while minimizing overall cost. Or you can keep the ports you know are stable and the warehouses with contracts you can’t break while allowing the delivery network architecture to shift to minimize overall costs.
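
Hard service-time constraints plus a weighted cost/carbon objective can be sketched as a filter-then-score selection over candidate options; a minimal illustration in which all carriers, rates, emissions, and weights are invented:

```python
# Feasibility first (hard constraint: max days to serve), then a weighted
# cost + carbon score picks the best feasible option. All data hypothetical.
options = [
    {"carrier": "OceanCo", "cost": 1200, "carbon": 3.0,  "days": 28},
    {"carrier": "RailCo",  "cost": 1600, "carbon": 1.8,  "days": 18},
    {"carrier": "AirCo",   "cost": 4800, "carbon": 14.0, "days": 3},
]
MAX_DAYS = 21                  # guaranteed maximum time to serve
W_COST, W_CARBON = 1.0, 300.0  # weight carbon (tonnes) into cost units

feasible = [o for o in options if o["days"] <= MAX_DAYS]
best = min(feasible, key=lambda o: W_COST * o["cost"] + W_CARBON * o["carbon"])
print(best["carrier"])  # → RailCo
```

OceanCo is cheapest but infeasible on service time; among the feasible options the carbon weighting favours rail over air, which is exactly the kind of trade-off the weighted objective surfaces.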

The browser-based interface to Logility’s platform offers a graphically represented virtual twin of an organization’s network, with high-level summary data (products, facilities, lanes, suppliers, customers, activities, and costs), easy scenario selection, and easy definition and modification of scenarios. It’s very easy to dive into the definition screens; see the suppliers, facilities, lanes, etc.; see/edit all of the data for any individual supplier, facility, or lane; add a new instance or delete one; and see the associated costs, times, and emissions, along with the underlying calculations associated with a node or relationship in the network graph (which is stored in a graph database that allows for massive scalability).

It’s also very easy to dynamically generate comparison reports between scenarios that compare (activity) costs (across cost types, such as leases, handling, transport costs, rail costs, ocean freight, tariffs, etc.), (average) service times (by supplier, product, lane, etc.), carbon (by carrier, lane, product, supplier, etc.), and other metrics of interest. Furthermore, when a user is happy with a scenario, they can one-click generate and output a complete comparison / summary report deck to PowerPoint for executive reporting (across as many scenarios as they like).

To enhance usability — which is quite obvious out-of-the-box to anyone who understands the basics of modelling, optimization, and decision analysis — Logility has an integrated quick-tour to get started, a full multi-media course on the platform and the modelling that can be done, playbooks for particular problems and challenges, and weekly office hours where users can ask Logility pros questions and get answers in real time. Logility Network Optimization was designed from the bottom up for usability and success.

Logility Network Optimization is the perfect complement to optimization-backed sourcing platforms with bill of material support. Buyers can model the potential changes that would result from awarding to a new supplier, not awarding to an existing supplier, or changing carriers or lanes; get expected transportation and tariff costs; augment the supplier quotes with this updated data in real time through a Logility API feed (from an identified scenario); run a total cost of ownership optimization scenario on the full set of augmented bids; make an award; push that award back to Logility Network Optimization, which will update the network model in real time; create a new what-if; and see if the network model should be altered when the new supplier is brought fully on board. For the first time, an organization could have closed-loop sourcing, logistics, and network optimization in real time — a reality that was once as far away as the stars themselves (and why the platform takes you Starboard). It’s a powerful concept, and worth branching out beyond traditional Source-to-Pay providers and Strategic Sourcing Decision Optimization to achieve.

Don’t Overlook the Network (that Corresponds to the Award)

According to a recent Forbes article on Supply Chain Software’s Best Return on Investment, per $1 Billion in company revenues, no supply chain application has a better return on investment (ROI) than network design! And the doctor couldn’t agree more.

Just as strategic sourcing decision optimization is the best bang for the buck in Source to Pay, with documented average returns of up to 12% year-over-year (per multiple analyst firms) because it can minimize total landed cost (and, in some cases, even total cost of ownership, including internal inventory costs, waste costs, etc.), and not just bids, while ensuring all business constraints are adhered to, an optimization-backed network design application can help minimize overall organizational supply chain costs. This is because a supply chain network optimization platform can minimize transportation costs, intermediate warehousing costs, tariffs, waste, emergency replenishment in the case of an unexpected stock-out, carbon/GHG emissions, etc.

Plus, as the article notes:

  • network design solutions are absolutely necessary to uncover business value when the production-distribution infrastructure is large (and not just because you can’t model that infrastructure in a spreadsheet)
  • network design solutions can look at Total Cost to Serve (TCTS) across a wide range of fixed and marginal costs (and identify unintended consequences of network design changes that could cause marginal costs to skyrocket)
  • network solutions can allow for multiple scenarios to be defined, multiple models to be run, cross-model and cross-scenario Pareto analysis to be performed, trade-offs to be analyzed, and the best decisions to be made

One point that should not be overlooked is that these projects will take some time, and not because of the complexity of the network modelling or the time it takes to run the scenarios (as modern computing architectures are super powerful and modern algorithms are highly optimized and take advantage of massively parallel processing); it’s because you need a lot of good, clean data. It can take months (and months) just to identify, collect, clean, and enrich the data required for global supply network optimization. But once you do that, the ROI will be beyond the expectations you have for every other supply chain solution.

The article describes a project to redesign the spare parts supply chain for a global automotive manufacturer, which resulted in a redesign that immediately reduced network costs by 4% and identified transportation cost reduction opportunities, through consolidation and re-allocation of routes to a smaller set of 3PLs, that will save another 2.5% at contract renewal time. In today’s climate, especially in direct supply chains, savings of 6%+ across the entire supply chain, and not just one category, are phenomenal!

Plus, as the article notes, in the age of sustainability, reduced transportation mileage and fuller trucks also equate to significant reductions in carbon emissions. WHAT A BONUS!