Category Archives: Best Practices

How Do You Identify Tomorrow’s Supply Chain Paupers?

They still use paper today!

This is another entry in our continuing The More Things Change … series that we began last month, except this week we’re going back five years instead of ten.

I don’t understand how any supply-chain-focussed business, and a logistics carrier in particular, could still be paper-based. It blows my mind that just five years ago the WT 100, in their article on Rounding the Optimization Curve, reported that a significant number of carriers still kept their records on paper. How can you survive in today’s cost-competitive, just-in-time, value-conscious supply management landscape while working on paper?

And, more importantly, if you are a logistics provider or CPG distributor, how can you still run effectively on paper when Amazon is investigating drone delivery? Five years later there are *still* carriers and distributors running primarily on paper. And we’re not pranking you either, like South Park pranked your Amazon, Apple, and Google devices. This is a fact!

And while we’re at it, let’s talk about how you can identify the dead men walking of the day after: they use Excel. We’ve known for years that errors in spreadsheets are pandemic. Needless to say, it boggles my mind that Microsoft Excel continues to be the application of choice for supply chain and logistics management around the world. Fidelity lost 2.6 Billion as a result of a spreadsheet error, Fannie Mae made a 1.13 Billion honest mistake, and RedEnvelope lost more than a quarter of their value in a single day after they warned of a fourth-quarter loss due to a budgeting error that resulted in an overestimate of gross margins. How long will it be before someone accidentally uses a plus sign instead of a minus sign in a profit formula, forgets to cap an inventory calculation, and instead of ordering 100,000 units of a profitable product orders 1,000,000 units of a product that loses money at the target sale price, for which market demand is weak, ties up all of the organization’s working capital, and essentially bankrupts the company? My guess, with the steadily increasing complexity of S&OP, JIT inventory management models, and supply chains: not much longer. But maybe, after a few companies are brought to their knees by spreadsheet errors, we’ll see the day when Excel is sh!tcanned along with the dinosaurs who still think it has any more use than an HP or TI calculator.
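To make the failure mode concrete, here is a toy Python sketch (all numbers, names, and formulas invented for illustration) of the flipped-sign, uncapped-quantity scenario described above:

```python
# Toy illustration (all numbers invented) of the spreadsheet failure mode
# described above: a one-character sign typo plus a missing quantity cap.

def profit_per_unit(price: float, cost: float) -> float:
    return price - cost            # intended formula

def profit_per_unit_typo(price: float, cost: float) -> float:
    return price + cost            # '+' typed instead of '-'

def reorder_qty(demand: float, margin: float, cap: int = 100_000) -> int:
    # a sanity cap keeps an inflated margin from driving an enormous order
    return min(int(demand * margin * 10), cap)

price, cost, demand = 25.0, 20.0, 10_000
good = reorder_qty(demand, profit_per_unit(price, cost))    # capped
bad = int(demand * profit_per_unit_typo(price, cost) * 10)  # typo, no cap
print(good, bad)  # 100000 4500000
```

In a real spreadsheet neither the sign flip nor the missing cap produces an error message; the bad number just flows silently into the purchase order.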

It’s time for anyone still on paper or Excel to wake up and realize we don’t live in Walt Disneyland and that the story of the prince and the pauper is a fairytale. A pauper is not going to become the beneficiary of princely riches just by looking like a bigger, richer company. In today’s uber-connected world, appearances don’t count for much. It’s not long before someone digs deep and uncovers the truth.

There’s a reason why customers are demanding end-to-end visibility of their supply chains, including those of their supply chains’ logistics partners. And a reason customers now expect all of their suppliers and business partners on the supply chain (including logistics providers) to participate in a supply chain network. It’s because they know that the only way they can accurately manage their supply chain is to keep on top of it, that the only way they can build accurate models is with accurate data gathered from partners, and that the best reports they are going to get are going to come from supply chain visibility and planning software plugged into these “networks” (which, in reality, are “enterprise communities” that allow the necessary collaboration, not “consumer social networks” where you can poke, prod, and shake your buddy for no apparent reason).

In other words, paper is dead, Excel will be the new paper, and then, someday, it too will be dead. So if you don’t want to be the pauper, move off of these technologies and onto solutions designed for your supply management needs. With a plethora of Best-of-Breed solutions on the market, designed for large and small providers, it’s extremely likely that there’s at least one solution that meets your needs almost exactly with minimal tweaking. If you look hard enough, the doctor would bet that there are at least three, or will be before you can look twice.

Introducing LevaData. Possibly the first Cognitive Sourcing Solution for Direct Procurement.

Who is LevaData? LevaData is a new player in the optimization-backed direct material prescriptive analytics space, and, to be honest, probably the only player in that space. While Jaggaer has ASO and Pool4Tool, and its direct material sourcing is optimization-backed, and while it has VMI, it does not have advanced prescriptive analytics for selecting the vendors who will ultimately manage that inventory.

LevaData was formed back in 2014 to close the gaps that the founders saw in each of the other sourcing and supply management platforms they had been a part of over the previous two decades. They saw the need for a platform that provided visibility, analytics, insight, direction, optimization, and assistance, and that is what they set out to build.

So what is the LevaData platform? It is a sourcing platform for direct materials that integrates RFX, analytics, optimization, (should-)cost modelling, and prescriptive advice into a cohesive whole that helps a buyer buy better, and which, to date, has reduced costs (considerably) for every single client.

For example, the first year realized savings for a 5B server and network company who deployed the LevaData platform was 24M; for a 2.4B consumer electronics company, it was 18M; and for a 0.6B network customer, it was 8M. To date, they’ve delivered over 100M of savings across 50B of spend to their customer base, and they are just getting started. This is due to the combination of efficiency, responsiveness, and savings their platform generates. Specifically, about 60% of the value is direct material cost reduction and incremental savings, 30% is responsiveness and being able to take advantage of market conditions in real time, and 10% is improved operational efficiency.

The platform was built by supply chain pros for supply chain buyers. It comes with a suite of analytics reports, but unlike the majority of analytics platforms, the reports are fine-tuned to bill of materials, component, and commodity intelligence. The reports can provide deep insight into not only costs by product, but costs by component and/or raw material, and can roll up and down bills of materials and raw materials to create insights that go beyond simple product or supplier reports. Moreover, on top of these reports, the platform can create cost forecasts and amortization schedules, track rebates owed, and calculate KPIs.
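To illustrate the roll-up idea (with a made-up bill of materials and costs; LevaData’s internals are their own), cost roll-up through a BOM is essentially a recursive sum:

```python
# Sketch of cost roll-up through a bill of materials (invented data; not
# LevaData's implementation): an assembly's rolled-up cost is its own
# assembly cost plus the rolled-up cost of its components.
from typing import Dict, List, Tuple

# bom[item] -> list of (component, quantity); leaf parts have no entry
bom: Dict[str, List[Tuple[str, int]]] = {
    "server":      [("motherboard", 1), ("psu", 2)],
    "motherboard": [("cpu", 2), ("ram_module", 8)],
}
unit_cost = {"server": 50.0, "motherboard": 20.0, "psu": 35.0,
             "cpu": 180.0, "ram_module": 25.0}

def rolled_up_cost(item: str) -> float:
    """Item cost including all sub-components, recursively."""
    return unit_cost[item] + sum(
        qty * rolled_up_cost(comp) for comp, qty in bom.get(item, [])
    )

print(rolled_up_cost("server"))  # 700.0
```

Running the same walk in the other direction (which assemblies consume a given component) is what lets a raw material price change be traced up to every affected product.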

In order to provide the buyer with market intelligence, the application imports data from multiple market feeds, creates benchmarks, compares those benchmarks to internal market data, automatically creates competitive reports, and calculates the foundation costs for should-cost models.

And it makes all the relevant data available within the RFX. When a user selects an RFX, it can identify suppliers, identify current market costs, use forecasts and anonymized community intelligence to calculate target costs, and then use optimization to determine what the award split would be, subject to business constraints, and identify the suppliers to negotiate with, the volumes to offer, and the target costs to strive for.
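As a rough illustration of a constrained award split (entirely hypothetical suppliers, prices, and business rules; the actual LevaData optimization model is proprietary), consider allocating demand for one component to the cheapest quotes first, subject to each supplier’s capacity and a dual-sourcing rule capping any one supplier’s share. For this simple single-component case, cheapest-first allocation gives the optimal split; real sourcing optimizers generalize this to mixed-integer programs over many items and constraints.

```python
# Hypothetical award split for one component: cheapest quotes first,
# subject to supplier capacity and a maximum-share business rule.
# Assumes total capacity under the caps can cover the demand.
def award_split(demand, quotes, max_share=0.6):
    """quotes: {supplier: (unit_price, capacity)} -> {supplier: units}"""
    cap_per_supplier = int(demand * max_share)
    award, remaining = {}, demand
    # iterate suppliers from cheapest to most expensive quote
    for supplier, (price, capacity) in sorted(quotes.items(),
                                              key=lambda kv: kv[1][0]):
        units = min(remaining, capacity, cap_per_supplier)
        if units > 0:
            award[supplier] = units
            remaining -= units
    return award

quotes = {"A": (9.50, 80_000), "B": (10.25, 120_000), "C": (11.00, 60_000)}
print(award_split(100_000, quotes))  # {'A': 60000, 'B': 40000}
```

Note how the share cap forces 40,000 units to the second-cheapest supplier even though A has the capacity; this is exactly the kind of business constraint an optimizer has to respect.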

It’s a first-of-its-kind application, and while some components are still basic (there is no lane or logistics support in the optimization model), missing (there is no ad-hoc report builder), or incomplete (such as collaboration support between stakeholders and a strong supplier portal for collaboration), it appears to meet the minimal requirements we laid out yesterday and could just be the first real cognitive sourcing application on the market in the direct material space.

BIQ: Alive and Well in the Opera House! Part II

Yesterday we noted that BIQ, from the sleepy little town of Southborough, which was acquired by Opera Solutions in 2012, is not only alive and well in the Opera House, but has been continually improved since its acquisition, and the new version, 5(.05), even has a capability no other spend analytics product on the market has.

So what is this new capability? We’ll get to that. First, we want to note that a number of improvements have been made since we last covered BIQ, and we’ll cover those.

Secondly, we want to note that the core engine is as powerful as ever. Since it runs entirely in memory, on data held entirely in memory, it can process 1M transactions per second. Need to add a dimension? Change a measure? Recalculate a report? It’s instantaneous on data sets of 1M transactions or less, and essentially real-time on data sets of 10M transactions. Try getting that performance from your database or OLAP engine. Just try it.

One of the first big changes they made was the complete separation of the engine from the viewer. This allowed them to do two things: one, create a minimal engine footprint (for in-memory execution) with a fully exposed API; and two, build a full web-based SaaS version as well as an improved desktop application, both of which expose the full power of the BIQ engine.

They used QlikView for the web interface and through this interface have created a collection of CIQ (category intelligence) and PIQ (performance intelligence) dashboards for just about every indirect category and standard performance category (supplier, operations, finance, etc.) in addition to a standard spend dashboard with reports and insights that rivals any competitor dashboard. In addition, they have exposed all of the dimensions in the underlying data and measures that have been programmed and a user can not only create ad-hoc reports, but ad-hoc cross-tabs and pivot tables on the fly.
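For readers unfamiliar with the idea, an ad-hoc cross-tab is just a measure summed over two dimensions chosen at query time. A minimal plain-Python sketch (invented spend records; nothing to do with BIQ’s actual internals):

```python
# Sketch of an ad-hoc cross-tab: sum a measure across any two dimensions
# chosen on the fly. Data is invented for illustration.
from collections import defaultdict

def cross_tab(rows, row_dim, col_dim, measure):
    """Sum `measure` for every (row_dim, col_dim) pair found in `rows`."""
    table = defaultdict(float)
    for r in rows:
        table[(r[row_dim], r[col_dim])] += r[measure]
    return dict(table)

spend = [
    {"category": "IT",  "region": "EU", "amount": 1200.0},
    {"category": "IT",  "region": "NA", "amount": 800.0},
    {"category": "MRO", "region": "EU", "amount": 300.0},
    {"category": "IT",  "region": "EU", "amount": 500.0},
]
print(cross_tab(spend, "category", "region", "amount"))
# {('IT', 'EU'): 1700.0, ('IT', 'NA'): 800.0, ('MRO', 'EU'): 300.0}
```

The point of exposing all dimensions and measures, as BIQ does, is that the user, not the vendor, picks `row_dim` and `col_dim` at analysis time.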

And they re-did the desktop interface to look like a modern analytics front-end built this decade. As those who saw it know, the old BIQ looked like a Windows 98 application, even though Microsoft never built anything with that amount of power. The new interface is streamlined, slick, and quick. It has all of the functionality of the old interface, plus modern widgets that are easy to rearrange, expand, minimize, and deploy.

One of the best improvements is the new data loader. It’s still file-based, but it supports a plethora of file formats, can transform data from one format to another, and can merge files into a single file or cube, picking some or all of the data. It’s quick, easy, user-friendly, and can process massive amounts of data quickly, letting users know almost immediately if there are errors or issues that need to be addressed.

Another great feature is the new anomaly detection engine, built on the best of BIQ and Signal Hub technology, which can be run in parallel with BIQ. Right now they only have an instance fine-tuned to T&E spend in the procurement space, but you can bet more instances will be coming soon. And this is a great start: T&E spend is plentiful, consists of a lot of small transactions, and makes it hard to find those needles that represent off-policy spend, off-contract spend, and, more importantly, fraudulent spend. Using the new anomaly detection feature you can quickly identify when an employee is flying business instead of coach, using an off-contract airline, or, and this is key, charging pet kennels as lodging or strip club bills as executive dinners.
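As a rough illustration of the underlying idea (invented transactions; the actual Signal Hub models are far more sophisticated), a simple category-level outlier check flags transactions whose amount is extreme for their expense category:

```python
# Illustrative T&E outlier check on invented data: flag transactions whose
# amount sits far from the norm for their expense category (z-score style).
from statistics import mean, stdev

def flag_anomalies(transactions, threshold=3.0):
    """transactions: list of (category, amount); returns flagged outliers."""
    by_cat = {}
    for cat, amt in transactions:
        by_cat.setdefault(cat, []).append(amt)
    flagged = []
    for cat, amt in transactions:
        amts = by_cat[cat]
        if len(amts) < 3:
            continue  # too little history in this category to judge
        mu, sigma = mean(amts), stdev(amts)
        if sigma > 0 and abs(amt - mu) / sigma > threshold:
            flagged.append((cat, amt))
    return flagged

# nineteen ordinary hotel bills and one "lodging" charge that isn't
txns = [("lodging", 150.0)] * 19 + [("lodging", 5000.0)]
print(flag_anomalies(txns))  # [('lodging', 5000.0)]
```

A production engine would score on many features (merchant, timing, traveller history) rather than amount alone, but the needle-in-the-haystack mechanic is the same.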

But this isn’t the best new feature. The best new feature is the new Open Extract capability that provides true open access to Python-based analytics in BIQ. The new version of the BIQ engine, which runs 100% in memory, includes the Python runtime and a fully integrated IDE. Any analyst or data scientist who can script Python can access and manipulate the data in the BIQ engine in real time, using constructs built specifically for this purpose. And these custom-built scripts run just as fast as the built-in ones, as they run natively in the engine. For example, you can run a Benford’s Law analysis on 1M transactions in less than a second. And building it on Python, and the Anaconda distribution in particular, means that any of the open-source analytics packages distributed by Continuum Analytics can be used. There’s nothing else like it on the market. It takes spend analysis to a whole new level.
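For the curious, here is a rough sketch of what a Benford’s Law check looks like in plain Python. BIQ’s in-engine implementation is its own, but the math is standard: leading digits of naturally occurring amounts should follow P(d) = log10(1 + 1/d), and fabricated figures tend not to.

```python
# Sketch of a Benford's Law screen: measure how far leading-digit
# frequencies deviate from the Benford distribution (chi-squared style).
import math
from collections import Counter

def benford_deviation(amounts):
    """Chi-squared-style deviation of leading digits from Benford's Law."""
    digits = [int(str(abs(a)).lstrip("0.")[0]) for a in amounts if a]
    n = len(digits)
    counts = Counter(digits)
    dev = 0.0
    for d in range(1, 10):
        expected = n * math.log10(1 + 1 / d)  # Benford: P(d) = log10(1 + 1/d)
        dev += (counts.get(d, 0) - expected) ** 2 / expected
    return dev

# Exponentially growing amounts follow Benford; uniform amounts do not.
natural = [1.07 ** i for i in range(1, 200)]
uniform = list(range(100, 1000))
print(benford_deviation(natural) < benford_deviation(uniform))  # True
```

A high deviation on a vendor’s invoices is not proof of fraud, only a flag that the amounts deserve a closer look.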

BIQ: Alive and Well in the Opera House! Part I

Fourteen years ago, in the sleepy little town of Southborough, Massachusetts, a tiny start-up called BIQ was created. Its mission was to give business analysts the powerful transactional data analysis tool they needed to do their own analysis and get their own insight. Less than two years later, it released that tool, also called BIQ, and it totally changed the spend analysis market. For the first time, power analysts could do everything themselves in a market where spend analysis was primarily offered as a service, and they could do it at a price point at least an order of magnitude lower than what the big providers were charging. With licenses starting at 36K a year, an analyst could do the same analysis he was paying a suite provider 360K for, and a best-of-breed provider 1M for. Now, it required a lot of knowledge, aesthetic blindness, elbow grease, and overtime, but it could be done.

And when we say everything, we mean everything. You could load any flat files you wanted, in a standard format (such as CSV), in the data loader. You could combine them into any cubes you wanted by defining the overlapping dimensions. You could define ranged and derived dimensions using simple formulas or built-in definitions. You could drill down in real time, filter on what you wanted, and export subsets of records. You could define any categorization you wanted against any schema, along with any mapping rules you wanted; the rules were organized into priority groups, given a priority order, and run most specific to least specific, so you never got a collision or random mapping like you might in a tool where non-prioritized rules go into a database and often get applied in arbitrary order. You could define supplier families that could be reused. You could build your own cross-tab reports. It was the swiss army knife of analytics, at a price every organization could afford.
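The priority-ordered mapping idea can be sketched in a few lines of Python (hypothetical rules for illustration): because rules are evaluated in a fixed, most-specific-first order, every transaction lands in exactly one deterministic category.

```python
# Sketch of priority-ordered categorization rules: lower priority number
# means more specific, and the first matching rule wins, so there are no
# collisions or order-dependent surprises. Rules are invented examples.
rules = [
    # (priority, predicate, category)
    (1, lambda t: t["supplier"] == "Acme Couriers" and t["gl"] == "6100",
        "Logistics/Courier"),
    (2, lambda t: t["gl"] == "6100", "Logistics"),
    (3, lambda t: True, "Unclassified"),  # catch-all, least specific
]

def categorize(txn):
    """Return the category of the first (most specific) matching rule."""
    for _, predicate, category in sorted(rules, key=lambda r: r[0]):
        if predicate(txn):
            return category

print(categorize({"supplier": "Acme Couriers", "gl": "6100"}))  # Logistics/Courier
print(categorize({"supplier": "Someone Else", "gl": "6100"}))   # Logistics
```

Without the explicit priority sort, the same transaction could match either rule depending on storage order, which is exactly the random-mapping problem described above.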

This quickly made BIQ a favourite not just among mid-market companies that couldn’t afford, and big companies that didn’t want to afford, high-priced services, but also among niche consultancies that could now do power-house analytics projects on their own, including firms like Lexington Analytics and Power Advocate. This, along with some really smart marketing, pushed BIQ into the mainstream of spend analytics providers, making it a de facto shortlist candidate for any company wanting do-it-yourself spend analysis. This, of course, got the attention of many providers, who were afraid of the threat, in awe of the technology, or both.

One of these providers was Opera Solutions, who acquired BIQ in 2012, and shortly after, Lexington Analytics. Once the two providers were merged, Opera Solutions instantly had a complete spend analysis software and services solution for the indirect space. And they have steadily improved this offering since its acquisition. The new version comes packed with some big enhancements, including one capability that is not only market leading, but unique among all the spend analysis providers we have covered to date.

What is that? Come back tomorrow!

To Get the Best Supply Base, Go Beyond the Obvious!

the doctor recently came across an article that said that, during the sourcing process, there are many qualitative attributes procurement teams should take into consideration; that sourcing is not about the lowest price, but about identifying the greatest value for your sourcing dollars; and that one should incorporate multi-factor award criteria into an automated sourcing process. All true. It also provided some examples of the most frequently used qualitative factors, which include:

  • Supplier Market Share
  • Supplier Performance
  • Production & Delivery Capabilities

And these are okay, but they don’t tell the whole story. Plus, sometimes the story they tell is not the right one. For example:

  • with respect to supplier market share, you only care that the market share is big enough to make the supplier financially viable … sometimes the emerging suppliers have the best technologies for you
  • with respect to supplier performance, if you haven’t used the supplier before, and the only data you have is negative data from customers that have gone public, you don’t know if this is the typical experience or an anomaly (like 1 out of 100), and you often don’t even know how recent the data is
  • with respect to production and delivery capabilities, there’s always a third party partner for delivery

That’s why you need to round out the supplier evaluation components, going beyond the typical, and obvious, evaluation factors, if you want to find the best suppliers for now and the future. Some other factors to consider are:

  • Innovation Capability: do they have a track record for innovation and helping customers improve their designs, robustness, product longevity, etc.?
  • Corporate Social Responsibility: the best supplier from a product perspective could be the worst supplier from a corporate perspective if that supplier uses child labour in the supply chain or buys blood diamonds for their x-ray machines and the story breaks
  • Environmental Risk Profile: examine the supplier in its geo-location, social and political, and economic contexts, which are out of the supplier’s control (their financial, technological, performance, etc. risk you will be qualifying separately)

And these are valid for all suppliers. When you get into specific categories, you might also want to consider:

  • Services Capability: can they support the product, offer consulting services around the product, or streamline the production process better than other suppliers?
  • Six Sigma Black Belt: can the supplier help you with your design process or streamline your new product development?
  • Supplier’s Supply Chain Design: is their supply chain more efficient than their peers’?
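One common way to fold such factors into a multi-factor award decision (illustrative weights, suppliers, and scores only; every real category deserves its own weighting) is a simple weighted score:

```python
# Hypothetical weighted multi-factor supplier scoring; all weights and
# scores are invented for illustration. Each factor is scored 0-10,
# higher is better ("risk" is a risk-adjusted score, not raw risk).
weights = {"price": 0.30, "performance": 0.20, "innovation": 0.20,
           "csr": 0.15, "risk": 0.15}

suppliers = {
    "incumbent":  {"price": 8, "performance": 9, "innovation": 4,
                   "csr": 6, "risk": 7},
    "challenger": {"price": 9, "performance": 6, "innovation": 9,
                   "csr": 8, "risk": 6},
}

def weighted_score(scores):
    """Weighted sum of factor scores."""
    return sum(weights[f] * s for f, s in scores.items())

ranked = sorted(suppliers, key=lambda s: weighted_score(suppliers[s]),
                reverse=True)
print(ranked)  # ['challenger', 'incumbent']
```

Note that the innovative challenger outranks the incumbent despite weaker past performance, which is exactly the kind of outcome a lowest-price-only or performance-only evaluation would miss.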

So if you want the best supplier, go beyond the obvious in evaluation.