A coordinated effort by LOLCats could easily devour that much chicken …
In Part I we recapped sourcing today; in Part II we did a deep dive into the key requirements of the review step as it stands today; and in Part III we went deeper still, explaining that while some steps are critical for a sourcing professional to undertake, others, while necessary, are a complete waste of skilled talent time since the majority of the tasks can be automated. In Part IV we began our deep dive into the needs assessment phase, which we completed in Part V. This was followed by a deep dive into strategy selection in Parts VI and VII and the communication step in Parts VIII and IX. And upon review of these steps, we’re still at the point where some tasks have to be done by humans whereas others can be mostly automated. We’re starting to suspect this is true across the entire sourcing cycle, but we can’t be sure until we complete our analysis, can we?
In the next step, the analysis step, we have the following key sub-steps that have to be completed every time (not just sometimes): market pricing, historical and projected spend, cross-category “materials” spend, and total cost of ownership (TCO).
In the market pricing step, you collect as much information as you can about pricing for the goods or services you are looking to acquire, so that you are as informed as possible before negotiations. This could require collecting consumer pricing from retailers, pricing available from GPOs/BPOs, pricing from government contracts (which are public data), import/export manifests (to determine volumes and supply/market dynamics), and pricing from similar products/services on past contracts. It could also involve collecting competitive intelligence through analyst reports, buying collectives, and other avenues.
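To make the output of this step concrete, here is a minimal sketch (in Python, with entirely illustrative source names and prices, not real market data) of how collected price points might be rolled up into a simple negotiation benchmark:

```python
from statistics import median

# Hypothetical price points (per unit) gathered from different sources;
# the source names and figures are illustrative only.
price_points = {
    "retail_consumer":     12.99,
    "gpo_contract":         9.85,
    "gov_contract_public": 10.40,
    "past_contract_2023":  10.10,
}

def price_benchmark(points):
    """Summarize collected market pricing into a simple negotiation benchmark."""
    values = sorted(points.values())
    return {
        "floor":   values[0],        # best observed price (negotiation target)
        "median":  median(values),   # typical market price
        "ceiling": values[-1],       # worst observed price (sanity check)
    }

bench = price_benchmark(price_points)
```

A real tool would also weight sources by volume and recency, but the benchmark itself is just this kind of aggregation.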
In the historical and projected spend phase, the organization does a deep analysis of historical spend and volumes across its product and service lines, similar product and service lines, and market dynamics. It then pieces all of this together to form projections that take current trends and modify them with projected demand shifts within company product and service lines and expected uptakes or product line abandonments based on current market dynamics. It collects as much readily available data as it can to try and determine whether market shifts are seasonal, responsive to price changes, reactive to new product introductions, or driven by as-yet-undetermined factors.
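As a minimal illustration of the projection mechanics, the sketch below (with made-up quarterly figures) fits a simple least-squares trend to historical spend and scales the extrapolation by a projected demand-shift factor; a real implementation would layer in seasonality and the other factors above:

```python
def linear_trend(history):
    """Least-squares slope and intercept over equally spaced periods (0..n-1)."""
    n = len(history)
    xs = range(n)
    mean_x = sum(xs) / n
    mean_y = sum(history) / n
    cov = sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, history))
    var = sum((x - mean_x) ** 2 for x in xs)
    slope = cov / var
    return slope, mean_y - slope * mean_x

def project_spend(history, periods_ahead, demand_shift=1.0):
    """Extrapolate the fitted trend, scaled by a projected demand-shift factor."""
    slope, intercept = linear_trend(history)
    x = len(history) - 1 + periods_ahead
    return (intercept + slope * x) * demand_shift

history = [100.0, 104.0, 108.0, 112.0]  # illustrative quarterly spend (in $K)
next_q = project_spend(history, 1, demand_shift=1.05)
```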
In the cross-category “materials” spend phase, the organization makes an effort to identify the primary components of the spend and how they should influence the spend dynamics of the product or service being acquired. For example, if it’s a metal product where steel is a primary component, it will attempt to identify how pricing is shifting in other categories where steel is a primary component and compare that to market price shifts. If it’s a service, it will look at whether the primary costs relate to talent, to organizational support, or even to expenses (such as excessive travel requirements) and compare those to market costs across different divisions of the company. (E.g. extra IT support is IT support whether contracted by Procurement or IT.)
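One simple way to quantify the comparison described above is to correlate period-over-period changes in a commodity index with changes in the category’s unit cost. The sketch below uses made-up steel-index and category-cost figures purely for illustration:

```python
def pct_changes(series):
    """Period-over-period percentage changes."""
    return [(b - a) / a for a, b in zip(series, series[1:])]

def correlation(xs, ys):
    """Pearson correlation between two equal-length series."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sum((x - mx) ** 2 for x in xs) ** 0.5
    sy = sum((y - my) ** 2 for y in ys) ** 0.5
    return cov / (sx * sy)

steel_index   = [100, 104, 110, 108, 115]       # illustrative commodity index
category_cost = [50, 51.8, 54.6, 53.9, 57.2]    # illustrative unit cost of a steel-heavy category

r = correlation(pct_changes(steel_index), pct_changes(category_cost))
```

A correlation near 1 suggests the category’s cost shifts are tracking the underlying material, which tells you whether a supplier’s price increase is market-driven or margin-driven.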
Finally, in the TCO phase, the organization will work hard to identify all the other direct and consequential indirect costs associated with the acquisition. Taxes (and whether or not they are reclaimable and the costs of reclamation if they are), import/export duties, intermittent storage fees, transportation fees, typical loss fees (due to spoilage, waste from mandatory tests, etc.), etc. will be identified and factored in as direct costs. In addition, potential indirect costs such as additional testing, expected loss during local transport, alteration costs for implementation, loss of co-marketing support, etc. will be factored in.
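A TCO roll-up of this kind is, at its core, just a structured sum. The sketch below uses hypothetical cost line items and figures to show the shape of the calculation:

```python
def total_cost_of_ownership(unit_price, units, direct, indirect):
    """Sum the purchase price with itemized direct and indirect cost estimates."""
    return unit_price * units + sum(direct.values()) + sum(indirect.values())

# All line items and figures below are illustrative only.
direct = {
    "non_reclaimable_tax": 1200.0,
    "import_duties":        800.0,
    "intermittent_storage": 300.0,
    "transportation":       650.0,
    "typical_loss_fees":    150.0,  # spoilage, mandatory-test waste, etc.
}
indirect = {
    "additional_testing":    400.0,
    "local_transport_loss":  100.0,
    "implementation_mods":   500.0,
}

tco = total_cost_of_ownership(unit_price=10.0, units=1000, direct=direct, indirect=indirect)
```

The hard part is not the arithmetic but identifying every line item, which is why this phase is described as work.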
This sounds largely human driven, but, as we’ve discussed during previous steps, sometimes what sounds human driven isn’t. But this is a subject we will explore in Part XI!
There is a theory which states that if ever anyone discovers exactly what the Universe is for and why it is here, it will instantly disappear and be replaced by something even more bizarre and inexplicable. There is another theory which states that this has already happened.
Regardless of whether or not this theory is correct, advanced sourcing is not the universe … it’s not even the universe of enterprise applications (although it’s becoming a pretty significant part). As a result, your organization should not be scared to acquire, learn, and master it. However, given the continuing rather low uptake of strategic sourcing decision optimization and advanced hybrid spend analysis (which uses machine learning and embeds prescriptive analytics), one would think the average organization is quaking in its boots.
And the answer is not to wait until the application interfaces are simplified enough so that it’s just point-and-click to select a model, accept the default constraints, run the scenario, and accept the result. Just like a top n report in spend analysis will only identify a savings opportunity once, a canned optimization scenario will only identify a significant savings once.
Nor is it the answer to wait until your preferred provider proffers a solution to you. These are solutions you should be seeking on your own, not when your provider brings them to you because every day you wait is a day another opportunity passes you by. And with pressures mounting to generate value, how many opportunities can you afford to miss? None.
So don’t wait. Figure it out. It won’t go away. It won’t change instantaneously when you do. And you won’t have to learn it twice. So just do it.
Walmart recently released a statement that it plans to use employees to do home deliveries, presumably to fulfill online orders, as recently reported by The Washington Post. the doctor couldn’t believe it at first … convinced it was an article from The Onion misposted on a real news site, but apparently it’s real.
Overlooking all the things that could go terribly wrong with this, and all of the new legal liabilities this could cause them to incur (which would give your average risk manager and Chief Counsel nightmares for months), this makes absolutely no sense from a supply chain perspective where the name of the game is cost control (unless, of course, Walmart is looking for a way to actually lose money as a tax avoidance scheme).
There’s a reason even Amazon uses third-party carriers for its Prime service, and the reason is that, as stated by the article, last-mile logistics are costly. Very costly. And those costs can only be minimized by maximizing the number of packages delivered per hour by a driver. An employee who can only deliver a few packages due to space limitations in their car can’t maximize deliveries compared to a FedEx or UPS driver who has a van built to maximize the number of packages that can be carried at one time and who is making deliveries determined by software that minimizes the delivery radius of all assigned packages and the delivery time using route optimization software (that eliminates left turns and backed-up routes).
Now, maybe Walmart is thinking that it can introduce a new kind of package assignment algorithm that minimizes the distance from an employee’s home route, and then just pay that employee for the additional distance and time required (using Google Maps calculations, etc.), but you still have the problem that the closest employee(s) may not be working that day, may not be able to do deliveries that day, or may not be able to fit the packages in their vehicle. Most of the time the software will have to re-assign, and re-assign again, until a viable sub-optimal match is found. And at the end of the day the cost would be more than just having a full-time driver deliver everything according to route optimization software, and even that is still more costly than negotiating a good volume-based outsourcing agreement with the dominant local carriers, who can increase the delivery density even more.
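A back-of-the-envelope comparison makes the cost gap tangible. The sketch below uses hypothetical mileage rates, wages, and delivery densities (not Walmart’s or any carrier’s actual numbers) to compare the incremental cost of one employee detour delivery against the per-package cost of a dedicated van on an optimized route:

```python
def employee_delivery_cost(detour_miles, mileage_rate, detour_minutes, wage_per_min):
    """Incremental pay for one employee delivery: extra distance plus extra time."""
    return detour_miles * mileage_rate + detour_minutes * wage_per_min

def van_delivery_cost(packages_per_hour, driver_cost_per_hour):
    """Per-package cost for a dedicated van on a density-optimized route."""
    return driver_cost_per_hour / packages_per_hour

# Illustrative numbers only: a 6-mile round-trip detour taking 20 minutes,
# versus a van driver delivering 20 packages per hour at $35/hour all-in.
per_package_employee = employee_delivery_cost(6, 0.55, 20, 0.30)
per_package_van = van_delivery_cost(20, 35.0)
```

Even with generous assumptions, the low delivery density of a one-off detour swamps any savings from the employee already being “on the way”.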
The reality is that just because something sounds good (as in 90% of all customers living within 10 miles, where most employees are also located) does not mean it is good, and that’s why you need to perform analytics and optimization before embarking on major initiatives such as this. Because even if Walmart could get near-optimal assignments, it still needs volume. As long as it takes 3 times as long to do anything on their site as it does on Amazon (and that is definitely true in Canada, where the outsourced development organization prefers to benchmark against sites for other real-world retailers, and not Amazon, from an online retail perspective), and as long as they continue to ship 6 (light) items on the same order across 5 boxes, their online volume growth is not going to be fast enough to make this idea anywhere near as efficient as they hope in the next few years. This is one case where the doctor hopes their trials flop and they see the error of their ways and go back to investing in more hybrid vehicles, more efficient warehouses and inventory management methods, and other initiatives guaranteed to increase efficiency and sustainability.
A lot of vendors will tell you that a lot of what they do is so hard that it took thousands of hours of development, and that no one else could do it as well, as fast, or as flexibly, when the reality is that much of what they do is easy, mostly available in open source, and can be replicated in modern Business Process Management (BPM) configuration toolkits in a matter of weeks.
So, to help you understand what’s truly hard and, in the spend master’s words, so easy a high school student with an Access database could do it, the doctor is going to bust out his technical chops that include a PhD in computer science (with deep expertise in algorithms, data structures, databases, big data, computational geometry, and optimization), experience in research / architect / technology officer industry roles, and cross-platform experience across pretty much all of the major OSs and implementation languages of choice. Having covered basic sourcing and basic procurement it’s time to move on to Supplier Management.
But first, what is Supplier Management? Supplier Management, depending on the vendor, is defined as the provision of Supplier Information Management, Supplier Performance Management, and/or Supplier Relationship Management. The question is, does any of these areas contain any technical difficulty?
Supplier Information Management
Technical Challenge: NONE
Let’s face it, supplier information management is just data in, data out. Collect the data, push it in the database, run a report, pull it out. It’s just a database with a pre-defined schema and some fancy, optimized, UI for getting the right data to push in and pull out.
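To underline the point, here is a minimal “data in, data out” sketch using Python’s built-in sqlite3 module: a pre-defined schema, a couple of inserts, and a report query. The table and supplier names are invented for illustration:

```python
import sqlite3

# "Push it in the database": a pre-defined schema and some inserts.
conn = sqlite3.connect(":memory:")
conn.execute("""
    CREATE TABLE supplier (
        id     INTEGER PRIMARY KEY,
        name   TEXT NOT NULL,
        region TEXT
    )
""")
conn.execute("INSERT INTO supplier (name, region) VALUES (?, ?)", ("Acme Metals", "NA"))
conn.execute("INSERT INTO supplier (name, region) VALUES (?, ?)", ("Baltic Steel", "EU"))
conn.commit()

# "Run a report, pull it out."
rows = conn.execute("SELECT name FROM supplier WHERE region = ?", ("EU",)).fetchall()
```

Everything beyond this in a commercial SIM tool is schema breadth and UI polish, not algorithmic difficulty.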
Supplier Performance Management
Technical Challenge: NONE
Supplier performance management is two part — performance tracking, done with software, and performance improvement initiatives, identified and managed by humans. The latter can be complex, but since this series is focussed on technical complexity, we will ignore this aspect. As for performance tracking, this is just tracking computed metrics over time. Essentially information management, but focussed on collected performance data and metrics.
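Performance tracking really is just computing metrics over collected data. A minimal sketch, with invented quarterly delivery records, of tracking an on-time delivery rate over time:

```python
def on_time_rate(deliveries):
    """Share of deliveries flagged on-time; None if there were no deliveries."""
    if not deliveries:
        return None
    return sum(1 for on_time in deliveries if on_time) / len(deliveries)

# Illustrative per-quarter delivery records (True = delivered on time).
quarters = {
    "2024-Q1": [True, True, False, True],
    "2024-Q2": [True, True, True, True, False, True],
}

metric_over_time = {q: on_time_rate(d) for q, d in quarters.items()}
```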
Supplier Relationship Management
Technical Challenge: NONE
Supplier relationship management is all about managing the relationship. It’s usually done with collaboration (and collaboration software is not technically challenging), development management (lean, six sigma, and other programs), and innovation management (goal definition, initiative tracking, and workflow). All human challenges, not technical challenges.
But does this mean there are no challenges? Depends whether you are using old definitions or new definitions. A new definition goes beyond the basics and looks to software to guide the future of Supplier Management. And that’s where the challenges come in.
Technical Challenge: Predictive Analytics
Inventory levels, sales, and costs are relatively easy to predict with high accuracy, given enough data, using a suite of trend algorithms. The predictions are not always right, but they’re right more often than the human “gut” (unless you happen to have a true expert who’s at the top of her league and has been doing it for 20 years, and those are very rare), and that’s all we can expect.
But predicting a market trend is different than predicting supplier performance as performance shifts can result from a variety of factors that include, but aren’t limited to, worker problems (such as union strikes), financial problems (which can happen overnight as the result of a massive launch failure, loss, etc.), raw material shortages (as the result of a mine failure, etc.) and so on.
Thus, predicting future performance requires not only tracking performance, but also external market indicators of a financial, regulatory, and incident nature. The latter is particularly tricky as incidents are the result of events that can often only be detected by monitoring news feeds and applying semantic algorithms to the data to identify incidents that can affect future performance. Then, all of this data needs to be integrated to paint a picture that can more accurately predict performance than the predictions made from just monitoring internal data sources.
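As a toy illustration of that integration, the sketch below stands in for the real thing with a crude keyword scan over news headlines (where real systems would apply semantic algorithms) and blends it with an internal performance trend and a financial flag into a single risk score. All keywords, weights, and inputs here are invented:

```python
INCIDENT_KEYWORDS = {"strike", "bankruptcy", "recall", "shortage", "fire"}

def incident_signals(headlines):
    """Crude keyword scan standing in for real semantic analysis of news feeds."""
    return sum(1 for h in headlines if INCIDENT_KEYWORDS & set(h.lower().split()))

def risk_score(performance_trend, financial_flag, incidents):
    """Blend internal and external indicators; the weights are illustrative."""
    score = max(0.0, -performance_trend) * 0.5   # worsening internal performance
    score += 0.3 if financial_flag else 0.0      # e.g. a credit downgrade detected
    score += min(incidents, 3) * 0.1             # capped news-incident signal
    return round(score, 2)

headlines = ["Union strike halts plant", "Quarterly results beat forecast"]
score = risk_score(
    performance_trend=-0.2,   # performance slipping quarter over quarter
    financial_flag=True,
    incidents=incident_signals(headlines),
)
```

The value of a real solution lies in how well it detects and weights these signals, not in the final arithmetic.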
In other words, if all you are being sold is a data collection and monitoring tool, it’s not particularly challenging to build (and a business process management / workflow configurator tool could probably be used to build a prototype with your custom requirements in a week), but if it’s a true, modern, performance management solution with integrated predictive analytics to help you identify those relationships at risk, that’s a completely different story.
Next Up: Analytics!