
Sourcing the Day After Tomorrow Part X

In Part I we recapped Sourcing today, in Part II we did a deep dive into the key requirements of the review step as it is today, and then in Part III we did a deeper dive where we explained that while some steps were critical for a sourcing professional to undertake, others, while necessary, were a complete waste of skilled talent time as the majority of the tasks could be automated. Then in Part IV we began our deep dive into the needs assessment phase, which we completed in Part V. This was followed by a deep dive into strategy selection in Parts VI and VII and the communication step in Parts VIII and IX. And upon review of these steps, we’re still at the point where some tasks have to be done by humans whereas others can be mostly automated. We’re starting to suspect this is true across the entire sourcing cycle, but we can’t be sure until we complete our analysis, can we?

In the next step, the analysis step, we have the following key sub-steps that have to be completed every time (not just sometimes):

  • Market Pricing Data
  • Historical and Projected Spend
  • Cross-Category Materials Spend
  • TCO / TLC (Total Cost of Ownership, Total Lifecycle Costs)

In the market pricing step, you collect as much information as you can about pricing for the goods or services you are looking to acquire so that you are as informed as possible going into negotiations. This could require collecting consumer pricing from retailers, pricing available from GPOs/BPOs, pricing from government contracts (which are public data), import/export manifests (to determine volumes and supply/market dynamics), and pricing from similar products/services on past contracts. It could also involve collecting competitive intelligence through analyst reports, buying collectives, and other avenues.
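
As a toy illustration of how pricing points collected from these different sources might be rolled into a simple negotiation benchmark, here is a minimal Python sketch; all source names and prices are invented for illustration:

```python
from statistics import median

# Invented price observations from the source types named above
observations = [
    {"source": "retail",        "unit_price": 12.50},
    {"source": "gpo",           "unit_price": 10.80},
    {"source": "gov_contract",  "unit_price": 10.25},
    {"source": "past_contract", "unit_price": 11.10},
]

def price_benchmark(obs):
    """Collapse collected market prices into a simple low/median/high benchmark."""
    prices = [o["unit_price"] for o in obs]
    return {"low": min(prices), "median": median(prices), "high": max(prices)}
```

In practice the observations would be weighted by volume, recency, and comparability, but even this crude summary gives a negotiator a floor and a ceiling.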

In the historical and projected spend phase, the organization does deep analysis of historical spend and volumes across the product and services lines, similar product and services lines, and market dynamics. It then pieces all of this together to form projected trends that look at current trends modified with projected demand shifts within company product and services lines and expected uptakes or product line abandonments based on current market dynamics. It collects as much readily available data as it can to try to determine whether market shifts are seasonal, responsive to price changes, reactive to new product introductions, or driven by undetermined factors.
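
To make the projection idea concrete, here is a minimal sketch of a least-squares linear trend projected one period ahead; a real spend forecast would layer in seasonality and demand-shift adjustments, and the figures below are invented:

```python
def linear_trend_forecast(values):
    """Fit y = a + b*t by least squares over t = 0..n-1, then project period n."""
    n = len(values)
    t_mean = (n - 1) / 2
    y_mean = sum(values) / n
    slope = (sum((t - t_mean) * (y - y_mean) for t, y in enumerate(values))
             / sum((t - t_mean) ** 2 for t in range(n)))
    intercept = y_mean - slope * t_mean
    return intercept + slope * n  # forecast for the next period

# Invented quarterly spend history (in $K)
spend = [100.0, 110.0, 120.0, 130.0]
```

For this perfectly linear history the projection is simply the next step on the line; noisy real data is where the analyst earns their keep.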

In the cross-category “materials” spend phase, the organization makes an effort to identify the primary components of the spend and how they should influence the spend dynamics of the product or service being acquired. For example, if it’s a metal product where steel is a primary component, they will attempt to identify how pricing is shifting in other categories where steel is a primary component and compare that to market price shifts. If it’s a service, they will look at whether the primary costs are related just to talent, to organizational support, or even to expenses (such as excessive travel requirements) and compare that to market costs across different divisions of the company. (E.g. extra IT support is IT support whether contracted by Procurement or IT.)
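
One simple way to quantify how closely a category’s pricing tracks a shared input like steel is a correlation over their price indices. A hedged sketch, with invented index data and category names:

```python
def pearson(xs, ys):
    """Pearson correlation between two equal-length price index series."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    var_x = sum((x - mx) ** 2 for x in xs)
    var_y = sum((y - my) ** 2 for y in ys)
    return cov / (var_x * var_y) ** 0.5

# Invented quarterly price indices
steel_index   = [100.0, 105.0, 112.0, 118.0]
fasteners     = [200.0, 211.0, 224.0, 237.0]  # steel-heavy category
office_chairs = [50.0, 49.0, 51.0, 50.0]      # little steel exposure
```

A category that correlates strongly with the input index (like the fasteners series here) is one where input-driven price shifts should be expected, and questioned, in supplier quotes.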

Finally, in the TCO phase, the organization will work hard to identify all the other direct and consequential indirect costs associated with the acquisition. Taxes (and whether or not they are reclaimable and the costs of reclamation if they are), import/export duties, intermittent storage fees, transportation fees, typical loss fees (due to spoilage, waste from mandatory tests, etc.), etc. will be identified and factored in as direct costs. In addition, potential indirect costs such as additional testing, expected loss during local transport, alteration costs for implementation, loss of co-marketing support, etc. will be factored in.
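
The direct-plus-indirect roll-up described above can be sketched as a simple calculation; the cost names and rates below are illustrative assumptions, not a standard TCO model:

```python
def total_cost_of_ownership(unit_price, qty, direct_rates, indirect_costs):
    """Base spend, plus direct costs expressed as fractions of base spend,
    plus flat indirect costs."""
    base = unit_price * qty
    direct = sum(base * rate for rate in direct_rates.values())
    return base + direct + sum(indirect_costs.values())

# Invented figures for a $10/unit, 10,000-unit acquisition
direct_rates = {
    "non_reclaimable_tax": 0.05,
    "import_duties": 0.03,
    "freight_and_storage": 0.04,
    "typical_loss": 0.02,
}
indirect_costs = {
    "additional_testing": 5000.0,
    "implementation_alterations": 12000.0,
}
```

Here a $100,000 purchase price becomes a $131,000 acquisition, which is exactly why negotiating on unit price alone is a rookie mistake.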

This sounds largely human driven, but, as we’ve discussed during previous steps, sometimes what sounds human driven isn’t. But this is a subject we will explore in Part XI!

Advanced Sourcing Will Not Disappear If You Figure It Out!

There is a theory which states that if ever anyone discovers exactly what the Universe is for and why it is here, it will instantly disappear and be replaced by something even more bizarre and inexplicable. There is another theory which states that this has already happened.
Douglas Adams

Regardless of whether this theory is correct or not, advanced sourcing is not the universe … it’s not even the universe of enterprise applications (although it’s becoming a pretty significant part). As a result, your organization should not be scared to acquire, learn, and master it. However, given the continuing rather low uptake of strategic sourcing decision optimization and advanced hybrid spend analysis (that uses machine learning and embeds prescriptive analytics), one would think the average organization is quaking in its boots.

And the answer is not to wait until the application interfaces are simplified enough so that it’s just point-and-click to select a model, accept the default constraints, run the scenario, and accept the result. Just like a top n report in spend analysis will only identify a savings opportunity once, a canned optimization scenario will only identify a significant savings once.

Nor is it the answer to wait until your preferred provider proffers a solution to you. These are solutions you should be seeking on your own, not when your provider brings them to you because every day you wait is a day another opportunity passes you by. And with pressures mounting to generate value, how many opportunities can you afford to miss? None.

So don’t wait. Figure it out. It won’t go away. It won’t change instantaneously when you do. And you won’t have to learn it twice. So just do it.

Walmart: Still Running on a 56.6K Modem …

Walmart recently released a statement that it plans to use employees to do home deliveries, presumably to fulfill online orders, as recently reported in The Washington Post. the doctor couldn’t believe it at first … convinced it was an article from The Onion misposted on a real news site, but apparently it’s real.

Overlooking all the things that could go terribly wrong with this, and all of the new legal liabilities this could cause them to incur (which would give your average risk manager and Chief Counsel nightmares for months), this makes absolutely no sense from a supply chain perspective where the name of the game is cost control (unless, of course, Walmart is looking for a way to actually lose money as a tax avoidance scheme).

There’s a reason even Amazon uses third-party carriers for its Prime service, and the reason is that, as stated by the article, last-mile logistics are costly. Very costly. And they can only be minimized by maximizing the number of packages delivered per hour by a driver. An employee who can only deliver a few packages due to space limitations in their car can’t maximize deliveries compared to a FedEx or UPS driver with a van built to maximize the number of packages that can be carried at one time, making deliveries determined by route optimization software that minimizes the delivery radius of all assigned packages and the total delivery time (and that eliminates left turns and backed-up routes).

Now, maybe Walmart is thinking that it can introduce a new kind of package assignment algorithm that minimizes the distance from an employee’s home route, and then just pay that employee for the additional distance and time required (using Google Maps calculations, etc.), but you still have the problem that the closest employee(s) may not be working that day, may not be able to do deliveries that day, or may not be able to fit the packages in their vehicle. Most of the time the software will have to re-assign, and re-assign again, until a viable sub-optimal match is found. At the end of the day, the cost would be more than just having a full-time driver deliver everything according to route optimization software, at a cost that is still more than negotiating a good volume-based outsourcing agreement with the dominant local carriers, who can increase the delivery density even more.
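
The re-assignment problem described above can be sketched as a greedy fallback loop; this is a deliberately naive illustration (invented employees, detour costs, and capacities), not Walmart’s actual algorithm:

```python
def assign_packages(packages, employees, carrier_cost):
    """Greedily give each package to the cheapest available employee with
    remaining vehicle capacity; otherwise fall back to the third-party carrier."""
    assignments = {}
    for pkg, detour_costs in packages.items():
        best, best_cost = "carrier", carrier_cost
        for emp, cost in detour_costs.items():
            e = employees.get(emp)
            if e and e["working"] and e["capacity"] > 0 and cost < best_cost:
                best, best_cost = emp, cost
        if best != "carrier":
            employees[best]["capacity"] -= 1
        assignments[pkg] = best
    return assignments

# Invented example: one working employee with room for one package,
# one employee off shift, and a carrier charging a flat $5 per package.
packages = {
    "p1": {"alice": 3.0, "bob": 6.0},  # detour cost per candidate employee
    "p2": {"alice": 4.0},
    "p3": {"bob": 2.0},
}
employees = {
    "alice": {"working": True,  "capacity": 1},
    "bob":   {"working": False, "capacity": 5},
}
```

Notice how quickly the fallback dominates: with one absent employee and one full trunk, two of three packages end up with the carrier anyway.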

The reality is that just because something sounds good (as in 90% of all customers live within 10 miles, where most employees are also located), does not mean it is good, and that’s why you need to perform analytics and optimization before embarking on major initiatives such as this. Because even if Walmart could get near-optimal assignments, it still needs volume. As long as it takes 3 times as long to do anything on their site as it does on Amazon (and that is definitely true in Canada, where the outsourced development organization prefers to benchmark against sites for other real-world retailers and not against Amazon from an online retail perspective), and as long as they continue to ship 6 (light) items on the same order across 5 boxes, their online volume growth is not going to be fast enough to make this idea anywhere near as efficient as they hope in the next few years. This is one case where the doctor hopes their trials flop, they see the error of their ways, and they go back to investing in more hybrid vehicles, more efficient warehouses and inventory management methods, and other initiatives guaranteed to increase efficiency and sustainability.

Supply Management Technical Difficulty … Part III

A lot of vendors will tell you that a lot of what they do is so hard, and took thousands of hours of development, and that no one else could do it as well, as fast, or as flexibly, when the reality is that much of what they do is easy, mostly available in open source, and can be replicated in modern Business Process Management (BPM) configuration toolkits in a matter of weeks.

So, to help you understand what’s truly hard and, in the spend master’s words, so easy a high school student with an Access database could do it, the doctor is going to bust out his technical chops that include a PhD in computer science (with deep expertise in algorithms, data structures, databases, big data, computational geometry, and optimization), experience in research / architect / technology officer industry roles, and cross-platform experience across pretty much all of the major OSs and implementation languages of choice. Having covered basic sourcing and basic procurement it’s time to move on to Supplier Management.

But first, what is Supplier Management? Supplier Management, depending on the vendor, is defined as the provision of Supplier Information Management, Supplier Performance Management, and/or Supplier Relationship Management. The question is, does any of these areas contain any technical difficulty?


Supplier Information Management

Technical Challenge: NONE

Let’s face it, supplier information management is just data in, data out. Collect the data, push it in the database, run a report, pull it out. It’s just a database with a pre-defined schema and some fancy, optimized, UI for getting the right data to push in and pull out.
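
A minimal sketch of that “data in, data out” pattern, with an invented three-field schema, shows just how little is going on technically:

```python
class SupplierStore:
    """Minimal supplier information management: schema check, upsert, lookup, report."""
    SCHEMA = {"name", "duns", "country"}  # invented illustrative schema

    def __init__(self):
        self._rows = {}

    def upsert(self, supplier_id, record):
        missing = self.SCHEMA - record.keys()
        if missing:
            raise ValueError(f"missing fields: {sorted(missing)}")
        self._rows[supplier_id] = dict(record)

    def get(self, supplier_id):
        return self._rows.get(supplier_id)

    def report_by_country(self, country):
        return [sid for sid, r in self._rows.items() if r["country"] == country]
```

Swap the dict for a relational database, add a polished UI, and you have the core of most SIM products on the market.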


Supplier Performance Management

Technical Challenge: NONE

Supplier performance management is two part — performance tracking, done with software, and performance improvement initiatives, identified and managed by humans. The latter can be complex, but since this series is focussed on technical complexity, we will ignore this aspect. As for performance tracking, this is just tracking computed metrics over time. Essentially information management, but focussed on collected performance data and metrics.
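
As an illustration of how simple tracked metrics really are, here is a sketch that computes an on-time delivery rate per period from invented delivery records:

```python
def on_time_rate(deliveries):
    """Per-period fraction of deliveries received on or before the promised day."""
    tallies = {}
    for d in deliveries:
        hit, total = tallies.get(d["period"], (0, 0))
        tallies[d["period"]] = (hit + (d["received"] <= d["promised"]), total + 1)
    return {p: hit / total for p, (hit, total) in tallies.items()}

# Invented delivery records; dates simplified to day numbers.
deliveries = [
    {"period": "Q1", "promised": 10, "received": 9},
    {"period": "Q1", "promised": 15, "received": 18},
    {"period": "Q2", "promised": 5,  "received": 5},
]
```

Quality, responsiveness, and invoice-accuracy metrics are computed the same way: collect, group, divide.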


Supplier Relationship Management

Technical Challenge: NONE

Supplier relationship management is all about managing the relationship. It’s usually done with collaboration (and collaboration software is not technically challenging), development management (lean, six sigma, and other programs), and innovation management (goal definition, initiative tracking, and workflow). All human challenges, not technical challenges.


But does this mean there are no challenges? That depends on whether you are using old definitions or new definitions. A new definition goes beyond the basics and looks to software to guide the future of Supplier Management. And that’s where the challenges come in.

Technical Challenge: Predictive Analytics

Inventory levels, sales, and costs are relatively easy to predict with high accuracy with enough data using a suite of trend algorithms. They’re not always right, but they’re right more often than human “gut” (unless you happen to have a true expert who’s top of her league and been doing it for 20 years, and those are very rare) and that’s all we can expect.
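
One of the simplest members of such a suite of trend algorithms is single exponential smoothing; a minimal sketch with an invented demand history:

```python
def exp_smooth_forecast(series, alpha=0.5):
    """Single exponential smoothing: the final smoothed level is the
    next-period forecast. alpha weights recent observations more heavily."""
    level = series[0]
    for x in series[1:]:
        level = alpha * x + (1 - alpha) * level
    return level

# Invented monthly demand history
demand = [100.0, 120.0, 110.0, 130.0]
```

With enough history, even a method this simple will beat an untrained gut, and a full suite would pit it against trend-adjusted and seasonal variants and keep whichever minimizes error.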

But predicting a market trend is different than predicting supplier performance as performance shifts can result from a variety of factors that include, but aren’t limited to, worker problems (such as union strikes), financial problems (which can happen overnight as the result of a massive launch failure, loss, etc.), raw material shortages (as the result of a mine failure, etc.) and so on.

Thus, predicting future performance requires not only tracking performance, but also external market indicators of a financial, regulatory, and incident nature. The latter is particularly tricky as incidents are the result of events that can often only be detected by monitoring news feeds and applying semantic algorithms to the data to identify incidents that can affect future performance. Then, all of this data needs to be integrated to paint a picture that can more accurately predict performance than the predictions made from just monitoring internal data sources.
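
A genuinely semantic pipeline is the hard part, but the first naive approximation (a keyword-to-category mapping over headlines) can be sketched in a few lines; the term list is invented for illustration:

```python
# Invented term-to-category map; a real system would use NLP, not substrings.
INCIDENT_TERMS = {
    "strike": "labour",
    "walkout": "labour",
    "bankruptcy": "financial",
    "default": "financial",
    "recall": "quality",
    "shortage": "supply",
}

def flag_incidents(headline):
    """Naive first pass at incident detection: substring match on a headline."""
    text = headline.lower()
    return sorted({cat for term, cat in INCIDENT_TERMS.items() if term in text})
```

The gap between this sketch and a production system that handles negation, entity resolution, and relevance ranking is exactly where the real technical difficulty lives.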

In other words, if all you are being sold is a data collection and monitoring tool, it’s not particularly challenging to build (and a business process management / workflow configurator tool could probably be used to build a prototype with your custom requirements in a week), but if it’s a true, modern, performance management solution with integrated predictive analytics to help you identify those relationships at risk, that’s a completely different story.

Next Up: Analytics!

What Makes a Good UX? Part I

Now that we’ve sung Bye, Bye to Monochrome UIs, it’s time to address what makes a good UI, which is the foundation of a good User Experience (UX). This is a question the doctor has been working on for over a year as he has done deep dive vendor reviews (co-authored with the prophet over on Spend Matters Pro, membership required) that go deeper than any analyst or blogger reviews that have ever been done.

This is because each of those reviews has included a section on UI where the UI was rated on a number of high-level factors, namely:

  • Overall Ease of Use
  • Ability to Learn and Use Without Training
  • Comparative Ease of Use
  • Sourcing/Procurement User Experience
  • Business User Experience
  • Planned Upgrades

which, while seemingly subjective, were all based on a comparison of the platform against other platforms that themselves were rated against a generalized baseline of what makes a “good” UI / UX for that type of platform. And while these have not yet been shared, as they have been in development, with the release of the first Spend Matters Solution Map (C) in P2P and the upcoming Solution Maps in Sourcing, the doctor and the prophet have decided to finalize their joint criteria for UX evaluation. To this end, we are co-authoring a series on Measuring the Procurement Technology User Experience: More Than Just a Pretty Screen (Part 1) where we will dive into the general and specific characteristics of what makes a good UX in Sourcing and Procurement.

And while the full deep dives will be on Spend Matters Pro, our view of the basics is something we intend to spread far and wide. So just like key aspects of Sourcing, SRM, and CLM functionality were covered on both Spend Matters and Sourcing Innovation, the core of what makes a good UX will be covered on both blogs as well (but if you want drill downs and examples, those will only be found in the deep dives on Pro).

We’ll start with the generics. A good UI brings integrated guidance that helps the user through each function it supports, without the user needing to be aware of policies, detailed user guides, or business-specific rules. The platform knows all those and guides the user through any minefields.

An even better UI leverages pattern recognition, machine learning, trend detection, and reasoning to adapt to the user and make the UX better and better over time. In short, the most effective UX not only makes a platform sticky, but makes the everyday user more productive over time — without extensive training or the need for deep knowledge on the part of the user.

And it goes beyond the obvious. For example:

  • An ideal UX doesn’t have to exist — it can be “touch-less” and automate anything that can be automated without user involvement
  • An ideal UX realizes context can be as important as content
  • Mobile is part of the platform and user experience where it makes sense
  • Messaging is used as a competitive advantage (but not necessarily in the way you think it might be)
  • And it incorporates guidance based on true expertise — what some are calling predictive analytics

In other words, it gives the user what the user really needs, not what the developer wants.

For a deeper dive on the features and capabilities of a good UX, see Measuring the Procurement Technology User Experience: More Than Just a Pretty Screen (Part 1) [membership required] and stay tuned for future entries in this series.