Monthly Archives: July 2017

The UX One Should Expect from Best-in-Class Spend Analysis … Part IV

As per our last post, in this series we are diving into spend analysis. Deep into spend analysis. So deep that we’re taking a vertical torpedo to the bottom of the abyss. And if you think this series has been insightful so far, wait until we take you to the bottom. By the end of it, there will be more than a handful of vendors shaking and quaking in their boots when they realize just how far they have to go if they want to deliver on each and every promise of next generation opportunity identification they’ve been selling you on for years.

We’re giving you this series so that you can use it to make sure they deliver. Because, as we have repeatedly pointed out, you only have two technologies at your disposal to achieve year-over-year savings of 10% or more. Optimization (covered in our last four-part series, see Part I, Part II, Part III, and Part IV), which can capture the value, and spend analytics, which can identify the value.

But, as we will keep repeating, it has to be true spend analytics that goes well beyond the standard Top N report templates to allow a user to cube, slice, dice, and re-cube quickly and efficiently in meaningful ways and then visualize that data in a manner that allows the potential opportunities, or lack thereof, to be almost instantly identified.
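To make "cube, slice, dice, and re-cube" concrete, here is a minimal sketch in pandas (hypothetical data and column names, not any particular vendor's tool): the spend cube is just a re-pivotable aggregation, and a slice is a filtered drill-down.

```python
import pandas as pd

# Hypothetical transaction-level spend data
spend = pd.DataFrame({
    "category": ["IT", "IT", "MRO", "MRO", "Logistics", "Logistics"],
    "supplier": ["Acme", "Beta", "Acme", "Gamma", "Beta", "Gamma"],
    "region":   ["NA",  "EU",  "NA",  "EU",  "NA",   "EU"],
    "amount":   [120_000, 80_000, 45_000, 30_000, 60_000, 25_000],
})

# "Cube" spend by category x supplier, then re-cube by region in one line each
by_supplier = spend.pivot_table(index="category", columns="supplier",
                                values="amount", aggfunc="sum", fill_value=0)
by_region = spend.pivot_table(index="category", columns="region",
                              values="amount", aggfunc="sum", fill_value=0)

# "Slice": drill into a single category's suppliers
it_slice = spend[spend["category"] == "IT"].groupby("supplier")["amount"].sum()
print(it_slice.to_dict())  # {'Acme': 120000, 'Beta': 80000}
```

The point of the sketch is the turnaround: re-cubing on a new dimension is one line, not a new report request to IT.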

But, as per our last two posts, this requires truly extreme usability. Since not everyone has an advanced computer science or quantitative analysis degree, not everyone can use the first generation tools. This means that, in organizations without highly trained analysts, the first generation tools would sit on the shelf, unused. And that is not how value is found.

However, creating the right UX is not easy. That’s why it takes a five part series just to outline the core requirements (and when we say core, we mean core — there are a lot more requirements to master to deliver the whole enchilada). But it’s needed because we are in a time where there seems to be a near universal playbook for spend analysis solution providers when it comes to positioning the capability they deliver and when many vendors sound interchangeable when, in fact, they are not.

In each part of the series to date (What To Expect from Best-in-Class Spend Analysis Technology and User Design Part I, Part II, and Part III), over on Spend Matters Pro [membership required], the doctor and the prophet have explored three to four key requirements of a best-in-class spend analytics system that are essential for a good user experience. Here on SI, we’ve covered three of these to whet your appetite for the knowledge that is being kept from you.

In The UX One Should Expect from Best-in-Class Spend Analysis … Part I we discussed the need for real, true, dynamic dashboards. Unlike the first generation dashboards that were dangerous, dysfunctional, and sometimes even deadly to the business, true next generation dynamic dashboards are actually useful and even beneficial. Their ability to provide quick entry points through integrated drill down to key, potentially problematic, data sets can make sharing and exploring data faster, and the customization capabilities that allow buyers to continually eliminate those green lights that lull one into a false sense of security is one of the keys to true analytics success.

In The UX One Should Expect from Best-in-Class Spend Analysis, Part II, we pointed out that one cube will NEVER be enough. NEVER, NEVER, NEVER! And that’s why procurement users need the ability to create as many cubes as necessary, on the fly, in real time. This is required to test any and every hypothesis until the user gets to the one that yields the value generation gold mine. Unless every hypothesis can be tested, it is likely that the best opportunity will never be identified. If we knew where the biggest opportunity was, we’d source it. But the best opportunities are, by definition, hidden, and we don’t know where. Success requires cubes, cubes, and more cubes with views, views, and more views. But this is just the foundation.

Then, in The UX One Should Expect from Best-in-Class Spend Analysis, Part III, we indicated that success requires appropriately classified and categorized data. But good data categorization is not always easy, especially for the average user. That’s why the third key requirement is real-time idiot-proof data categorization, which, while a mouthful, is a lot easier said than done. (For details, check out the articles.)

But, as you’ve probably guessed by now, more is required. Much more. In What To Expect from Best-in-Class Spend Analysis Technology and User Design (Part IV) over on Spend Matters Pro [membership required], the doctor and the prophet dive deep into a couple of additional key requirements for a best-in-class spend analytics solution. And, like the previous requirements, these are intensive. Quite intensive.

The one we are focusing on today is support for descriptive, predictive, and prescriptive analytics. First generation solutions stopped at descriptive. They simply reported on what happened in the past, and stopped there. And usually the description of the past was so far behind that the reports were not always that useful. So next generation solutions moved on to predictive, computing trends that take into account historical sales data and current market data to describe opportunities so that, even if the data was a bit outdated, at least the analyst had a good idea of direction.

And as platforms got faster, more powerful, and more real-time, the predictive power got better, and more useful. And organizations realized more value … but not nearly what they should realize. Because it’s not always enough to know that there may be an opportunity; to realize that opportunity, one needs an idea of how to capture it. And if one’s not a category or market expert, one can be completely lost. But if the system supports prescriptive analytics, then the analyst has an idea where to start. And that is key to a great user experience.
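The three tiers can be caricatured in a few lines (illustrative numbers only, with a naive linear trend standing in for a real forecasting model, and a toy rule standing in for a real prescriptive engine):

```python
# Descriptive: what happened — historical unit prices for a category
history = [10.0, 10.4, 10.9, 11.5]           # last four quarters
avg_price = sum(history) / len(history)       # a simple descriptive statistic

# Predictive: what is likely — naive linear trend as a stand-in for a real model
trend = (history[-1] - history[0]) / (len(history) - 1)
forecast = history[-1] + trend                # next-quarter estimate: 12.0

# Prescriptive: what to do about it — a toy rule-based recommendation
if forecast > history[-1]:
    action = "lock in a longer-term contract before the next increase"
else:
    action = "stay on spot pricing and re-bid next quarter"
```

Descriptive tells you prices rose; predictive tells you they will likely keep rising; prescriptive tells the non-expert analyst where to start.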

But is that everything the system needs for a great user experience? Nope. And we’ll continue our overview in the next, and final, part of this initial series. (We’ve written the first few chapters, but believe us when we say the book has not been written yet.)

The University is Still Here Because …

A couple of years ago TechCrunch wrote an article that asked Why Is the University Still Here? It’s a fair question in a time where information is universally accessible, knowledge can be compiled by experts and shared in a reviewed and verified form far and wide, and intelligence can be conveyed directly from an expert in Oxford (England) to an able learner in Liberal (Kansas) if both are ready, willing, and able, thanks to virtual classrooms with audio-visual conferencing and screen sharing.

Then, earlier this decade, we saw the launch of massive open online courses (MOOCs) where anyone can register for a course from a leading professor, get the lectures, complete assignments, send them to TAs (teaching assistants) half a world away, get graded (automatically for multiple choice and by a human for essay or problem solving questions), and work towards what is supposed to be the equivalent of a University degree. But is it?

First of all, universities, even with remote learning aspects, have always been based on classroom learning. Secondly, advanced programs have always been based on one-on-one instruction between teacher and student. Thirdly, they have always been based on carefully structured curriculums that are designed to ensure a student gets an appropriate depth and breadth of knowledge. Fourth, the testing is always done in a manner that makes cheating or plagiarism difficult.

MOOCs are the antithesis of the University. They are trying to abolish classrooms. There is no personal one-on-one instruction between a recorded lecture and a semi-engaged viewer. The student can design their own haphazard curriculum that ensures neither depth nor breadth in the appropriate subject matter. And anyone can submit a document created by anyone else, and there is no way to know.

But the failure of MOOCs to displace universities is not, by itself, an argument for the continued existence of universities. Just because X does not displace Y, that doesn’t mean that Y is superior. It just means that the masses do not believe that X is superior. And that alone is not enough of a case for universities.

To make the case, we look at where MOOCs failed. As per the TechCrunch article, they failed in keeping a user’s interest. Most people who registered for, and even started, a course never completed it. Most who completed one didn’t come back. They weren’t motivated. The reasoning in the article is that because, for the majority of learners, it was part time, on their own time, it never got primacy, and without primacy, efforts get abandoned.

And that’s part of the reason MOOCs failed and part of the reason we still need Universities. When you go to University, you make education a primary focus of your life. But the other reason is that a real, established, prestigious University provides something no other form of education can — a well-rounded, full-featured educational experience with primacy, one-on-one instruction from an expert, great curriculums, and, most important, a community to share the experience with. This last aspect is key: you are part of a dedicated group of people there to learn, share the experience of learning, and better each other in the process. And while that group shrinks a bit over the years, by the end you have your own support group, and possibly a few colleagues for life, who got you there and will take you further. That’s something you’ll never get from a MOOC.

And that’s why Universities still exist and need to continue to exist.

We Need BlockChain, But Not for the Reasons You Think.

The biggest use for blockchain right now is to support digital currency, namely bitcoin, and the secure trade of that currency. And since it has the potential to revolutionize e-payments, everyone is talking about it. But let’s face it: your employees don’t take bitcoin, your suppliers probably don’t take bitcoin, and your customers aren’t paying in bitcoin. Most of your employees want direct deposit, your contractors want checks, and your suppliers probably want ACH. Bitcoin and blockchain are the furthest things from their minds and, thus, the furthest things from yours.

But there is one great use for blockchain, and that is, simply put, the secure transfer of IOUs. What do we mean by this? About a year ago we penned a post that asked With Currencies Crazy, Is It Time to Return to Barter? In this post we asked: what if there was no exchange of currency? What if it was an exchange of a raw material or service for another raw material or service, where each raw material or service came from the organization or a partner in the same country? Since the value of a product or service, adjusted for inflation, is relatively constant over time, and since the relative value of one versus another is also relatively constant over time, such a contract would not be subject to rapid changes in value differences regardless of what happened in the currency markets.

Now imagine if, instead of trading raw materials, you could trade IOUs and send them up and down the supply chain until all of the differences could be settled within a country. You wouldn’t need to exchange raw materials with a company you might not want to deal with and, more importantly, you definitely wouldn’t need to deal in non-native currencies. You could just settle those IOUs with in-country, in-currency bank transfers, clear out the IOUs, and all would be settled.

Up until now, there has been no way to securely trade those IOUs. You had to trade payments through banks. But now, with the advent of blockchain, you can trade those IOUs simply by creating an IOU cryptocurrency specifically for keeping track of all the barters. And, if you’re not sure how to optimize the trading of IOUs, we gave you a great idea on how to do that in our post on With Currencies Crazy, Is It Time to Return to Barter: you build a special, shared, supply chain optimization model that allows all participating entities to upload their data and opt in to in-currency barter optimization, then trade the IOUs through the new cryptocurrency, and only the final imbalances in each country need to be paid. It’s the future …
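The settlement idea — trade IOUs along the chain and pay only the final in-country imbalances — is essentially multilateral netting. A minimal sketch with hypothetical parties and amounts (the blockchain layer would just make this shared IOU ledger tamper-evident and tradeable):

```python
from collections import defaultdict

# Hypothetical IOU ledger: (debtor, creditor, amount), all in one country/currency
ious = [
    ("A", "B", 100),  # A owes B 100
    ("B", "C", 80),
    ("C", "A", 70),
]

# Net position per party: positive = is owed money, negative = owes money
net = defaultdict(int)
for debtor, creditor, amount in ious:
    net[debtor] -= amount
    net[creditor] += amount

# Only the imbalances need an actual bank transfer
settlements = {party: bal for party, bal in net.items() if bal != 0}
gross = sum(amount for _, _, amount in ious)

print(settlements)  # {'A': -30, 'B': 20, 'C': 10}
print(gross)        # 250 owed gross, but only 30 actually has to move
```

Instead of 250 in gross payments (likely crossing currencies), one party pays out 30 in its own currency and the ledger clears.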

Is WalMart Going to Force Logistics Scheduling Optimization Mainstream?

Recently, Spend Matters pointed out that Retail Mega-Giant Wal-Mart is stepping up its pressure on suppliers to get fulfillment perfect or pay a fine. According to Bloomberg, the goal is to add $1 Billion to revenue by improving (desired) product availability at stores (as the average stock-out rate of 8% costs a mega-retailer like Wal-Mart an awful lot of money).

But it’s not just stock-outs costing Walmart money. It’s deliveries that don’t happen when they are expected to happen. If a delivery arrives late, then warehouse workers have to stay overtime to get the truck unloaded, and that costs Walmart at least time and a half for every hour the workers have to stay late (plus any hours they had to be paid to wait around, probably doing nothing, for the delivery). If a delivery arrives (a day) early, then regularly scheduled deliveries have to be pushed ahead, possibly contributing to overtime and payment for empty hours (when workers show up for their shift and there is no work to be done for two hours).

And if trucks are waiting in winter, the drivers are not only being paid to sit and wait, but are probably also idling their trucks to keep warm, burning fuel and bumping up costs. So the supplier is paying more to deliver, and passing that cost on to Walmart. When you think of how many early and late deliveries a mega-retailer like Wal-Mart must get, and you add up all the OT costs, the empty hour costs for warehouse workers and drivers, and the additional fuel costs, that’s a lot of money even before you take into account the potential losses from stock-outs.
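To put rough numbers on that intuition (all figures hypothetical), the cost of a single three-hour-late delivery adds up quickly:

```python
# Hypothetical cost of one delivery arriving three hours late
base_wage = 18.00          # warehouse worker hourly wage, assumed
crew_size = 4              # workers held back to unload the truck
hours_late = 3

waiting_cost = crew_size * base_wage * hours_late          # paid to wait around
overtime_cost = crew_size * base_wage * 1.5 * hours_late   # time-and-a-half to unload
idling_fuel = 0.8 * 3.50 * hours_late                      # gal/hr x $/gal, truck idling

total = waiting_cost + overtime_cost + idling_fuel
print(round(total, 2))  # roughly 548.4 for one late truck
```

Multiply a few hundred dollars per incident by the number of imperfect deliveries a mega-retailer receives per day and the fines start to look like the cheaper option.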

Bravo to Wal-Mart for trying to force more perfection into the supply chain and eliminate the considerable losses that come from imperfect orders. But how will the average supplier and/or carrier comply? Logistics scheduling can be a nightmare, and way too much for the average scheduler, or spreadsheet, to handle. But as we’ve indicated before, it’s not too much for an appropriately defined optimization solution. It’s about time optimization got more respect, even if it starts with scheduling.
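As a taste of what even the simplest scheduling optimization buys (hypothetical trucks and unload times; real dock scheduling juggles multiple docks, shifts, arrival windows, and penalty clauses), sequencing trucks shortest-unload-first at a single dock provably minimizes total waiting time:

```python
# Hypothetical: one dock, sequence trucks to minimize total waiting time.
# Shortest-processing-time-first is optimal for this single-dock objective.
trucks = [("T1", 90), ("T2", 30), ("T3", 60)]  # (truck id, unload minutes)

order = sorted(trucks, key=lambda t: t[1])     # shortest unload first
clock, total_wait = 0, 0
schedule = []
for truck_id, minutes in order:
    schedule.append((truck_id, clock))         # start time at the dock
    total_wait += clock                        # minutes this truck sat waiting
    clock += minutes

print(schedule, total_wait)  # [('T2', 0), ('T3', 30), ('T1', 90)] 120
```

Unloading in arrival order (T1 first) would cost 240 waiting minutes instead of 120; a spreadsheet won't tell you that, but even a greedy rule does, and a real solver does it across docks and days.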

And while optimization needs to be more universally applied, once a supplier or carrier gets comfortable with scheduling optimization, they’ll get more comfortable with optimization in general and move onto the adoption of decision optimization for logistics, and that’s just one step away from the application of decision optimization to high value / strategic events. And that’s, hopefully, only one step away from the universal application of optimization across all sourcing events.

So while this isn’t the most critical application of optimization for an average organization, it’s a great start and bravo to Wal-Mart for forcing suppliers and carriers to perform better in a manner that should force the eventual adoption of optimization.

And if you don’t like it, get over it. And if you don’t like Wal-Mart, remember, their dominance is all your fault.

Do You Have Too Many Suppliers?

Maybe. But maybe you should also be asking Do You Have Too Few? Many organizations assume that just because they have 20K, 30K, 50K, or even 100K suppliers, they have too many. And while that’s probably the case, the question is much more complicated than that. First of all, just because a supplier is in your system, that does not mean that the supplier is still being used. Secondly, if you have 100 locations and always use local providers for janitorial, security, (bike) messenger, floral, etc., then you could have 1,000 providers for small services that cannot be consolidated due to business rules or just lack of suppliers. As a result, the sheer number of suppliers alone does not mean there is a problem — at least not a serious one.

In addition, for some categories you want multiple suppliers. If the product is critical, if one supplier cannot (always) meet all the needs, if even minor disruptions in supply could be costly, and so on, you need multiple suppliers, sometimes more than the minimum number. Risk Management might believe two suppliers is enough, but if one goes out of business, how long will it take to find a replacement and start receiving viable products and services? If you have a third supplier, even one providing minimal amounts of the products or services, it’s a lot easier to shift demand to that supplier in an emergency. So sometimes extra suppliers are good.

Plus, the ultimate goal of (category) sourcing is to receive the best value — typically defined as the lowest cost award that meets the organizational need. Sometimes the best value will come from assigning all of the award to a single supplier, other times it will require splitting the award between six suppliers — depending on product costs, shipping costs, import/export tariffs, and so on. So, supplier count alone is not a good metric.
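A toy version of that award decision (hypothetical unit costs and capacities; a real sourcing model would be a full optimization with shipping costs, tariffs, and business rules) shows how the "right" number of suppliers falls out of the data rather than being fixed in advance:

```python
# Hypothetical: allocate 1,000 units at lowest landed cost, respecting capacity
suppliers = [
    {"name": "S1", "unit_cost": 9.50,  "capacity": 600},
    {"name": "S2", "unit_cost": 9.80,  "capacity": 700},
    {"name": "S3", "unit_cost": 10.40, "capacity": 1000},
]
demand = 1000

# Greedy fill by landed cost — a stand-in for a real mixed-integer award model
award = {}
for s in sorted(suppliers, key=lambda s: s["unit_cost"]):
    take = min(demand, s["capacity"])
    if take:
        award[s["name"]] = take
        demand -= take

print(award)  # {'S1': 600, 'S2': 400} — two suppliers, dictated by the numbers
```

With different costs or capacities the same logic awards to one supplier, or three; the supplier count is an output of the optimization, never an input.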

As you can see, if you want a truly optimized award across a category, sometimes the organization will have too few suppliers. The right number of suppliers is the number that the organization ends up with after every category is optimally allocated across both the strategic spend and the tail spend. While it will usually be less than the number of (active) suppliers in the supplier database (as most organizations that do not do sourcing across all categories will end up buying from more suppliers than they need to), it won’t always be significantly less. You can’t always cut your supply base in half just because you think you have twice as many suppliers as you need. You properly source each category, and when all is said and done, the suppliers you have selected represent the proper pool size. Any remaining suppliers that aren’t absolutely essential for a non-sourced product or service get cut, and then you have a properly sized supply base, as it was properly designed. 10K vs 20K vs 50K is irrelevant. Only so much value comes from consolidation alone. Remember that.

And that’s why, in his response to Sydney’s questions on What’s the Cost of Having a Long Supply Tail, and How Do You Determine the ‘Right’ Supply Base, the doctor noted that the size of the supply base is totally irrelevant. The right size is the size that gives you the most value for every category you source. That will vary by company and there is no fixed size, or even formula, to compute it.

And, as the doctor noted on Twitter, your concern should not be how long the tail is, but how many rats are in the supply chain. Those are the only parties you should be in a rush to stomp out.