The University is Still Here Because …

A couple of years ago, TechCrunch published an article that asked Why is the University Still Here? It's a fair question in a time when information is universally accessible, when knowledge can be compiled by experts and shared in a reviewed and verified form far and wide, and when instruction can be conveyed directly from an expert in Oxford (England) to an able learner in Liberal (Kansas), provided both are ready, willing, and able, thanks to virtual classrooms with audio-visual conferencing and screen sharing.

Then, earlier this decade, we saw the launch of massive open online courses (MOOCs) where anyone can register for a course from a leading professor, get the lectures, complete assignments, send them to TAs (teaching assistants) half a world away, get graded (automatically for multiple choice and by a human for essay or problem solving questions), and work towards what is supposed to be the equivalent of a University degree. But is it?

First of all, universities, even with remote learning aspects, have always been based on classroom learning. Secondly, advanced programs have always been based on one-on-one instruction between teacher and student. Thirdly, they have always been based on carefully structured curriculums designed to ensure a student gets an appropriate depth and breadth of knowledge. Fourthly, testing has always been done in a manner that makes cheating or plagiarism difficult.

MOOCs are the antithesis of the University. They are trying to abolish classrooms. There is no personal one-on-one instruction between a recorded lecture and a semi-engaged viewer. The student can design their own haphazard curriculum that ensures neither depth nor breadth in the appropriate subject matter. And anyone can submit a document created by anyone else, and there is no way to know.

But the failure of MOOCs to displace universities is not an argument for the continued existence of universities. Just because X does not displace Y, that doesn't mean that Y is superior. It just means that the masses do not believe that X is superior. And that, on its own, is not enough of a case for universities.

To make the case, we look at where MOOCs failed. As per the TechCrunch article, they failed at keeping a user's interest. Most people who registered for, and even started, a course never completed it. Most who completed one didn't come back. They weren't motivated. The reasoning in the article is that because, for the majority of learners, it was part-time, on their own time, it never got primacy, and without primacy, efforts get abandoned.

And that's part of the reason MOOCs failed and part of the reason we still need Universities. When you go to University, you make education a primary focus of your life. But the other reason is that a real, established, prestigious University provides something no other form of education can — a well-rounded, full-featured educational experience with primacy, one-on-one instruction from an expert, great curriculums, and, most importantly, a community to share the experience with. This last aspect is key — you are part of a dedicated group of people who are there to learn, share the experience of learning, and better each other in the process. And while that group shrinks a bit over the years, by the end you have your own support group, and possibly a few colleagues for life, who got you there and will take you further. That's something you'll never get from a MOOC.

And that’s why Universities still exist and need to continue to exist.

We Need Blockchain, But Not for the Reasons You Think.

The biggest use for blockchain right now is to support digital currency, namely bitcoin, and the secure trade of that currency. And since it has the potential to revolutionize e-payments, everyone is talking about it. But let's face it: your employees don't take bitcoin, your suppliers probably don't take bitcoin, and your customers aren't paying in bitcoin. Most of your employees want direct deposit, your contractors want checks, and your suppliers probably want ACH. Bitcoin and blockchain are the furthest things from their minds and, thus, the furthest things from yours.

But there is one use for blockchain, and that is, simply put, the secure transfer of IOUs. What do we mean by this? About a year ago we penned a post that asked With Currencies Crazy, Is It Time to Return to Barter? In that post we asked: what if there was no exchange of currency? What if it was an exchange of a raw material or service for another raw material or service, where each raw material or service came from the organization or a partner in the same country? Since the value of a product or service, adjusted for inflation, is relatively constant over time, and since the relative value of one versus another is also relatively constant over time, such a contract would not be subject to rapid changes in value differences regardless of what happened in the currency markets.

Now imagine if, instead of trading raw materials, you could trade IOUs and send them up and down the supply chain until all of the differences could be settled within a country. You wouldn't need to exchange raw materials with a company you might not want to deal with, and, more importantly, you definitely wouldn't need to deal in non-native currencies. You could just settle those IOUs with in-country, in-currency bank transfers, clear out the IOUs, and all would be settled.

Up until now, there has been no way to securely trade those IOUs. You had to trade payments through banks. But now, with the advent of blockchain, you can trade those IOUs simply by creating an IOU cryptocurrency specifically for keeping track of all the barters. And, if you're not sure how to optimize the trading of IOUs, we gave you a great idea on how to do that in our post With Currencies Crazy, Is It Time to Return to Barter? You build a special, shared, supply chain optimization model that allows all participating entities to upload their data and opt in to in-currency barter optimization, then trade the IOUs through the new cryptocurrency, and only the final imbalances in each country need to be paid. It's the future …
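The netting idea at the heart of this can be sketched in a few lines. The example below is a minimal, hypothetical illustration (the party names and amounts are invented, not from any real system): each IOU shifts two balances, and once everything is netted, only each party's final imbalance needs an in-currency bank transfer.

```python
from collections import defaultdict

def net_ious(ious):
    """Compute each party's net position from a list of bilateral IOUs.

    ious: list of (debtor, creditor, amount) tuples, all in one currency.
    Returns a dict of net balances; a negative balance means the party
    still owes money after netting.
    """
    balance = defaultdict(float)
    for debtor, creditor, amount in ious:
        balance[debtor] -= amount   # the debtor's position goes down
        balance[creditor] += amount  # the creditor's position goes up
    return dict(balance)

# Three in-country partners trade IOUs up and down the chain ...
ious = [
    ("miller", "farmer", 100),   # miller owes farmer 100 for grain
    ("baker", "miller", 120),    # baker owes miller 120 for flour
    ("farmer", "baker", 90),     # farmer owes baker 90 for bread
]

# ... but only the net imbalances need to be settled in-currency.
balances = net_ious(ious)
```

Here 310 units of gross obligations collapse to three small net settlements, which is the whole point: the cryptocurrency only has to track the IOUs; the banks only see the final imbalances.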

Is WalMart Going to Force Logistics Scheduling Optimization Mainstream?

Recently, Spend Matters pointed out that Retail Mega-Giant Wal-Mart is stepping up its pressure on suppliers to get fulfillment perfect or pay a fine. According to Bloomberg, the goal is to add 1 Billion to revenue by improving (desired) product availability at stores (as the average stock-out rate of 8% costs a mega-retailer like Wal-Mart an awful lot of money).

But it’s not just stock-outs costing Walmart money. It’s deliveries that don’t happen when they are expected to happen. If a delivery arrives late, then warehouse workers have to stay overtime to get the truck unloaded, and that costs Walmart at least time and a half for every hour the workers have to stay late (plus any hours they had to be paid to wait around, probably doing nothing, for the delivery). If a delivery arrives (a day) early, then regularly scheduled deliveries have to be pushed ahead, possibly contributing to overtime and payment for empty hours (when workers show up for their shift and there is no work to be done for two hours).

And if trucks are waiting in winter, the drivers are not only being paid to sit and wait, but are probably also idling their trucks to keep warm, burning fuel and bumping up costs. So the supplier is paying more to deliver, and passing that cost on to Walmart. When you think of how many early and late deliveries a mega-retailer like Wal-Mart must get, and you add up all the overtime costs, the empty-hour costs for warehouse workers and drivers, and the additional fuel costs, that's a lot of money even before you take into account the potential losses from stock-outs.
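The arithmetic is simple but adds up fast. Here is a back-of-the-envelope sketch; every figure (crew size, wages, fuel burn) is a hypothetical placeholder, not a Walmart number:

```python
def late_delivery_cost(hours_late, workers, base_wage,
                       ot_multiplier=1.5, idle_fuel_per_hour=0.0):
    """Rough cost of one late delivery: overtime for the unloading crew
    plus any fuel burned by an idling truck. All inputs are illustrative."""
    overtime = hours_late * workers * base_wage * ot_multiplier
    fuel = hours_late * idle_fuel_per_hour
    return overtime + fuel

# A truck 3 hours late, a 4-person crew at $20/hour held on overtime,
# and a driver idling the truck at roughly $12/hour of diesel in winter:
cost = late_delivery_cost(3, 4, 20.0, idle_fuel_per_hour=12.0)
```

One truck, one evening, and you're already near $400; multiply by thousands of deliveries a week and the case for delivery precision makes itself.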

Bravo to Wal-Mart for trying to force more perfection into the supply chain and eliminate the considerable losses that come from imperfect orders. But how will the average supplier and/or carrier comply? Logistics scheduling can be a nightmare, and way too much for the average scheduler, or spreadsheet, to handle. But, as we've indicated before, it's not too much for an appropriately defined optimization solution. It's about time optimization got more respect, even if it starts with scheduling.
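To make "scheduling optimization" concrete, here is a toy sketch of the kind of problem such a solution handles: assigning trucks to dock slots to minimize total deviation from their preferred arrival times. The truck names, times, and brute-force search are all illustrative; a real solution would use a proper solver and far richer constraints:

```python
from itertools import permutations

def schedule_docks(trucks, slot_starts):
    """Assign each truck (name, preferred_arrival_hour) to one dock slot
    so total deviation from preferred times is minimized.

    Brute force over permutations, which only works at toy scale; the point
    is the model (trucks, slots, a deviation objective), not the search.
    """
    best = (float("inf"), None)
    for order in permutations(range(len(trucks))):
        deviation = sum(abs(trucks[i][1] - slot_starts[pos])
                        for pos, i in enumerate(order))
        if deviation < best[0]:
            best = (deviation, [trucks[i][0] for i in order])
    return best

trucks = [("T1", 9.0), ("T2", 8.0), ("T3", 10.5)]  # preferred arrival hours
slots = [8.0, 9.0, 10.0]                           # available dock slots
deviation, order = schedule_docks(trucks, slots)
```

Even this toy shows why spreadsheets break down: with dozens of trucks and docks the number of assignments explodes, which is exactly where real optimization engines earn their keep.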

And while optimization needs to be more universally applied, once a supplier or carrier gets comfortable with scheduling optimization, they’ll get more comfortable with optimization in general and move onto the adoption of decision optimization for logistics, and that’s just one step away from the application of decision optimization to high value / strategic events. And that’s, hopefully, only one step away from the universal application of optimization across all sourcing events.

So while this isn’t the most critical application of optimization for an average organization, it’s a great start and bravo to Wal-Mart for forcing suppliers and carriers to perform better in a manner that should force the eventual adoption of optimization.

And if you don’t like it, get over it. And if you don’t like Wal-Mart, remember, their dominance is all your fault.

Do You Have Too Many Suppliers?

Maybe. But maybe you should also be asking Do You Have Too Few? Many organizations assume that just because they have 20K, 30K, 50K, or even 100K suppliers, they have too many. And while that's probably the case, the question is much more complicated than that. First of all, just because a supplier is in your system does not mean that the supplier is still being used. Secondly, if you have 100 locations and always use local providers for janitorial, security, (bike) messenger, floral, etc., then you could have 1,000 providers for small services that cannot be consolidated due to business rules or a simple lack of suppliers. As a result, the sheer number of suppliers alone does not mean there is a problem — at least not a serious one.

Thirdly, for some categories you want multiple suppliers. If the product is critical, if one supplier cannot (always) meet all the needs, if even minor disruptions in supply could be costly, and so on, you need multiple suppliers — sometimes more than the minimum number. Risk management might believe two suppliers are enough, but if one goes out of business, how long will it take to find a replacement and start receiving viable products and services? If you have a third supplier, even one providing minimal amounts of the products or services, it's a lot easier to shift demand to that supplier in an emergency. So sometimes extra suppliers are good.

Plus, the ultimate goal of (category) sourcing is to receive the best value — typically defined as the lowest cost award that meets the organizational need. Sometimes the best value will come from assigning all of the award to a single supplier, other times it will require splitting the award between six suppliers — depending on product costs, shipping costs, import/export tariffs, and so on. So, supplier count alone is not a good metric.
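A tiny example makes the "single winner isn't always cheapest" point concrete. All supplier names, prices, fixed costs, and capacities below are hypothetical, and the brute-force search stands in for the MILP solver a real sourcing optimization tool would use:

```python
from itertools import product

def best_award(demand, suppliers, step=10):
    """Find the lowest-cost split of `demand` units across suppliers.

    suppliers: dict name -> (unit_cost, fixed_cost, capacity).
    Fixed costs (e.g. shipping setup) apply only if a supplier is used.
    Returns (total_cost, allocation).
    """
    names = list(suppliers)
    levels = [range(0, suppliers[n][2] + 1, step) for n in names]
    best = (float("inf"), None)
    for alloc in product(*levels):
        if sum(alloc) != demand:
            continue
        cost = sum(suppliers[n][0] * q + (suppliers[n][1] if q else 0)
                   for n, q in zip(names, alloc))
        if cost < best[0]:
            best = (cost, dict(zip(names, alloc)))
    return best

# Supplier A is cheaper per unit but capacity-limited; splitting wins here.
suppliers = {"A": (9.0, 50.0, 60), "B": (10.0, 40.0, 100)}
cost, alloc = best_award(100, suppliers)
```

Sole-sourcing from B would cost 1040, but taking A to capacity and topping up with B costs 1030 — a contrived margin, but it shows why the optimal supplier count falls out of the award, not the other way around.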

As you can see, if you want a truly optimized award across a category, sometimes the organization will have too few suppliers. The right number of suppliers is the number that the organization ends up with after every category is optimally allocated across both the strategic spend and the tail spend. While it will usually be less than the number of (active) suppliers in the supplier database (as most organizations that do not source across all categories will end up buying from more suppliers than they need to), it won't always be significantly less. You can't always cut your supply base in half just because you think you have twice as many suppliers as you need. You properly source each category, and when all is said and done, the suppliers you have selected represent the proper pool size. Any remaining suppliers that aren't absolutely essential for a non-sourced product or service get cut, and then you have a properly sized supply base by design. 10K vs 20K vs 50K is irrelevant. Only so much value comes from consolidation alone. Remember that.

And that’s why, in his response to Sydney’s questions on “What’s the Cost of Having a Long Supply Tail, and How Do You Determine the ‘Right’ Supply Base”, the doctor noted that the size of the supply base is totally irrelevant. The right size is the size that gives you the most value for every category you source. That will vary by company and there is no fixed size, or even formula, to compute it.

And, as the doctor noted on Twitter, your concern should not be how long the tail is, but how many rats are in the supply chain. Those are the only parties you should be in a rush to stomp out.

The UX One Should Expect from Best-in-Class Spend Analysis … Part III

In previous posts, we took a deep dive into e-Sourcing (Part I and Part II), e-Auctions (Part I and Part II), and Optimization (Part I, Part II, Part III, and Part IV). But in this series we are diving into spend analysis. And this time we’re taking the vertical torpedo to the bottom of the deep. If you thought our last series was insightful, wait until you finish plowing through this one. By the end of it, there will be more than a handful of vendors shaking in their boots when they realize just how far they have to go if they want to deliver on all those promises of next generation opportunity identification they’ve been selling you on for years! But we digress …

We've said it multiple times, but we are going to repeat it again. The key point to remember here is that there are only two advanced sourcing technologies that can identify value (savings, additional revenue opportunity, overhead cost reductions, etc.) in excess of 10% year-over-year-over-year. One of these is optimization (provided it's done right, usable, and capable of supporting — and solving — the right models; see our last series). The other is spend analytics. True spend analytics goes well beyond the standard Top N and report templates to allow a user to cube, slice, dice, and re-cube quickly and efficiently in meaningful ways and then visualize that data in a manner that allows the potential opportunities, or lack thereof, to be almost instantly identified.

But, as per our last two posts, this requires truly extreme usability. Since not everyone has an advanced computer science or quantitative analysis degree, not everyone can use the first generation tools. This limits these users to the built-in Top N reports. And as we have indicated many times, once all of the categories in the Top N have been sourced and all of the Top N suppliers have been put under contract, there is no more value to be found from a fixed set of Top N reports. At this point, the first generation tools would sit on the shelf, unused. And that’s not how value is found.

However, creating the right UX is not easy. It's not just a set of fancy reports (as static reports have been proven useless for over a decade), but a powerful set of capabilities that allow users to cube, slice, dice, and re-cube seven ways from Sunday quickly, easily, and repeatedly until they find the hidden value. It's innovative new reporting and display techniques that make outlier identification and opportunity analysis quicker, easier, and simpler than it's ever been. It's real-time data validation and verification tools that ensure a user doesn't spend a week building a business case around data where one of the import files was shifted by a factor of 100 because of missing decimal points, only to see the entire business case destroyed in 4 clicks. And so on. And that's why the doctor and the prophet are bringing you a very in-depth look at what makes a good User eXperience for spend analysis that goes far, far deeper than anyone has gone before.
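The "shifted by a factor of 100" check is one of the easiest validations to automate. Here is a minimal sketch of the idea (the tolerance, field values, and function name are all illustrative, not from any vendor's product): compare an incoming file's typical magnitude against a trusted baseline and flag large constant-factor shifts before anyone builds an analysis on them.

```python
import statistics

def magnitude_shift(new_values, baseline_values, tolerance=10.0):
    """Flag an import whose values differ from the baseline by a large
    constant factor (e.g. a missing decimal point shifting amounts 100x).

    Returns the median ratio if it is outside tolerance, else None.
    Median is used so a few legitimate outliers don't trigger the flag.
    """
    ratio = statistics.median(new_values) / statistics.median(baseline_values)
    if ratio > tolerance or ratio < 1.0 / tolerance:
        return ratio
    return None

baseline = [120.50, 98.20, 143.75, 110.00]     # last month's invoice amounts
suspect = [12050.0, 9820.0, 14375.0, 11000.0]  # same file, decimals dropped
flag = magnitude_shift(suspect, baseline)
```

A check this simple, run at import time, is the difference between a five-second correction and a week of analysis built on garbage.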

In a time when there seems to be a near universal playbook for spend analysis solution providers when it comes to positioning the capabilities they deliver, when many vendors sound interchangeable, and when many vendors are fungible in a way that is not necessarily negative, this insight is needed more than ever. And if a few dozen vendors quake in their boots when this series is over, so be it.

In the first part of our series, we explored a few key capabilities that must be present from the get go, including, as we dove into here on SI in our first post on The UX One Should Expect from Best-in-Class Spend Analysis … Part I, dynamic dashboards. Unlike the first generation dashboards that were dangerous, dysfunctional, and sometimes even deadly to the business, true next generation dynamic dashboards are actually useful and even beneficial. Their ability to provide quick entry points through integrated drill down to key, potentially problematic, data sets can make sharing and exploring data faster, and the customization capabilities that allow buyers to continually eliminate those green lights that lull one into a false sense of security is one of the keys to true analytics success. (For more details, see the doctor and the prophet‘s first deep dive on “What To Expect from Best-in-Class Spend Analysis Technology and User Design” (Part I) over on Spend Matters Pro [membership required]).

In the second part of our series we explored a few more key capabilities, four to be precise, including dynamic cube and view creation "on the fly". Given that:


  • A cube will never have all available (current and future) data dimensions;
  • Not all data dimensions are important;
  • Some of the essential data (referenced in the previous point) will be third-party data updated at different time intervals;
  • A user never needs to analyze all data at once when doing a detailed analysis; and
  • We have not (yet) encountered a system with enough memory to fit a true “mega cube” in memory for real-time analysis.


One cube will NEVER be enough. NEVER, NEVER, NEVER! That’s why procurement users need the ability to create as many cubes as necessary, on the fly, in real time. This is required to test any and every hypothesis until the user gets to the one that yields the value generation gold mine. Unless every hypothesis can be tested, it is likely that the best opportunity will never be identified. If we knew where the biggest opportunity was, we’d source it. But the best opportunities are, by definition, hidden, and we don’t know where. Success requires cubes, cubes, and more cubes with views, views, and more views. (For more detail, or information on the other capabilities we didn’t cover in our post on The UX One Should Expect from Best-in-Class Spend Analysis … Part II, see the doctor and the prophet‘s second deep dive on “What To Expect from Best-in-Class Spend Analysis Technology and User Design” (Part II) over on Spend Matters Pro [membership required].)
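What "cubes on the fly" means in practice can be sketched very simply. The example below is a deliberately minimal, hypothetical illustration (row data and dimension names invented): the same raw transactions get re-aggregated along whatever dimensions the current hypothesis needs, instantly, with no fixed mega-cube in sight.

```python
from collections import defaultdict

def build_cube(rows, dims, measure="amount"):
    """Aggregate transaction rows into a 'cube' keyed by any combination
    of dimensions, so an analyst can re-cube on the fly per hypothesis."""
    cube = defaultdict(float)
    for row in rows:
        key = tuple(row[d] for d in dims)  # dimension values form the cell key
        cube[key] += row[measure]
    return dict(cube)

rows = [
    {"supplier": "Acme", "category": "MRO", "region": "NA", "amount": 500.0},
    {"supplier": "Acme", "category": "MRO", "region": "EU", "amount": 300.0},
    {"supplier": "Zenith", "category": "IT", "region": "NA", "amount": 700.0},
]

by_supplier = build_cube(rows, ["supplier"])              # one hypothesis ...
by_cat_region = build_cube(rows, ["category", "region"])  # ... then another
```

Real tools add drill-down, derived dimensions, and persistence on top, but the core requirement is exactly this: any cube, over any dimensions, whenever the analyst asks.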

But much, much more is required. That's why the doctor and the prophet recently published their third deep dive on "What To Expect from Best-in-Class Spend Analysis Technology and User Design" over on Spend Matters Pro [membership required] on the breadth of requirements for a good Spend Analysis User Experience. In this piece, we dive deep into three more absolute requirements (which, like the previous requirements, are so critical that the absence of any one should eliminate a vendor from your list), including real-time idiot-proof data categorization.

Just about every solution has categorization, and most allow end users to at least override categorization, but, in our view, relatively few solutions can claim (to approach) idiot-proofness.

So what is an idiot-proof solution? Before we define this, let us note that the approach a provider takes to classification is secondary. It doesn't matter whether the methodology provided is fully automated (and based on leading machine learning techniques), hybrid (where the machine learning can be overridden by the analyst with simple rules), or fully manual (where the user can classify data using free-form rules created in any order they want on any fields they want).

What matters is that the system provides a simple and effective methodology for classifying, and re-classifying, data in an almost idiot-proof manner. So, if the engine uses AI, it should be easy for the user to view, and alter, the domain knowledge models used by the algorithms. If it uses a rules-based approach, it should be easy to review, visualize, and modify rules using a simple language and visual techniques wherever possible. And if the solution uses a hybrid approach, the user should be able to quickly analyze the AI's output, determine the reason for a mis-map, and then define appropriate override rules that will correct any errors the user discovers so the error never materializes again.
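The hybrid pattern described above can be sketched in a few lines. This is a minimal illustration, not any vendor's implementation: the ML engine's guess is passed in as a stub, the rule format (keyword substring to category) is invented for the example, and a real system would support far richer rule conditions.

```python
def classify(description, ml_guess, override_rules):
    """Hybrid classification sketch: user-defined override rules are
    checked in order before falling back to the (stubbed) ML guess.
    The first matching rule wins, so a corrected mis-map never recurs."""
    text = description.lower()
    for keyword, category in override_rules:
        if keyword in text:
            return category
    return ml_guess  # no rule matched; trust the machine learning engine

# The ML engine mis-mapped toner to Facilities; one user rule fixes it
# permanently, with no retraining and no re-touching of old transactions.
rules = [("toner", "Office Supplies")]
fixed = classify("HP toner cartridges, box of 4", "Facilities", rules)
```

The design choice that makes this "idiot-proof" is that rules are declarative and inspectable: the user can see exactly why every transaction landed where it did, and fix a whole class of errors with one line.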

In other words, success requires cubes, cubes, and more cubes on correctly mapped and classified data that can be accessed through views, views, and more views. With any data the user requires, from any location, in any format. But more on this in upcoming posts. In the interim, for additional insight on a few more key requirements of a spend analytics product for a good user experience, check out the doctor and the prophet's third deep dive on "What To Expect from Best-in-Class Spend Analysis Technology and User Design" (Part III) over on Spend Matters Pro [membership required]. As per the past two parts of the series, it's worth the read. And stay tuned for the next two parts of the series. That's right! Two more parts. We told you this one was a doozy!