
The UX One Should Expect from Best-in-Class Spend Analysis … Part II

Now that we’ve taken a deep dive into e-Sourcing (Part I and Part II), e-Auctions (Part I and Part II), and Optimization (Part I, Part II, Part III, and Part IV), we are diving into spend analysis. And this time we’re taking the vertical torpedo to the bottom of the deep. If you thought our last series was insightful, wait until you plow through this one. By the end of it, there will be more than a handful of vendors shaking in their boots when they realize just how far they have to go if they want to deliver on all those promises of next generation opportunity identification they’ve been selling you on for years! But we digress …

The key point to remember here is that there are only two advanced sourcing technologies that can identify value (savings, additional revenue opportunity, overhead cost reductions, etc.) in excess of 10% year-over-year-over-year. One of these is optimization (provided it’s done right, usable, and capable of supporting, and solving, the right models). The other is spend analytics: true spend analytics that goes well beyond the standard Top N reports and report templates to allow a user to cube, slice, dice, and re-cube quickly and efficiently in meaningful ways, and then visualize that data in a manner that allows the potential opportunities, or lack thereof, to be identified almost instantly.
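
For readers who want a concrete picture of what cube, slice, dice, and re-cube actually mean, here is a minimal sketch in Python/pandas over hypothetical spend data (the column names and figures are illustrative only, not any vendor’s schema):

```python
import pandas as pd

# Hypothetical spend transactions; names and figures are illustrative only.
spend = pd.DataFrame({
    "category": ["MRO", "MRO", "IT", "IT", "Logistics"],
    "supplier": ["Acme", "Apex", "Acme", "Initech", "Globex"],
    "region":   ["NA", "EU", "NA", "NA", "EU"],
    "amount":   [120_000, 80_000, 250_000, 90_000, 60_000],
})

# "Cube": aggregate the measure across a chosen set of dimensions.
cube = spend.pivot_table(values="amount", index="category",
                         columns="region", aggfunc="sum", fill_value=0)

# "Slice": fix one dimension to a single value.
na_slice = spend[spend["region"] == "NA"]

# "Dice": restrict several dimensions at once.
diced = spend[(spend["region"] == "NA") & (spend["category"] == "IT")]

# "Re-cube": pivot the same transactions along a different dimension pair.
recube = spend.pivot_table(values="amount", index="supplier",
                           columns="category", aggfunc="sum", fill_value=0)
```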

This requires extreme usability. As noted in our last post, not everyone has an advanced computer science or quantitative analysis degree, and first generation tools were so hard to use that once all of the categories in the Top N category report were sourced and all of the suppliers in the Top N supplier report were put under contract, there was no more value to be had. So the tools sat on the shelf when they should have been used weekly, if not daily. If a hunch can be explored in an hour, and every tenth hunch uncovers a $100K+ value generation opportunity, that’s a 10X return that would never be realized otherwise, as the analyst would never have the time to explore ten hunches in a first generation tool.

But, as with optimization, it’s hard to create the right UX. It’s not just a set of fancy reports (static reports have been proven useless for over a decade), but a set of capabilities that allow users to cube, slice, dice, and re-cube seven ways from Sunday quickly, easily, and repeatedly until they find the hidden value. It’s innovative new reporting and display techniques that make outlier identification and opportunity analysis quicker, easier, and simpler than they have ever been. It’s real-time data validation and verification tools that ensure a user doesn’t spend a week building a business case around data where one of the import files was shifted by a factor of 100 because of missing decimal points, only to see the entire business case destroyed in four clicks when the error comes to light. And so on.
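
One simple way such a validation check might work is sketched below (a minimal illustration in Python/pandas; the function name, the threshold, and the assumption that a historical baseline is available are ours, not any vendor’s API):

```python
import pandas as pd

def magnitude_shift_ok(new_amounts: pd.Series,
                       baseline_amounts: pd.Series,
                       tolerance: float = 10.0) -> bool:
    """Return False when an incoming file is wildly out of scale with
    history, e.g. every value inflated 100x by dropped decimal points."""
    ratio = new_amounts.median() / baseline_amounts.median()
    return (1 / tolerance) <= ratio <= tolerance

# Usage: quarantine the import (and alert the user immediately) rather
# than letting the bad data flow silently into a week-long analysis:
#
# if not magnitude_shift_ok(new_file["amount"], history["amount"]):
#     quarantine(new_file)  # hypothetical handler
```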

That’s why the doctor and the prophet are bringing you an in-depth look at what makes a good User eXperience for spend analysis, one that goes deeper, far deeper, than anyone has ever gone before. At a time when there seems to be a near universal playbook for spend analysis solution providers when it comes to positioning the capabilities they deliver, when many vendors sound interchangeable, and when many are in fact fungible (in a way that is not necessarily negative), this insight is needed more than ever. And if a few vendors quake in their boots when this series is over, so be it. Last week, over on Spend Matters Pro [membership required], the doctor and the prophet published the second piece on What To Expect from Best-in-Class Spend Analysis Technology and User Design, which continued the in-depth foray into this critical, but often ill-explained, technology.

So what is required? As per our first post, dozens (upon dozens) of innovative and unique capabilities, including the next generation dynamic dashboards that we discussed in our last post. In our deep dive, we explore four more core requirements, one of which is dynamic cube and view creation “on the fly”. Given that:

  • A cube will never have all available (current and future) data dimensions;
  • Not all data dimensions are important;
  • Some of the essential data (referenced in the previous point) will be third-party data updated at different time intervals;
  • A user never needs to analyze all data at once when doing a detailed analysis; and
  • We have not (yet) encountered a system with enough memory to hold a true “mega cube” in memory for real-time analysis.

One cube will NEVER be enough. NEVER, NEVER, NEVER! That’s why procurement users need the ability to create as many cubes as necessary, on the fly, in real time, so they can test any and every hypothesis until they get to the one that yields the value generation gold mine. Because, as this blog has previously noted (in why data analysis is avoided), if an analysis is too difficult or costly to do, a gut-feel assessment of the value it would yield will be made instead, and if the cost-to-value ratio looks too high, the analysis will be skipped. The end result is that the organization will never truly know whether the potential value was low or high.
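
To make “on the fly” concrete, here is a minimal sketch of what a disposable, hypothesis-specific cube might look like (reusing the hypothetical spend frame from the first sketch; the helper is our own illustration, not any vendor’s implementation):

```python
import pandas as pd

def build_cube(transactions: pd.DataFrame, dims: list[str],
               measure: str = "amount") -> pd.DataFrame:
    """Aggregate the measure over just the dimensions a hypothesis needs:
    a small, disposable cube instead of one unmanageable mega cube."""
    return transactions.groupby(dims)[measure].sum().reset_index()

# Each hypothesis gets its own lightweight cube, built on demand:
by_supplier_region   = build_cube(spend, ["supplier", "region"])
by_category_supplier = build_cube(spend, ["category", "supplier"])
```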

In other words, success requires cubes, cubes, and more cubes, with views, views, and more views, built on any data the user requires, from any location, in any format. But more on this in upcoming posts. In the interim, for three more requirements a spend analytics product must meet to deliver a good user experience, check out What To Expect from Best-in-Class Spend Analysis Technology and User Design over on Spend Matters Pro [membership required].