Category Archives: Spend Analysis

The UX One Should Expect from Best-in-Class Spend Analysis … Part III

In previous posts, we took a deep dive into e-Sourcing (Part I and Part II), e-Auctions (Part I and Part II), and Optimization (Part I, Part II, Part III, and Part IV). But in this series we are diving into spend analysis. And this time we’re taking the vertical torpedo to the bottom of the deep. If you thought our last series was insightful, wait until you finish plowing through this one. By the end of it, there will be more than a handful of vendors shaking in their boots when they realize just how far they have to go if they want to deliver on all those promises of next generation opportunity identification they’ve been selling you on for years! But we digress …

We’ve said it multiple times, but we are going to repeat it again. The key point to remember here is that there are only two advanced sourcing technologies that can identify value (savings, additional revenue opportunity, overhead cost reductions, etc.) in excess of 10% year-over-year-over-year. One of these is optimization (provided it’s done right, usable, and capable of supporting — and solving — the right models; see our last series). The other is spend analytics. True spend analytics that goes well beyond the standard Top N reports and templates to allow a user to cube, slice, dice, and re-cube quickly and efficiently in meaningful ways and then visualize that data in a manner that allows the potential opportunities, or lack thereof, to be almost instantly identified.

But, as per our last two posts, this requires truly extreme usability. Since not everyone has an advanced computer science or quantitative analysis degree, not everyone can use the first generation tools. This limits these users to the built-in Top N reports. And as we have indicated many times, once all of the categories in the Top N have been sourced and all of the Top N suppliers have been put under contract, there is no more value to be found from a fixed set of Top N reports. At this point, the first generation tools would sit on the shelf, unused. And that’s not how value is found.

However, creating the right UX is not easy. It’s not just a set of fancy reports (as static reports have been proven to be useless for over a decade), but a powerful set of capabilities that allow users to cube, slice, dice, and re-cube seven ways from Sunday quickly, easily, and repeatedly until they find the hidden value. It’s innovative new reporting and display techniques that make outlier identification and opportunity analysis quicker, easier, and simpler than it’s ever been. It’s real-time data validation and verification tools that ensure a user doesn’t spend a week building a business case around data where one of the import files was shifted by a factor of 100 because of missing decimal points, destroying the entire business case in 4 clicks. And so on. And that’s why the doctor and the prophet are bringing you a very in-depth look at what makes a good User eXperience for spend analysis that goes far, far deeper than anyone has done before.
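
To make the validation point concrete, here is a minimal sketch (ours, not any vendor's) of the kind of sanity check we mean, assuming the transactions live in pandas DataFrames with hypothetical category and amount columns: compare each category's median amount in a newly imported file against history and flag anything that has shifted by a suspicious order of magnitude.

```python
import pandas as pd

def flag_magnitude_shifts(history: pd.DataFrame, imported: pd.DataFrame,
                          factor: float = 50.0) -> pd.DataFrame:
    """Flag categories whose median amount in a newly imported file differs
    from history by a suspicious factor (e.g. a 100x shift caused by a file
    with missing decimal points)."""
    hist = history.groupby("category")["amount"].median().rename("hist_median")
    new = imported.groupby("category")["amount"].median().rename("new_median")
    check = pd.concat([hist, new], axis=1).dropna()
    check["ratio"] = check["new_median"] / check["hist_median"]
    return check[(check["ratio"] > factor) | (check["ratio"] < 1.0 / factor)]

# Usage (hypothetical frames): review anything flagged before building a business case.
# print(flag_magnitude_shifts(history_df, import_df))
```

A real product would run dozens of checks of this kind on load, but even this trivial one catches the missing-decimal-point file before anyone builds a business case on it.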

At a time when there seems to be a near universal playbook for spend analysis solution providers when it comes to positioning the capabilities they deliver, when many vendors sound interchangeable, and when many vendors are fungible in a way that is not necessarily negative, this insight is needed more than ever. And if a few dozen vendors quake in their boots when this series is over, so be it.

In the first part of our series, we explored a few key capabilities that must be present from the get-go, including, as we dove into here on SI in our first post on The UX One Should Expect from Best-in-Class Spend Analysis … Part I, dynamic dashboards. Unlike the first generation dashboards that were dangerous, dysfunctional, and sometimes even deadly to the business, true next generation dynamic dashboards are actually useful and even beneficial. Their ability to provide quick entry points, through integrated drill down, to key, potentially problematic, data sets can make sharing and exploring data faster, and the customization capabilities that allow buyers to continually eliminate those green lights that lull one into a false sense of security are one of the keys to true analytics success. (For more details, see the doctor and the prophet‘s first deep dive on “What To Expect from Best-in-Class Spend Analysis Technology and User Design” (Part I) over on Spend Matters Pro [membership required].)

In the second part of our series we explored a few more key capabilities, four to be precise, that include dynamic cube and view creation “on the fly”. Given that:

  • A cube will never have all available (current and future) data dimensions;
  • Not all data dimensions are important;
  • Some of the essential data (referenced in the previous point) will be third-party data updated at different time intervals;
  • A user never needs to analyze all data at once when doing a detailed analysis; and
  • We have not (yet) encountered a system with enough memory to hold a true “mega cube” entirely in memory for real-time analysis.

One cube will NEVER be enough. NEVER, NEVER, NEVER! That’s why procurement users need the ability to create as many cubes as necessary, on the fly, in real time. This is required to test any and every hypothesis until the user gets to the one that yields the value generation gold mine. Unless every hypothesis can be tested, it is likely that the best opportunity will never be identified. If we knew where the biggest opportunity was, we’d source it. But the best opportunities are, by definition, hidden, and we don’t know where they are. Success requires cubes, cubes, and more cubes with views, views, and more views. (For more detail, or for information on the other capabilities we didn’t cover in our post on The UX One Should Expect from Best-in-Class Spend Analysis … Part II, see the doctor and the prophet‘s second deep dive on “What To Expect from Best-in-Class Spend Analysis Technology and User Design” (Part II) over on Spend Matters Pro [membership required].)
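
For readers who want a feel for what “a cube on a hunch” means in practice, here is a toy sketch (purely illustrative, with hypothetical column names, in pandas rather than anyone's product): slice the classified transactions on a hunch, pivot them into a small cube, eyeball it, and throw it away if nothing interesting shows up.

```python
import pandas as pd

# Toy transaction set; in practice this comes from the cleansed, classified spend data.
tx = pd.DataFrame({
    "supplier": ["Acme", "Acme", "Globex", "Globex", "Initech"],
    "category": ["MRO", "MRO", "Logistics", "MRO", "Logistics"],
    "region":   ["NA", "EU", "NA", "EU", "NA"],
    "amount":   [120000, 95000, 310000, 40000, 87000],
})

# Hunch: is MRO spend fragmented across suppliers and regions?
cube = tx[tx["category"] == "MRO"].pivot_table(
    index="supplier", columns="region", values="amount",
    aggfunc="sum", fill_value=0)
print(cube)

# If the hunch doesn't pan out, throw the cube away and build the next one.
```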

But much, much more is required. That’s why the doctor and the prophet recently published their third deep dive on “What To Expect from Best-in-Class Spend Analysis Technology and User Design” over on Spend Matters Pro [membership required] on the breadth of requirements for a good Spend Analysis User Experience. In this piece, we dive deep into three more absolute requirements (which, like the previous requirements, are so critical that the absence of any one of them should strike a vendor from your list), including real-time idiot-proof data categorization.

Just about every solution has categorization, and most allow end users to at least override the categorization, but, in our view, relatively few solutions can claim (to approach) idiot-proofness.

So what is an idiot-proof solution? Before we define this, let us note that the approach a provider takes to classification is secondary. It doesn’t matter whether the methodology provided is fully automated (and based on leading machine learning techniques), hybrid (where the machine learning can be overridden by the analyst with simple rules), or fully manual (where the user can classify data using free-form rules created in any order they want on any fields they want).

This means that the system must provide a simple and effective methodology for classifying, and re-classifying, data in an almost idiot-proof manner. So, if the engine uses AI, it should be easy for the user to view, and alter, the domain knowledge models used by the algorithms. If it uses a rules-based approach, it should be easy to review, visualize, and modify the rules using plain language and visual techniques wherever possible. And if the solution uses a hybrid approach, the user should be able to quickly analyze the AI, determine the reason for a mis-map, and then define appropriate override rules that correct any errors the user discovers so the error never materializes again in the future.
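
As a rough illustration of the hybrid idea (a sketch under our own assumptions, not any vendor's implementation), the analyst's override rules can simply be an ordered rule layer consulted before the automated classifier, so a correction is both auditable and permanent:

```python
def classify(record, model_predict, override_rules):
    """Hybrid classification: ordered, user-defined override rules win; otherwise
    fall back to the automated classifier. The second return value records why
    the record mapped where it did, which keeps the result auditable."""
    for predicate, category in override_rules:
        if predicate(record):
            return category, "rule"
    return model_predict(record), "model"

# Example: the model keeps mis-mapping a janitorial supplier, so the analyst
# adds a one-line rule and the error never recurs.
override_rules = [
    (lambda r: r.get("supplier", "").lower() == "cleanco",
     "Facilities > Janitorial"),
]
```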

In other words, success requires cubes, cubes and more cubes on correctly mapped and classified data that can be accessed through views, views, and more views. With any data the user requires, from any location, in any format. But more on this in upcoming posts. In the interim, for additional insight on a few more key requirements of a spend analytics product for a good user experience, check out the doctor and the prophet‘s third deep dive on “What To Expect from Best-in-Class Spend Analysis Technology and User Design” (Part III) over on Spend Matters Pro [membership required]. As per the past two parts of the series, it’s worth the read. And stay tuned for the next two parts of the series. That’s right! Two more parts. We told you this one was a doozy!

The UX One Should Expect from Best-in-Class Spend Analysis … Part II

Now that we’ve taken a deep dive into e-Sourcing (Part I and Part II), e-Auctions (Part I and Part II), and Optimization (Part I, Part II, Part III, and Part IV), we are diving into spend analysis. And this time we’re taking the vertical torpedo to the bottom of the deep. If you thought our last series was insightful, wait until you plow through this one. By the end of it, there will be more than a handful of vendors shaking in their boots when they realize just how far they have to go if they want to deliver on all those promises of next generation opportunity identification they’ve been selling you on for years! But we digress …

The key point to remember here is that there are only two advanced sourcing technologies that can identify value (savings, additional revenue opportunity, overhead cost reductions, etc.) in excess of 10% year-over-year-over-year. One of these is optimization (provided it’s done right, usable, and capable of supporting — and solving — the right models). The other is spend analytics. True spend analytics that goes well beyond the standard Top N reports and templates to allow a user to cube, slice, dice, and re-cube quickly and efficiently in meaningful ways and then visualize that data in a manner that allows the potential opportunities, or lack thereof, to be almost instantly identified.

This requires extreme usability. As noted in our last post, not everyone has an advanced computer science or quantitative analysis degree, and first generation tools were so hard to use that once all of the categories in the Top N report were sourced and all of the Top N suppliers put under contract, there was no more value to be had. And the tools sat on the shelf when they should have been used weekly, if not daily. If a hunch can be explored in an hour, and every tenth hunch uncovers a 100K+ value generation opportunity, that’s a 10X return that would never be realized otherwise, because with a harder tool the analyst would never have the time to explore ten hunches in the first place.

But, as with optimization, it’s hard to create the right UX. It’s not just a set of fancy reports (as static reports have been proven to be useless for over a decade), but a set of capabilities that allow users to cube, slice, dice, and re-cube seven ways from Sunday quickly, easily, and repeatedly until they find the hidden value. It’s innovative new reporting and display techniques that make outlier identification and opportunity analysis quicker, easier, and simpler than it’s ever been. It’s real-time data validation and verification tools that ensure a user doesn’t spend a week building a business case around data where one of the import files was shifted by a factor of 100 because of missing decimal points, destroying the entire business case in 4 clicks. And so on.

That’s why the doctor and the prophet are bringing you an in-depth look at what makes a good User eXperience for spend analysis that goes deeper — far deeper — than anyone has ever gone before. At a time when there seems to be a near universal playbook for spend analysis solution providers when it comes to positioning the capabilities they deliver, when many vendors sound interchangeable, and when many vendors are fungible in a way that is not necessarily negative, this insight is needed more than ever. And if a few vendors quake in their boots when this series is over, so be it. Last week, over on Spend Matters Pro [membership required], the doctor and the prophet published their second piece on “What To Expect from Best-in-Class Spend Analysis Technology and User Design”, which continued our in-depth foray into this critical, but often ill-explained, technology.

So what is required? As per our first post, dozens (upon dozens) of innovative and unique capabilities, including the next generation dynamic dashboards that we discussed in our last post. In our deep dive, we explore four more core requirements, one of which is dynamic cube and view creation “on the fly”. Given that:

  • A cube will never have all available (current and future) data dimensions;
  • Not all data dimensions are important;
  • Some of the essential data (referenced in the previous point) will be third-party data updated at different time intervals;
  • A user never needs to analyze all data at once when doing a detailed analysis; and
  • We have not (yet) encountered a system with enough memory to hold a true “mega cube” entirely in memory for real-time analysis.

One cube will NEVER be enough. NEVER, NEVER, NEVER! That’s why procurement users need the ability to create as many cubes as necessary, on the fly, in real time. This is required to test any and every hypothesis until the user gets to the one that yields the value generation gold mine. Because, as this blog has previously explained (in why data analysis is avoided), if an analysis is too difficult or costly to do, a gut-feel assessment of the value it might yield will be made instead. And if the cost-to-value ratio looks too high, the analysis will be avoided. The end result is that the organization will never truly know whether the potential value was low or high.

In other words, success requires cubes, cubes, and more cubes with views, views, and more views. With any data the user requires, from any location, in any format. But more on this in upcoming posts. In the interim, for three more requirements a spend analytics product must meet to deliver a good user experience, check out “What To Expect from Best-in-Class Spend Analysis Technology and User Design” over on Spend Matters Pro [membership required].

The UX One Should Expect from Best-in-Class Spend Analysis … Part I

Our last series, which was a doozy at four parts, covered The UX One Should Expect from Best-in-Class Optimization (Part I, Part II, Part III, and Part IV) and while it probably left even the best-in-class optimization vendors quivering in their boots (as no vendor meets every criterion on our list, and the majority don’t even come close), it had to be done. No other advanced sourcing technology can identify, and capture, as much value year-over-year as optimization, and with costs rising, budgets shrinking, competition escalating, and global conditions changing constantly, this technology is becoming a must-have for every sourcing organization, period. But, as the Brits like to say, the maths are hard, and you can’t expect an average buyer without an advanced degree in math, engineering, computer science, etc. to know this stuff, so it has to be usable. Very usable. It has to be stupid easy to select a cost model, pick and choose English constraints, import the costs, enter a few parameters (like maximum number of suppliers, preferred award split, etc.), and run the model. And if it’s unsolvable (because it’s over-constrained or data is missing), the reason, and the fix, has to be made crystal clear to the user.

But it’s not the only important advanced sourcing application that every sourcing organization should have. The other is spend analytics. It’s the only other advanced sourcing technology that can identify year-over-year value of 10% or more. (And while it can’t always capture that value, as you will need to run a sourcing event to capture it, if you’re not doing the right sourcing events on the right categories at the right time, you’ll never capture the full extent of the value that can be realized. It’s not uncommon to realize a cost reduction of 30% or more on your first appropriately designed, optimization-backed, multi-level global services event where you bid out at the global, national, regional, and local levels and get the right mix of the right providers in the right places for the needs at hand.) You need spend analysis, but not everyone has an advanced computer science or quantitative analysis degree, and first generation tools were so hard to use that once all of the categories in the Top N report were sourced and all of the Top N suppliers put under contract, there was no more value to be had. But with an easy to use tool, the true value of analysis is exposed, and your top analysts will be finding new opportunities every day, week after week, month after month, and year over year.

But, as with optimization, it’s hard to create the right UX. It’s not just a set of fancy reports (as static reports have been proven to be useless for over a decade), but a set of capabilities that allow users to cube, slice, dice, and re-cube seven ways from Sunday quickly, easily, and repeatedly until they find the hidden value. It’s innovative new reporting and display techniques that make outlier identification and opportunity analysis quicker, easier, and simpler than it’s ever been. It’s real-time data validation and verification tools that ensure a user doesn’t spend a week building a business case around data where one of the import files was shifted by a factor of 100 because of missing decimal points, destroying the entire business case in 4 clicks. And so on.

That’s why the doctor and the prophet are bringing you an even longer and more in-depth look at what makes a good User eXperience for spend analysis. At a time when there seems to be a near universal playbook for spend analysis solution providers when it comes to positioning the capabilities they deliver, when many vendors sound interchangeable, and when many vendors are fungible in a way that is not necessarily negative, this insight matters more than ever. Last week, over on Spend Matters Pro [membership required], the doctor and the prophet published their first piece on “What To Expect from Best-in-Class Spend Analysis Technology and User Design”, which begins our in-depth foray into this critical, but often ill-explained, technology.

So what is required for a best-in-class spend analysis user experience? Dozens of things, but one key thing is integrated dynamic dashboards. Unlike the first generation dashboards that were dangerous and deadly (as the doctor has written about dozens of times on SI over the years, including rants here, here, here, and here), true, modern, next generation dynamic dashboards are actually useful and even beneficial. Their ability to provide quick entry points, through integrated drill down, to key, potentially problematic, data sets can make sharing and exploring data faster, and the customization capabilities that allow buyers to continually eliminate those green lights that lull one into a false sense of security are the key to analytics success.

For a deeper dive into what an integrated dynamic dashboard is, check out the doctor and the prophet‘s initial description of “What To Expect from Best-in-Class Spend Analysis Technology and User Design” over on Spend Matters Pro [membership required] and stick around for the remainder of the series where all will become clear.

Supply Management Technical Difficulty … Part IV.2

A lot of vendors will tell you that a lot of what they do is so hard, took thousands of hours of development, and that no one else could do it as well, as fast, or as flexibly, when the reality is that much of what they do is easy, mostly available in open source, and can be replicated in modern Business Process Management (BPM) configuration toolkits in a matter of weeks. In this series we are tackling the suite of supply management applications and pointing out what is truly challenging and what is almost as easy as cut-and-paste.

In our first three posts we discussed basic sourcing, basic procurement, and supplier management, where few technical challenges truly exist. Then, yesterday, we started a discussion of spend analysis, where there are deep technical challenges (which we discuss today), but also technical by-gones that vendors still present as challenges, and stumpers that are not true challenges but remain so for the many vendors who never bothered to hire a development team that could figure them out. Yesterday we discussed the by-gones and the first stumper. Today, we discuss the second big stumper and the true challenges.


Technical Stumper: Multi-Cube

The majority of applications support one, and only one, cube. As SI has indicated again and again (and again and again), the power of spend analysis resides in the ability to quickly create a cube on a hunch, on any schema of interest, analyze the potential opportunity, throw it away, and continue until a new value opportunity is found. This also needs to be quick and easy, or the best opportunities will never be found.

But even today, many applications support ONE cube. And it makes absolutely no sense. Especially when all one has to do to create a new cube is copy the relevant data into a set of temporary tables designed just for that purpose and update the indexes. In modern databases, it’s easy to dynamically create a table, bulk copy data from an existing table to the new table, and then update the necessary index fields. The cube can be made semi-persistent by storing the definition in a set of meta-tables and associating it with the user (which is exactly how databases track tables anyway).
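
For the skeptics, here is a bare-bones sketch of the idea using SQLite from Python (the table names, columns, and registry design are our own illustrative assumptions, not a prescription): a derived cube created as its own indexed table and registered against a user.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("""CREATE TABLE transactions (
    id INTEGER PRIMARY KEY, supplier TEXT, category TEXT, amount REAL)""")
conn.executemany(
    "INSERT INTO transactions (supplier, category, amount) VALUES (?, ?, ?)",
    [("Acme", "MRO", 120000), ("Globex", "Logistics", 310000)])

def create_cube(conn, cube_name, where_clause, owner):
    # Copy the slice of interest into its own table (the "cube") ...
    conn.execute(f"CREATE TABLE {cube_name} AS "
                 f"SELECT * FROM transactions WHERE {where_clause}")
    # ... index the dimensions users will drill on ...
    conn.execute(f"CREATE INDEX idx_{cube_name}_cat ON {cube_name} (category)")
    conn.execute(f"CREATE INDEX idx_{cube_name}_sup ON {cube_name} (supplier)")
    # ... and register it in a meta-table so it is semi-persistent per user.
    conn.execute("CREATE TABLE IF NOT EXISTS cube_registry "
                 "(cube_name TEXT, owner TEXT, definition TEXT)")
    conn.execute("INSERT INTO cube_registry VALUES (?, ?, ?)",
                 (cube_name, owner, where_clause))

create_cube(conn, "cube_mro", "category = 'MRO'", "analyst_1")
print(conn.execute("SELECT supplier, amount FROM cube_mro").fetchall())
```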

Apparently vendors are stumped by how easy this is; otherwise, the doctor is stumped as to why most vendors do not support such a basic capability.


Technical Challenge: Real-time (Collaborative) Reclassification

This is a challenge. Considering that the reclassification of a data set to a new hierarchy or schema could require processing every record, that many data sets will contain millions of transaction records, and that modern processors can only do so many instructions per second, this will likely always be a challenge as big data gets bigger and bigger. As a result, even the best algorithms can generally only handle a few million records on a high end PC or laptop in real time. And while you can always add more cores to a rack, there’s still a limit to how many cores can be connected to an integrated memory bank through a high-speed bus … and as this is the key to high-speed data processing, even the best implementations will only be able to process so many transactions a second.

Of course, this doesn’t explain why some applications can re-process a million transactions in real time while others crash before you load 100,000. The latter is just bad coding. This might be a challenge, but it’s still one that should be handled as valiantly as possible.
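
To see why the well-coded implementations stay fast, consider this toy NumPy sketch (our own illustration, not any vendor's engine): a reclassification is ultimately a remapping of category ids plus a recalculation of the measures keyed off them, and a vectorised single pass beats a per-record loop by orders of magnitude.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 2_000_000                                # a few million transaction records
old_codes = rng.integers(0, 10_000, size=n)  # current category ids
amounts = rng.random(n) * 10_000

# A reclassification is ultimately a mapping from old category ids to new ones.
mapping = np.arange(10_000)
mapping[1234] = 42                           # e.g. merge one node into another

# Vectorised remap: a single pass over the data, no per-record Python loop.
new_codes = mapping[old_codes]

# Derived measures keyed by category must be recomputed in the same pass;
# that recomputation is what makes real-time reclassification hard at scale.
totals_by_new_code = np.bincount(new_codes, weights=amounts, minlength=10_000)
```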

Technical Challenge: Exploratory 3-D Visualization

There’s a reason that Nintendo, Xbox, and PlayStation keep releasing new hardware. They need to support faster rendering, as the generation of realistic 3-D graphics in real time requires very powerful processing. And while there is no need to render realistic graphics in spend analysis, creating 3-D images that can be rotated in real time, blown up, shrunk down, and drilled into to create a new 3-D image, which can again be rotated, blown up, shrunk, drilled into, etc. in real time, is just as challenging. This is because you’re not just rendering a complex image (such as a solar system, a 3-D heat-mapped terrain, etc.) but also annotating it with derived metrics that require real-time calculation, storing the associated transactions for tabular pop-up, etc. — and we already discussed how hard it is to reclassify (and re-calculate derived metrics on) millions of transactions in real time.
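
A trivial matplotlib sketch like the one below (illustrative only; the axes and the derived colour metric are made up) will happily render and rotate a few hundred points. The real challenge described above is doing the same thing on millions of transactions while recalculating derived metrics and drill-down sets at interactive speed.

```python
import numpy as np
import matplotlib.pyplot as plt

rng = np.random.default_rng(1)
category = rng.integers(0, 20, 500)     # hypothetical category index
month = rng.integers(1, 13, 500)
spend = rng.gamma(2.0, 5_000, 500)

fig = plt.figure()
ax = fig.add_subplot(projection="3d")   # rotatable / zoomable in the viewer
points = ax.scatter(category, month, spend, c=spend, cmap="viridis")
ax.set_xlabel("category"); ax.set_ylabel("month"); ax.set_zlabel("spend")
fig.colorbar(points, label="spend (derived colour metric)")
plt.show()
```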

Technical Challenge: Real-time Hybrid “AI”

First of all, while there is no such thing as “AI”, because machines are not intelligent, there is such a thing as “automated reasoning”, as machines are great at executing programmed instructions using whatever logic system you give them. And while there is no such thing as “machine learning”, as it requires true intelligence to learn, there is such a thing as an “adaptive algorithm”, and the last few years have seen the development of some really good adaptive algorithms that employ the best automated reasoning techniques and that can, with training, improve over time to the point where classification accuracy can (quickly) reach 95% or better. And the best can be pre-configured with domain models that jump-start the classification process and often reach 80% accuracy with no training on reasonably clean data.

But the way these algorithms typically work is that data is fed into a neural network or cluster machine, the outputs are compared to a domain model, and where the statistical technique fails to generate the right classification, the resulting score is analyzed, the statistical weights or cluster boundaries are modified, and the network or cluster machine is re-run until the classification accuracy reaches a maximum. But in reality, what needs to happen is that as users correct classifications in real time when doing ad-hoc analysis in derived spend cubes, the domain models need to be modified and the techniques updated in real time, and the override mapping remembered, until the classifier automatically classifies all similar future transactions correctly. This requires the implementation of leading edge “AI” (which should be called “AR”) that is seamlessly integrated with leading edge analytics.
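
A stripped-down sketch of that loop, using scikit-learn's incremental learner purely as a stand-in (the feature choice, labels, and override store are all our own assumptions): the analyst's correction is remembered immediately as an override and folded back into the model so similar transactions map correctly in the future.

```python
from sklearn.feature_extraction.text import HashingVectorizer
from sklearn.linear_model import SGDClassifier

CLASSES = ["MRO", "Logistics", "IT", "Facilities"]
vec = HashingVectorizer(n_features=2**16)
clf = SGDClassifier()
overrides = {}                        # exact-match corrections, remembered forever

def learn(descriptions, labels):
    """Fold new (or corrected) examples into the model incrementally."""
    clf.partial_fit(vec.transform(descriptions), labels, classes=CLASSES)

def classify(description):
    if description in overrides:      # the analyst's override always wins
        return overrides[description]
    return clf.predict(vec.transform([description]))[0]

def correct(description, right_label):
    """Analyst fixes a mis-map: remember it and update the model in one step."""
    overrides[description] = right_label
    learn([description], [right_label])

learn(["pallet shipping freight", "laptop docking station"], ["Logistics", "IT"])
correct("hvac filter replacement", "Facilities")
print(classify("hvac filter replacement"))   # -> "Facilities", via the override
```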

In other words, while building any analytics application may have been a significant challenge last decade when the by-gones were still challenges and the stumpers required a significant amount of brain-power and coding to deal with, that’s not the case anymore. These days, the only real challenge is real-time reclassification, visualization, and reasoning on very large data sets … even with parallel processing, this is a challenge if a large number of records have to be reprocessed, re-indexed, and derived dimensions recalculated.

But, of course, the challenges, and lack of, don’t end with analytics. Stay tuned!

Supply Management Technical Difficulty … Part IV.1

A lot of vendors will tell you that a lot of what they do is so hard, took thousands of hours of development, and that no one else could do it as well, as fast, or as flexibly, when the reality is that much of what they do is easy, mostly available in open source, and can be replicated in modern Business Process Management (BPM) configuration toolkits in a matter of weeks.

So, to help you understand what’s truly hard and what is, in the spend master’s words, so easy a high school student with an Access database could do it, the doctor is going to bust out his technical chops, which include a PhD in computer science (with deep expertise in algorithms, data structures, databases, big data, computational geometry, and optimization), experience in research / architect / technology officer industry roles, and cross-platform experience across pretty much all of the major OSs and implementation languages of choice. We’ll take it area by area in this series. In our first three posts we tackled basic Sourcing, basic Procurement, and Supplier Management, and in this post we’re deep diving into Spend Analytics.

In our first three posts, we focused just on technical challenges, but in this post, in addition to technical challenges, we’re also going to focus on technical stumpers (which shouldn’t be challenges, but for many organizations are) and technical by-gones (which were challenges in days gone by, but are NOT anymore).


Technical By-Gone: Formula-Based Derived Dimensions

In the early days, there weren’t many mathematical libraries, and building a large library, making it efficient, and integrating it with an analytics tool to support derived dimensions and real-time reporting was quite a challenge that typically required a lot of work and a lot of code optimization, which often required a lot of experimentation. But now there are lots of libraries, lots of optimized algorithms, and integration is pretty straightforward.
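
To illustrate just how much of a by-gone this is, here is the whole idea in a few lines of pandas and NumPy (column names are, of course, hypothetical): a derived dimension is just a formula evaluated over existing facts.

```python
import numpy as np
import pandas as pd

tx = pd.DataFrame({
    "unit_price": [10.0, 12.5, 9.8],
    "quantity":   [100, 80, 150],
    "list_price": [11.0, 12.5, 11.0],
})

# Derived dimensions are just formulas evaluated over existing facts.
tx["extended_cost"] = tx["unit_price"] * tx["quantity"]
tx["discount_pct"] = np.where(tx["list_price"] > 0,
                              1 - tx["unit_price"] / tx["list_price"], np.nan)
print(tx)
```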

Technical By-Gone: Report Builder

This is just a matter of exposing the schema, selecting the dimensions and facts of interest, and feeding them into a report object — which can be built using dozens (and dozens) of standard libraries. And if that’s too hard, there are dozens of applications that can be licensed and integrated that already do all the heavy lifting. In fact, many of your big name S2P suites now offering “analytics” are doing just this.
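
A toy sketch of the point, with made-up column names: expose the schema, let the user pick dimensions and facts, aggregate, and hand the result to whatever rendering object you like.

```python
import pandas as pd

tx = pd.DataFrame({
    "supplier": ["Acme", "Acme", "Globex"],
    "category": ["MRO", "MRO", "Logistics"],
    "amount":   [120000, 95000, 310000],
})

def build_report(df, dimensions, facts, aggfunc="sum"):
    """Group the chosen facts by the chosen dimensions; hand the result to
    whatever rendering or export object you like."""
    return df.groupby(dimensions)[facts].agg(aggfunc).reset_index()

report = build_report(tx, dimensions=["category", "supplier"], facts=["amount"])
print(report.to_string(index=False))
```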

Technical Stumper: Multi-Schema Support

When you get right down to it, a schema is just a different indexing of data, which is organized into records. This means that all you need to do to support a schema is add an index to a record, and all you need to do to support multiple schemas is add multiple indexes to a record. By normalizing a database schema into entity tables, relationship tables, and other discrete entities, it’s actually easy to support multiple categorizations for spend analysis, including UNSPSC, H(T)S codes, a modified best-practice service provider schema for successful spend analysis, and any other schema needed for organizational reporting.

This means that all you need to support another schema is a set of schema tables that define the schema and a set of relationship tables that relate entities, such as transactions, to their appropriate place in the schema. One can even use general purpose tables that support hierarchies. The point is that there are lots of options and it is NOT hard! It may be a lot of code (and code optimization), but it is NOT hard.
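
Here is one way the table layout could look, sketched in SQLite from Python (the table and column names are illustrative assumptions, and a production design would add keys and constraints): a generic hierarchy table per schema plus a mapping table that acts as the extra "index" on each transaction.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE transactions (id INTEGER PRIMARY KEY, description TEXT, amount REAL);

-- A schema is just a named hierarchy of nodes ...
CREATE TABLE schema_node (
    node_id     INTEGER PRIMARY KEY,
    schema_name TEXT,      -- 'UNSPSC', 'HTS', 'ORG_REPORTING', ...
    parent_id   INTEGER,   -- NULL for root nodes
    label       TEXT
);

-- ... and supporting another schema is just another set of index rows
-- relating each transaction to its node in that schema.
CREATE TABLE transaction_schema_map (
    transaction_id INTEGER REFERENCES transactions(id),
    node_id        INTEGER REFERENCES schema_node(node_id)
);
CREATE INDEX idx_map_node ON transaction_schema_map (node_id);
""")
```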


Technical Stumper: Predictive Analytics

Predictive Analytics sounds challenging, and creating good analytics algorithms takes time, but a number of algorithms that work well have already been developed across the main areas of analytics, and the only thing they require is good data. Since the strength of a good analytics application resides in its ability to collect, cleanse, enhance, and classify data, it shouldn’t be hard to just feed that data into a predictive analytics library. But apparently it is, as few vendors offer even basic trend analysis, inventory analysis, etc. Why they don’t implement the best public domain / textbook libraries, or integrate third-party libraries and solutions with more powerful, adaptive algorithms that work better with more data, for all of the common areas that prediction has been applied to for at least five years, is beyond the doctor. While it’s a challenge to come up with newer, better algorithms, it’s not hard to use what’s out there, and there is already a lot to start with.
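
For a sense of how low the bar is, basic trend analysis is literally textbook material. Here is a least-squares trend forecast in a few lines of NumPy (the numbers are made up for illustration):

```python
import numpy as np

# Twelve months of category spend (made-up numbers); forecast the next quarter
# with a plain least-squares trend line -- textbook stuff, nothing exotic.
months = np.arange(12)
spend = np.array([98, 101, 103, 107, 110, 108, 112, 115, 118, 117, 121, 124],
                 dtype=float)

slope, intercept = np.polyfit(months, spend, deg=1)
forecast = slope * np.arange(12, 15) + intercept
print(f"trend: {slope:.2f}/month, next 3 months: {np.round(forecast, 1)}")
```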

Come back tomorrow as we continue our in-depth discussion of analytics.