Supply Management Technical Difficulty … Part IV.1

A lot of vendors will tell you that much of what they do is so hard it took thousands of hours of development, and that no one else could do it as well, as fast, or as flexibly, when the reality is that much of what they do is easy, mostly available in open source, and can be replicated in modern Business Process Management (BPM) configuration toolkits in a matter of weeks.

So, to help you understand what’s truly hard and what is, in the spend master’s words, so easy a high school student with an Access database could do it, the doctor is going to bust out his technical chops, which include a PhD in computer science (with deep expertise in algorithms, data structures, databases, big data, computational geometry, and optimization), industry experience in research, architect, and technology officer roles, and cross-platform experience across pretty much all of the major OSs and implementation languages of choice. We’ll take it area by area in this series. In our first three posts we tackled basic Sourcing, basic Procurement, and Supplier Management, and in this post we’re deep-diving into Spend Analytics.

In our first three posts, we focused only on technical challenges, but in this post, in addition to technical challenges, we’re also going to focus on technical stumpers (which shouldn’t be challenges, but for many organizations are) and technical by-gones (which were challenges in days gone by, but are NOT anymore).


Technical By-Gone: Formula-Based Derived Dimensions

In the early days, there weren’t many mathematical libraries, so building a large library, making it efficient, and integrating it with an analytics tool to support derived dimensions and real-time reporting was quite a challenge. It typically required a lot of work, and a lot of code optimization that often came down to trial-and-error experimentation. But now there are lots of libraries, lots of optimized algorithms, and integration is pretty straightforward.
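To make this concrete, here is a minimal sketch of a formula-based derived dimension using pandas, one of the many open-source libraries that now do the heavy lifting; the column names and formulas are illustrative assumptions, not any particular vendor’s implementation.

```python
# Minimal sketch: formula-based derived dimensions with pandas.
# Column names and formulas are illustrative assumptions.
import pandas as pd

# A toy transaction fact table.
spend = pd.DataFrame({
    "supplier": ["Acme", "Acme", "Globex"],
    "quantity": [100, 250, 80],
    "amount":   [5000.0, 11250.0, 4800.0],
})

# A derived dimension is just a formula over existing facts,
# evaluated on demand -- here, effective unit price.
spend["unit_price"] = spend["amount"] / spend["quantity"]

# DataFrame.eval can evaluate user-entered formula strings, which
# is essentially what a derived-dimension builder exposes in a UI.
spend["discounted_amount"] = spend.eval("amount * 0.95")

print(spend)
```

The formula engine, once the hardest part of the exercise, is now a library call.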

Technical By-Gone: Report Builder

This is just a matter of exposing the schema, selecting the dimensions and facts of interest, and feeding the result into a report object, which can be built using dozens (and dozens) of standard libraries. And if that’s too hard, there are dozens of applications that can be licensed and integrated that already do all the heavy lifting. In fact, many of your big-name S2P suites now offering “analytics” are doing just this.
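As a sketch of just how little is involved, here is the core of a report builder with pandas standing in as the report object; the schema, the dimension/fact split, and the sample data are all assumptions for illustration.

```python
# Minimal sketch of a report builder: expose the schema, let the
# user pick dimensions and facts, feed the result to a report
# object (here, a pandas pivot table). All names are illustrative.
import pandas as pd

spend = pd.DataFrame({
    "category": ["IT", "IT", "MRO", "MRO"],
    "supplier": ["Acme", "Globex", "Acme", "Initech"],
    "amount":   [5000.0, 4800.0, 1200.0, 900.0],
})

# The "schema" exposed to the user: dimensions vs. facts.
dimensions = ["category", "supplier"]
facts = {"amount": "sum"}

# The user's selections (e.g., from a drop-down in the UI).
selected_dims = ["category"]

report = spend.pivot_table(index=selected_dims,
                           values=list(facts),
                           aggfunc=facts)
print(report)
```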

Technical Stumper: Multi-Schema Support

When you get right down to it, a schema is just a different indexing of data, which is organized into records. Supporting a schema is therefore just a matter of adding an index to a record, and supporting multiple schemas is just a matter of adding multiple indexes to that record. So, by normalizing a database schema into entity tables, relationship tables, and other discrete entities, it’s actually easy to support multiple categorizations for spend analysis, including UNSPSC, H(T)S codes, a modified best-practice service provider schema for successful spend analysis, and any other schema needed for organizational reporting.

In other words, all you need to support another schema is a set of schema tables that define the schema and a set of relationship tables that relate entities, such as transactions, to their appropriate place in the schema; one can even use general-purpose tables that support arbitrary hierarchies, as sketched below. The point is that there are lots of options and it is NOT hard! It might be a lot of code (and code optimization), but it is NOT hard.
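For illustration, here is a minimal sketch using Python’s built-in sqlite3 module: one generic hierarchy table holds as many schemas as you like, and a relationship table maps each transaction into each of them. All table names, codes, and values are made up for the example.

```python
# Minimal sketch of multi-schema support with sqlite3 (stdlib).
# One generic hierarchy table holds any number of schemas; the
# relationship table is just another index on the record.
import sqlite3

db = sqlite3.connect(":memory:")
db.executescript("""
CREATE TABLE txn (id INTEGER PRIMARY KEY, supplier TEXT, amount REAL);

-- One generic hierarchy table for all schemas.
CREATE TABLE schema_node (
    id INTEGER PRIMARY KEY,
    schema_name TEXT,
    code TEXT,
    label TEXT,
    parent_id INTEGER REFERENCES schema_node(id));

-- One row per (transaction, schema) pair.
CREATE TABLE txn_node (
    txn_id INTEGER REFERENCES txn(id),
    node_id INTEGER REFERENCES schema_node(id));
""")

db.execute("INSERT INTO txn VALUES (1, 'Acme', 5000.0)")
db.execute("INSERT INTO schema_node VALUES "
           "(1, 'UNSPSC', '43210000', 'Computer Equipment', NULL)")
db.execute("INSERT INTO schema_node VALUES "
           "(2, 'CUSTOM', 'IT-HW', 'IT Hardware', NULL)")
db.executemany("INSERT INTO txn_node VALUES (?, ?)", [(1, 1), (1, 2)])

# The same transaction rolls up under either schema on demand.
for row in db.execute("""
    SELECT s.schema_name, s.label, SUM(t.amount)
    FROM txn t
    JOIN txn_node tn ON tn.txn_id = t.id
    JOIN schema_node s ON s.id = tn.node_id
    GROUP BY s.schema_name, s.label"""):
    print(row)
```

Adding a new categorization is just more rows in the schema and relationship tables; no re-architecting required.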


Technical Stumper: Predictive Analytics

Predictive Analytics sounds challenging, and creating good analytics algorithms takes time, but a number of algorithms that work well have already been developed across the major areas of analytics, and the only thing they require is good data. Since the strength of a good analytics application resides in its ability to collect, cleanse, enhance, and classify data, it shouldn’t be hard to just feed that data into a predictive analytics library. But apparently it is, as few vendors offer even basic trend analysis, inventory analysis, etc. Why they don’t implement the best public domain / textbook algorithms, or integrate third-party libraries and solutions with more powerful, adaptive algorithms that work better with more data, for all of the common areas where prediction has been applied for at least five years, is beyond the doctor. While it’s a challenge to come up with newer, better algorithms, it’s not hard to use what’s out there, and there is already a lot to start with.
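For example, here is how little code basic trend analysis takes with an off-the-shelf library (scikit-learn in this sketch); the monthly spend figures are made up for illustration.

```python
# Minimal sketch of off-the-shelf trend analysis: fit a linear
# trend to monthly spend with scikit-learn and project it forward.
# The spend figures are synthetic, for illustration only.
import numpy as np
from sklearn.linear_model import LinearRegression

months = np.arange(12).reshape(-1, 1)  # months 0..11
spend = np.array([100, 104, 101, 110, 112, 115,
                  118, 117, 123, 125, 130, 129], dtype=float)

model = LinearRegression().fit(months, spend)

# Project the next quarter from the fitted trend.
future = np.arange(12, 15).reshape(-1, 1)
print(model.predict(future))  # forecast for months 12..14
```

Swap in a seasonal or adaptive model from the same ecosystem and you have the “predictive analytics” most vendors still don’t ship.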

Come back tomorrow as we continue our in-depth discussion of analytics.