Monthly Archives: May 2017

Supply Management Technical Difficulty … Part IV.2

A lot of vendors will tell you that much of what they do is so hard that it took thousands of hours of development, and that no one else could do it as well, as fast, or as flexibly, when the reality is that much of what they do is easy, mostly available in open source, and can be replicated in modern Business Process Management (BPM) configuration toolkits in a matter of weeks. In this series we are tackling the suite of supply management applications and pointing out what is truly challenging and what is almost as easy as cut-and-paste.

In our first three posts we discussed basic sourcing, basic procurement, and supplier management, where few technical challenges truly exist. Then, yesterday, we started a discussion of spend analysis, where there are deep technical challenges (which we discuss today), but also technical by-gones that vendors still present as challenges, and stumpers that are not true challenges but stump the many vendors who never bothered to hire a development team that could figure them out. Yesterday we discussed the by-gones and the first stumper. Today, we discuss the second big stumper and the true challenges.


Technical Stumper: Multi-Cube

The majority of applications support one, and only one, cube. As SI has indicated again and again (and again and again), the power of spend analysis resides in the ability to quickly create a cube on a hunch, on any schema of interest, analyze the potential opportunity, throw it away, and continue until a new value opportunity is found. This also needs to be quick and easy, or the best opportunities will never be found.

But even today, many applications support ONE cube. And it makes absolutely no sense. Especially when all one has to do to create a new cube is just create a copy of the data in a set of temporary tables designed just for that and update the indexes. In modern databases, it’s easy to dynamically create a table, bulk copy data from an existing table to the new table, and then update the necessary index fields. The cube can be made semi-persistent by storing the definition in a set of meta-tables and associating it with the user (which is exactly how databases track tables anyway).
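To make this concrete, here is a minimal sketch of that approach in Python against a SQLite store; the transactions table, its columns, and the cube_registry meta-table are hypothetical stand-ins, not any particular vendor’s design:

```python
import sqlite3
import uuid

def create_cube(conn, user, source_table="transactions", where_clause="1=1"):
    """Materialize a throwaway cube as a user-scoped working table.

    The source table, index columns, and cube_registry meta-table are
    hypothetical; any schema with the same shape works the same way.
    """
    cube_table = f"cube_{uuid.uuid4().hex[:8]}"
    cur = conn.cursor()
    # Bulk-copy the slice of interest into a dedicated working table.
    cur.execute(f"CREATE TABLE {cube_table} AS "
                f"SELECT * FROM {source_table} WHERE {where_clause}")
    # Re-index on the dimensions the analyst wants to pivot on.
    cur.execute(f"CREATE INDEX idx_{cube_table} ON {cube_table} (category, supplier_id)")
    # Semi-persistence: record the cube definition against the user.
    cur.execute("CREATE TABLE IF NOT EXISTS cube_registry "
                "(cube_table TEXT, owner TEXT, definition TEXT)")
    cur.execute("INSERT INTO cube_registry VALUES (?, ?, ?)",
                (cube_table, user, where_clause))
    conn.commit()
    return cube_table

def drop_cube(conn, cube_table):
    """Throw the cube away when the hunch doesn't pan out."""
    conn.execute(f"DROP TABLE IF EXISTS {cube_table}")
    conn.execute("DELETE FROM cube_registry WHERE cube_table = ?", (cube_table,))
    conn.commit()
```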

Apparently many vendors are stumped by just how easy this is; otherwise, the doctor is stumped as to why most of them do not support such a basic capability.


Technical Challenge: Real-time (Collaborative) Reclassification

This is a challenge. Considering that reclassifying a data set to a new hierarchy or schema could require processing every record, that many data sets contain millions of transaction records, and that modern processors can only execute so many instructions per second, this will likely always be a challenge as big data gets bigger and bigger. As a result, even the best algorithms can generally only handle a few million records in real time on a high-end PC or laptop. And while you can always add more cores to a rack, there is still a limit to how many cores can be connected to an integrated memory bank through a high-speed bus … and as this is the key to high-speed data processing, even the best implementations will only be able to process so many transactions a second.

Of course, this doesn’t explain why some applications can re-process a million transactions in real time while others crash before you load 100,000. That is just bad coding. This might be a challenge, but it’s still one that should be handled as valiantly as possible.
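For illustration, here is a rough sketch (in Python with pandas, on made-up data) of the vectorized, set-at-a-time style of reclassification that keeps a few million records in the real-time range on a single machine; the column names and mapping are purely illustrative:

```python
import numpy as np
import pandas as pd

# Made-up data: two million transactions carrying an old category code.
rng = np.random.default_rng(0)
n = 2_000_000
txns = pd.DataFrame({
    "old_code": rng.integers(0, 5_000, size=n),
    "amount": rng.uniform(10, 10_000, size=n),
})

# New-schema mapping (old code -> new category), normally analyst-maintained.
mapping = pd.Series([f"NEW-{c % 200:03d}" for c in range(5_000)],
                    index=range(5_000))

txns["new_code"] = txns["old_code"].map(mapping)    # reclassify in one pass
summary = txns.groupby("new_code")["amount"].sum()  # recompute a derived metric
```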

Technical Challenge: Exploratory 3-D Visualization

There’s a reason that Nintendo, Xbox, and PlayStation keep releasing new hardware: generating realistic 3-D graphics in real time requires very powerful processing, and faster rendering demands faster hardware. And while there is no need to render realistic graphics in spend analysis, creating 3-D images that can be rotated, blown up, shrunk down, and drilled into to create a new 3-D image, which can again be rotated, blown up, shrunk, and drilled into, all in real time, is just as challenging. This is because you’re not just rendering a complex image (such as a solar system, a 3-D heated terrain map, etc.) but also annotating it with derived metrics that require real-time calculation, storing the associated transactions for tabular pop-up, and so on — and we already discussed how hard it is to reclassify (and re-calculate derived metrics on) millions of transactions in real time.
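As a rough illustration of just the rendering side (the easy part), here is a minimal sketch using matplotlib on made-up data; the real challenge described above is recomputing the derived metrics behind each drill-down in real time, which this sketch does not attempt:

```python
import numpy as np
import matplotlib.pyplot as plt

# Made-up data: each point is a (supplier, category, month) cell, sized by
# spend and coloured by a derived metric (e.g. price variance).
rng = np.random.default_rng(1)
supplier = rng.integers(0, 50, 500)
category = rng.integers(0, 20, 500)
month = rng.integers(1, 13, 500)
spend = rng.uniform(1e3, 1e6, 500)
variance = rng.uniform(0, 0.3, 500)  # would be recalculated on every drill-down

fig = plt.figure()
ax = fig.add_subplot(projection="3d")
sc = ax.scatter(supplier, category, month, s=spend / 5e3, c=variance, cmap="viridis")
ax.set_xlabel("supplier")
ax.set_ylabel("category")
ax.set_zlabel("month")
fig.colorbar(sc, ax=ax, label="price variance")
plt.show()
```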

Technical Challenge: Real-time Hybrid “AI”

First of all, while there is no such thing as “AI”, because machines are not intelligent, there is such a thing as “automated reasoning”, as machines are great at executing programmed instructions using whatever logic system you give them. And while there is no such thing as “machine learning”, as it requires true intelligence to learn, there is such a thing as an “adaptive algorithm”, and the last few years have seen the development of some really good adaptive algorithms that employ the best automated reasoning techniques and that can, with training, improve over time to the point where classification accuracy (quickly) reaches 95% or better. The best can be pre-configured with domain models that jump-start the classification process and often reach 80% accuracy with no training on reasonably clean data.

But the way these algorithms typically work is that data is fed into a neural network or cluster machine, the outputs are compared to a domain model, and where the statistics-based technique fails to generate the right classification, the resulting score is analyzed, the statistical weights or cluster boundaries are modified, and the network or cluster machine is re-run until classification accuracy reaches a maximum. In reality, what needs to happen is that as users correct classifications in real time while doing ad-hoc analysis in derived spend cubes, the domain models need to be modified and the techniques updated in real time, and the override mapping remembered until the classifier automatically classifies all similar future transactions correctly. This requires the implementation of leading-edge “AI” (which should be called “AR”) that is seamlessly integrated with leading-edge analytics.
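A toy sketch of the override-plus-incremental-update idea, assuming scikit-learn’s HashingVectorizer and SGDClassifier; the categories, seed descriptions, and helper names are made up for illustration and are not how any particular vendor implements this:

```python
from sklearn.feature_extraction.text import HashingVectorizer
from sklearn.linear_model import SGDClassifier

# Hypothetical categories and seed descriptions standing in for a domain model.
CLASSES = ["Office Supplies", "IT Hardware", "Facilities"]
seed_text = ["a4 paper ream", "laptop docking station", "janitorial service"]
seed_labels = ["Office Supplies", "IT Hardware", "Facilities"]

vec = HashingVectorizer(n_features=2**18)
clf = SGDClassifier()
clf.partial_fit(vec.transform(seed_text), seed_labels, classes=CLASSES)

overrides = {}  # exact-match corrections, remembered until the model catches up

def classify(description):
    if description in overrides:  # the user's mapping always wins
        return overrides[description]
    return clf.predict(vec.transform([description]))[0]

def correct(description, label):
    """User fixes a mapping while analyzing a cube; update the model in real time."""
    overrides[description] = label
    clf.partial_fit(vec.transform([description]), [label])
    # Drop the override once the model classifies it correctly on its own.
    if clf.predict(vec.transform([description]))[0] == label:
        del overrides[description]
```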

In other words, while building any analytics application may have been a significant challenge last decade, when the by-gones were still challenges and the stumpers required a significant amount of brain-power and coding to deal with, that’s not the case anymore. These days, the only real challenges are real-time reclassification, visualization, and reasoning on very large data sets … even with parallel processing, this is hard when a large number of records have to be reprocessed and re-indexed and derived dimensions recalculated.

But, of course, the challenges, and the lack thereof, don’t end with analytics. Stay tuned!

Supply Management Technical Difficulty … Part IV.1

A lot of vendors will tell you that much of what they do is so hard that it took thousands of hours of development, and that no one else could do it as well, as fast, or as flexibly, when the reality is that much of what they do is easy, mostly available in open source, and can be replicated in modern Business Process Management (BPM) configuration toolkits in a matter of weeks.

So, to help you understand what’s truly hard and, in the spend master’s words, so easy a high school student with an Access database could do it, the doctor is going to bust out his technical chops that include a PhD in computer science (with deep expertise in algorithms, data structures, databases, big data, computational geometry, and optimization), experience in research / architect / technology officer industry roles, and cross-platform experience across pretty much all of the major OSs and implementation languages of choice. We’ll take it area by area in this series. In our first three posts we tackled basic Sourcing, basic Procurement, and Supplier Management and in this post we’re deep diving into Spend Analytics.

In our first three posts, we focussed just on technical challenges, but in this post we’re also going to cover technical stumpers (which shouldn’t be challenges, but for many organizations are) and technical by-gones (which were challenges in days gone by, but are NOT anymore).


Technical By-Gone: Formula-Based Derived Dimensions

In the early days, there weren’t many mathematical libraries, and building a large library, making it efficient, and integrating it with an analytics tool to support derived dimensions and real-time reporting was quite a challenge that typically required a lot of work and a lot of code optimization, which often meant a lot of experimentation. But now there are lots of libraries and lots of optimized algorithms, and integration is pretty straightforward.
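For example, a formula-based derived dimension today is little more than a one-liner on top of an off-the-shelf library; here is a minimal sketch in Python with pandas, on illustrative column names:

```python
import pandas as pd

# Illustrative transaction slice; column names are made up.
txns = pd.DataFrame({
    "quantity":   [100, 250, 40],
    "total_cost": [1250.0, 2875.0, 960.0],
    "list_price": [13.00, 12.00, 25.00],
})

# Formula-based derived dimensions, computed on the fly by the library.
txns["unit_price"] = txns["total_cost"] / txns["quantity"]
txns["price_variance_pct"] = 100 * (txns["unit_price"] - txns["list_price"]) / txns["list_price"]
```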

Technical By-Gone: Report Builder

This is just a matter of exposing the schema, selecting the dimensions and facts of interest, and feeding them into a report object — which can be built using dozens (and dozens) of standard libraries. And if that’s too hard, there are dozens of applications that can be licensed and integrated that already do all the heavy lifting. In fact, many of the big-name S2P suites now offering “analytics” are doing just this.
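A minimal sketch of the idea in Python with pandas, where the pivot-table routine is the “report object” doing the heavy lifting; the function and column names are illustrative:

```python
import pandas as pd

def build_report(df, dimensions, facts, aggfunc="sum"):
    """Generic report builder: pick dimensions and facts from the exposed
    schema and let the library's pivot routine do the heavy lifting."""
    return pd.pivot_table(df, index=dimensions, values=facts, aggfunc=aggfunc)

# e.g. build_report(txns, dimensions=["supplier", "category"], facts=["total_cost"])
```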

Technical Stumper: Multi-Schema Support

When you get right down to it, a schema is just a different indexing of data, which is organized into records. So to support a schema, all you need to do is add an index to a record, and to support multiple schemas, all you need to do is add multiple indexes to that record. By normalizing a database schema into entity tables, relationship tables, and other discrete entities, it’s actually easy to support multiple categorizations for spend analysis, including UNSPSC, H(T)S codes, a modified best-practice service provider schema for successful spend analysis, and any other schema needed for organizational reporting.

In other words, all you need to support another schema is a set of schema tables that define the schema and a set of relationship tables that relate entities, such as transactions, to their appropriate place in it. One can even use general-purpose tables that support hierarchies. The point is that there are lots of options and it is NOT hard! It might be a lot of code (and code optimization), but it is NOT hard.
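One possible normalization, sketched as SQLite DDL driven from Python; the table and column names are illustrative, and any equivalent entity/relationship layout would do:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
-- One row per node in a schema (UNSPSC, HTS, custom, ...); self-referencing hierarchy.
CREATE TABLE schema_node (
    node_id   INTEGER PRIMARY KEY,
    schema_id TEXT NOT NULL,          -- e.g. 'UNSPSC', 'HTS', 'CUSTOM'
    parent_id INTEGER REFERENCES schema_node(node_id),
    label     TEXT NOT NULL
);
-- Relationship table: the same transaction can sit in every schema at once.
CREATE TABLE txn_classification (
    txn_id  INTEGER NOT NULL,
    node_id INTEGER NOT NULL REFERENCES schema_node(node_id),
    PRIMARY KEY (txn_id, node_id)
);
CREATE INDEX idx_txn_class_node ON txn_classification (node_id);
""")
```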


Technical Stumper: Predictive Analytics

Predictive Analytics sounds challenging, and creating good predictive algorithms takes time, but a number of algorithms that work well have already been developed across the main areas of analytics, and the only thing they require is good data. Since the strength of a good analytics application resides in its ability to collect, cleanse, enhance, and classify data, it shouldn’t be hard to just feed that data into a predictive analytics library. But apparently it is, as few vendors offer even basic trend analysis, inventory analysis, and so on. Why they don’t implement the best public domain / textbook algorithms, or integrate third-party libraries and solutions with more powerful, adaptive algorithms that work better with more data, for all of the common areas that prediction has been applied to for at least five years, is beyond the doctor. While it’s a challenge to come up with newer, better algorithms, it’s not hard to use what’s out there, and there is already a lot to start with.
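For instance, a basic textbook trend projection takes a handful of lines with an open source numerical library; a sketch on made-up monthly spend data:

```python
import numpy as np

# Made-up example: 24 months of category spend with noise around a linear trend.
rng = np.random.default_rng(2)
months = np.arange(24)
spend = 100_000 + 2_500 * months + rng.normal(0, 8_000, 24)

slope, intercept = np.polyfit(months, spend, 1)        # least-squares trend line
next_quarter = slope * np.arange(24, 27) + intercept   # 3-month projection
```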

Come back tomorrow as we continue our in-depth discussion of analytics.

Will Coupa Inspire Sourcing?

In our last post we asked whether or not Coupa inspired at Inspire, referencing our post from last week that asked if they would inspire and indicated where inspiration might come from, and we determined that, for an average Procurement professional, Coupa most definitely inspired. But Coupa is Sourcing and Procurement. Did Coupa inspire Sourcing?

The short answer is no (but it’s not the full answer). Sourcing wasn’t really addressed beyond the statement that, with the acquisition of Trade Extensions (most likely to be (re)named Coupa Sourcing Optimization), Coupa now has the ability to run events as complex as an organization requires. And while this is true, it’s not very inspiring to an organization that might not even know why they truly need advanced sourcing.

But then again, as of now, Coupa does not really understand advanced sourcing. Up until now, for a Coupa customer, sourcing has been very simple RFXs and basic auctions, age-old sourcing technology that any strategic buyer would not find very interesting. But, fortunately for us, Coupa readily admits this and, as of now, has not decided when, or even if, they are going to integrate Trade Extensions or Spend360 into their core platform. (Their expectation is that they will integrate what makes sense at the right time, but they recognize that these were Power User Applications acquired primarily to support current, and future, power users.)

This is good news. The main reason every acquisition of a (strategic sourcing decision) optimization company to date has failed is, as pointed out in a previous post, that the acquirer believed it understood the technology, could integrate it, and could take it further (and immediately proceeded to do just that). But, as we pointed out in a previous post, the ultimate procurement application is the exact antithesis of the ultimate sourcing application. For starters, one calls for no UI at all and the other for a UI so complex that current desktop systems cannot yet fully support it.

Based on the history of acquisitions of advanced technology in our space, the only chance of success is to allow both of these acquisitions to more or less continue business as usual, independent of Coupa’s primary business, until such time as all parties collectively, and in conjunction with their collective user base, decide what integration makes sense, when, and how. (And, fortunately for us, this is exactly what Coupa plans to do.) Moreover, in most companies, the people that do the complex sourcing are not only a small user base but are not the people who do procurement and requisitioning. As a result, effective integration right now only consists of pushing transactional data to Spend360 for opportunity analysis, kicking off a sourcing event in Trade Extensions, and then pushing a selected award into Coupa for contract creation and tracking.

As long as Coupa takes advantage of the new Trade Extensions 6 capabilities to define workflows and UX that are easy to use and consistent with what a Coupa power user might define, then they can create a transition for their more advanced organizations into true sourcing, and, more importantly, the Trade Extensions 6 platform can offer a path for those customers that need a more modern Procurement application. And letting each unit do what it does best will ensure that, for the first time in the history of acquisitions of advanced sourcing technologies, all parties will actually thrive.

In other words, by saying and doing essentially nothing at this point with respect to their recent advanced sourcing acquisitions, Coupa is actually doing the best they could do, because it’s going to take them a while to figure out what they got, what they can do, and how all of these units will work together. As far as the doctor knows, they haven’t even hired a translator yet who speaks advanced optimization, machine learning, econometrics (and risk), and truly simplistic Procurement and who can teach all parties a common language. So, even though they’ve historically advanced as fast as possible in Procurement, the fastest way for them to advance beyond it is to actually slam on the brakes and start again in first gear. If they do this and maintain this philosophy, at a future Inspire they will finally inspire sourcing in a way that their Source-to-Pay brethren have not.

Did Coupa Inspire?

Last week we asked if Coupa would inspire at Inspire, especially given that most big companies, especially in the Procurement arena, have not given us much to be inspired about as of late. In fact, most announcements in the Procurement arena have consisted of new UIs, name changes, and acquisitions — none of which does anything for the end user who needs to do a job day-in and day-out and who spends much of it fighting with outdated processes implemented by even more outdated technology, going prematurely bald as they rip out their follicles one by one.

We said that in this day and age you don’t get inspired unless the technology brings value, and a technology doesn’t bring value unless it offers something more than the same-old same-old, which could include an inclusive design (that also provides the functionality needed by suppliers to serve buyers), a multi-functional application (that supports the organizational stakeholders that Procurement must serve), or a better user experience (which is not just a fancy-schmancy UI).

So, to this end, did Coupa Inspire?

Inclusive Design?

Definitely. In this release they’ve made it easier for suppliers and service providers to acknowledge POs, flip to invoices, enter timesheets upon services completion, and respond to buyers. The suppliers can determine whether or not they want dynamic discounting, and what terms they will accept (and not need to opt in to or opt out from buyer offers one-by-one). They can maintain their catalogs and use the collaboration tools. With Coupa’s new release, it’s a more inclusive design.

Multi-Functional Application?

With InvoicePay integration, AP can now pay suppliers in 31 countries and know they are being fully compliant with local regulations. Their inventory functionality makes it easy for office managers responsible for indirect inventory. Their single data store gives Finance visibility into all spend under management. And their new risk-scoring functionality gives buyers visibility into potentially risky transactions that are being directed at potentially risky suppliers. It certainly is becoming multi-functional.

User Experience?

Coupa has figured out the most important feature of a Procurement UI, and that is, simply put, no UI. Procurement is a tactical function that is focussed entirely on servicing a need in the most efficient and effective manner possible. A user shouldn’t see any more than they need to see, do any more than they need to do, and, most importantly, if they don’t need to see or do anything, they shouldn’t need to see or do anything at all. Suppose all that is required is an approval for a requisition that completely satisfies all the requirements, and the approval is only required because the total exceeds the amount above which all requisitions must be approved. If the approver knows the approval request is coming, knows what it’s for, and has already given verbal approval, then all she should need to do is press a button in the approval request email or send a yes response to an SMS — no application entry needed. In Coupa, she can do that, and a number of other processes have been simplified as well.

So, did Coupa inspire?

For an average Procurement professional, most definitely yes!

But what about a sophisticated sourcing professional, who has to do demand consolidation, new supplier identification and strategic supplier management, complex negotiation, and sophisticated contract creation? Well, that all depends on their read of what Coupa’s acquisitions mean … come back tomorrow.

Supply Management Technical Difficulty … Part III

A lot of vendors will tell you that much of what they do is so hard that it took thousands of hours of development, and that no one else could do it as well, as fast, or as flexibly, when the reality is that much of what they do is easy, mostly available in open source, and can be replicated in modern Business Process Management (BPM) configuration toolkits in a matter of weeks.

So, to help you understand what’s truly hard and, in the spend master’s words, so easy a high school student with an Access database could do it, the doctor is going to bust out his technical chops that include a PhD in computer science (with deep expertise in algorithms, data structures, databases, big data, computational geometry, and optimization), experience in research / architect / technology officer industry roles, and cross-platform experience across pretty much all of the major OSs and implementation languages of choice. Having covered basic sourcing and basic procurement, it’s time to move on to Supplier Management.

But first, what is Supplier Management? Supplier Management, depending on the vendor, is defined as the provision of Supplier Information Management, Supplier Performance Management, and/or Supplier Relationship Management. The question is, do any of these areas contain any technical difficulty?


Supplier Information Management

Technical Challenge: NONE

Let’s face it, supplier information management is just data in, data out. Collect the data, push it into the database, run a report, pull it out. It’s just a database with a pre-defined schema and some fancy, optimized UI for getting the right data to push in and pull out.
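A minimal sketch of just how thin this layer is, using Python’s built-in sqlite3 module; the supplier table and its columns are illustrative:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE supplier (id INTEGER PRIMARY KEY, name TEXT, "
             "duns TEXT, country TEXT, payment_terms TEXT)")
# Data in ...
conn.execute("INSERT INTO supplier (name, duns, country, payment_terms) "
             "VALUES (?, ?, ?, ?)", ("Acme Corp", "123456789", "US", "NET45"))
# ... data out.
rows = conn.execute("SELECT name, payment_terms FROM supplier WHERE country = ?",
                    ("US",)).fetchall()
```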


Supplier Performance Management

Technical Challenge: NONE

Supplier performance management has two parts: performance tracking, done with software, and performance improvement initiatives, identified and managed by humans. The latter can be complex, but since this series is focussed on technical complexity, we will ignore that aspect. As for performance tracking, this is just tracking computed metrics over time. It is essentially information management, but focussed on collected performance data and metrics.
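A minimal sketch of metric tracking over time, in Python with pandas on made-up delivery records; the metric (on-time delivery rate) and column names are illustrative:

```python
import pandas as pd

# Made-up delivery records; performance tracking is just metrics over time.
deliveries = pd.DataFrame({
    "supplier": ["Acme", "Acme", "Beta", "Beta", "Acme"],
    "month":    ["2017-03", "2017-04", "2017-03", "2017-04", "2017-04"],
    "on_time":  [1, 0, 1, 1, 1],
})

otd = (deliveries.groupby(["supplier", "month"])["on_time"]
                 .mean()              # on-time delivery rate per supplier per month
                 .rename("otd_rate")
                 .reset_index())
```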


Supplier Relationship Management

Technical Challenge: NONE

Supplier relationship management is all about managing the relationship. It’s usually done with collaboration (and collaboration software is not technically challenging), development management (lean, six sigma, and other programs), and innovation management (goal definition, initiative tracking, and workflow). All human challenges, not technical challenges.


But does this mean there are no challenges? That depends on whether you are using old definitions or new definitions. A new definition goes beyond the basics and looks to software to guide the future of Supplier Management. And that’s where the challenges come in.

Technical Challenge: Predictive Analytics

Inventory levels, sales, and costs are relatively easy to predict with high accuracy, given enough data, using a suite of trend algorithms. They’re not always right, but they’re right more often than the human “gut” (unless you happen to have a true expert who’s at the top of her league and has been doing it for 20 years, and those are very rare), and that’s all we can expect.
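As a simple illustration, even single exponential smoothing, a textbook technique, gives a serviceable baseline forecast; a minimal sketch in Python on made-up monthly cost data:

```python
import numpy as np

def exp_smooth_forecast(series, alpha=0.3):
    """Single exponential smoothing: a textbook baseline that beats 'gut feel'
    surprisingly often on stable demand or cost series."""
    level = series[0]
    for x in series[1:]:
        level = alpha * x + (1 - alpha) * level
    return level  # one-step-ahead forecast

monthly_cost = np.array([9.8, 10.1, 10.0, 10.4, 10.6, 10.5, 10.9])  # illustrative
print(exp_smooth_forecast(monthly_cost))
```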

But predicting a market trend is different than predicting supplier performance as performance shifts can result from a variety of factors that include, but aren’t limited to, worker problems (such as union strikes), financial problems (which can happen overnight as the result of a massive launch failure, loss, etc.), raw material shortages (as the result of a mine failure, etc.) and so on.

Thus, predicting future performance requires not only tracking performance, but also external market indicators of a financial, regulatory, and incident nature. The latter is particularly tricky as incidents are the result of events that can often only be detected by monitoring news feeds and applying semantic algorithms to the data to identify incidents that can affect future performance. Then, all of this data needs to be integrated to paint a picture that can more accurately predict performance than the predictions made from just monitoring internal data sources.
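To illustrate the flavour of the incident-monitoring piece (in a deliberately crude form), here is a sketch that flags headlines mentioning a monitored supplier together with a risk term; real solutions use far more sophisticated semantic models, and the supplier names and risk terms here are made up:

```python
import re

# Crude stand-in for the semantic layer: flag headlines that mention a monitored
# supplier together with a risk term. The names and terms are made up.
RISK_TERMS = r"strike|bankrupt|recall|fire|shortage|sanction|lawsuit"
SUPPLIERS = {"Acme Corp", "Beta Metals"}

def flag_incidents(headlines):
    flagged = []
    for h in headlines:
        if any(s.lower() in h.lower() for s in SUPPLIERS) and re.search(RISK_TERMS, h, re.I):
            flagged.append(h)
    return flagged

print(flag_incidents(["Union strike halts Acme Corp plant",
                      "Beta Metals posts record profit"]))
```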

In other words, if all you are being sold is a data collection and monitoring tool, it’s not particularly challenging to build (and a business process management / workflow configurator could probably be used to build a prototype with your custom requirements in a week), but if it’s a true, modern performance management solution with integrated predictive analytics to help you identify those relationships at risk, that’s a completely different story.

Next Up: Analytics!