Category Archives: Technology

Two Hundred and Twenty Nine Years Ago Today …

The first foundations were laid for the patent pirates with the issuance of the first U.S. patent, granted to Samuel Hopkins for a potash process. While patents are a necessary evil that protect the investments of real inventors and the corporations that have to spend millions upon millions (sometimes to the tune of hundreds of millions) to produce a truly new technology, software patents are an unnecessary evil that allow the pirates to plunder millions upon millions of dollars from rivals with fundamentally different products (but covered under an interpretation of a sufficiently abstract description) and prevent true innovation in our space.

It’s been a downhill trend ever since the first software patent was issued 51 years, 3 months, and 8 days ago, as the software patent pirates saw their opportunity when the first software patent case went before the courts a mere four years later.

That’s right, we’ve had almost five decades of pirates in cyberspace, and you thought malware was the big problem?

Still Looking for that Supply Management Usability Guide!

Long-time readers will know that there is a lot of advice out there as to what a good Supply Management solution for Sourcing, Procurement, etc. should do — including a lot of coverage of this topic here on SI and over on Spend Matters — but not many guides. And while the doctor did write rather extensively on the topic of usability in Sourcing, Supply Management, Procurement, and P2P over on Spend Matters Pro, there are still very few guides for usability. (Searches in major search engines still turn up very little, even though our first post on the topic here on SI was seven years ago.)

As per our last post, if the provided software was so obvious and easy to use that even a fifth-grader could figure it out, then the issue of “ineffective instructions” is a small one. But the reality is that, even with most platforms that are attempting to adopt consumer-style interfaces, most procurement and logistics software is still reasonably complicated due to the complex nature of what a Procurement or Logistics package capable of supporting global trade needs to do.

The thing is, even though the functionality is well understood, the best way to lay out that functionality, and the underlying workflow, is comparatively not well understood and, unfortunately, if one company builds an interface that is too close to a competitor’s for some standard functionality, then instead of the formation of a standard, in America we get a frivolous lawsuit (courtesy of the patent pirates). So even though there should be design standards, there usually aren’t.

And even when the best-of-breed providers finally figure it out, since most of their UIs are built on decade(s)-old technology, updating the UI is no easy feat. Especially when the new generation of employees, the millennials, are expecting consumer-like interfaces. But who has anything close to this? Coupa, with parts of the core platform (which has been built and re-built repeatedly to be easy to use around core Procurement functionality) and advanced sourcing (built on TESS 6, built from the ground up to be eminently configurable); Zycus, which is on the right path with their dew drop technology, though it will take a while to upgrade the entire platform; Vroozi, whose mobile-first philosophy makes it quite usable for what it does; Keelvar, with their configurable automation-based workflows; and GEP, with their new user-centric UI vision. And those are not just a few examples; that is the majority of examples.

In comparison, in the S2P game, Ivalua is getting close with their configurable workflows, but it’s still not obvious how to configure the platform to make it obvious to junior users; Wax Digital is one platform on one code base and pretty simple (but based on older Microsoft tech that takes time to upgrade); Determine, based on the old b-Pack platform, is very configurable, but it’s older technology and far from a modern look-and-feel; and Synertrade is really outdated (but very powerful).

And if we go beyond the big names to the smaller vendors, except for a few of the newer best-of-breeds, like Bonfire and ScoutRFP, usability has always been a secondary concern, and while a few of the smaller vendors are updating their UIs (like EC Sourcing, which should be much more modern within a year), most vendors are definitely not there yet.

Hence, since most platforms aren’t consumer-like, and not likely to be figured out 100% by junior users without training, we still need that Supply Management Technology usability guide — especially since none of the platforms mentioned above with “modern” interfaces have the same workflows for the processes they support.

And what about the poor organizations who still have a mishmash of five generation-one or generation-two systems with inconsistent interfaces and workflows? What hope do they have of making sense of the full inter-related capabilities of their systems? Very little.

And while the doctor knows more than ever that the very nature of software, which is always evolving, makes such a guide difficult (a challenge compounded by the fact that America still allows software to be patented, so the pirates can plunder), there should at least be some standard workflows and processes that all sourcing, procurement, and logistics software attempts to follow in a reasonably standard way. It would make things easier for all supply chain partners, minimize unnecessary stresses and bumps, and help us evolve the profession as a whole. But alas, it will probably be another seven years before we get close to a real usability guide.

Forty Two Years Ago Today …

The world’s first Global Positioning System (GPS) signal was transmitted from Navigation Technology Satellite 2 (NTS-2) and received at Rockwell Collins in Cedar Rapids, Iowa, at 12:41 a.m. Eastern time.

That’s right! Forty-two years ago, something we couldn’t imagine living without didn’t exist! No way to track our goods in real time. No way to even figure out where we are in real time. No mapping applications on your cell phone. No ride-sharing companies. Etc.

GPS hasn’t even been around as long as many of the supply chain pros who couldn’t imagine a supply chain without it. Think about that!

Big Data: Are You Still Doing it Wrong?

The only buzzword on par with big data is cloud. According to the converted, or should I say the diverted, better decisions are made with better data, and the more data the merrier. This sounds good in theory, but most algorithms that predict demand, acquisition cost, projected sales prices, etc. are based on trends. These days, however, the average market life of a CPG product, especially in electronics or fashion, is six months or less, and the reality is that there just isn’t enough data to predict meaningful trends from. Moreover, in categories where the average lifespan is longer, you only need the data since the last supply/demand imbalance, global disruption, or global spike in demand, as the data before that point is irrelevant to the current trend … unless you are trying to predict a trend shift, in which case you need the data that falls an interval on each side of the trend shift for the last n trend shifts.

And if the price only changes weekly, you don’t need data daily. And if you are always buying from the same geography, dictated by the same market, you only need that market data. And if you are using “market data” but 90% of the market is buying through six GPOs, then you only need their data. In other words, you only need enough relevant data for accurate prediction. Which, in many cases, will just be a few hundred data points, even if you have access to thousands (or tens of thousands, or even hundreds of thousands).
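To make the point concrete, here is a minimal Python sketch of trimming a price history down to the relevant window since the last disruption. The data, dates, and function name are all invented for illustration, not drawn from any real feed.

```python
from datetime import date

# Hypothetical (observation_date, unit_price) history -- purely illustrative.
history = [(date(2018, 1, 1), 10.0),
           (date(2019, 6, 1), 14.0),
           (date(2019, 7, 1), 13.8),
           (date(2019, 8, 1), 13.5),
           (date(2019, 9, 1), 13.4)]

def relevant_window(history, last_disruption):
    """Keep only observations on or after the most recent supply/demand
    disruption -- earlier points describe a trend that no longer holds."""
    return [(d, p) for (d, p) in history if d >= last_disruption]

trimmed = relevant_window(history, date(2019, 6, 1))
# Only the four post-disruption points remain for trend fitting;
# the 2018 point belongs to a dead trend and is discarded.
```

The same filtering idea applies to sampling frequency: if prices move weekly, a weekly resample of daily data loses nothing.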

In other words, big data does not mean good data, and the reality is that you rarely need big data.

But you know that AI doesn’t work without big data? Well, there are two fallacies here.

The first fallacy is that (real) AI exists. As I hope was laid bare in our recent two-week series on Applied Indirection, the best that exists in our space is assisted intelligence (which does nothing without YOUR big brain behind it), and the most advanced technology out there is barely borderline augmented intelligence.

The second fallacy is that you need big data to get results from deep neural networks or other AI statistical or probabilistic machine learning technologies. You don’t … as long as you have selected the appropriate technology, appropriately configured, with a statistically relevant sample pool.
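As an illustration of the principle (and only that), here is a toy nearest-centroid classifier in Python trained on a deliberately tiny but representative sample. The labels and feature values are invented; the point is that a handful of well-chosen points can be enough for a well-matched technique.

```python
# Nearest-centroid classification: each class is summarized by the mean
# of its training points, and a new point gets the label of the nearest mean.
def centroid(points):
    n = len(points)
    return tuple(sum(p[i] for p in points) / n for i in range(len(points[0])))

def train(samples):
    # samples: {label: [feature vectors]} -- an invented two-class sample.
    return {label: centroid(vecs) for label, vecs in samples.items()}

def classify(model, x):
    def dist2(a, b):
        return sum((ai - bi) ** 2 for ai, bi in zip(a, b))
    return min(model, key=lambda label: dist2(model[label], x))

model = train({"low_risk":  [(1.0, 0.9), (1.2, 1.1), (0.8, 1.0)],
               "high_risk": [(4.0, 4.2), (3.8, 3.9), (4.1, 4.0)]})
print(classify(model, (1.1, 1.0)))   # -> low_risk
print(classify(model, (3.9, 4.1)))   # -> high_risk
```

Six training points, perfect separation, zero big data: when the sample pool is statistically representative of the classes, scale adds nothing.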

But here’s the kicker. You have to select the right technology, configure it right and give it the right training set … encoded the right way. Otherwise, it won’t learn anything and won’t do anything when applied. This requires a good understanding of what you’re dealing with, what you’re looking for, and how to process the data to extract, or at least bubble up, the most relevant features for the algorithms to work on. But if you don’t know how to do that, then, yes, you might need hundreds of thousands or millions of data elements and an oversized neural network or statistical classifier to identify all the potentially relevant features, analyze them in different ways, find the similarities that lead to the tightest, most differentiable clusters and adjust all the weights and settings to output that.

But then, as MIT recently published (e.g. MIT, Tech Review), and some of us have known for a long time, many of the nodes in that neural network, calculations in the SVM, etc. are going to be of minimal, near-zero, impact, and up to 90% of the calculations are going to be pretty much unnecessary. [E.g. the doctor saw this when he was experimenting with neural networks in grad school over 20 years ago; but given the lack of processing power (as well as of before-and-after data sets to work on) then versus now, it was a bit of trial and error to reduce network size.] In fact, as the MIT researchers found, you can remove most of these nodes, make minor adjustments to the remaining nodes and network, retrain the network, and get more or less equivalent results with a fraction of the calculations.
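A minimal sketch of the magnitude-pruning idea, assuming a single toy linear layer with invented weights. Real pruning (as in the MIT work) operates on full trained networks and retrains after removal, which this sketch omits; it only shows why near-zero weights contribute near-zero output.

```python
# A toy "layer": two output nodes, four inputs each. Half the weights
# are near zero -- the kind the research says can be removed.
weights = [[0.9,   0.01, -0.02, 0.7],
           [0.005, -0.8,  0.03, 0.6]]

def forward(w, x):
    """Plain matrix-vector product: one weighted sum per output node."""
    return [sum(wi * xi for wi, xi in zip(row, x)) for row in w]

def prune(w, threshold=0.05):
    """Zero out every weight whose magnitude is below the threshold."""
    return [[wi if abs(wi) > threshold else 0.0 for wi in row] for row in w]

x = [1.0, 1.0, 1.0, 1.0]
dense  = forward(weights, x)          # roughly [1.59, -0.165]
sparse = forward(prune(weights), x)   # roughly [1.6,  -0.2]
# Half the multiplications are gone, yet the outputs barely move.
```

In a real network the surviving weights would then be retrained to absorb the small residual error.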

And if you can figure out precisely what those nodes are measuring, extract those features from the data beforehand, create appropriately differentiated metadata fingerprints, and feed those instead to a properly designed neural network or other multi-level classifier, then not only can you get fantastic results with less calculation, but with less data as well.

Great results come from great data that is smartly gathered, processed, and analyzed — not big data thrown into dumb algorithms in the hope for the best. So if you’re still pushing for bigger and bigger data to throw into bigger and bigger networks, you’re doing it wrong. And the only way you can call it AI is if you re-label AI to mean Anti-Intelligence.

AI: Applied Indirection in Contract (Lifecycle) Management

Continuing our exposé of why you should think “Applied Indirection” and not “Any form of Intelligence” when you hear AI (because most solutions claiming to be AI are really just dumb systems with RPA, robotic process automation, and classic statistical models from the 90s), we move on to Contract (Lifecycle) Management which, like analytics, is almost universally touted to have AI, even when there isn’t even a shred of anything that comes close.

This doesn’t mean that there aren’t vendors with true AI in the space, especially when you classify it as Assisted Intelligence (and sometimes even Augmented Intelligence); it’s just that, as the buzz-acronym reaches new heights, there will be many more vendors claiming AI than vendors that actually have it, and you will need to do your homework to find out which is which.

Example #1 of Applied Indirection in C(L)M: Auto-Renewal Detection & Management

Yes, evergreen contracts can be a big problem in Procurement when they have outlived their usefulness, but detecting and managing them is not hard, and certainly doesn’t require any AI whatsoever. All you have to do is flag the contract as “evergreen” or “auto-renew” by checking a box and enter a notice-by date (to prevent an evergreen renewal) as well as the start date and end date, and most contract management platforms can alert you in sufficient time to take action, escalate to your supervisor if you don’t, and kick off a termination process at the push of a button.
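The date-based alerting described above really is just a rule check, as this Python sketch shows. The field names and warning window are illustrative, not any particular platform's schema.

```python
from datetime import date, timedelta

def renewal_alerts(contracts, today, warn_days=60):
    """Return ids of auto-renewing contracts whose notice-by date falls
    within the next warn_days -- a pure rule check, no intelligence."""
    horizon = today + timedelta(days=warn_days)
    return [c["id"] for c in contracts
            if c["auto_renew"] and today <= c["notice_by"] <= horizon]

# Invented contract records for illustration.
contracts = [
    {"id": "C-001", "auto_renew": True,  "notice_by": date(2019, 9, 1)},
    {"id": "C-002", "auto_renew": False, "notice_by": date(2019, 9, 1)},
    {"id": "C-003", "auto_renew": True,  "notice_by": date(2020, 3, 1)},
]
print(renewal_alerts(contracts, date(2019, 8, 1)))  # -> ['C-001']
```

Escalation and termination kick-off are just further rules layered on the same dates.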

For anything close to AI, you need a system that can detect that a contract is evergreen or auto-renewing when there isn’t a spelled-out and easily identified auto-renewal clause that can be found with a simple reg-ex search. For example, when a crafty supplier buries an auto-renewal requirement in the liability section under the notices subsection titled “methods for delivering official notices” which starts off “Official notices shall be sent by X, Y, or Z, to A or B and only treated as an official notice upon proof of receipt. This includes a notice of non-renewal, as the contract will automatically renew 30 days prior to expiry otherwise.” Even a good lawyer might miss that in a fifty-page contract when it’s snuck in on the third revision.
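To show why the simple approach falls short, here is the kind of naive reg-ex scan a non-AI system performs; the pattern and sample texts are invented. It catches an explicitly worded clause but would never connect buried renewal language that avoids the obvious phrasing, which is exactly where real NLP would earn its keep.

```python
import re

# A naive pattern for explicit auto-renewal language.
AUTO_RENEW = re.compile(r"automatic(ally)?\s+renew|auto-?renew(al|s)?",
                        re.IGNORECASE)

explicit = ("This Agreement shall automatically renew for successive "
            "one-year terms unless either party gives notice.")
buried = ("Official notices shall be sent by courier to the addresses "
          "above and treated as received upon proof of delivery.")

print(bool(AUTO_RENEW.search(explicit)))  # -> True
print(bool(AUTO_RENEW.search(buried)))    # -> False (no obvious keywords)
```

A clause that spells out renewal consequences without ever using a "renew" keyword, or splits them across subsections, defeats any such pattern.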

Example #2 of Applied Indirection in C(L)M: Off-Contract Purchasing

Maverick purchasing is a big problem. But it’s not one that you need AI to detect. If you encode all the products, services, and/or categories that should be bought on contract, it’s pretty easy to identify when a purchase for that product, service, or category is not made from the contracted supplier. And if the contract only applies to a region, it’s pretty easy to encode that too, and it’s just a simple check.

And even if you have two or three suppliers on a multi-supplier contract for risk mitigation purposes, then it’s just a matter of making sure that at least one of those suppliers got the purchase and, if each supplier had a geographic area, that the right one got it for that area. Again, simple rule checks. No AI needed.
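Those rule checks fit in a few lines of Python. The contract table, category names, and supplier names below are invented for illustration.

```python
# (category, region) -> the set of approved contract suppliers.
on_contract = {
    ("laptops", "NA"): {"SupplierA", "SupplierB"},
    ("laptops", "EU"): {"SupplierC"},
}

def is_maverick(purchase):
    """True if the category/region is on contract but the purchase went
    to a non-approved supplier. Uncoded purchases return False here --
    detecting those is the genuinely hard part."""
    approved = on_contract.get((purchase["category"], purchase["region"]))
    return approved is not None and purchase["supplier"] not in approved

print(is_maverick({"category": "laptops", "region": "EU",
                   "supplier": "SupplierA"}))  # -> True (wrong region's supplier)
print(is_maverick({"category": "laptops", "region": "NA",
                   "supplier": "SupplierA"}))  # -> False (on contract)
```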

The hard part is detecting that something is off-contract when it is not specifically coded to a contract, either because it’s a new product, it’s missing a category designation, it’s required to hit a volume commitment, and so on. And while this can often be accomplished by identifying the closest on-contract product or service (using a document likeness statistic or a weighted field match), sometimes advanced NLP may be employed for better results (and this would constitute weak AI).
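A hypothetical weighted field match of the kind mentioned above might look like this in Python. The fields, weights, and the token-overlap (Jaccard) similarity are all illustrative choices, not any vendor's actual statistic.

```python
# Per-field weights for the match score -- hand-picked for illustration.
WEIGHTS = {"description": 0.6, "category": 0.3, "uom": 0.1}

def token_overlap(a, b):
    """Jaccard similarity over whitespace tokens, case-insensitive."""
    ta, tb = set(a.lower().split()), set(b.lower().split())
    return len(ta & tb) / len(ta | tb) if ta | tb else 0.0

def likeness(purchase, item):
    """Weighted sum of per-field similarities."""
    return sum(w * token_overlap(purchase[f], item[f])
               for f, w in WEIGHTS.items())

contract_items = [
    {"description": "14 inch business laptop", "category": "it hardware", "uom": "each"},
    {"description": "laser printer toner cartridge", "category": "it supplies", "uom": "each"},
]
# An uncoded purchase: word order differs, but the token sets match.
purchase = {"description": "business laptop 14 inch",
            "category": "it hardware", "uom": "each"}
best = max(contract_items, key=lambda item: likeness(purchase, item))
```

If the best score clears a threshold, the purchase can be provisionally mapped to that contract item for the rule checks above.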

Example #3 of Applied Indirection in C(L)M: Clause Suggestion

On the surface, this sounds pretty smart: point out clauses that should be in my contract to protect me. Under the hood, in most CLM systems that include authoring, it’s basically a set of templates used to specify what to look for in a given contract type, with additions or subtractions for well-defined industries that the provider serves. It’s basically a checklist. And it’s about as dumb as it gets.

Can it be smarter? Of course, but the smarts are more around proper contract identification than clause selection, because the clauses that should be included generally depend first and foremost on the type of contract, secondly on the product or service, and thirdly on the regulations that affect the products and services in the origin country, the destination countries, and any points in between. Only then comes identifying which regulations come into play and which types of clauses will be needed. This requires good NLP, probabilistic selection, and, preferably, adaptive learning that learns over time when Legal or Procurement chooses an alternate clause over a standard clause. A system should have assisted intelligence here to be useful, and augmented intelligence to be truly useful. But few do.
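For contrast, the dumb template/checklist approach described above fits in a few lines. The contract types and clause names below are invented, but this really is the whole trick in many CLM systems.

```python
# Required clause types per contract type -- an invented template table.
REQUIRED_CLAUSES = {
    "services": {"scope", "payment terms", "liability", "termination"},
    "goods":    {"delivery", "payment terms", "warranty", "termination"},
}

def missing_clauses(contract_type, present_clauses):
    """Flag whichever required clauses the draft doesn't contain yet.
    No NLP, no learning -- just set subtraction against a template."""
    required = REQUIRED_CLAUSES.get(contract_type, set())
    return sorted(required - set(present_clauses))

print(missing_clauses("services", ["scope", "payment terms"]))
# -> ['liability', 'termination']
```

Everything beyond this (regulation-aware selection, learning from Legal's overrides) is where actual assisted or augmented intelligence would begin.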

Note that SI is not saying that systems with the non-AI abilities discussed above are not valuable, as any system that automates tactical processes and minimizes non-strategic busy work is valuable. We are just saying you shouldn’t pay for what you’re not getting, or overpay for what you are. Buy what you need, and pay accordingly.