Monthly Archives: November 2018

AI in Procurement Today

As per yesterday’s post, there is no true AI in Procurement, at least with respect to the traditional definition of AI as artificial intelligence. But there is AI out there if you interpret AI as assisted intelligence, and some of it is pretty good.

What is there? If you check the doctor’s 2-part in-depth piece over on Spend Matters on AI in Procurement Today (Part I and Part II) [membership required], you’ll see there are six areas where at least one or two providers add a lot of value. They are:

  • True Automation
  • Smart Auto-Reorder of MRO / retail stock
  • Enhanced Mobile Support
  • Guided (and sometimes Guilted) Buying
  • M-Way Match And Error Prevention
  • Smart (Automatic) Approvals

And, in some cases, a system will integrate its automation, m-way match, and smart approvals to determine when an invoice with a small fluctuation can be automatically paid and when it can’t. For example, when an invoice comes in for services at a rate 10% higher than the last invoice, most m-way match systems would block it and bubble it up to the lead buyer / requisitioner. But a smarter system with integrated checks, behavioural analysis, and a history of override decisions might do the following (sketched in code after the list):

  • check the PO and see it referenced a master contract with an evergreen clause where the original term had expired and the supplier had the right to increase rates up to 15%
  • check the user’s past overrides and see that they generally approve rate increases of 10% or less
  • check the user’s approval authority and see that they have the ability to make that approval
  • calculate the probability of automatic approval by the buyer and if it’s 90% or greater, queue the invoice for automatic payment, with a notification to the user that they may want to explicitly renegotiate the contract as the next invoice from the supplier might be at a 15% increase
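To make that concrete, here’s a minimal sketch, in Python, of how such an integrated check could score an invoice for automatic payment. Everything in it (the Invoice fields, the 15% contract cap, the 90% cut-off, the override history) is a made-up illustration of the logic, not any particular vendor’s implementation.

```python
from dataclasses import dataclass

@dataclass
class Invoice:
    supplier: str
    amount: float
    last_amount: float     # amount of the previous invoice for the same service
    approver_limit: float  # the requisitioner's approval authority

def auto_approve_probability(inv, contract_rate_cap, past_overrides):
    """Estimate how likely the buyer is to approve this increase anyway.

    past_overrides: (increase_pct, approved) pairs from the buyer's
    history of override decisions on similar invoices.
    """
    increase = (inv.amount - inv.last_amount) / inv.last_amount
    # 1. The master contract must permit an increase this large.
    if increase > contract_rate_cap:
        return 0.0
    # 2. The buyer must have the authority to approve the amount.
    if inv.amount > inv.approver_limit:
        return 0.0
    # 3. Approval rate on past overrides at this increase level or above.
    similar = [approved for pct, approved in past_overrides if pct >= increase]
    return sum(similar) / len(similar) if similar else 0.0

# A 10% increase, a 15% evergreen-clause cap, and a buyer who historically
# approved increases like this 90% of the time -> queue for auto-payment.
inv = Invoice("ACME Services", amount=11_000, last_amount=10_000, approver_limit=25_000)
history = [(0.10, True)] * 9 + [(0.12, False)]
if auto_approve_probability(inv, contract_rate_cap=0.15, past_overrides=history) >= 0.90:
    print("queue for automatic payment; notify buyer to renegotiate before the next increase")
else:
    print("block and bubble up to the lead buyer / requisitioner")
```

The point is that every input the check needs (contract terms, override history, approval authority) is data the platform already holds; the assisted intelligence is simply in wiring it together.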

Now, this is not going to help you in all cases, but every time you investigate an overage you can’t do anything about, you waste time, and thus any assisted intelligence solution that can prevent that waste of your time is valuable.

For more details on what the best systems can do today, if you have a Pro membership, the doctor strongly encourages you to check out AI in Procurement Today (Part I and Part II) and find out what your Procurement system should be doing for you.

When A Vendor is Selling (Cognitive) AI, What Are You Really Buying?

AI is the buzzword, or, more precisely, the buzz acronym. Just about every enterprise vendor is claiming they have AI, even if all they have is RPA (and even if what they have is pushing the definition of RPA). However, whether your vendor has AI or not (and the answer is that they probably don’t, as most of the best vendors just have ML, possibly enabled by AR, but probably not), it is coming, and if you don’t adopt (at least) the (precursor) technology available today, your Sourcing and Procurement organization may be left in the dust.

And by now you are probably firmly bamboozled, so let’s set the record straight, starting at the bottom of the AI technology ladder.

At the bottom of the technology ladder we have RPA, short for robotic process automation, which is generally used to automate what would otherwise be very manual processes, usually by way of a rules-based workflow engine.

On the next rung we have ML, short for machine learning, which applies (usually improvements on, or variations of) open-source or standard algorithms that can extract a model from a set of inputs to produce the associated outputs with high probability. The better platforms use machine learning to tune, if not define, the rules used by the workflow engines embedded in the platforms.
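As a purely hypothetical illustration of what “tuning a rule” means in practice, here is a minimal sketch that learns an invoice-variance tolerance from historical accept / reject decisions instead of hard-coding it:

```python
# A minimal, hypothetical sketch of ML "tuning" a workflow rule: instead
# of hard-coding an invoice-variance tolerance, learn the largest variance
# that was historically accepted at least 95% of the time.

def tune_tolerance(history, target_acceptance=0.95):
    """history: list of (variance_pct, accepted) pairs from past reviews."""
    candidates = sorted({pct for pct, _ in history})
    best = 0.0
    for threshold in candidates:
        within = [ok for pct, ok in history if pct <= threshold]
        if within and sum(within) / len(within) >= target_acceptance:
            best = threshold
    return best

history = [(0.01, True), (0.02, True), (0.03, True), (0.05, True),
           (0.08, False), (0.10, True), (0.12, False)]
print(f"auto-accept invoices within {tune_tolerance(history):.0%} of PO value")
```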

Sometimes the mix of ML and RPA is so good for certain, focussed, applications that the platform almost seems intelligent, and this is often what passes for AI these days. But it’s not real artificial intelligence; it’s assisted intelligence, as it helps you do a better job, but your intelligence is still required to identify the right recommendations and approve the right actions.

The next rung up is AR, automated reasoning, which can take a set of assumptions, encodings of logical rules and predictive models, and compute derivations that can surpass even a human expert most of the time for very well (and narrowly) defined applications or problems. It’s basically the modern equivalent of an expert system that can compute millions of inter-related logical inferences until new realizations are discovered.
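To give a feel for the mechanics, here is a toy forward-chaining sketch; a real AR engine does this with encoded predictive models and millions of inter-related inferences, but the shape is the same, and the rules and facts below are invented for illustration.

```python
# Toy forward-chaining inference: apply encoded rules to known facts
# until no new derivations appear. Each rule is (conditions, conclusion).

rules = [
    ({"single_source", "expiring_contract"}, "supply_risk"),
    ({"supply_risk", "long_lead_time"}, "qualify_second_source"),
]
facts = {"single_source", "expiring_contract", "long_lead_time"}

derived = True
while derived:
    derived = False
    for conditions, conclusion in rules:
        if conditions <= facts and conclusion not in facts:
            facts.add(conclusion)
            derived = True

print(facts)  # now includes the derived recommendation: qualify_second_source
```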

The next rung up is the version of AI that exists today, augmented intelligence, which expertly integrates RPA, ML, and AR to produce applications that more-or-less mimic what an expert would do the majority (but not all) of the time. And that allows an organization to automate some low-value tasks that would otherwise require manual effort because, while generally identified as strategic, they were not always worth that effort.

If it existed, the next rung would be the AI that is touted, true artificial intelligence, which does not exist today. (And that’s a good thing, because if there was true AI, would the C-Suite need you? Yes. But would they realize it? Probably not.)

But the final rung, and where everyone wants to get to, is cognitive: AI technology that is not only intelligent, and able to make great decisions unassisted every time, but that makes the decisions the best human buyer would make in every situation, considering all hard and soft variables.

And that’s the technology ladder you are dealing with, and now you know that where you are is likely not where you want to be. But don’t fret, things are getting better. Stay tuned!

… And Stop Paying for More Analysis Software Than You Need!

Yesterday SI featured a guest post from Brian Seipel, who advised you to Stop Paying for More Analysis than You Need because, simply put, a lot of analytics effort and reports yield little to no return. As Brian expertly noted,

  • Sometimes 80% classification at the transactional level is enough
    Especially if you can get 95%+ by supplier or dollar volume (see the sketch after this list). Once it’s easy to see there’s no opportunity in a category (either because it’s all under contract, the spend is low, the spend versus market price on what is classified leaves little savings opportunity, etc.), why classify more?
  • If you are producing a heap of reports on a regular basis, many won’t get looked at
    Especially if the reports aren’t telling you anything new. Plus, as previously explained on SI, a great Spend Analysis Report is useful three times: first, to detect an opportunity; second, midway through a project to capture an identified savings opportunity, to make sure the plan is coming together; and third, at the end of the project, to gauge the realized savings. That’s it.
  • A 20% savings isn’t always meaningful
    You’re probably overspending on office supplies by 20%, but it may not matter. If office supplies spend is only 10K (because you’ve moved to a mostly paperless office thanks to investments in second monitors, tablets, and secure electronic distribution, and janitorial supplies are under MRO), and capturing that 2K would take a week of effort running a simple event and negotiating a master contract when your fully burdened cost is 2K a day, is it worth it? Heck no. You don’t spend 10K to save 2K. It’s all about the ROI.
  • Speculative analysis on categories you have no control over may not pay out
    Just because you can show Marketing they are overspending by 50% doesn’t mean they are going to do anything about it. If they firmly believe you can’t measure talent or impact on a spend basis, and you have no say over the final award, you will be fighting an uphill battle. And while the argument should be made to the C-Suite, it has to come from the CPO, so until she is ready to take that battle on, spending on an analysis whose outcome you can predict from intuition and market analysis is not going to give you the ROI you need today.
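To see why the first point holds, here’s a quick back-of-the-envelope sketch with made-up figures: if you classify the biggest transactions first, 80% coverage by transaction count can easily exceed 95% coverage by dollars.

```python
# Illustration of Brian's first point, on invented figures: classifying
# the biggest transactions first means 80% coverage by count can be
# well over 95% coverage by dollars.

transactions = [50_000] * 100 + [100] * 400   # 100 big line items, 400 small
transactions.sort(reverse=True)

classified = transactions[:400]               # classify 80% by count, biggest first
count_coverage = len(classified) / len(transactions)
spend_coverage = sum(classified) / sum(transactions)
print(f"{count_coverage:.0%} of transactions = {spend_coverage:.2%} of spend")
```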

When you put all this together, this gives you some rules about what you should be looking for, and spending on, when you select an analytics system (especially if you are not a do-it-yourselfer, even though there are systems today that are ridiculously easy to use compared to the reporting systems that first rolled out two decades ago).

  • Don’t overpay for auto-class
    While no one wants to manually classify transactions, a crack analyst, armed with a powerful multi-level rules-based system with regular expression pattern matching, augmented intelligence, and drag-and-drop reclassification capability, can classify a Fortune 500 spend by hand to 95%+ in 2 to 3 days. And considering how easy it is to manually classify straggler transactions once you’ve achieved 90%+ auto-classification to a best-in-class industry categorization (with 95%+ reliability), don’t overpay for auto-class. In fact, don’t pay extra at all: there are a dozen systems with this feature that can get you there (a minimal sketch of the rules-based approach follows this list). Only pay extra for a system that makes it easy to accomplish mappings and re-mappings and maintain them in a consistent and non-conflicting manner.
  • It doesn’t matter how many reports there are out of the box
    Because, once you get through the first set of projects that fix the spend issues identified, they will all be useless anyway. What matters is how many templates there are for customizing your own. It’s all about being able to define the top X from a subset of categories, geographies, suppliers, departments, users, etc. that are likely to contain your best opportunities, not just the top X by spend or transaction volume. It’s about the Shneiderman diagrams and bubble charts on the dimensions that matter on the relevant subset of data. It should be easy to define any type of report you may need to run regularly on whatever filtered subset of data is relevant to you at the time.
  • Totals, CheckSums, and Data Validations Should be Easy
    … and auto-run on every data import. You want to be able to focus your mapping and verification efforts where the spend, and potential opportunity, is large enough to be worth your time; know that the totals add up (to what is expected); and know that the data wasn’t corrupted on export or import. The system should verify that the data is within the appropriate time window, that at least the key dimensions (supplier [id], GL code, etc.) are within expected sets and ranges, and that source system identifiers are present.
  • Built In Category Intelligence is only valuable if you need it
    … don’t pay for community spend intelligence, integrated market feeds, or best-practice templates for categories you don’t source (regularly) or that don’t constitute a significant savings opportunity, especially if those fees are ongoing as part of a subscription. Unless it’s intelligence you will use every month, pay for it as a one-off from a market intelligence vendor that offers that service.
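And here is the minimal sketch of the rules-based mapping approach promised above: ordered regular-expression rules map transaction descriptions to categories, and whatever falls through is a straggler for manual (drag-and-drop) reclassification. The rules and categories are invented for illustration.

```python
import re

# Ordered regex rules: the first pattern that matches a transaction
# description wins. Anything unmatched is a straggler for manual review.
RULES = [
    (re.compile(r"\b(toner|paper|pen|stapler)\b", re.I), "Office Supplies"),
    (re.compile(r"\b(bearing|gasket|lubricant)\b", re.I), "MRO"),
    (re.compile(r"\b(freight|carrier|ltl)\b", re.I), "Logistics"),
]

def classify(description):
    for pattern, category in RULES:
        if pattern.search(description):
            return category
    return "UNCLASSIFIED"  # straggler: route to drag-and-drop reclassification

for desc in ("HP 26A toner cartridge", "SKF 6204 bearing", "catering lunch"):
    print(desc, "->", classify(desc))
```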

The reality is that second generation spend analysis systems are now a commodity, and you can get a great enterprise platform subscription, starting in the low to mid five figures annually, that does more than most organizations need. (And personal consultant licenses to great products for much, much less.) Don’t overpay for the software; save your budget for the analyst who can use it to find you savings.

Stop Paying for More Analysis than you Need


Today we welcome another guest post from Brian Seipel, a Procurement Consultant at Source One Management Services, focused on helping corporations understand their spend profile and develop actionable strategies for cost reduction and supplier relationship management. Brian has a lot of real-world project experience in supply chain distribution and brings some unique insight on the topic.

I wrapped up a large spend analysis initiative recently. The project spanned a dozen international operating companies with over two dozen stakeholders pitching in. By the end, we analyzed roughly one million transactions from dozens of disparate systems. It was a lot of work to be sure, but it also provided an unparalleled view into over $1 billion in spend.

Despite the heavy lift, this analysis was critical. It served as the foundation for identifying strategic sourcing projects slated to save this organization millions. The benefit far outweighed the cost.

This is not always the case.

We live in an age where analytics reign supreme. Some organizations staff whole teams to churn out an uncountable (and maybe uncontrollable) number of spreadsheets, reports, and dashboards filled to the brim with data. Other organizations hire third parties like yours truly or implement state-of-the-art analytics packages to crunch these numbers. Either way, end users are left with more data points than they’d ever care to actually use in their decision-making processes.

I feel like I’ve slammed a lot of hyperbole into a few short paragraphs. Let’s dial it back with a simple statement and follow-up question: even in this data-forward world, organizations need to ensure they’re not wasting valuable resources on analyses that don’t warrant it. So how do we tell which efforts are worth the time?

Let’s break that down into a few more specific questions.

What direct impact are we trying to make?

This sounds like a throw-away question, but it isn’t. Think of the last ten reports you personally handed off to your boss or your boss’ boss. If I were a betting man, I’d say you could take at least one of them out of your weekly stack without the end user even noticing. Why? Because the people consuming these reports are inundated by data. They don’t have time to sift through reports generated for the sake of bureaucracy.

If you can look at a report and not know what specific challenge it helps solve, odds are good the answer is “none.” Sync up with the end user and confirm it provides the value you think it does.

How much of an impact can we expect?

A spend analysis has a clear enough direct impact on a defined challenge – we need to understand where money is going, to which suppliers, at what point in time, in order to identify projects to reduce cost. That said, some spend may not warrant the attention.

This may sound a bit like a “chicken vs. egg” issue, since we often can’t estimate value before we dig into the numbers. That said, we should have a general figure in our mind before investing the time. Saving 20% on office supplies is great when your Staples bill is six figures. Drop that to a few spare thousand every year and the value just isn’t there.

How much buy-in can we expect?

Are relevant stakeholders likely to pursue the projects your analysis shines light on? If not, do you have the leverage, authority, or sheer charm and charisma needed to turn them? I’ve seen plenty of projects die on the vine because of hesitation or outright hostility on the part of key stakeholders. Investing in analytics for projects destined to fail before they start is a sucker’s game.

There’s a decades-gone-by phrase that old timers in the IT industry will recognize: “Nobody ever got fired for buying IBM.” The elements of fear, uncertainty, and doubt that made it effective back then are still relevant today. Think of the last time your office’s internet connection dropped off, even for a few minutes. Were you thinking about the cost savings your new provider offers? Cost savings may be good, but IT knows reliable uptime is better and is what makes or breaks them.

How deep does our dive need to be?

It pays to get down into the weeds when creating a spec list or generating an in-depth market basket. Once you’ve established the value of a project, it makes sense to invest in it by pulling the devil out of the details. Ending on a detailed note doesn’t mean we need to start the same way, though.

I pick on office supplies a lot when giving an example here. Let’s go back to that six figure Staples spend from earlier. How many pens, pencils, dry erase markers, reams of paper, and other supplies make up that figure? We’re looking at potentially thousands of line items. Remember the goal of our spend analysis – identify projects that can lead to cost savings. Do we really care about each individual line item right now? Will knowing how many black ballpoints versus blue felt tips make project identification easier? No – in fact, spending too much time on this granular detail now will only waste time and lead to potential lost opportunity costs.


I understand the knee-jerk reaction to traverse that DIKW (Data-Information-Knowledge-Wisdom) pyramid, I really do. It often is the right call. At the same time, there’s something to be said for taking a step back and looking at the bigger picture.

Every action we take needs to have purpose. Don’t waste time on a report today just because you ran it yesterday. Understand how your analysis fits into your organization’s goals and, if you find it doesn’t, cut ties so you can focus on more impactful endeavours.

Thanks, Brian!

Detecting that Fraud Permeating Your Supply Chain! Part II

As per a recent post, fraud is permeating your supply chain and your current iZombie platform needs to take a lot of the blame as it lulls you into a false sense of security when it should be sounding all the warning bells and sirens at its disposal.

So what kind of platform do you need?

As per our last post, simply put, a platform with good market intelligence, encoded expert intelligence, (hybrid) AI algorithms, and other modern features that can detect common types of fraud and stop them dead in their tracks. To give you a better idea of what these platforms look like, we’re going to address more types of fraud an organization may encounter and what a platform would need to detect them.

Abnormal Vendor Selection

In our last post we talked about how a good platform can detect unacceptable cost inflation via metric inflation designed to target a certain supplier. This could be done for many reasons — direct or indirect kickbacks to the buyer, financial gain to the immediate or extended family of the buyer, a tit-for-tat arrangement (where the supplier agrees to select a vendor chosen by the buyer that will directly or indirectly benefit the buyer).

But not all abnormal vendor selection is done by way of metric inflation. Some is done by weighting a particular geography, a particular type of responsibility or compliance program, a particular association, or something else unusual that will steer the award to a particular vendor that would not normally be used.

A good platform with good analytics and machine learning can detect when unusual characteristics are applied to vendor selection.
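As a minimal sketch of what that detection could look like (on made-up data): compare each criterion’s weight in a new event against the distribution of weights used in past events for the category, and flag the outliers.

```python
from statistics import mean, stdev

# Invented figures: geography was historically weighted ~5% in this
# category's events; a new event suddenly weights it 30%.
past_geography_weights = [0.05, 0.04, 0.06, 0.05, 0.05, 0.04]
new_weight = 0.30

mu, sigma = mean(past_geography_weights), stdev(past_geography_weights)
z = (new_weight - mu) / sigma
if z > 3:  # far outside the historical norm
    print(f"flag: geography weighted {new_weight:.0%} vs. norm {mu:.0%} (z={z:.1f})")
```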

Unusual Payment Patterns

Just because there is an invoice that is accepted against a (blanket) PO, or for a category / amount that does not require a PO, and that is approved by a senior manager or director, that doesn’t mean the payment is okay. But a single fraudulent payment is hard to detect. However, if similar payments show up over and over again, and they are not regular recurring payments like rent, utilities, or predictable support services, it might be an indicator of fraud. A good platform will be able to classify and detect repeating payments of this type that are not expected.

This requires good trend analysis applied to non-PO categories not identified as having regular payments of a specific type.
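A minimal sketch of that check, on made-up data: group non-PO payments by supplier and rounded amount, and flag repeats from suppliers with no known recurring-payment category.

```python
from collections import Counter

# Invented payment history; suppliers with legitimate recurring payments
# (rent, utilities, predictable support services) are whitelisted.
payments = [("Acme Consulting", 4_990), ("Acme Consulting", 4_990),
            ("Acme Consulting", 4_990), ("City Utilities", 1_200),
            ("City Utilities", 1_180)]
known_recurring = {"City Utilities"}

counts = Counter((supplier, round(amount, -2)) for supplier, amount in payments)
for (supplier, amount), n in counts.items():
    if n >= 3 and supplier not in known_recurring:
        print(f"flag: {n} repeated ~{amount} payments to {supplier}")
```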

Too Frequent (Automatic) Order Triggers

When a contract for a category is cut, there is an expected demand against an expected order schedule. As a result, there are expected (re)order schedules that shouldn’t vary too much. If they do, either someone is adjusting minimum stock-on-hand levels or a POS is submitting sales numbers that are higher than actuals to cause too-frequent re-orders. But since a good system can compare planned schedules, and expected schedules adjusted for market conditions, against actuals, this can be detected.

Again, good analytics with dynamic trend analysis against plans and modified plans based on market conditions derived from market data.
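A minimal sketch of the schedule comparison, again on made-up data (a real system would also adjust the planned cycle for market-driven demand):

```python
from statistics import median

planned_cycle_days = 30                              # from the contract / plan
days_between_orders = [29, 31, 30, 14, 13, 15, 14]   # actuals, oldest to newest

recent = days_between_orders[-4:]
if median(recent) < 0.6 * planned_cycle_days:        # intervals collapsed vs. plan
    print(f"flag: reordering every ~{median(recent):.0f} days vs. a planned "
          f"{planned_cycle_days}; check stock-on-hand levels and POS feeds")
```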

Lost Returns

If a higher than usual number of products get marked as defective, but a considerable percentage of these don’t make it back to the supplier for credit, that’s typically indicative of fraud. Typically, someone, somewhere is marking good products bad, marking them to be returned, but then ensuring they go missing somewhere along the line. It’s usually a case of one high-value product at a time.

But a platform that maintains a record of average defect rates by category (and supplier), average return success by category (and supplier), and average return success for the organization can compute when theft is very likely.

Analysis of rates against expected rates and identification of unusual deviations.
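A minimal sketch of that computation, on made-up figures:

```python
# Flag when defects spike above the category norm while the share of
# returns actually credited by the supplier falls below the norm.
avg_defect_rate, avg_return_success = 0.02, 0.97   # category baselines
marked_defective, shipped = 120, 2_000             # this period
credited_by_supplier = 70                          # returns that made it back

defect_rate = marked_defective / shipped
return_success = credited_by_supplier / marked_defective
if defect_rate > 2 * avg_defect_rate and return_success < 0.9 * avg_return_success:
    print(f"flag: defect rate {defect_rate:.1%} (norm {avg_defect_rate:.1%}), "
          f"only {return_success:.0%} of returns credited (norm {avg_return_success:.0%})")
```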

Fixed Asset Fraud

If the platform contains complete service history, industry metrics for average service requirements for the asset by hour of use, and average upkeep and overhead costs, and all of a sudden the service requirements and upkeep costs double for the recorded hours of use, then there is a good chance that the asset is being used for non-sanctioned purposes. This is still fraud, and theft from the company.

Analysis of costs and life-spans against expected costs and life-spans, and identification of costly deviations.
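And a minimal sketch of that check, on made-up figures:

```python
# Flag when service and upkeep cost per *recorded* hour of use jumps
# well above the asset's baseline, suggesting unrecorded off-book use.
baseline_cost_per_hour = 12.50           # from service history / industry metrics
period_costs, recorded_hours = 4_000, 160

cost_per_hour = period_costs / recorded_hours
if cost_per_hour > 1.5 * baseline_cost_per_hour:
    print(f"flag: {cost_per_hour:.2f}/recorded hour vs. baseline "
          f"{baseline_cost_per_hour:.2f}; asset may be used for non-sanctioned work")
```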


And again, while platforms aren’t the entire answer, as they might not be able to pinpoint whether it is a warehouse worker, a carrier (driver), or collusion between the two in “lost” return theft, they can certainly detect quickly when fraud is happening, and then the organization can take steps to identify the perpetrator(s).