
Your Procurement New Year Resolutions

To save you time, the doctor has updated his short-list of the most important resolutions.

1. I WILL NOT READ PREDICTION ARTICLES

As the doctor has stated many, many times, most predictions are old news or remanufactured shoes, as clearly explained in our long series on The Future of Procurement, where we tackled the same predictions you hear year after year after year and explained how some are, sadly, as old as commerce itself. Thus, there is no need to waste your time on them.

2. I WILL IMPLEMENT A BoB PLATFORM (FOUNDATION)

Last year we advised you to implement at least one new BoB Module or System, because, even if your organization is in the Hackett Group top 8%, the doctor can guarantee that there is at least one major Supply Management system or Source to Pay module you are missing (or lacking critical functionality in). In order to do a great job, you need a great system.

But, in addition to a great system, you need a great platform with a centralized data store and back-office capability, because the best system in the world is useless unless the right people are using it and everyone is working off the same data and results, with access appropriate to their role.

And very few organizations have a shared platform for S2C activities, and even fewer for P2P. And while the doctor encourages using BoB platforms, as they can generate spectacular results when properly applied, those results need to be realized, which can’t happen unless the right employees can access the data in the applications they need to use. This means everything needs to be connected. This requires a platform.

Moreover, it needs to be a platform with an open API, an open data store, and the ability to act as the master data store for all entities and transactions in the platform. Anything less is not a platform.

3. I WILL CONTINUE TO IMPROVE AT LEAST ONE TIME-CONSUMING TACTICAL PROCESS PER QUARTER

There is absolutely no value in tactical work. This is where you hand over as much as you can to the machine that can do it faster, better, and cheaper than you. You can’t do millions of calculations and comparisons a second; it can. You can’t consolidate data from 20 different sources into a 20-page report in less than a minute; it can. Plus, as per the doctor’s series on AI in Procurement (Today Part I, Part II; Tomorrow Part I, Part II, Part III; and The Day After Tomorrow) over on Spend Matters Pro and his upcoming series on AI in Sourcing over on Spend Matters Pro [membership required], now that assisted intelligence is widely available, and augmented intelligence is coming, there’s no excuse to do unnecessary tactical work.

Plus, as we clearly indicated last year, what you need to focus on is strategic work: analyzing the top recommendations that come out of the Cognitive Procurement system to make sure they make sense, that the system didn’t miss anything, and that they work for your organization; then figuring out if you have the experience and expertise to ignore a system market-buy recommendation and go negotiate a better deal with top (incumbent) suppliers, because your 20 years of insights give you an edge that cannot be encoded, or if the projected results from a market auction with the top 6 suppliers are better than your team would ever do with their complete lack of category experience. Your value is your ability to use your intelligence, not your ability to push paper. Let the dumb machines do that, and do what you were hired for!

Domo Arigato, Mr. Roboto Patoron!

A decade ago, Sourcing Innovation published a piece on how Every Check Has a Cost, which echoed a point made by Paul Graham: one of the big differences between big companies and startups is that big companies have developed procedures to protect themselves against mistakes, while a startup walks like a toddler, bashing into things and falling over all the time. As a result, over time, a company gradually puts in place rules, procedures, and associated checks and balances to prevent it from falling over itself, especially when a fall results in a mini-disaster (such as a contracted supplier going bankrupt).

Thus, as the company grows, it will invariably accumulate more checks, either as responses to disasters or as a result of hiring people from bigger companies who bring more checks with them to protect against disasters that have not yet happened (and may never happen).

But this isn’t necessarily a good thing. Unnecessary checks cost time to document, implement, support, and maintain, especially if a check is for a situation that is unlikely to happen, or one that, when it happens, will cost the company less than the check and balance it has to go through day in and day out. Checking the references and solvency of an office supplies, furniture, or off-the-shelf electronics provider, for example. Who cares? If one goes out of business, there are ten more down the street.

Or mandating committee review and on-site demos for what should be a $10,000 piece of software. As described in our classic piece, the more expensive you make a sale, the more expensive that sale is going to be. If it costs a vendor $30,000 to sell you what should be a $10,000 piece of software, they’re going to charge you $50,000: $10,000 for the software, $30,000 for the cost of sale, $5,000 for the additional support they expect, and an extra $5,000 to make up for the commissions they are losing while they spend all that time with you.

Similarly, it’s costly to have a manager check every purchase over $250 made by an employee just because someone arbitrarily decided that should be the threshold.

Ten years ago we said review all your checks and balances and get rid of the ones that don’t make any sense or cost you more than they would save in the worst case.

But now we are saying don’t just get rid of those; get rid of ANY manual check that doesn’t add value the majority of the time, and replace it with an automated system check, backed by RPA (robotic process automation) and AI (assisted intelligence), that determines whether or not there is enough risk to warrant a manual check.

With good risk models, good training data (common situations where a “mandatory” check resulted in an approval, common situations where a “mandatory” check resulted in a denial, and exceptional situations where a “mandatory” check resulted in a request for more information from the approver), good budget/spend data, contract/catalog data (and preferred suppliers), and organizational hierarchies (with well-defined roles), a system can not only map each request into a (definite) yes, (definite) no, more information, or forced manual review bucket, but also improve its knowledge of typical organizational purchase and approval patterns over time and reduce the number of manual checks to those situations that are truly risky or truly unclear, which is, to be precise, the only time a check should be applied. (And over time, it will be able to suggest better and better check rules that help an organization understand what, and only what, it should truly be checking.)
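
To make this concrete, here is a minimal sketch (in Python) of the bucketing idea, assuming a hypothetical request format, hypothetical thresholds, and a risk score already produced by whatever risk model the organization has trained; it is an illustration of the routing, not a reference implementation.

# Minimal sketch: route a purchase request into one of four buckets using a
# risk score plus a few organizational signals. All field names, thresholds,
# and the risk model itself are hypothetical placeholders.

AUTO_APPROVE, AUTO_DENY, REQUEST_INFO, MANUAL_REVIEW = (
    "auto_approve", "auto_deny", "request_info", "manual_review")

def route_check(request, risk_score, preferred_suppliers, budget_remaining):
    """Decide whether a manual check is actually warranted."""
    # Clearly over budget: no human time needed to say no.
    if request["amount"] > budget_remaining:
        return AUTO_DENY
    # Low risk and a preferred supplier: approve automatically.
    if risk_score < 0.2 and request["supplier_id"] in preferred_suppliers:
        return AUTO_APPROVE
    # Mid risk with missing context: ask the requester for more information
    # before burning an approver's time.
    if risk_score < 0.6 and not request.get("justification"):
        return REQUEST_INFO
    # Only the truly risky or truly unclear cases reach a human.
    return MANUAL_REVIEW

# Example with made-up data:
req = {"supplier_id": "S-104", "amount": 180.0, "justification": ""}
print(route_check(req, risk_score=0.15, preferred_suppliers={"S-104"},
                  budget_remaining=5000.0))   # -> auto_approve

The point is simply that the human is invoked last, not first.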

And when you implement the right software to automate these mostly unnecessary checks (on the road to eliminating them), just like you can slowly take the foam off the table corners and the training wheels off the bike, you will grow up as a purchasing organization and, after finally finding a proper use for RPA and AI, you will say:

Domo Arigato, Mr. Roboto Patoron!

… And Stop Paying for More Analysis Software Than You Need!

Yesterday SI featured a guest post from Brian Seipel who advised you to Stop Paying for More Analysis than You Need because, simply put, a lot of analytics effort and reports yield little to no return. As Brian expertly noted,

  • Sometimes 80% classification at the transactional level is enough
    Especially if you can get 95%+ by supplier or dollar volume. Once it’s easy to see there’s no opportunity in a category (because it’s all under contract, the spend is low, the spend versus market price on what is already classified leaves little savings opportunity, etc.), why classify more?
  • If you are producing a heap of reports on a regular basis, many won’t get looked at
    Especially if the reports aren’t telling you anything new. Plus, as previously explained on SI, a great Spend Analysis Report is useful three times: first, to detect an opportunity; then, midway through a project to capture an identified savings opportunity, to make sure the plan is coming together; and finally, at the end of the project, to gauge the realized savings. That’s it.
  • A 20% savings isn’t always meaningful
    You’re probably overspending on office supplies by 20%, but it may not matter. If office supplies spend is only 10K (because you’ve moved to a mostly paperless office thanks to investments in second monitors, tablets, and secure electronic distribution, and janitorial supplies sits under MRO), and capturing that 2K would take a week of effort running a simple event and negotiating a master contract when your fully burdened cost is 2K a day, is it worth it? Heck no. You don’t spend 10K to save 2K. It’s all about the ROI (see the quick calculation after this list).
  • Speculative analysis on categories you have no control over may not pay out
    Just because you can show Marketing they are overspending by 50% doesn’t mean they are going to do anything about it. If they firmly believe you can’t measure talent or impact on a spend basis, and you have no say over the final award, you will be fighting an uphill battle. While the argument should be made to the C-Suite, it has to come from the CPO, so until she is ready to take that battle on, spending on an analysis whose outcome you can predict from intuition and market knowledge is not going to give you the ROI you need today.
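
To make the ROI point above concrete, here is the office-supplies arithmetic as a quick sketch; the figures mirror the bullet and are illustrative only.

# Quick ROI check for the office-supplies example above. The figures mirror
# the bullet (10K spend, ~20% overspend, a week of effort at 2K/day fully
# burdened); they are illustrative, not benchmarks.

category_spend    = 10_000
overspend_rate    = 0.20
effort_days       = 5
burdened_day_rate = 2_000

projected_savings = category_spend * overspend_rate     # 2,000
cost_of_effort    = effort_days * burdened_day_rate     # 10,000
roi               = projected_savings / cost_of_effort  # 0.2

print(f"Savings {projected_savings:,.0f} vs. effort {cost_of_effort:,.0f} "
      f"(ROI {roi:.0%}) -> {'pursue' if roi > 1 else 'skip'}")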

When you put all this together, you get some rules about what you should be looking for, and spending on, when you select an analytics system (especially if you are not a do-it-yourselfer, even though there are systems today that are ridiculously easy to use compared to the reporting systems that first rolled out two decades ago).

  • Don’t overpay for auto-class
    While no one wants to manually classify transactions (even though a crack analyst can classify a Fortune 500 spend by hand to 95%+ in 2 to 3 days with a powerful multi-level rules-based system with regular expression pattern matching, augmented intelligence, and drag-and-drop reclassification capability), considering how easy it is to manually classify straggler transactions once you’ve achieved 90%+ auto-classification to a best-in-class industry categorization (with 95%+ reliability), don’t overpay for auto-class. In fact, don’t pay extra at all; there are a dozen systems with this feature that can get you there (a sketch of this kind of rules-based mapping follows this list). Only pay extra for a system that makes it easy to accomplish mappings and re-mappings and maintain them in a consistent and non-conflicting manner.
  • It doesn’t matter how many reports there are out of the box
    Because, once you get through the first set of projects that fix the spend issues identified, they will all be useless anyway. What matters is how many templates there are for customizing your own. It’s all about being able to define the top X from a subset of categories, geographies, suppliers, departments, users, etc. that are likely to contain your best opportunities, not just the top X spend or transaction volume. It’s about the Schneidermann diagrams and bubble charts on the dimensions that matter on the relevant subset of data. It should be easy to define any type of report you may need to run regularly on whatever filtered subset of data that is relevant to you at the time.
  • Totals, CheckSums, and Data Validations Should be Easy
    … and auto-run on every data import. You want to be able to focus your mapping and verification efforts where the spend, and potential opportunity, is large enough to be worth your time, know that the totals add up (to what is expected), and know that the data wasn’t corrupted on export or import. The system should verify that the data is within the appropriate time window, that at least the key dimensions (supplier [id], GL code, etc.) are within expected sets and ranges, and that source system identifiers are present.
  • Built In Category Intelligence is only valuable if you need it
    … don’t pay for community spend intelligence, integrated market feeds, or best-practice templates for categories you don’t source (regularly) or that don’t constitute a significant savings opportunity, especially if those fees are ongoing as part of a subscription. Unless it’s intelligence you will use every month, pay for it as a one-off from a market intelligence vendor that offers that service.
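
As promised above, here is a minimal sketch of what a multi-level, rules-based mapping with regular expression pattern matching can look like; the rules, categories, and transactions are hypothetical, and a real tool would layer augmented intelligence and drag-and-drop reclassification on top.

import re

# Minimal sketch of rules-based auto-classification: rules are applied in
# priority order (supplier-level rules first, then description patterns), and
# anything left unmatched falls into a straggler bucket for manual mapping.
# Patterns, categories, and transactions are hypothetical.

RULES = [
    # (field, regex, category) -- evaluated top to bottom, first match wins
    ("supplier",    r"^STAPLES|^OFFICE DEPOT",   "Office Supplies"),
    ("supplier",    r"^DELL|^LENOVO",            "IT Hardware"),
    ("description", r"\btoner\b|\bpaper\b",      "Office Supplies"),
    ("description", r"\bfreight\b|\bcarriage\b", "Logistics"),
]

def classify(txn):
    for field, pattern, category in RULES:
        if re.search(pattern, txn.get(field, ""), re.IGNORECASE):
            return category
    return "UNCLASSIFIED"

transactions = [
    {"supplier": "Staples Inc", "description": "copy paper", "amount": 450.0},
    {"supplier": "Acme Shipping", "description": "carriage charges", "amount": 1200.0},
    {"supplier": "Unknown Co", "description": "misc", "amount": 75.0},
]

classified = [(t, classify(t)) for t in transactions]
coverage = (sum(t["amount"] for t, c in classified if c != "UNCLASSIFIED")
            / sum(t["amount"] for t in transactions))
print(f"{coverage:.0%} of spend auto-classified")   # 96% in this toy data set

The same structure makes re-mappings easy to maintain: change a rule, re-run, and measure the coverage again.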

The reality is that second generation spend analysis systems are now a commodity, and you can get a great enterprise platform subscription, starting in the low to mid five figures annually, that does more than most organizations need. (And personal consultant licenses to great products for much, much less.) Don’t overpay for the software; save your money for the analyst who can use it to find you savings.

Detecting that Fraud Permeating Your Supply Chain! Part II

As per a recent post, fraud is permeating your supply chain and your current iZombie platform needs to take a lot of the blame as it lulls you into a false sense of security when it should be sounding all the warning bells and sirens at its disposal.

So what kind of platform do you need?

As per our last post, simply put, a platform with good market intelligence, encoded expert intelligence, (hybrid) AI algorithms, and other modern features that can detect common types of fraud and stop it dead in its tracks. To give you a better idea of what these platforms look like, we’re going to address more types of fraud an organization may encounter and what a platform would need to detect it.

Abnormal Vendor Selection

In our last post we talked about how a good platform can detect unacceptable cost inflation via metric inflation designed to target a certain supplier. This could be done for many reasons — direct or indirect kickbacks to the buyer, financial gain to the immediate or extended family of the buyer, a tit-for-tat arrangement (where the supplier agrees to select a vendor chosen by the buyer that will directly or indirectly benefit the buyer).

But not all abnormal vendor selection is done by way of metric inflation. Some is done by weighting a particular geography, a particular type of responsibility or compliance program, a particular association, or something else unusual that will steer the selection toward a particular vendor that would not normally be used.

A good platform with good analytics and machine learning can detect when unusual characteristics are applied to vendor selection.

Unusual Payment Patterns

Just because an invoice is accepted against a (blanket) PO, or for a category / amount that does not require a PO, and is approved by a senior manager or director, that doesn’t mean the payment is okay. A single fraudulent payment is hard to detect. However, if similar payments show up over and over again, and they are not regular recurring payments like rent, utilities, or predictable support services, it might be an indicator of fraud. A good platform will be able to classify and detect repeating payments of this type that are not expected.

This requires good trend analysis applied to non-PO categories not identified as having regular payments of a specific type.
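
As a rough sketch of that kind of trend analysis, the following groups non-PO invoices by supplier and approximate amount and flags repeats in categories that are not expected to recur; the field names, whitelist, and thresholds are all hypothetical.

from collections import defaultdict

EXPECTED_RECURRING = {"Rent", "Utilities", "Support Services"}

def recurring_payment_flags(invoices, min_occurrences=3):
    """Group non-PO invoices by supplier and approximate amount; flag repeats."""
    groups = defaultdict(list)
    for inv in invoices:
        if inv["po_number"] is None and inv["category"] not in EXPECTED_RECURRING:
            # Bucket by supplier and amount rounded to the nearest hundred so
            # near-identical payments group together.
            bucket = (inv["supplier"], round(inv["amount"], -2))
            groups[bucket].append(inv)
    return {bucket: invs for bucket, invs in groups.items()
            if len(invs) >= min_occurrences}

invoices = [
    {"supplier": "Acme Consulting", "amount": 4_980,
     "category": "Professional Services", "po_number": None}
    for _ in range(4)
]
for (supplier, approx_amount), hits in recurring_payment_flags(invoices).items():
    print(f"Review: {len(hits)} non-PO payments of ~{approx_amount:,.0f} to {supplier}")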

Too Frequent (Automatic) Order Triggers

When a contract for a category is cut, there is an expected demand against an expected order schedule. As a result, there are expected (re)order schedules that shouldn’t vary too much. If they do, either someone is adjusting minimum stock-on-hand levels or a POS is submitting sales numbers that are higher than actuals to cause too-frequent re-orders. But since a good system can compare planned schedules, and expected schedules adjusted for market conditions, to actuals, this can be detected.

Again, good analytics with dynamic trend analysis against plans and modified plans based on market conditions derived from market data.

Lost Returns

If a higher than usual number of products get marked as defective, but a considerable percentage of these don’t make it back to the supplier for credit, that’s typically indicative of fraud. Someone, somewhere is marking good products bad, marking them to be returned, and then ensuring they go missing somewhere along the line. Usually one case of high-value product at a time.

But a platform that maintains a record of average defect rates by category (and supplier), average return success by category (and supplier), and average return success for the organization can compute when theft is very likely.

Analysis of rates against expected rates and identification of unusual deviations.
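
A minimal sketch of that rate analysis, assuming hypothetical baseline rates, tolerances, and data, might look like this:

# Minimal sketch: flag category/supplier pairs where the share of "defective"
# returns that never arrive back at the supplier is well above the historical
# baseline. Baseline, tolerance, and data are hypothetical.

def lost_return_flags(return_stats, baseline_loss_rate=0.02, tolerance=2.0):
    flags = []
    for row in return_stats:
        lost = row["returns_initiated"] - row["returns_received_by_supplier"]
        loss_rate = lost / row["returns_initiated"]
        if loss_rate > baseline_loss_rate * tolerance:
            flags.append((row["category"], row["supplier"], loss_rate))
    return flags

return_stats = [
    {"category": "Laptops", "supplier": "TechSource",
     "returns_initiated": 120, "returns_received_by_supplier": 96},
    {"category": "Cables", "supplier": "WireCo",
     "returns_initiated": 200, "returns_received_by_supplier": 197},
]
for category, supplier, rate in lost_return_flags(return_stats):
    print(f"Investigate {category}/{supplier}: "
          f"{rate:.0%} of returns never reached the supplier")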

Fixed Asset Fraud

If the platform contains the complete service history, industry metrics for average service requirements for the asset by hour of use, and average upkeep and overhead costs, and all of a sudden the service requirements and upkeep costs double for the recorded hours of use, then there is a good chance that the asset is being used for non-sanctioned purposes. This is still fraud and theft from the company.

Analysis of costs and life-spans against expected costs and life-spans and identification of costly deviations.


And again, while platforms aren’t the entire answer, as they might not be able to pinpoint whether it is a warehouse worker, a carrier (driver), or collusion between the two in “lost” return theft, they can certainly detect quickly when the fraud is happening, and then the organization can take steps to identify the perpetrator(s).

Detecting that Fraud Permeating Your Supply Chain!

As per our last post, fraud is permeating your supply chain and your current iZombie platform needs to take a lot of the blame as it lulls you into a false sense of security when it should be sounding all the warning bells and sirens at its disposal.

So what kind of platform do you need?

Simply put, a platform with good market intelligence, encoded expert intelligence, (hybrid) AI algorithms, and other modern features that can detect common types of fraud and stop it dead in its tracks. To give you a better idea of what these platforms look like, we’re going to address each type of fraud an organization may encounter and what a platform would need to detect it.

Unacceptable Cost Inflation via Metric Inflation

If the platform monitors all historical performance metrics and computes trends, it will be able to detect when a quality or reliability metric is out of whack.

If the platform also monitors market costs for the product or raw material at different volume tiers, it will be able to detect when a cost is most likely more than a percentage point above the market average.

If the platform uses smart algorithms, it will be able to compute a high probability of something being off when the two factors coincide on a category being sourced and alert a senior manager or executive to explore and verify the situation before a buy is made.
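
A minimal sketch of how those two signals could be combined, with hypothetical metrics, thresholds, and figures:

from statistics import mean, stdev

# Minimal sketch of the two-signal check described above: flag an in-flight
# sourcing event when (a) a supplier's scored metric jumps well outside its
# own historical trend and (b) the quoted price sits above the market average.
# Metrics, thresholds, and figures are hypothetical.

def metric_out_of_trend(history, current, z_threshold=2.0):
    mu, sigma = mean(history), stdev(history)
    return sigma > 0 and abs(current - mu) / sigma > z_threshold

def price_above_market(price, market_avg, pct_threshold=0.01):
    return price > market_avg * (1 + pct_threshold)

def needs_executive_review(metric_history, current_metric, price, market_avg):
    # Either signal alone may be noise; together they warrant a human look.
    return (metric_out_of_trend(metric_history, current_metric)
            and price_above_market(price, market_avg))

# A quality score suddenly jumps while the quoted price is 12% over market:
print(needs_executive_review(
    metric_history=[82, 84, 83, 85, 84], current_metric=97,
    price=11.2, market_avg=10.0))   # -> True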

Double Fuel Surcharges

A good platform will also integrate with fuel price indices and transportation exchanges and know the average surcharge on fuel for any given region as well as the limits imposed by the organizational contract and immediately detect when a surcharge is out-of-whack, unjustified, or against the contract and prevent a buyer or AP professional from paying the invoice until it is corrected.
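
A minimal sketch of that surcharge validation, assuming a hypothetical regional index feed, contract cap, and invoice fields:

# Minimal sketch: validate a freight invoice's fuel surcharge against an
# index-derived expected rate for the lane's region and the cap in the
# carrier's contract. The index values, cap, and fields are hypothetical.

def surcharge_exception(invoice, index_rate_by_region, contract_cap_pct):
    expected = index_rate_by_region[invoice["region"]]
    applied = invoice["fuel_surcharge"] / invoice["base_freight"]
    if applied > contract_cap_pct:
        return "surcharge exceeds contractual cap -- hold payment"
    if applied > expected * 1.10:   # more than 10% above the regional index
        return "surcharge out of line with fuel index -- hold payment"
    return None   # nothing out of the ordinary

invoice = {"region": "US-Midwest", "base_freight": 2_000.0, "fuel_surcharge": 460.0}
print(surcharge_exception(invoice, {"US-Midwest": 0.18}, contract_cap_pct=0.20))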

Duplicate Invoices

When an invoice comes in, a smart platform will not only ensure there is a corresponding PO before it is accepted, but also that the total sum of invoices against the PO doesn’t exceed the total value of the PO (and that the total number of any unit invoiced doesn’t exceed the maximum authorized amount). Furthermore, it will not allow payment until the total sum of unpaid goods received at least equals the amount invoiced. This will not only make it easy for a human to identify duplicate invoices (where only the invoice number is changed) but also duplicate billings, where similar invoices (for unshipped goods) are submitted with only minor changes.
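
A minimal sketch of those invoice-versus-PO gates, with hypothetical field names and data:

# Minimal sketch of the invoice-versus-PO gates described above: cumulative
# invoiced value cannot exceed the PO value, payment is held until unbilled
# goods receipts cover the invoice, and same-amount invoices with different
# numbers are surfaced as possible duplicates. Field names are hypothetical.

def invoice_exceptions(invoice, po, prior_invoices, receipts):
    exceptions = []
    invoiced_so_far = sum(i["amount"] for i in prior_invoices)
    if invoiced_so_far + invoice["amount"] > po["total_value"]:
        exceptions.append("cumulative invoices exceed PO value")
    received_unbilled = sum(r["value"] for r in receipts if not r["billed"])
    if received_unbilled < invoice["amount"]:
        exceptions.append("invoiced amount exceeds unbilled goods received")
    if any(i["amount"] == invoice["amount"] and i["number"] != invoice["number"]
           for i in prior_invoices):
        exceptions.append("possible duplicate: same amount, different invoice number")
    return exceptions

po = {"number": "PO-1001", "total_value": 50_000}
prior = [{"number": "INV-1", "amount": 20_000}]
receipts = [{"value": 20_000, "billed": True}, {"value": 5_000, "billed": False}]
print(invoice_exceptions({"number": "INV-2", "amount": 20_000}, po, prior, receipts))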

T&E Fraud

You need a T&E system that can enforce spending limits, match establishments against blacklists, find duplicate charges for similar expenses on the same day, pull in expected airline fares in the proper bracket to identify policy violations, and provide other capabilities that can detect policy violations or overspend.

Distribution Theft

Now, if your organization is large enough, it’s pretty much a guarantee that there is going to be theft somewhere along the chain. And if it’s external theft, that’s not something your system is going to be able to predict. But internal theft, that’s something it should be able to detect.

The fact of the matter is that if there is repeated internal theft, it will follow a pattern: similar types of inventory, coming from similar suppliers, on a small set of routes used by a smaller set of carriers, usually with a small set of common drivers involved. With enough data and data mining, a good platform can identify patterns indicative of inside jobs that can be investigated, identified, and stopped.

 

While platforms aren’t the entire answer, as they can’t detect, for example, true inside jobs by an employee cutting a camera feed or power feed (in a blind spot) on the way out, they are a very large part of the answer.