Category Archives: Best Practices

The MOST Important Clause in Your (Procure) Tech (SaaS) Contract (Part I)

You might think there is no one most important clause, as there are a lot of important clauses, especially if you ask around:

  • In Procurement, you will want implementation in the promised timeframe
  • Finance will want holdbacks and penalties if functionality is not delivered or timeframes are not met
  • Risk Management will want clauses around cyber-security and privacy
  • Legal will be very concerned about governing venue, liability, and standard termination clauses,
  • etc.

And those are all important, but the reality is:

  • regardless of what’s in the contract, the solution will be implemented when it gets implemented, and delays will be blamed on your IT team, partners, etc., especially if it is their fault
  • you have to prove it was the vendor’s fault to get any penalties enforced, and that will be very hard indeed
  • good clauses alone are not enough; if a cyber-breach or data breach happens, your customers will still be coming after you
  • the legal venue usually isn’t that important; the only time liability typically comes into play is if customer data is fraudulently accessed as a result of the provider’s failure in security, or there is a massive prolonged system failure; and, no matter how bad the performance, the contract won’t be terminated unless there is outright fraud, because the organization still needs a system
  • etc.

which means that, while important, unless there was outright fraudulent representation (or serious negligent misrepresentation) in the signing of the contract, none of these clauses really matter, as they aren’t protecting you nearly as much as you think they are: any damages you would be awarded in court would be limited to fees paid, which could be dwarfed by the legal fees and the mounting losses you’d rack up while you waited months or years for the situation to be resolved!

Moreover, when you consider that the average company is not a Fortune 500, and no longer has (multi-)million dollar budgets for SaaS, most of your purchases are going to be in the (low) six figure range. This means that the vendor knows that the cost of any legal action, plus the losses that would be incurred by the organization taking that action, will dwarf the fees paid, so the likelihood of any action coming the vendor’s way is minimal. (Plus, after all of the glowing recommendations you gave the vendor: to the C-Suite upon selection, to their customers at the all-expenses-paid customer event at the fancy resort destination that was offered to you as a big new name customer, and to new potential customers in reference calls when you were still enthralled by the shiny screen, they know you won’t want to come forward and admit how wrong you were.)

This means that a good portion of you will be screwed to some extent. Let’s consider the reality.

  • Once FinTech, and then ProcureTech, became hot, you had all of the top performing sales people from across enterprise tech move in — and not all of them are altruistic; in fact, some of them are as psychopathic as they come and will promise anything to get the deal signed, even if they know the vendor organization CAN NOT deliver
  • Many providers have been capitalized at multiples of 7, 10, 15, or more by VC and PE firms looking for the next unicorn; they are under pressure to reach ridiculous, and wholly unrealistic, sales targets, so they effectively overpromise to get sales and then underdeliver when the investors won’t let them hire enough support personnel because those sales targets weren’t hit
  • There are over 700 providers in a space that offers less than 10 core modules. That’s almost 10 times the number of providers that are needed. Most will not reach (or retain) profitability and, thus, most will not survive. Some will go under; others will be acquired in fire sales or discount sell-offs by investors who cut their losses before they lose it all. Even if your vendor gets acquired, chances are the acquirer will gut it, support levels will significantly decrease, and new development will come to a standstill.
  • If the vendor needs the sale to get the bank loan, keep their jobs, or make payroll, even the best providers will assume they can figure it out later with money in the bank, but this won’t always happen, especially if they are behind on promises to other customers.

In other words, even if the salesperson and the provider had no ill intent, you are still likely to get screwed.

This means that the most important clause in the contract is …

Finally A Good Webinar on Gen-AI in ProcureTech …

… from SAP?!?

Yes, the doctor is surprised! In ProcureTech, SAP is not known for being on the leading edge. Its latest Ariba refresh is 3 to 6 years late. (Had it been released in 2019, before the intake and orchestration players started hitting the scene and siphoning off SAP customers with their ease of use and ability to integrate into the back-end for data storage, it would have been revolutionary. Had it been released in 2022, before these players really started to grow beyond the early adopters, it would have been leading. Now, no matter how good it is, SAP Ariba is going to be playing catch up in the market for the next two years! This is because it’s been fighting not only to keep its current customers, but to grow, when it now has suites, I2O [Intake-to-Orchestrate] providers, and mini-suites in the upper mid-market all chomping away at its customer base!)

Most players in ProcureTech jumping on the Gen-AI Hype Train are just repeating the lies, damn lies, and Gen-AI bullcr@p that the big providers (OpenAI, Google, DeepSeek, etc.) are trying to shove down our collective throats, especially since these ProcureTech players don’t have real AI experts in house to know what’s real and what’s not. Given that SAP Procurement is not a big AI player, one would expect that, despite their best efforts, they might be inclined to take provider and partner messaging and run with it. But they didn’t.

In fact, they went one step further and engaged Pierre Mitchell of Spend Matters (A Hackett Group Company), one of the few analysts in our space more-or-less getting it right (and trying to piece together a plan for companies to successfully identify, analyze, and implement AI in their ProcureTech operations), in their webinar (now on demand). (Now, the doctor doesn’t entirely agree with all of his architecture or all of his viewpoints, but the effort and accuracy of Pierre’s work is leagues beyond anything else he’s seen in our space and, if you’re careful and follow his models and advice properly, the approach is low risk. Moreover, you’re starting from sanity if you follow his guidance! More than can be said for the majority of AI approaches out there.)

When it was said that architecting the solution and the area around [the] business data cloud and managing data and data models is really important (because AI has shown that, hey, we have all this amazingly powerful data out there, but we have to tap it, make it more structured, and make it useful), and that, given the data quality of what’s coming out of those models right now, their use needs to be limited to co-pilots and chatbots (because we’re not ready to turn the keys over to the LLMs, and they have to be wrapped in deterministic tooling), they were not only making clear the limitations of LLM technology, but making clear that they understand those limitations and that they have to do more than just plug in an LLM to deliver dependable, reliable value to their customers.

When even the leading LLM, ChatGPT, generates responses with incorrect information 52% of the time, that tells you just how unreliable LLM technology is! Moreover, it’s not going to get any better considering that OpenAI (and its peers) literally downloaded the entire internet (including illegally using all of the copyrighted data that had been digitized to date [until the Big Beautiful Bill that restricted Federal AI Regulation for 10 years was passed, retroactively making their IP theft legal]) to train their models, and the vast majority of data produced since then (which now accounts for half of the internet) is AI slop. (This means that you can only expect performance to get worse, and not better!) This means that you can’t rely on LLMs for anything critical or meaningful.

However, if you go back to the basics and focus on what LLMs are good for, namely:

  • large document search and summarization and
  • natural language processing and translation to machine friendly formats

then you realize these models can be trained with high accuracy to parse natural language requests and return machine-friendly program calls that execute reliable deterministic code, and then to parse the programmatic strings returned and convert them to natural language responses. If you then use LLMs only as an access layer, and take the time to build up the cross-platform data integration, models, and insights a user will need in federated cubes and knowledge libraries, you can provide real value to a customer using traditional, dependable analytics, optimization, and Machine Learning (ML) in an interface that doesn’t require a PhD to use!
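To make the access-layer pattern concrete, here is a minimal, purely illustrative Python sketch (the intent schema, function names, and figures are hypothetical, not SAP’s API): the LLM’s only jobs are to translate a natural language request into a structured call to deterministic, pre-built analytics, and to turn the structured result back into plain English.

```python
# Hypothetical sketch: LLM as a thin access layer over deterministic analytics.
# The intent schema, report names, and numbers below are illustrative only.

from dataclasses import dataclass

@dataclass
class Intent:
    report: str      # which pre-built, deterministic report to run
    category: str    # normalized spend category
    period: str      # reporting period

def parse_request(natural_language: str) -> Intent:
    """Stand-in for the first LLM call: translate a user request into a
    structured, validated intent (in production, constrained to a schema)."""
    # e.g. "Break down our laptop spend for the last 12 months"
    return Intent(report="category_spend_summary", category="laptops", period="last 12 months")

def run_report(intent: Intent) -> dict:
    """Deterministic, tested analytics code; no LLM anywhere in here."""
    # Placeholder numbers; a real system would query the federated spend cube.
    return {"category": intent.category, "period": intent.period,
            "total_spend": 6_588_400, "supplier_count": 3, "top_supplier_share": 0.62}

def summarize(result: dict) -> str:
    """Second (optional) LLM call: turn the structured result into prose.
    Even a plain template works, which is why the LLM is low risk here."""
    return (f"{result['category'].title()} spend for the {result['period']} was "
            f"${result['total_spend']:,} across {result['supplier_count']} suppliers; "
            f"the top supplier holds {result['top_supplier_share']:.0%} of the category.")

print(summarize(run_report(parse_request("Break down our laptop spend"))))
```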

This is what they did, as they explained in their example of what should be done when your CFO asks for a breakdown of your laptop and keyboard spend to potentially identify opportunities to consolidate vendors. Traditionally, this request might take your business analyst days to compile across multiple systems, stakeholders, and spreadsheets, but if you have SAP Spend Control Tower with AI, they unify the data across multiple sources in the platform for you. Whether your purchases are coming through existing contracts, P-cards, expense reports, or any other channel, they federate the data by apply[ing] intelligent classifications to automatically categorize your purchases with standard UNSPSC codes, ensuring that items like your Dell XPS 15 and your MacBook Pro 16 are both properly classified as laptops despite the different naming conventions. Moreover, since they have also integrated with Dun & Bradstreet, you can easily consolidate your suppliers: rather than it looking like you’re purchasing items from three different subsidiaries, your purchases will align to the same parent company. This says they are using traditional categorizations, rules, and machine learning on the back end to build one integrated cube with summary reports, and all the LLM has to do is create an English summary, to which you can attach the supporting system-generated reports.
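A minimal sketch of what that back-end federation could look like (the keyword rules, UNSPSC code assignments, and parent-company table below are toy placeholders; a real platform would use trained classifiers and a D&B entity-resolution service rather than hard-coded dictionaries):

```python
# Illustrative only: normalize item descriptions to a UNSPSC-style code and
# roll subsidiary suppliers up to a parent before building the spend cube.

UNSPSC_RULES = {          # toy keyword -> category table; real systems use trained classifiers
    "macbook": "laptops (43211503)",
    "xps": "laptops (43211503)",
    "laptop": "laptops (43211503)",
    "keyboard": "keyboards (43211706)",
}

PARENT_COMPANY = {        # toy D&B-style corporate hierarchy rollup
    "Dell EMEA GmbH": "Dell Technologies",
    "Dell Financial Services": "Dell Technologies",
    "Dell Inc.": "Dell Technologies",
}

def classify(description: str) -> str:
    d = description.lower()
    return next((code for kw, code in UNSPSC_RULES.items() if kw in d), "unclassified")

def normalize_supplier(name: str) -> str:
    return PARENT_COMPANY.get(name, name)

transactions = [
    {"item": "Dell XPS 15", "supplier": "Dell EMEA GmbH", "amount": 1_800},
    {"item": "MacBook Pro 16", "supplier": "Apple Inc.", "amount": 2_500},
    {"item": "Dell XPS 13", "supplier": "Dell Financial Services", "amount": 1_500},
]

cube: dict[tuple[str, str], int] = {}
for t in transactions:
    key = (classify(t["item"]), normalize_supplier(t["supplier"]))
    cube[key] = cube.get(key, 0) + t["amount"]

for (category, supplier), spend in cube.items():
    print(f"{category} | {supplier}: ${spend:,}")
```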

Moreover, this also says that if you need to source 500 laptops and 500 [external] keyboards with the goal of cut[ting] current costs from what you’ve been paying by 15%, it can automatically identify the target prices; identify the suppliers/distributors who have been giving you the best prices; automatically run predictive analytics to estimate the quotes you would get from awarding all of the business to one supplier (who would then be inclined to give better price breaks); and, if none of those looked like they’d generate the reduction, access its anonymized community data, identify other suppliers/distributors supplying the same laptops you typically buy, compute their average price reduction over the past three months, and identify those that should be invited to an RFX or Auction to increase competition and the chances of you achieving the target price reduction, while informing you of the price reduction it predicts (which might only be 10%, or 5%, if you are already getting better than average market pricing). And it will do all of this with a few clicks. You’ll simply tell the system what your demand is and what your goal is, all of these computations will be run, supplier and event (type) recommendations will be generated, and it will be one click to kick off the sourcing event.
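As a rough illustration of the kind of logic that implies (every number, the consolidation-discount rule, and the community benchmark below are invented placeholders, not anything SAP disclosed):

```python
# Hypothetical sketch of the target-price and supplier short-listing step.

TARGET_REDUCTION = 0.15
current_avg_price = 1_300                      # what you've been paying per laptop
target_price = current_avg_price * (1 - TARGET_REDUCTION)

# Historical pricing by incumbent supplier (your own spend data).
supplier_history = {"Supplier A": 1_290, "Supplier B": 1_310, "Supplier C": 1_340}

# Predict each incumbent's consolidated-volume quote as a simple discount off
# their historical price; a real engine would use a trained predictive model.
CONSOLIDATION_DISCOUNT = 0.05
predicted_quotes = {s: p * (1 - CONSOLIDATION_DISCOUNT) for s, p in supplier_history.items()}
best_supplier, best_quote = min(predicted_quotes.items(), key=lambda kv: kv[1])

if best_quote <= target_price:
    print(f"Award to {best_supplier}: predicted ${best_quote:,.0f} meets target ${target_price:,.0f}")
else:
    # Fall back to widening competition: anonymized community data suggests the
    # average achievable reduction for this item over the past three months.
    community_avg_reduction = 0.08             # placeholder benchmark
    predicted_market_price = current_avg_price * (1 - community_avg_reduction)
    print(f"Best incumbent quote ${best_quote:,.0f} misses target ${target_price:,.0f}; "
          f"recommend an RFX/auction with community-sourced suppliers, with a predicted "
          f"achievable price of ~${predicted_market_price:,.0f} "
          f"({community_avg_reduction:.0%} reduction, not the full {TARGET_REDUCTION:.0%})")
```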

Moreover, the webinar said that if you think about this area around workflow and process orchestration, there’s no reason why you can’t take pieces of that, like on the endpoints, around intake or invoices or whatever, and use AI there and bake it in a controlled way into your processes. And that’s the key: taking one tactically oriented process that consumes too much manual intervention at a time and using advanced tech (which need not be AI, by the way; modern Adaptive RPA [ARPA] is often more than enough) to improve it. Then, over time, stringing these together to automate more complex processes, where you can gate them to ensure exceptional situations aren’t automated without oversight. One little win at a time. And after a year it cumulatively adds up to one big win. (Versus going for a big-bang project, which always ends in a big bang that blows a hole in your operation that you might not be able to recover from.)

The only bad part of this webinar was slide 24, Spend Matters recommendation #1: “Aggressively Implement GenAI”!

Given that Gen-AI is typically interpreted as “LLM”, as per above, this is the last AI tech you should aggressively implement given its unreliability for anything but natural language translation and search and summarization. Moreover, any tech that is highly dynamic and emerging should be implemented with care.

What the recommendation should be is to aggressively implement AI, because now that we have the computational power and data that we didn’t have two decades (or so) ago, which was the last time AI was really hot, tried-and-true (dependable) machine learning and AI is now practical and powerful!

Now, in his LinkedIn post, Pierre asked what we’d like to see next in terms of research/coverage (regardless of venue). So I’m going to answer that:

Gen-AI / LLM-Free AI Transformation!

Because you don’t need LLMs to achieve all of the value we need out of AI in ProcureTech and, to be honest, any back office tech. As I have been saying recently, everything I penned in the classic Spend Matters series on AI in Procurement (Sourcing, Supplier Management) today, tomorrow, and the day after in the last decade … including the day after, was possible when I penned the series. It just wasn’t a reality because there were few AI experts in our space, data was lacking, and the blood, sweat, and tears required to make it happen were significant. We didn’t have readily available stacks, frameworks, and models for the machine learning, predictive analytics, and semantic processing required to make it happen. Vendors would have had to build the majority of this themselves, which would have been as much (or more) work than building their core offering. But it was possible. And with all the modern tech at our disposal, now it’s not only possible, but doable. There is zero need to embed an untested, unreliable LLM in an end-user product to provide all of the advantages AI can offer. (Or, if you don’t have the time to master traditional semantic tech for NLP, zero need to use an LLM for anything more than NLP.)

So, I’d like to see this architecture and explanation of how providers can roll out safe AI and how buying organizations can use it without fear of being another failure or economic disaster when it screws up, goes rogue, and orders 100,000 units of the wrong product!

EOQ Part I: The Quantity You Can’t Depend On The Computer to Calculate!

I was reminded of this while reading Mr. Koray Köse’s great piece on how our supply chains are literally drowning in wannabes who mistake theory for expertise, where he accurately and astutely noted that most of today’s so-called “experts” could not pass his Economic Order Quantity (EOQ) exam question. And I totally agree. Because:

1) Math skills are lacking (competency in many Western nations decreases every year, and the US is literally becoming math stupid, as reflected in the latest OECD ranking, which puts it 25th out of the 31 “developed” countries that were globally measured, with countries like Croatia coming in ahead of it).

2) No real understanding of supply chain or total supply chain cost!

3) Even less understanding that your EOQ (Economic Order Quantity) is not your supplier’s EPQ (Economic Production Quantity), and, for high cost/complex products, this can sometimes (but not always) be much more important (and impactful) than the classic EOQ formula would dictate.

Mr. Köse illustrated this deftly when he shared one of the questions he uses to gauge whether or not his MBA students truly understand EOQ. The core variant of the problem he shared with us was this:

  1. The purchasing manager for Spacely Sprockets orders mechanical gears from an industrial supplies distributor, Cogswell Cogs.
  2. Spacely Sprockets uses 5,000 gears per year.
  3. Annual inventory carrying costs are 20% and order costs are $3,400 per order.
  4. The following order discount price schedule is provided by Cogswell.
    • 200-999:     $1,300 / unit
    • 1,000-2,999: $1,250 / unit
    • 3,000-4,999: $1,200 / unit
    • 5,000+:      $1,175 / unit
  5. Determine the optimal order quantity, total cost, and actual per unit cost (once order costs and inventory carrying costs are taken into account).

Now, if you were a prepared student, you might have memorized the classic EOQ formula:

  • EOQ = √ ( (2 x ACPO x AUU) / (UC x CCP) )

where

  • ACPO = Acquisition Cost Per Order = 3,400
  • AUU = Annual Usage in Units = 5,000
  • UC = Unit Cost
  • CCP = Carrying Cost Percentage = 0.20

and this leaves you with

  • EOQ = √ ( 34,000,000 / (0.2 * UC) )

and you can work this out at each price break:

  • 1,300: √ ( 34,000,000 / 260 ) = √ (130,769) = 362
  • 1,250: √ ( 34,000,000 / 250 ) = √ (136,000) = 369
  • 1,200: √ ( 34,000,000 / 240 ) = √ (141,666) = 376
  • 1,175: √ ( 34,000,000 / 235 ) = √ (144,680) = 380

which indicates the first price bracket is the correct one for you (it’s the only bracket whose computed EOQ actually falls within its own quantity range), and you should be making 13.8, rounded up to 14, orders per year (one every 26 days), netting a total volume of 5,068 units over the year, and, on average, carrying each unit of inventory for 13 days.

  • purchase cost: 5,068 * 1,300 = 6,588,400
  • inventory carrying cost: 13/365 * 0.2 * 6,588,400 = 46,931
  • order cost: 3,400 * 14 = 47,600
  • total cost: 6,682,931
  • actual per-unit cost: 1,319
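If you want to sanity check the arithmetic above, here is a minimal Python sketch of the naive calculation (the bracket bounds and costs come straight from the problem; everything else is the standard EOQ formula):

```python
from math import sqrt

ACPO = 3_400    # acquisition cost per order ($)
AUU = 5_000     # annual usage in units
CCP = 0.20      # annual carrying cost percentage

# Price brackets: (min qty, max qty, unit cost)
brackets = [(200, 999, 1_300), (1_000, 2_999, 1_250),
            (3_000, 4_999, 1_200), (5_000, None, 1_175)]

for lo, hi, uc in brackets:
    eoq = sqrt((2 * ACPO * AUU) / (uc * CCP))
    in_range = lo <= eoq <= (hi if hi is not None else float("inf"))
    print(f"${uc:,}/unit: EOQ = {eoq:,.0f} ({'feasible' if in_range else 'outside its own bracket'})")

# Only the $1,300 bracket's EOQ (~362) lands inside its own quantity range,
# which is why the naive answer is 14 orders of ~362 units.
```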

But this is NOT an EPQ for the supplier, which means that you might be paying more than you need to. To figure that out, you have to analyze the costs at each breakpoint that is reasonable for you.

These are:

  • 362, your computed EOQ, with 14 orders per year
  • 1,014, at the first discount tier, with 5 orders per year (one every 73 days) and 36.5 days of inventory on average
  • 5,068, at the third discount tier, with 1 order per year and 182.5 days of inventory on average
  • … and there is no candidate at the second discount tier (3,000-4,999 @ $1,200) because, with roughly 5,000 units of annual demand, you can’t hit that tier more than once (and a single order of 5,000+ units lands in the better-priced final tier anyway)

First run the calculation at 5,068, because your greedy executives only understand unit discounts:

  • purchase cost: 5,068 * 1,175 = 5,954,900
  • inventory carrying cost: 182.5/365 * 0.2 * 5,954,900 = 595,490
  • order cost: 3,400 * 1 = 3,400
  • total cost: 6,553,790
  • actual per-unit cost: 1,293

You quickly see that you clearly want the discounts even if your inventory costs shoot up because 633.5K in savings is greater than 595.5K in expected inventory carrying costs.

But you’re not done yet. Now you have to run the calculation at 1,014 units an order over 5 orders, because it’s also a valid option and captures the supplier’s first EPQ point:

  • purchase cost: 5,068 * 1,250 = 6,335,000
  • inventory carrying cost: 36.5/365 * 0.2 * 6,335,000 = 126,700
  • order cost: 3,400 * 5 = 17,000
  • total cost: 6,478,700
  • actual per-unit cost: 1,278

which is your actual EOQ because it not only takes advantage of the supplier’s EPQ level but does so at the breakpoint that is closest to that given by your traditional EOQ calculation!
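Here’s a minimal sketch of the full comparison (same assumptions as above: annual volume fixed at the 5,068 units you actually buy, average inventory of half an order carried at 20% of unit value; the results match the breakdowns above to within rounding):

```python
# Compare total annual cost at each viable (order quantity, unit price) candidate.
AUU = 5_068      # annual volume actually purchased (14 orders x 362 units)
ACPO = 3_400     # acquisition (order) cost per order
CCP = 0.20       # annual inventory carrying cost percentage

def annual_cost(order_qty: int, unit_cost: int) -> tuple[float, float, float]:
    orders = AUU / order_qty
    purchase = AUU * unit_cost
    carrying = (order_qty / 2) * unit_cost * CCP   # average inventory = half an order
    ordering = ACPO * orders
    total = purchase + carrying + ordering
    return orders, total, total / AUU

candidates = [(362, 1_300), (1_014, 1_250), (5_068, 1_175)]
for qty, price in candidates:
    orders, total, per_unit = annual_cost(qty, price)
    print(f"{qty:>5} units @ ${price:,}: ~{orders:.1f} orders/yr, "
          f"total ${total:,.0f}, actual ${per_unit:,.0f}/unit")

# The 1,014-unit / $1,250 option wins (~$6.48M total, ~$1,278/unit), beating
# both the naive EOQ bracket and the deepest unit-price discount.
```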

Now we’ve clearly demonstrated why most of today’s so-called experts couldn’t calculate EOQ even with a computer: it’s not always the classic EOQ formula (or whatever pseudo-random formula happens to be in the forecasting system they try to use), or the supplier’s optimal EPQ level (if that leads to a significantly higher storage cost for you; JIT is a core tenet of lean for a reason: inventory is costly, and while you need a safety stock, too much not only presents too much obsolescence risk but shoots your carrying costs way up), but usually somewhere in between (where the optimal curves intersect closest to their respective minima). Good luck doing that if you can’t do math, don’t know supply chain, and think ChatGPT holds the answer to everything.

What we didn’t demonstrate is why, in reality, you often need a computer to calculate it, and that comes down to the inventory carrying costs (which are often much more involved than Finance believes) and your associated supply chain costs. The reality is that you might have to re-write your formulas, which really will require a computer to constantly calculate and recalculate your true inventory carrying costs, but you will only be able to do this AFTER you understand what the proper order volumes should be (because you need to check that you worked out the formulas and calculations right for your supply chain)! We might tackle this in another article, because the only way to get costs way down is to help Finance and Operations understand the true costs and how to tackle them (because if you’re still running on an average ICC of 20%, or even worse, 25% to 30%, someone, somewhere, is performing pretty poorly in their profession).

KPIs To Ask For By ProcureTech Module: Part III

In our last series on Why Your Tech Selection Should be KPI, and not Bell-and-Whistle, Focussed if you are not technical, we reviewed Tanya Wade’s 21 KPIs that are a great start if you’re looking to put some KPIs in place to properly program and percolate procurement. Not all of these were (the most) appropriate for all modules, but if you don’t know your tech, they were a great start.

In this mini-series, we’re partitioning the performance indicators by ProcureTech module as well as indicating a few more you should be asking for. We’ve covered the core Source-to-Contract modules, and today we are concluding with the Procure to Pay Modules of e-Procurement and Invoice to Pay (Accounts Payable).

e-Procurement

Tanya Wade’s Performance KPIs

  • Supplier Performance:Supplier Lead Time
  • Compliance & Risk:PO Compliance
  • Operational Efficiency:Procurement Cycle Time
  • Operational Efficiency:Automation Rate
  • Spend Analysis:Tail Spend

For details on these, see our prior series.

Key Module KPIs

  • Compliance & Risk:Maverick Spend Reduction – maverick spend is out of control in most organizations without good (e-)Procurement systems, so it is important to know the average improvement from implementing the provider’s system (no matter what metrics the vendor throws at you, if maverick spend isn’t substantially decreasing, the system is NOT being adopted)
  • Compliance & Risk:Preferred Supplier Spend (Improvement) – how much of the off-contract spend is with preferred suppliers, and by what percentage is preferred supplier spend expected to increase
  • Compliance & Risk:Avg Improvement/Time-to-Value in Discount/Rebate Acknowledgement – many traditional savings in office supplies / MRO are offered in the form of rebates if a volume is hit (because the provider knows the volume won’t be hit: all organizations without good e-Procurement/Contract Management have high levels of maverick spend, and the provider knows it can often substitute SKUs due to a “temporary stockout” without the buyer noticing, which helps ensure the volume is not hit)
  • Operational Efficiency:Automated Inventory Re-Order % – for regular inventory/MRO restocks or predictable volumes based on the manufacturing plan, the e-Procurement system should be able to submit the POs automatically
  • Operational Efficiency:Repeat Order Cycle Time Reduction – for standard orders such as employee onboarding kits, monthly storeroom re-orders where the amounts need to be human verified/input, etc., on average, how much faster can these be placed vs. pre-module implementation
  • Operational Efficiency:Quick-RFP / RFQ % Reduction – by what percentage does the e-Procurement system, with its integrated catalog and quote management functionality, reduce the percentage of quick RFP/RFQs that the organization needs to issue for non-strategic purchases
  • Operational Efficiency:% (Increase) Spend on PO – by what percentage is on-PO spend increased

e-Procurement is all about getting Spend Under Management, ensuring contracts and included pricing are adhered to, and using preferred suppliers (and products) as much as possible (to help with standardization). This requires making it easy for requisitioners/buyers to find what they need, buyers to issue POs, and on-contract/preferred supplier spend to be easily tracked. Metrics should be in place to make sure all of this happens.

Invoice-to-Pay / Accounts Payable

Tanya Wade’s Performance KPIs

  • Operational Efficiency:Procurement Cycle Time
  • Operational Efficiency:Automation Rate

For details on these, see our prior series.

Key Module KPIs

  • Operational Efficiency:Invoice Cycle Time Reduction – by how much, on average, do clients see invoice cycle time reductions
  • Operational Efficiency:Straight Through Processing Percentage – what percentage of invoices are able to be processed straight through (with m-way match) without human intervention
  • Operational Efficiency:Average Dispute Resolution Time (Improvement) – what is the average dispute resolution time in the platform, and what is the improvement versus the average resolution time before system implementation
  • Operational Efficiency:Early Payment Discount Opportunity Improvement – percentage-wise, how many more invoices eligible for early payment discounts can now be paid early (that couldn’t before due to processing delays), allowing organizations to improve their working capital management

Invoice to Pay is all about automating invoice processing, minimizing the amount of time that a human needs to manually review invoices for completeness and correctness, and (automating) payment according to pre-defined terms. Make sure the metrics you choose reflect this.

We don’t claim this is a complete list, or every KPI that you can, and possibly should, ask for. But if you are non-technical, and can’t judge a solution on its technical merits, and you can at least get these KPIs and force the vendor to prove them to you, then you will at least get a solution that is bound to provide you with some improvement and that, because of the real improvement potential, may actually be used.

The best solution is to hire an independent third party who is an expert in ProcureTech, who has no stake in any provider or implementer, and who is solely interested in doing Project Assurance for you; but if you can’t get that, at least get a solution which has a history of delivering measurable value to similar organizations.