Category Archives: rants

Stop Sanctifying Savings, Recognizing ROIs without Research, or Seeking Solutions Solely in Software

If you’ve been reading the doctor for any length of time, you’re probably a bit confused about the third part of this title — as the doctor is one of the biggest proponents of sustainable software solutions that cover the extended Source-to-Pay process and enable next generation Sourcing and Procurement. However, just because he believes you should have an appropriate software solution for every stage of the source-to-pay process, that does not mean he believes all of the solutions are in software alone. Some are in systems, which are composed of talent, technology, and transformation(al processes).

Sometimes the solution to a challenge isn’t (just) a better system, it’s a better process. Take invoice overpayments, common in large organizations due to over-billings, duplicate billings, and even fraudulent billings. The current “solution” is to use a recovery firm that will take 1/3 of what it “recovers” for you as its fee, but it won’t recover everything (since anything off contract is hopeless, as is any fraud that slipped through — the perpetrators are long gone, and even if the authorities find them, by the time you get a judgement in court, they’ve spent, laundered, or transferred the money to somewhere you can’t touch it). This is not much of a solution, because if only 50% of the overspend is addressable, and you lose 1/3 of that in fees, you’re only recovering 1/3 of your overspend. Ouch!

The solution here is better process enabled by technology. When an invoice comes in, the system auto-processes it and auto-matches it to a purchase order and a goods receipt. If there is no PO, and it’s not a pre-defined monthly billing, it’s marked as no-pay until the buyer manually verifies that a) the invoice is for goods that were ordered and b) all of the units / services are at the agreed upon prices or rates. And even then it’s held for payment until the goods are marked as received or the services delivered. If there is a PO, the invoice must match all of the as-yet unmatched units (if multiple shipments, and thus invoices, are made against the PO) and each unit must be billed at the approved (contracted) rate. If not, it’s flipped back to the supplier for correction. If the supplier won’t correct, possibly because the order was expedited at managerial insistence and the supplier agreed only if a premium could be charged, then it needs managerial approval before a payment can be issued, and if that is not given, it needs to enter a dispute process. In other words, no invoice is paid until matched, verified correct, and, when necessary, granted managerial approval — and the entire invoice management function is governed by a well thought out, defined, and detailed process (with flow-charts that govern process flows) that ensures every invoice is processed correctly in every situation (based upon whether or not the goods and/or services are under contract, PO, cyclic billing agreement, ordered from a catalog, requisitioned at a defined rate scale, bought in an e-auction, etc.). In other words, the process comes first, and the technology enables it.
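To make the process-first point concrete, here is a minimal sketch of that routing logic in Python. All the names (Invoice, Disposition, disposition(), etc.) are illustrative assumptions, not any vendor’s API; the point is that the rules exist, and are enforceable, before any software is selected.

```python
from dataclasses import dataclass
from enum import Enum
from typing import Optional

class Disposition(Enum):
    PAY = "release for payment"
    HOLD_UNVERIFIED = "no-pay until buyer verifies order and rates"
    HOLD_UNRECEIVED = "hold until goods received / services delivered"
    RETURN_TO_SUPPLIER = "flip back to supplier for correction"

@dataclass
class Line:
    sku: str
    qty: int
    unit_price: float

@dataclass
class Invoice:
    po_number: Optional[str]
    recurring: bool                 # pre-defined monthly/cyclic billing?
    lines: list
    received: bool = False          # goods received / services delivered
    buyer_verified: bool = False    # buyer confirmed order and rates
    manager_approved: bool = False  # e.g. an agreed expedite premium

def disposition(inv, po_unmatched, contract_rate):
    """Route an invoice per the process: no invoice is paid until matched,
    verified correct, and, when necessary, granted managerial approval."""
    if inv.recurring:                        # pre-defined cyclic billing
        return Disposition.PAY
    if inv.po_number is None:                # no PO: buyer must verify first
        if not inv.buyer_verified:
            return Disposition.HOLD_UNVERIFIED
        return Disposition.PAY if inv.received else Disposition.HOLD_UNRECEIVED
    for line in inv.lines:                   # PO: match units and contract rate
        matched = (po_unmatched.get(line.sku, 0) >= line.qty
                   and line.unit_price == contract_rate.get(line.sku))
        if not matched and not inv.manager_approved:
            return Disposition.RETURN_TO_SUPPLIER  # then dispute if refused
    return Disposition.PAY if inv.received else Disposition.HOLD_UNRECEIVED
```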

This means that while the software should enable as much of the process as possible, you shouldn’t look to the software, or even the vendor, to define the process for you. The vendor should have best practices, and should provide you with sufficient configuration options to make it work for the process you need, but you need to understand what you need before you select a solution. Some solutions on the market will be really rigid, and others will expect you to configure them to your needs. In other words, software can provide you with what you need to complete the solution, but software alone is not a complete solution — you need the right process and the right people using it. So don’t look to a software provider as the solution; look to a provider for software that delivers the software part of the solution.

And, more importantly, don’t accept the promised ROI without doing your own research. Most providers will promise you an ROI of 5X to 15X in an effort to convince you that NOT buying their solution would be the stupidest thing ever, as every day you’re not using their solution you’re flushing money down the toilet. And if the ROI of a solution is that high, you should definitely have a solution — but the solution that gives you that ROI might not be the one that promises it. Remember, ROI is realized return / total solution cost, and depending on how well you were doing before buying the solution, the current market conditions, your industry, and the ancillary costs of the solution (implementation, integration, training, etc.), the ROI for your organization could be drastically different from their average ROI for their average customer. For example, while the vendor’s average customer might see an ROI of 5, you might only see an ROI of 2.5, and at a multiple of less than 3, it’s likely not the solution for your organization. (Unless it’s the only solution and you need a software solution, but it’s rare that there’s only one software solution that would work.)

If you’re given an ROI, ask for the calculation the vendor uses and how you would calculate it for your own organization, and do it yourself. Add padding to the price, and when you have an expected range of savings and/or cost avoidance, err on the side of caution (the lower end). That’s the number to use when considering the value, not the vendor’s number. Every situation is different, and you need to understand how different your situation is from their average customer’s.
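For illustration, a minimal sketch of that do-it-yourself calculation; all figures are hypothetical, and the 20% padding is an assumed value, not a recommendation:

```python
def my_roi(savings_low_end, license_fee, ancillary_costs, padding=0.20):
    """Conservative ROI: the low end of YOUR savings estimate over a
    padded total cost (license + implementation/integration/training)."""
    total_cost = (license_fee + ancillary_costs) * (1 + padding)
    return savings_low_end / total_cost

# vendor advertises "5X to 15X"; your own cautious numbers may disagree:
# estimated savings of 1.5M - 3M/yr -> use 1.5M; 500K license fee;
# 300K in ancillary costs; 20% padding on the price
print(round(my_roi(1_500_000, 500_000, 300_000), 2))   # 1.56, not 5X+
```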

However, the most important thing to understand is that you need to stop sanctifying savings and believing that the savings numbers provided by a vendor are a result of their solution. Or that you will achieve anything similar. Remember, “savings”, which is usually just “unnecessary cost avoidance”, is a function of how much the organization is spending across its addressable categories, how much overspend there is across those categories, and how much of that was able to be captured — and this is dependent on organizational size (annual revenue), industry, and spend profile. If your organizational spend is considerably smaller, your addressable spend is less than industry average (long term locked-in contracts, etc.), or your overspend in high volume / dollar categories is less than industry average (either because you had good negotiators, or you cut the contracts at the most opportune time), then your expected “savings” will be considerably less than the average customer savings they are presenting to you. In other words, like ROI, the advertised number may not be what you get. Specifically, your savings might not be anywhere close to their advertised number.

But that’s not the most important reason you need to stop sanctifying savings — the most important reason you need to stop sanctifying savings is that there is absolutely no correlation between the savings numbers and their software. Let’s repeat that. There is absolutely no correlation between the savings numbers and their software. Why? The same reason you should not seek solutions solely in software.

The reality is that, depending on the situation at hand, sometimes most of the “savings” or “cost avoidance” results from a better process alone and has nothing to do with the software solution whatsoever. Also, sometimes the solution that is needed is simply a workflow that enforces a process, an RFX solution that collects comparable information, an e-procurement solution that supports contracted rate catalogs and rate cards, etc. These standard solutions are offered by dozens of vendors, and if the majority of the “savings” or “cost avoidance” comes from a baseline solution, it literally doesn’t matter which vendor’s solution you use! Literally. So if the vendor with the significant savings number is asking 1M annually in license fees and a smaller vendor offers a solution with all the necessary baseline functionality for 120K annually, you could get the same savings for roughly 1/8th of the cost (which would significantly impact the ROI).

In other words, when you are given a savings number, you have to do your research and figure out

  • what percentage of those savings results solely from the fact that the implemented solution enforces a proper process
  • what percentage of those savings results solely from the baseline functionality that is available in at least 3 to 5 other solutions (at a lower annual license cost)
  • … and what percentage of those savings results from advanced features found only in that solution

For example, if only 5% of the savings results from advanced functionality and your estimated annual savings from addressable spend for the first 3 years is only 10M per year, are you really willing to spend 8X as much in license fees for an incremental savings of 500K? The answer here should be a resounding NO, as that incremental savings is less than the incremental solution cost of 880K! But if you’re a multibillion dollar corporation that could save 50M a year for three years, with a 10% incremental savings from the advanced functionality, then you would be saving an extra 5M per year at an incremental cost of 880K (roughly a 6X ROI on the increment) AND have advanced functionality that could be applied to all categories and might squeeze out an extra percent here and there.
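The arithmetic in that example, worked through using the 1M and 120K license fees from above (everything else is as stated in the text):

```python
def incremental_case(annual_savings, advanced_share,
                     big_fee=1_000_000, small_fee=120_000):
    """Compare the savings that ONLY the expensive tool's advanced
    features deliver against its incremental license cost (880K here)."""
    extra_savings = annual_savings * advanced_share
    extra_cost = big_fee - small_fee
    return extra_savings, extra_cost, extra_savings / extra_cost

print(incremental_case(10_000_000, 0.05))  # (500000, 880000, ~0.57X) -> NO
print(incremental_case(50_000_000, 0.10))  # (5000000, 880000, ~5.7X) -> YES
```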

In other words, what solution you should buy depends on which solution you expect will give YOU the greatest ROI based upon YOUR calculation, not the vendor’s customer averages (or outrageous quotes from multinationals who spend 10X what you do). Furthermore, don’t misread the title — you do need software to enable your Sourcing, Procurement, and Supply Chain, but the software is not the total solution, which requires the right process driven by the right people. So don’t expect the vendor to solve all your problems, just the software portion (which you should only buy after identifying what you need, and the vendor you should choose is the one with the greatest expected ROI for your organization, as calculated by you).

Reporting is Not Analysis — And Neither Are Spreadsheets, Databases, OLAP Solutions, or “Business Intelligence” Solutions

… and one of the best explanations the doctor has ever read on this topic (which he has been writing about for over two decades) was just published over on the Spendata blog on Closing the Analysis Gap. Written by the original old grey beard himself (who arguably built the first standalone spend analysis application back in 2000 and then redefined what spend analysis was not once, but twice, in two subsequent start-ups that built two entirely new analytics applications, each taking a completely different, more in-depth approach), it’s one of the first articles to explain why every current general purpose solution you’re using to try to do analysis doesn’t actually do true analysis, and why you need a purpose-built analysis solution if you really want to find results and, in our world, do some Spend Rappin’.

We’re not going to repeat the linked article in its entirety, so we’ll pause for you to go read it …

 

… we said, go to the linked article and read it … we’ll wait …

 

READ IT! Then come back. Here’s the linked article again …

 

Thank you for reading it. Now we’ll continue.

As summarized by the article, we have the following issues:

Tool | Issue | Resolution | Loss of Function
Spreadsheet | Data limit; lack of controls/auditability | Database | No dependency maintenance; no hope of building responsive models
Database | Performance on transactional data (even with expert optimization) | OLAP Database | Data changes are offline only & tedious; what-if analysis is non-viable
OLAP Database | Interfaces, like SQL, are inadequate | BI Application | Schema freezes to support existing dashboards; database is read only
BI Application | Read-only data and limited interface functionality | Spreadsheets | Loss of friendly user interfaces and data controls/auditability

In other words, the cycle of development from stone-age spreadsheets to modern BI tools, which was supposed to take us from simple calculation capability to true mathematical analysis in the space age using the full breadth of mathematical techniques at our disposal (both built-in and through linkages to external libraries), has instead taken us back to the beginning to begin the cycle anew, while trying to devour itself like an Ouroboros.


[Image: an Ouroboros devouring its own tail. Source: Wikipedia]

 

Why did this happen? The usual reasons. Partly because some of the developers couldn’t see a resolution to the issues when they were first developing these solutions, or at least a resolution that could be implemented in a reasonable timeframe; partly (and sometimes mostly) because vendors were trying to rush a solution to market (to take your money); and partly (and sometimes largely) because the marketers keep hammering the message that what they have is the only solution you need until all the analysts, authors, and columnists repeat the same message to the point they believe it. (Even though the users keep pounding their heads against the keyboard when given a complex analysis assignment they just can’t do … without handing it off to the development team to write custom code, or cutting corners, or making assumptions, or whatever.) [This could be an entire rant on its own about how the rush to MVP and marketing mania sometimes causes more ruin than salvation, but considering volumes still have to be written on the dangers of dunce AI, we’ll have to let this one go.]

The good news is that we now have a solution you can use to do real analysis, and this is much more important than you think. The reality is that if you can’t get to the root cause of why a number is what it is, it’s not analysis. It’s just a report. And I don’t care if you can drill down to the raw transactions that the analysis was derived from; that’s not the root cause, that’s just supporting data.

For example, “profit went down because warranty costs increased 5%” is not helpful. Why did warranty costs go up? Just being able to trace down to the transactions, where you see that 60% of that increase is associated with products produced by Substitional Supplier, is not enough (and in most modern analysis/BI tools, that’s all you can do). Why? Because that’s not analysis.

Warranty costs increasing 5% is the inevitable result of something that happened. But what happened? If all you have is payables data, you need to dive into the warranty claim records to see what happened. That means you need to pull in the claim records, and then pull out the products and original customer order numbers and look for any commonalities or trends in that data. Maybe after pulling all this data in you see that, of the 20 products you are offering (where each would account for 5% of the claims if all things were equal), there are 2 products that account for 50% of the claims. Now you have a root cause of the warranty spend increase, but not yet a root cause of what happened, or how to do anything about it.

To figure that out, you need to pull in the customer order records and the original purchase order records and link the product sent to the customer with a particular purchase order. When you do this, and find out that 80% of those claims relate to products purchased on the last six monthly purchase orders, you know the products that are the problem. You also know that something happened six months or so ago that caused those products to be more defective.
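A hedged sketch of what that drill-down might look like if the claim, order, and PO records can be pulled into flat tables; the file names, column names, and thresholds are illustrative assumptions, not any particular system’s schema:

```python
import pandas as pd

# warranty claims, customer orders, and purchase orders pulled into
# flat tables; file and column names are illustrative
claims = pd.read_csv("warranty_claims.csv")   # claim_id, product, order_no
orders = pd.read_csv("customer_orders.csv")   # order_no, product, po_number
pos = pd.read_csv("purchase_orders.csv")      # po_number, po_month

# step 1: which products account for the claim increase? with 20 products,
# an even share would be ~5% each; flag anything well above that
share = claims["product"].value_counts(normalize=True)
suspects = share[share > 0.10]                # e.g. 2 products at ~25% each

# step 2: trace the suspect products' claims back to the original POs
linked = (claims[claims["product"].isin(suspects.index)]
          .merge(orders, on=["order_no", "product"])
          .merge(pos, on="po_number"))
by_month = linked["po_month"].value_counts(normalize=True).sort_index()
print(by_month.tail(6).sum())                 # e.g. ~0.80 -> last six POs
```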

Let’s say both of these products are web-enabled remote switch control boxes that your manufacturing clients use to remotely turn various parts of their power and control systems on and off (for lighting, security monitoring, etc.), and you also have access, in the PLM system, to the design, bill of materials (BOM), and tier 2 suppliers, and know a change takes 30 to 60 days to take effect. So you query the tier 1 BOM from 6, 7, 8, and 9 months ago and discover that 8 months ago the tier 2 supplier for the logic board changed (and nothing else did) for both of these units. Now you are close to the root cause and know it is associated with the switch in component and/or supplier.

At this point you’re not sure if the logic board is defective, the tier 1 supplier is not integrating it properly, or the specs aren’t up to snuff, but as you have figured out this was the only change, you know you are close to the root cause. Now you can dive in deep to figure out the exact issue, and work with the engineering team to see if it can be addressed.

You continue with your analysis of all available data across the systems and, after diving in, you see that, despite the contract requiring that any changes be signed off by the local engineering team only after they do their own independent analysis to verify the product meets the specs and all quality requirements, engineering signed off on the specs but did not sign off on the quality tests, which were never submitted. You can then place a hold on all future orders for the product, get on the phone with the tier 1 supplier and insist they expedite 10 units of the logic board by air freight for quality testing, and get on the phone with engineering to make sure they independently test the logic boards as soon as they arrive.

Then, when the product, which is designed for 12V power inputs, arrives and the engineers do their stress tests and discover that the logic board, which was spec’ed to handle voltage spikes to 15V (because some clients power backup systems off of battery backups that run off of chained automotive batteries), actually burns out at 14V, you have your root cause. You can then force the tier 1 supplier to go back to the original board from the original supplier, or find a new board from the current supplier that meets the spec … and solve the problem. [And while it’s true you can’t assume that all of the failure increases were the logic board without examining each and every unit of each and every claim, in this situation, statistically, most of the increase in failures will be due to this, as it was the only change.]

In other words, true analysis means being able to drill into raw data, bring in any and all associated data, do analysis and summaries of that data, drill in, bring in related data, and repeat until you find something you can tie to a real world event that led to something that had a material impact on the metrics that are relevant to your business. Anything less is NOT analysis.

“Generative AI” or “CHATGPT Automation” is Not the Solution to your Source to Pay or Supply Chain Situation! Don’t Be Fooled. Be Insulted!

If you’ve been following along, you probably know that what pushed the doctor over the edge and forced him back to the keyboard sooner than he expected was all of the Artificial Indirection, Artificial Idiocy & Automated Incompetence that has been multiplying faster than Fibonacci’s rabbits in vendor press releases, marketing advertisements, capability claims, and even core product features on the vendor websites.

Generative AI and CHATGPT top the list of Artificial Indirection because these are algorithms that may, or may not, be useful with respect to anything the buyer will be using the solution for. Why?

Generative AI is simply a fancy term for using (deep) neural networks to identify patterns and structures within data to generate new, and supposedly original, content by pseudo-randomly producing content that is mathematically, or statistically, a close “match” to the input content. To be more precise, there are two (deep) neural networks at play — one that is configured to output content that is believed to be similar to the input content, and a second that is configured to simply determine the degree of similarity to the input content. And, depending on the application, there may be a post-processor algorithm that takes the output and tweaks it as minimally as possible to make sure it conforms to certain rules, as well as a pre-processor that formats or fingerprints the input for feeding into the generator network.
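That generator/discriminator pairing is, in essence, a generative adversarial network. A minimal, self-contained PyTorch sketch of the two-network setup on toy one-dimensional “content” (the layer sizes and data are illustrative, nothing like what a production model would use):

```python
import torch
import torch.nn as nn

# generator: turns random noise into content "similar" to the training data
G = nn.Sequential(nn.Linear(16, 64), nn.ReLU(), nn.Linear(64, 8))
# discriminator: scores how closely a sample matches the input distribution
D = nn.Sequential(nn.Linear(8, 64), nn.ReLU(), nn.Linear(64, 1), nn.Sigmoid())

opt_g = torch.optim.Adam(G.parameters(), lr=1e-3)
opt_d = torch.optim.Adam(D.parameters(), lr=1e-3)
bce = nn.BCELoss()

real = torch.randn(256, 8) * 0.5 + 2.0   # stand-in for the "input content"
ones, zeros = torch.ones(256, 1), torch.zeros(256, 1)

for step in range(1000):
    # teach D to tell real content from generated content
    fake = G(torch.randn(256, 16)).detach()
    d_loss = bce(D(real), ones) + bce(D(fake), zeros)
    opt_d.zero_grad()
    d_loss.backward()
    opt_d.step()
    # teach G to produce content that D scores as a close "match"
    fake = G(torch.randn(256, 16))
    g_loss = bce(D(fake), ones)
    opt_g.zero_grad()
    g_loss.backward()
    opt_g.step()
```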

In other words, you feed it a set of musical compositions in a well-defined, preferably narrow, genre and the software will discern general melodies, harmonies, rhythms, beats, timbres, tempos, and transitions and then it will generate a composition using those melodies, harmonies, rhythms, beats, timbres, tempos, transitions and pseudo-randomization that, theoretically, could have been composed by someone who composes that type of music.

Or, you feed it a set of stories in a genre that follow the same 12-stage heroic story arc, and it will generate a similar story (given a wider database of names, places, objects, and worlds). And, if you take it into our realm, you feed it a set of contracts similar to the one you want for the category you just awarded and it will generate a usable contract for you. It Might Happen. Yaah. And monkeys might fly out of my butt!

CHATGPT is a very large multi-modal model, built on deep learning, that accepts image and text as inputs and produces outputs expected to be in line with what the top 10% of experts would produce in the categories it is trained for. Deep learning is just another term for a multi-level neural network with massive interconnection between the nodes in connecting layers. (In other words, a traditional neural network may only have 3 levels for processing, with nodes only connected to 2 or 3 nearest neighbours on the next level, while a deep learning network will have connections to more near neighbours and at least one more level [for initial feature extraction] than a traditional neural network that would have been used in the past.)

How large? Large enough to support approximately 100 Trillion parameters. Large enough to be incomprehensible in size. But not in capability, no matter how good its advocates proclaim it to be. Yes, it can theoretically support as many parameters as the human brain has synapses, but it’s still computing its answers using very simplistic algorithms and learned probabilities, neither of which may be right (in addition to a lack of understanding as to whether or not the inputs we are providing are the right ones). And yes, its language comprehension is better, as the new models realize that what comes after a keyword can be as important as, or more important than, what came before (as not all grammars, slang, or tones are equal), but the probability of even a ridiculously large algorithm correctly interpreting meaning (without the tone, inflection, look, and other non-verbal cues that signal when someone is being sarcastic, witty, or argumentative, for example) is still considerably less than that of a human.

It’s supposed to be able to provide you an answer to any query for which an answer can be provided, but can it? Well, if it interprets your question properly and the answer exists, or a close enough answer exists along with enough rules for altering that answer to the answer you need, then yes. Otherwise, no. And yes, over time, it can get better and better … until it screws up entirely. And when you don’t know the answer to begin with, how will you know the 5 times in a hundred it’s wrong, and which one of those 5 times it’s so wrong that, if you act on it, you are putting yourself, or your organization, in great jeopardy?

And it’s now being touted as the natural language assistant that can not only answer all your questions on organizational operations and performance but even give you guidance on future planning. I’d have to say … a sphincter says what?

Now, I’m not saying that, properly applied, these Augmented Intelligence tools aren’t useful. They are. And I’m not saying they can’t greatly increase your efficiency. They can. Or that appropriately selected ML/PA techniques can’t improve your automation. They most certainly can.

What I am saying are these are NOT the magic beans the marketers say they are, NOT the giant beanstalk gateway to the sky castle, and definitely NOT the goose that lays the golden egg!

And, to be honest, the emphasis on this pablum, probabilistic, and purposeless third party tech is not only foolish (because a vendor should be selling their solid, specialty-built solution for your supply chain situation) but insulting. By putting this first and foremost in their marketing, they’re not only saying they are not smart enough to design a good solution using expert understanding of the problem and an appropriate technological solution, but also that they think you are stupid enough to fall for their marketing and buy their solution anyway!

Versus just using the tech where it fits, and making sure it’s ONLY used where it fits. For example, Zivio uses #ChatGPT to draft a statement of work only after gathering all the required information and similar Statements of Work to feed into #ChatGPT, and then it makes the user review, and edit as necessary, knowing that while the #ChatGPT solution can generate something close when given enough information to work with, every project is different, an algorithm never has all the data, and what is produced will therefore never be perfect. (Sometimes it’s close enough that you can circulate it as a draft, or even post it for a general purpose support role, but not for any need that is highly specific, which is usually the type of need an organization goes to market for.)

Another example would be using #ChatGPT as your Natural Language Interface to provide answers on performance, projects, past behaviour, best practices, expert suggestions, etc., instead of having the users go through 4+ levels of menus, designing complex reports/views with multiple filters, etc. … but building in logic to detect when a user is asking a question on data, versus asking for a prediction on data, versus asking for a decision instead of making one themselves … and NOT providing an answer to the last one, or at least not a direct answer. For example, “how many units of our xTab did we sell last year?” is a question on data the platform should serve up quickly. “How many units do we forecast to sell in the next 12 months?” is a question on prediction the platform should be able to derive an answer for using all the data available and the most appropriate forecasting model for the category, product, and current market conditions. “How many units should I order?” is asking the tool to make a decision for the human, so the tool should either detect it is being asked to make a decision it doesn’t have the intelligence or perfect information to make and respond with “I’m not programmed to make business decisions”, or return an answer along the lines of “the current forecast for next quarter’s demand for xTab, for which we will need stock, is 200K units; typical delivery times are 78 days; and, based on this, the practice is to order one quarter’s units at a time”. The buyer may not question the software and may blindly place the order, but the buyer still has to make the decision to do that.
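A sketch of that routing logic; the keyword classifier below is a naive stand-in for whatever intent model a real platform would use, and everything other than the 200K units and 78 days from the example above is a made-up placeholder:

```python
from enum import Enum

class Intent(Enum):
    DATA = "question on data"
    PREDICTION = "question on prediction"
    DECISION = "request for a decision"

def classify(query):
    """Naive keyword stub; a real platform would use a trained intent model."""
    q = query.lower()
    if q.startswith(("should", "how many units should", "what should")):
        return Intent.DECISION
    if any(w in q for w in ("forecast", "next 12 months", "next quarter")):
        return Intent.PREDICTION
    return Intent.DATA

def answer(query):
    intent = classify(query)
    if intent is Intent.DATA:          # serve it up quickly
        return "xTab units sold last year: <from the data warehouse>"
    if intent is Intent.PREDICTION:    # best model for category + market
        return "12-month xTab forecast: <from the forecasting engine>"
    # decisions stay with the human: return the supporting facts instead
    return ("I'm not programmed to make business decisions. Next quarter's "
            "xTab forecast is 200K units, typical delivery is 78 days, and "
            "the practice is to order one quarter's units at a time.")

print(answer("How many units should I order?"))
```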

And no third party AI is going to blindly come up with the best recommendation, as it has to know the category specifics, which forecasting algorithms are generally used and why, the typical delivery times, the organization’s preferred inventory levels and safety stock, and the best practices the organization should be employing.

AI is simply a tool that provides you with a possible (and often probable, but never certain) answer when you haven’t yet figured out a better one, and no AI model will ever beat the best human designed algorithm on the best data set for that algorithm.

At the end of the day, all these AI algorithms are doing is learning a) how to classify the data and then b) what the best model is to use on that data. This is why the best forecasting algorithms are still the classical ones developed 50 years ago, as all the best techniques do is get better and better at selecting the data for those algorithms and tuning the parameters of the classical model, and why a well designed, deterministic algorithm by an intelligent human can always beat an ill-designed one by an AI. (Although, with the sheer power of today’s machines, we may soon reach the point where we reverse engineer what the AI did to create that best algorithm, versus spending years of research going down the wrong paths, when massive, dumb computation can do all that grunt work for us and get us close to the right answer faster.)
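For example, simple exponential smoothing, one of those decades-old classical models, with a brute-force parameter search standing in for the “tuning” that the AI layer mostly reduces to (the demand numbers are illustrative):

```python
def ses_forecast(series, alpha):
    """Simple exponential smoothing: a deterministic, 50-year-old classic."""
    level = series[0]
    for x in series[1:]:
        level = alpha * x + (1 - alpha) * level
    return level                       # one-step-ahead forecast

def tune_alpha(series):
    """Brute-force the smoothing parameter over a grid: the grunt work
    that modern 'model selection' largely amounts to."""
    def sse(alpha):
        level, err = series[0], 0.0
        for x in series[1:]:
            err += (x - level) ** 2
            level = alpha * x + (1 - alpha) * level
        return err
    return min((i / 100 for i in range(1, 100)), key=sse)

demand = [180, 195, 210, 190, 205, 220, 215, 230]   # illustrative units/month
alpha = tune_alpha(demand)
print(alpha, round(ses_forecast(demand, alpha), 1))
```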

For all I care, they can ban all the Social Media Platforms!

For those who haven’t heard, Montana is the first state to try to ban TikTok, presumably because it’s owned by a Chinese company and, the claim goes, China is harvesting the data. Following that logic, shouldn’t they ban every platform that has Chinese investment?

… and then every social media platform that has a presence in China and, as such, must adhere to Chinese law?

Of course, if the real reason they want to ban Social Media platforms is because they realize the damage that social media platforms have been doing to us (and are using the Chinese ownership as an excuse), they can ban them all — including their home-grown American platforms. After all, Twitter made us dumber than a doornail and Facebook is a Toilet so please feel free to take them away too.

Remember, even if you overlook the fact that Facebook is primarily used for sharing conspiracy theories and information that is NOT fact-checked, seeking attention, cyber-stalking your favourite celebrities, and other uses besides the good, wholesome, community aspects it tirelessly promotes, if you acted in real life like you act on Facebook, as the image below suggests, you’d be the subject of multiple psychological assessments and suspicious individual #1 at your local precinct. (Credit to the original source, which I wish I knew!)

Don’t Cheat Yourself with Cheat Sheets, Kid Yourself with KPI Quick Lists, or Rip Yourself Off with Bad RFPs!

In an effort to quickly catch up on the parts of S2P the doctor hasn’t been covering as much in the past few years, when he was focussed primarily on Analytics, Optimization, Modelling, and advanced tech in S2P (inc. RPA, ML, “AI”, etc.), he’s been paying more attention to LinkedIn. Probably too much, even though he can (speed) read very fast and skim a semi-infinite scroll page in a minute. Why? Because a lot of what he’s been seeing is troubling him and, as per last Friday’s post, sometimes angering him: predatory sales-people and consultants are giving other sales-people and consultants bad advice (presumably to increase their follower count or coaching sales or whatever) that will not only hurt what could be a well-intentioned sales-person or consultant (they still exist, though sometimes it seems they are fewer by the year as more sales people bleed into our space from enterprise software, looking for the next hot software solution and the next big payday), but also the individuals, and companies, those influenced sales people sell to in the thoughtless, emotionless, uncaring, aggressive style the predatory sales coaches are mandating. (Not to say that a sales person shouldn’t be aggressive about getting a sale, just that they should be focussed on the companies they can actually help and on getting the customer all the information and insight that customer needs to make the right choice, feel comfortable about it, and feel prepared to defend it. The aggression should be channeled into making sure their company does everything it can to properly educate the potential client before that client commits to a long term relationship.)

A few of the things that have been repeatedly troubling him are

  1. all the cheat sheets he’s been seeing for those looking to get a better grip on Procurement and how it integrates into the rest of the business, that supposedly summarize everything you need to know about accounting, finance, payments / accounts payable, etc. to help you make good choices about Procurement in general;
  2. all the 10/20/50 Procurement, Spend, Manufacturing, etc. KPIs that you need in order to keep tabs on your Procurement, cashflow plan, product lifecycle, etc.; and
  3. all the RFP outlines or guides that are being made available, sometimes in exchange for your email address, to help buyers acquire a certain technology.

And it’s not because they’re bad. They’re not. Some of them are actually quite good. A few are even excellent. Some of the cheat sheets and KPI lists the doctor has seen are incredibly well thought out, incredibly clear, and incredibly useful to you. Some are so good that, as a buyer, likely with little support from your organization and even less of a training budget, you should be profusely thanking whoever was so kind to create this for you and give it away for free.

Nor is it because the doctor suspects any ill intent or malice behind the efforts (in the vast majority of the cases). Many of these people giving away the cheat sheets or the KPI lists are generally trying to help their fellow humans get better at the job and improve the profession overall. And when the RFP outline is coming from a former practitioner, it’s also the case that they are typically trying to help you out (and maybe sell their services as a consultant, but they are providing proof of value up-front).

So why has it been troubling the doctor so? It took a while and some thought to put his finger on it, and the answer is, surprisingly, one of the reasons [but not the obvious one] that the doctor hates software vendor RFPs and despises any vendor that gives you one.

Now, the primary reason the doctor despises those RFPs, which became popular when Procuri started giving them away en masse in the mid-to-late 2000s (before being acquired by Ariba and quietly sunsetted, as the integration never finished by the time Ariba sold to SAP, for those of you who remember the APE circus), is that these RFPs are always written to be entirely one sided and ensure the vendor giving them away ALWAYS comes out on top. The feature list is exactly what the vendor offers, the weightings correspond exactly to the vendor’s core strengths, etc. etc. etc. And don’t tell me you can start with a vendor RFP and alter it to suit other vendors, because you can’t. You’d have to know all the features, as the vendor focussed on point features, not integrated functions, and you, as a buyer who’s never used a modern system, have no way to equate features (when vendor specific terminology is used) or to determine if one feature is more advanced than another. (That was the reason the doctor co-developed Solution Map: to help rate and evaluate technology, which is the one thing most buying organizations can’t do well. Not the things they can do well, and better than most analyst firms, like rate the appropriateness of services to them, assess whether or not the vendor has a culture that will be a good fit, define their business needs and goals, etc.)

But the primary reason doesn’t apply here. So what’s the secondary reason? When an average, overworked, underpaid, and overstressed buyer got their hands on one of these free vendor RFPs, especially when the RFP was thick, heavy, professionally edited, and prepared to look polished and ready for use, and was more detailed than anything the buyer could produce, they thought they had their answer and could run with it. They thought it was all they needed to know, for now, and that they could send it out, collect the responses, and get back to fire-fighting. They were lulled into a false sense of security.

And that’s why these cheat sheets and KPI guides and former buyer/consultant RFPs are so troubling. When you’ve been struggling without even the basics, and these are so good that they teach you all the basics, and more, it seems like they have all the answers you need and that when you learn those basics, encapsulate them in the tool, and start running your business against them, things will be better. Then you configure your tool to respect the basics, encode the KPIs, and things are better. Significantly better, and for once processes start going smoothly. And then you believe you know everything you need to in that area (that’s not your primary area) to interface with those functions and that those KPIs will be enough to keep you on the Procurement track and let you know if there are any issues to be addressed. And you start operating like that’s the case. But it’s not.

And that’s the problem — these cheat sheets, guides, and templates, which are much better than what you’d get in the past, can make such a drastic difference when you first learn and implement them that they instill a false sense of security. You get complacent with your integrations, reports, and KPI monitors, not recognizing that they only capture and catch what they were encoded to capture and catch. However, real world conditions are constantly changing, the supply base is constantly changing, and external events such as natural disasters, political squabbles, and pandemics are coming fast and furious. If the risk metric doesn’t take into account external events, real-time slips in OTD (as it is based on risk profiles captured at onboarding and updated upon contract completion), or past regulatory compliance violations (as an indicator of potential violations in the future), the organization could be blindsided by a disruption the buyer thought the KPI would prevent. Similarly, the wrong cash-flow related KPIs can give a false sense of liquidity and financial security, and the wrong inventory metrics can lead to the wrong forecasts in outlier categories (very fast moving, very slow moving, or recently promoted).

In other words, when you’re given the answers without the rationale behind them, or deep insight into how appropriate those answers are to your situation, you will cheat yourself, kid yourself, or, even worse, rip yourself off. And that’s worrisome. So please, please, please remember what these are — learning aids and starting points only — not the end result. (Especially if it’s an RFP template.)