Monthly Archives: August 2024

Eyvo, heigh-ho, off to procure we go …

We receive requests by the score
A thousand items, sometimes more
It’s up to you what you use ’em for
We just process those POs …

Eyvo was founded in 2012 to develop a Procurement solution that worked. One that people would use because it mimicked and enabled the process they followed every day and made it easier to use the system than to bypass it. The founders’ goal was to use the 30 years of experience they had at the time (now 40 years) to build a SaaS system that enabled everyone involved in Procurement.

They did this by starting with the processes buyers performed on a daily basis, defining the processes that would make it easy for the organizational consumers (the requesters) to find what they needed and get real-time updates on where their requests stood, documenting the processes that management used for approvals, and designing a system that would support all three groups. They focussed on making it highly configurable to account for the small variances in processes between organizations (and we mean small; as we’ve noted a few times, the fundamentals of Procurement haven’t changed since the first manual, the Handbook of Railway Supplies, was published back in 1887). Then they focussed on working with their first few customers to make sure that the system worked for the buyers, approvers, and requesters in particular.

And they also did this by picking their niche: the combination of primary industries, market size, and S2P modular focus that they saw as the intersection of the greatest market need and their expertise. In particular, they chose to focus on services (particularly legal, finance, and insurance) and hospitality (restaurant, hotel, and spa/retreat chains). And, most importantly, they set out to build a unique brand of augmented Procurement solution with built-in Inventory and Asset Management as well as unique capabilities and configurations for typical procurement departments in those industries.

The platform is a modern cloud-hosted multi-tenant Procurement Spend Management application with automated workflow, user and role-based security, multi-currency support, multi-lingual support for every language supported by Google Translate, and extensive rules-based configuration capability. A platform that supports full request intake; freeform, catalog, and RFX procurement; full invoice, receiving, 3-way match, and payment management; inventory management (including fulfillment from inventory); asset management; carbon tracking; supplier onboarding and information, insurance, compliance, and diversity management; and deep reporting (with a new analytics module slated for Q4). It’s everything the clients in their target industries need. And that’s the point.

So what does it do? Let’s take it piece-by-piece.

Request (Intake) Portal

One of the core modules is the request portal, which provides a (multi-tab) one-screen interface for organizational employees to make the purchasing requests they need quickly, easily, and without any hassle. (Intake is built in; no third-party solution required!) Through this portal, they can also access a complete list of their requests and see exactly where each request is in the procurement process, or go directly to a past request with the request number.

On the main request screen, the user can see the current status (likely “in process”), the auto-populated request number, approval status (required), delivery name, address, method, account, etc. The user just has to add their items (from a catalog, punch-out, file upload, “basket”, “recipe”, or free-form entry) and submit. If anything needs to be modified from the defaults, the user can click over to the distribution, terms & conditions, or attached documents tabs. On the main tab, they can bring up the budget assigned to them or, if they have budgets per category, the budget for the category they are assigning the requisition to.

Catalog search is as simple as an online storefront search. Simply enter the search terms and all of the relevant items are returned. These can be quickly filtered by category and supplier. When it comes to punch-out catalogs, it’s simply a matter of selecting the supplier, popping over to the site, adding the product to the cart, and pushing it back, as with any other e-Procurement system.

If a requester wants to buy items across more than one category, they can individually assign each item to a (non-default) category. And the approval rules and chains will update depending on what products are selected, what suppliers are selected, and what the total request amount is.
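Dynamic approval chains of this kind are typically just rule evaluation over the request’s attributes. Here is a minimal illustrative sketch; the role names, rules, and thresholds are invented for illustration and are not Eyvo’s actual configuration:

```python
def approval_chain(total_amount, categories, suppliers, preferred_suppliers):
    """Build an approval chain from hypothetical request attributes."""
    chain = ["requester_manager"]          # everyone needs their manager
    if "it_hardware" in categories:        # category-specific approver
        chain.append("it_approver")
    if not set(suppliers) <= preferred_suppliers:
        chain.append("procurement_review")  # non-preferred supplier flagged
    if total_amount > 10_000:              # illustrative spend threshold
        chain.append("finance_director")
    return chain
```

Change the items, suppliers, or total, and the chain recomputes automatically, which is all “approval rules will update” really means under the hood.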

If the organization typically orders sets of items together, such as suite furnishings for a hotel or a new-employee starter kit for a services firm, these can be pre-defined, and selecting one will populate a request with all appropriate items. If the client is in hospitality and operates a restaurant, the platform also supports recipe management: a recipe is not just a basket of items, as the platform also calculates how much of each item is needed for a given number of servings, so specifying the recipe and the forecasted order volume populates a purchase order accordingly.
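The recipe-to-order calculation is just per-batch quantities scaled by the forecast. A rough sketch, where the item names and data shape are ours, not Eyvo’s data model:

```python
# Quantity of each ingredient needed for ONE batch of a (hypothetical) recipe
recipe = {
    "flour_kg": 2.5,
    "tomato_sauce_l": 1.0,
    "mozzarella_kg": 0.8,
}

def populate_order(recipe, forecast_batches):
    """Scale per-batch quantities by the forecasted number of batches."""
    return {item: qty * forecast_batches for item, qty in recipe.items()}

order = populate_order(recipe, forecast_batches=40)
# order["flour_kg"] is 100.0, i.e. 2.5 kg per batch times 40 batches
```

A real implementation would also net out on-hand inventory before generating the PO, but the core arithmetic is this simple.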

Request Approvals

Approvals are easy and can be done using the approver’s preferred method. They can approve natively in the platform (if they log in for other reasons), through an app (if they have a modern phone and like apps), or through email with a single click (which takes them to a web-based approval screen where they can enter any notes or simply send the orders off to the suppliers).

Request Management

Requests can be accessed and processed on a request-by-request basis at any time. They can be changed to be fulfilled from inventory, or batched together to be fulfilled through an RFX or a PO to a preferred supplier, and then fulfilled from inventory when the RFX or batch PO is complete.

Receiving, Fulfillment, Inventory, and Asset Management

When goods are received, the buyer can quickly enter the quantity of goods received, as well as any that are rejected, and then the goods can be entered into inventory and assigned an inventory location or sent directly to the buyer. If a good is considered an asset (such as an expensive laptop that the organization wants to track), each individual unit can have identifying information tracked and be associated with the organizational employee it is assigned to.

If a request is for an item that is in inventory, the request can be fulfilled from inventory, and a buyer can override a catalog request to fulfill it from inventory. If an item is considered an asset, it can be booked as an asset the minute the inventory arrives, and depreciation, which the system also tracks, can start from the date of acquisition or the date of assignment to a user.
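Tracking depreciation from either start date is a small calculation. A sketch using straight-line depreciation (we don’t know which methods Eyvo supports; the function and its parameters are illustrative):

```python
from datetime import date

def straight_line_book_value(cost, salvage, useful_life_years, start, as_of):
    """Book value under straight-line depreciation from a chosen start date."""
    years_elapsed = (as_of - start).days / 365.25
    # Clamp: no depreciation before the start date, none past the useful life
    years_elapsed = min(max(years_elapsed, 0), useful_life_years)
    annual = (cost - salvage) / useful_life_years
    return cost - annual * years_elapsed
```

Starting depreciation at acquisition versus at assignment is just a different `start` argument; everything else is unchanged.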

Invoice Management

Invoices can come in as a PO flip through the vendor portal (which we’ll discuss later) or through e-mail. Invoices that come in through e-mail are auto-parsed by AI (to the extent possible with high confidence) and presented to a designated user for verification (and, if necessary, completion) and approval before they are entered into the system.

Once an invoice is received, it is three-way matched to the goods receipt and purchase order before it goes into the payment queue, and as soon as it is accessed by an accounts payable clerk, they can bring up the 3-way match to identify any discrepancies that may need resolution before payment. The 3-way match contains all of the relevant information on a line-item basis: units ordered, purchase order amount, units received, invoiced amount, un-invoiced amount, amount paid to date (if any partial payments were made), and unpaid amount to date.
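The line-level discrepancy check at the heart of a 3-way match is straightforward to sketch. The field names and tolerances below are our illustration, not Eyvo’s schema:

```python
def three_way_match(po_line, receipt_line, invoice_line, qty_tol=0, amt_tol=0.0):
    """Flag line-level discrepancies between PO, goods receipt, and invoice."""
    issues = []
    if abs(receipt_line["qty"] - po_line["qty"]) > qty_tol:
        issues.append("received qty differs from ordered qty")
    if abs(invoice_line["qty"] - receipt_line["qty"]) > qty_tol:
        issues.append("invoiced qty differs from received qty")
    if abs(invoice_line["unit_price"] - po_line["unit_price"]) > amt_tol:
        issues.append("invoiced price differs from PO price")
    return issues  # an empty list means a clean match
```

An AP clerk’s “bring up the 3-way match” view is essentially this check run per line, with the partial-payment fields (paid to date, unpaid to date) layered on top.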

Carbon Tracking

Given that carbon management is becoming top of mind at many companies, and a top priority at any that need to do carbon reporting, Eyvo decided to build it in at the line-item level. A requester can see the carbon impact of every requisition from the time of requisition, and buyers can see it at the time of approval, Purchase Order generation, and invoice receipt. The platform can then roll it up by category and by time period, so the organization can produce reports by time period by category, and then roll those up across the organization.
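The roll-up itself is a simple aggregation over line items. A sketch of the idea (the record layout is our assumption, not Eyvo’s internals):

```python
from collections import defaultdict

def roll_up_carbon(line_items):
    """Aggregate line-level carbon by (period, category), then by period."""
    by_period_category = defaultdict(float)
    by_period = defaultdict(float)
    for item in line_items:
        key = (item["period"], item["category"])
        by_period_category[key] += item["kg_co2e"]
        by_period[item["period"]] += item["kg_co2e"]
    return dict(by_period_category), dict(by_period)
```

The organization-wide figure is then just the sum of the per-period totals, which is why capturing carbon at the line-item level makes every higher-level report derivable.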

RFQs: Requests for Quotes

RFQs are very straightforward. The buyer selects the items of interest, the platform suggests suppliers, and the buyer cherry-picks the ones she wants; she can even invite potential new suppliers with a custom URL (and, if the supplier responds, this will kick off an onboarding process where the supplier will be accepted or rejected). Suppliers can respond positively simply by specifying the unit price, lead time, and, if allowed, the quantity they can supply. (Or they can give the buyer permission to record a response on their behalf; the buyer can do so, and the system will explicitly record that the buyer responded on the supplier’s behalf for audit purposes.) They can also decline (and provide a reason), and if they don’t respond in the requested timeline, the buyer can record the non-response (as statistics are kept on supplier response rates to help a buyer decide whether or not to invite a supplier to an RFQ in the future).

When all the bids are in, or time is up, the buyer can see a list of bids side by side and do a full award or partial award to each supplier and then generate the purchase orders.
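A side-by-side bid view is essentially a pivot of the bid records by item. A hypothetical sketch (field names are ours; a real system would also surface lead time, response status, and award splits):

```python
def compare_bids(bids):
    """Pivot bids into a per-item view for side-by-side comparison."""
    by_item = {}
    for b in bids:
        by_item.setdefault(b["item"], []).append(
            (b["supplier"], b["unit_price"], b["lead_time_days"])
        )
    # Sort each item's offers by price so the cheapest shows first
    for offers in by_item.values():
        offers.sort(key=lambda offer: offer[1])
    return by_item
```

From this view, a full award takes the top offer per item, and a partial award splits an item’s quantity across two or more rows before the POs are generated.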

The platform can also publish open RFQs to a portal that can be accessed publicly by any current or potential supplier, who can bid, if they are an existing onboarded supplier, or express interest and register for onboarding, if they are new.

Dashboard

The entry point to the application is a widget-based dashboard that can be custom configured from dozens of pre-built widgets that provide report-based insights into different parts of the application (requests, POs, invoices, inventory, assets, carbon, and spend, among others) with built-in filters that allow users to drill down to get the insights they need. (And some widgets can be customized to have a transaction level view.)

The Buy Side

The best part of the platform is the integration — it’s one seamlessly integrated application experience for the buyer that makes it easy for the buyer to walk up and down the process from any point. In a requisition, they can approve it, generate the purchase orders (once approved) or kick off an RFQ. They can then jump into the purchase order, see the status, and if there are any receipts or invoices associated with it, jump into those. Once in the invoice, they can bring up the 3-way match and, if all is A-OK, kick off a payment. If the supplier submits an e-mail invoice, the system auto-extracts as much data as possible (and sometimes all of it if the invoice format is known), and once they approve the extraction, they can jump into any associated good receipts or the purchase order, and walk all the way back to the source requisition if needed.

From a supplier record, the buyer can see all associated RFQ events, orders, and invoices and walk into those, or access the complete supplier profile with profile, insurance, compliance, and diversity data. And, of course, they can get to that supplier profile from any associated artifact. The ability to jump from an invoice to a receipt to inventory management also helps buyers get a grip on inventory, as well as identify goods as assets and start tracking them accordingly. It really fits the needs of the mid-sized service and hospitality organizations Eyvo goes after: organizations with multiple locations that need to do regular, tactical ordering, track the inventory, and make sure it is appropriately utilized, while allowing a central buy desk to group requisitions, identify common needs, and procure for fulfillment from central inventory.

Configuration

All of the “data” and “codes” are fully customizable to the client organization. This includes, but is not limited to:

  • account code structures
  • category code structures
  • currencies and conversions
  • delivery points
  • invoice points
  • items
  • documents
  • supplier fields
  • standard terms and conditions
  • punchouts
  • etc.

and the majority of this data can be modified by the end client at any time. (Except for account code structures and category code structures, of course, since those drive the application and the integrations, although the individual codes can be updated at any time.)

Onboarding

Each potential vendor is provided with a unique URL where they are walked through a customized onboarding workflow that collects the basic supplier information, contact details, financial details, insurance details, compliance details, and necessary documents. Optionally, if desired by the buyer, the supplier can also specify categories, items, and their associated carbon footprints. This is customized for each client, and can be further customized based on the categories the supplier is expected to supply and/or indicates they wish to supply.

Vendor Portal

Once a vendor is onboarded, they have access to the vendor portal where they can

  • maintain their information (updates go into an approval queue in the suppliers section of the application for the buyer to accept or reject)
  • maintain their catalog (item updates, including prices, also go into an approval queue)
  • access, and respond to, current open RFQs
  • access, and review, their past RFQ responses (whether or not successful)
  • maintain all of their documents (and receive auto-reminders when necessary insurance or compliance documents need to be updated)
  • access their purchase orders, and fulfill them
  • create invoices, either as PO flips or PDF uploads
  • track invoice status

Like the buyer application, it is a seamlessly integrated experience that is very easy to use (and, like the onboarding process, it can be customized by client on implementation) and minimizes the amount of time and effort needed by the supplier. (A non-strategic non-critical commodity goods supplier shouldn’t need to go through 27 steps to be onboarded or fulfill an order.)

Summary

Eyvo is, as we stated, a procurement spend management solution for the mid-sized service and hospitality industries that delivers what it claims: an end-to-end procurement and inventory management solution that meets all of the needs of a typical organization in this industry niche for all of the employees impacted by Procurement (requesters/receivers, buyers, managers, and vendors), where each has their own customized portal view tailored to their needs. And while there are no fancy visuals of Matt with his approved request, no Gen-AI chatbots that mistake your request for a new iPhone for an old rotary wired landline phone shaped like an apple when you ask for an “apple phone”, and no animated visuals of where the request is in the process, there is something a lot of new-generation intake-based procurement apps lack: actual Procurement functionality tailored to your needs that gets the job done quickly and easily, so you can get in, get it done, and get out, with the financials, auditing, and reporting that make accounting, compliance, and management happy too.

So if you are a mid-sized service or hospitality organization looking for a real Procurement solution, be sure to add Eyvo to your short-list. Even if they aren’t the perfect fit, we guarantee that just the understanding that you gain of what a real Procurement solution does will be worth your time.


It ain’t no trick to get done quick
If you do your work with a solution hand-picked
Like Eyvo. Like Eyvo. Like Eyvo. Like Eyvo.
Your Procurement done on time!

Why are there so many tech failures?

Those following along know that this is a primary concern of both THE REVELATOR and the doctor because, if we were truly progressing in technology, we wouldn’t still be seeing the same enterprise technology implementation failure rates of 80%+ that we saw two decades ago! (This is why the doctor decided to update, expand, and republish his Project Assurance series from a decade ago. See Part 1, Part 2, and Part 3.)

THE REVELATOR asked this question again in his recent article on Why is AI such a hard sell?, in comments on my recent piece on Vendor Onboarding for Payment Assurance (because it reminded him of how so many vendors miss critical solution elements required by the business in their technology-first push*), and in comments to his recent article on DPW & Comdex.

The answers are varied, and regardless of which one applies in the failure at hand, none of them are good. In fact, they are mostly so bad that THE REVELATOR, who is as fed up as the doctor with all of the sales and marketing bullcr@p, flat out stated in his most recent article that “after 40-plus years, I say this with the deepest sincerity: 90% of salespeople aren’t worth the gum stuck on the bottom of a shoe.” And while the doctor would like to think the number wasn’t that high, given the failure rate, it can’t be that far off.

A lot of commentary as to why can be found in the comments to these (and other recent articles), but most of them revolve around the following reality (which the doctor also knows all too well with over 25 years in tech and Procurement).

At the majority of tech enterprises,

  • sales people are compensated on how much they sell, not how successful the solution is for the customer
  • sales people are pressured to hit numbers, or be cut if they have even ONE quarter in the bottom 10%
  • sales people don’t stick around long enough for success to matter — as THE REVELATOR has noted,
    • sales people could make a good living selling next to nothing for 18 to 24 months drawing a good 5-figure salary every month (once they made a few sales and had a “track record”) and then changing jobs as soon as they closed a few mega deals (which could sometimes net them a six-figure departure bonus)
    • sales people make more money by changing jobs just after closing a few F500 clients (and negotiating a bigger salary building on their recent high)
    • … and even more if they can do it during the rapid rise in spending (that translates into top engineers and top sales people at any cost) at the forefront of a hype cycle (when early vendors believe they can make the biggest sales first if they just have the “best” sales people, defined as those who just closed the biggest deals at their last job with F500/G3000 customers)

It’s all about how much, how fast they can sell … not about actually selling a solution and making a client successful (and building a pipeline for upsell over time as they learn the customers’ business and create newer, better solutions for the clients who would happily fork over fistfuls of dollars to a vendor with a track record of delivering solutions that actually worked).

As to THE REVELATOR‘s paraphrased question of why these sales people don’t care that the solution they are selling is going to fail, it becomes pretty obvious when you consider the above:

  • they aren’t compensated to solve customer problems; only to sell as much as possible as fast as possible and do so at ANY and ALL costs
  • if it’s a big enterprise suite deal with an F500/G3000 being implemented by a third party consultancy, chances are the implementation won’t even be finished before they move on to their next job (and if it fails, then it’s the consultancy’s fault for sending the B-team)
  • caring would weigh down on their conscience until they had to find a new occupation (and if they had no other significant skill, then what would they do?)

And if they are actually caring people?

Then they convince themselves the solution can be configured to work with the right tweaks, even if, in reality, it can’t.

So what is a buyer to do? What the doctor has been saying for years.

Do their research!
And, most importantly, get unbiased third-party help with need identification, vendor identification, and proposal review!

Why? Because, as the doctor has said many times, including in the comments in response to THE REVELATOR‘s comments, everyone needs to remember:

  • there are no silver bullet tech solutions
  • many “solution” providers riding the current hype cycle are just proffering a new form of silicon snake oil
  • some providers don’t have anything except this snake oil, and the minute the third party fails, so do they
  • relying on the wrong tech is dangerous, just like relying on airplanes made under poor quality control processes … you’ll get a few good flights out of them, and then the door will suddenly blow off as the landing gear falls off on the same flight, and then what do you do? (Unless you have “Sully” at the helm, you pray to whatever deity you believe in, because at that point, there is nothing you can do.)

* Whereas PaymentWorks, chronicled in that piece, started by identifying what their clients’ biggest business issues were, and solved those first. So while it’s not the broadest Supplier Management suite on the market, it is one that contains the necessary functionality to solve a very specific set of pain points that almost no other vendor does. Most of you should find that shocking given that there are over 100 Supplier Management vendors, which illustrates THE REVELATOR‘s comment that not enough technology providers put solving customer problems first.

Proper Project Planning is Key to Procurement Project Prosperity! Part 3

In Part 1 we noted that, ten years ago, we wrote about the importance of Project Assurance (a methodology for keeping your Supply Management Project on track), and that this typically ignored area of project management is becoming more important than ever, given that the procurement technology failure rate, like the technology failure rate as a whole, hasn’t improved in the last decade and is still as high as 80% (or more), depending on the study you select.

Then, in Part 2, we told you that, even before we dove into the project steps for which both assurance and guidance are needed (because assurance isn’t enough if the project [plan] isn’t right), we were going to give you one critical action that you needed to undertake to ensure everything starts off, and stays, right. And that particular action is to:

  • engage an independent expert to guide you through the entire process and help where needed

because the complexity of Procurement and Procurement Technology has reached a point where it just overwhelms the average Procurement professional. It’s been more than two decades since global conditions impacting Procurement have been this complex, and technology has reached the point where even experts are struggling to make sense of the market madness, meaningless buzzwords, and the overwhelming onslaught of Hogwash.

We also pointed out that this expert must be truly independent and cannot be:

  • a resource of the company,
  • a resource of the vendor, or
  • a resource of the implementation provider.

This resource is critical in each of the phases we described in our original Project Assurance series (Part I, Part II, Part III, Part IV, and Part V). Here’s a high level description of why.

  • Strategy: the first step is a “health assessment” that pinpoints where the organization is in Procurement Maturity, and what it should be looking for to get to the next level (otherwise, what’s the point?), and this is where an expert can do a maturity and gap analysis
  • Acquisition: the expert can help craft the right RFP for the organization, identify which vendors have the appropriate technology (to ensure every response received would at least address some of the key pain points, and that the responses would be comparable), and help with the evaluation and review (acting as sales-speak-to-plain-English translators)
  • Planning: once one or more solution (and implementation) vendors are selected, the expert is key in the creation of a realistic, and logical, project plan that ensures the organization doesn’t agree to a “big-bang” implementation proposal (which always results in a “big-bang” and has led to major supply chain failures), that the resource requirements won’t be too strenuous on the organization, and that the most critical capabilities are implemented first
  • Design/Plan Review: the plan is compared to the strategy, RFP, and overall business goals to make sure everything is aligned before the project progresses
  • Development/Implementation: the expert ensures each phase starts, completes, and is properly tested and verified on time; uncovers the reasons for delays and the root causes to prevent future problems; and when changes are required, helps to define and supervise change management (plans)
  • Testing & Training: the expert will not only ensure that the proper tests are designed, but that they are properly implemented and repeated until complete success is the result

In other words, the right expert is your guide to ensuring each step is designed right as well as conducted right, and can also take over any tasks you don’t have the expertise to do in-house. And, most importantly, the right expert is your key to Procurement Project Prosperity!

GEN-AI IS NOT EMERGENT … AND CLAIMS THAT IT WILL “EVOLVE” TO SOLVE YOUR PROBLEMS ARE ALL FALSE!

A recent article in the CACM (Communications of the ACM) referenced a paper by Dan Carter last year that demonstrated that the claims of Wei et al. in their 2022 “Emergent Abilities of Large Language Models” were unsubstantiated and merely wrong interpretations of visual artifacts produced by plotting graphs on an inappropriate semi-log scale.

Now, I realize the vast majority of you without advanced degrees in mathematics and theoretical computer science won’t understand the majority of technical details, but that’s okay because the doctor, who has advanced degrees in both, does, can verify the mathematical accuracy of Dan’s paper, and the conclusion:

LLMs (Large Language Models), the “backbone” of Gen-AI, DO NOT have any emergent properties. As a result, they are no better than traditional deep learning neural networks, and are, at the present time, ACTUALLY WORSE, since the lack of deep research means we don’t understand these models as well, and thus lack the ability to properly “train” them for repeatable behaviour or to accurately “measure” their outputs with confidence.

And while our understanding of this new technology, like any new technology, will likely improve over time, the realities are thus:

  • no amount of computing power has ever hastened the development of AI technology since research began in the late 60s / early 70s (depending on what you accept as the first paper / first program); it has always taken improvements in algorithms and the underlying science to make slow, steady progress (with most technologies taking one to two DECADES to mature to the point where they are ready for widespread industrial use)
  • the technology currently takes 10 times the computing power (or more) to compute “results” that can be readily computed by existing, more narrow, techniques (often with more confidence in the results)
  • the technology is NOT well suited to the majority of problems that the majority of enterprise software companies (blindly jumping on the bandwagon with no steering wheel and no brakes for fear of missing out on the hype cycle, which could cause a tech market crash unequalled by any except the dot-com bust of the early 2000s) are trying to use it for (and yes, the doctor did use the word “majority” and not “all” because, while he despises it, it does have valid uses … in creative applications (writing, audio, and video), not business or science applications, where it has almost unequalled potential compared to traditional ML designed for math- and science-based applications)

And the market realities that no one wants to tell you about are thus:

  • former AI evangelists and some of the original INVENTORS of AI are turning against the technology (out of a realization that it will never do what they hoped it would, that its energy requirements could destroy the planet if we keep trying, and/or that maybe there are some things we should just not be meddling with at our current stage of societal and technological evolution), including Weizenbaum and Hinton
  • Brands are now turning against AI … and even the Rolling Stone is writing about it
  • big tech and companies that depend on big tech (like Pharma) are starting to turn against AI … and CIOs are starting to drop Open AI and Microsoft CoPilot because, even when the cost is as low as $30 a user, the value isn’t there (see this recent article in Business Insider)

Now, the doctor knows there are still hundreds of marketers and sales people in our space who will consistently claim that the doctor is just a naysayer and against progress and innovation and AI and modern tech and blah blah blah because they, like their companies, have gone all in on the hype cycle and don’t want their bubble burst, but the reality is that

the doctor is NOT against “AI” or modern tech. the doctor, whose complete archives are available on Sourcing Innovation back to June 2006 when he started writing about Procurement Tech, has been a major proponent of optimization, analytics, machine learning, and “AI” since the beginning — his PhD is in advanced theoretical computer science, which followed a math degree — and, after actually studying machine learning, expert systems, and AI, he used to build optimization, analytics, and “AI” systems (including the first commercial semantic social search application on the internet)

what the doctor IS against is Gen-AI and all the false claims being made by the providers about its applicability in the enterprise back office (where it has very limited uses)

because the vast majority of the population does not have the math and computer science background to understand

  1. what is real and what is not
  2. what technologies (algorithms) will work for a certain type of problem and will not
  3. whether the provider’s implementation will work for their problem (variation)
  4. whether they have enough data to make it work

and, furthermore, this includes the vast majority of the consultants at the Big X who graduate from Business Schools with very basic statistics and data analytics training and a crash course in “prompt engineering” who can barely use the tech, couldn’t build the tech, and definitely couldn’t evaluate the efficacy and accuracy of the underlying algorithms.

The reality is that it takes years and years of study to truly understand this tech, and years more of day-in and day-out research to make true advancement.

For those of you who keep saying “but look at how well it works” and produce 20 examples to prove it, the reality is that it’s only random chance that it works.

With just a bit of simplification, we can describe these LLMs as essentially just super sophisticated deep neural networks with layers and layers of nodes that are linked together in new and novel configurations, with more feedback learning, and structured in a manner that gives them an ability to “produce” responses as a collection of “sub-responses” from elements in its data archive vs just returning a fixed response. As a result they can GENerate a reply vs just selecting from a fixed one. (And that’s why their natural language abilities seem far superior to traditional neural network approaches, which need a huge archive of responses to have a natural sounding conversation, because they can use “context” to compute, with high probability, the right parts of speech to string together to create a response that will sound human.)

Moreover, since these models, which are more distributed in nature, can use an order of magnitude more (computational) cores, they can process an order of magnitude more data. Thus, if there is ten to one hundred times the amount of data (and it’s good data), of course they are going to work reasonably well for expected queries at least 95% of the time (whereas a last generation NN without significant training and tweaking might only be 90% out of the box). If you then incorporate dynamic feedback on user validation, that may even get to 99% for a class of problems, which means that it will appear to be working, and learning, 99 times out of 100 instead of 19 out of 20. But it’s NOT! It’s all probabilities. It’s all random. You’re essentially rolling the bones on every request, and doing it with less certainty on what a good, or bad, result should look like. And even if the dice come “loaded” so that you should always win the come-out roll, there are so many variables that there is never any guarantee you won’t roll craps.

And for those of you saying “those odds sound good”, let me make it clear: they’re NOT.

  • those odds are only for typical, expected queries, on which the LLM has been repeatedly (and repeatedly) trained
  • the odds for unexpected, atypical queries could be as low as 9 in 10 … which is very, very, bad when you consider how often these systems are supposed to be used
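A quick back-of-the-envelope calculation shows why even “good” per-query odds are bad at scale (illustrative math using the success rates quoted above, not a benchmark of any particular system): the chance of at least one failure over n independent queries is 1 − pⁿ.

```python
# Chance of at least one bad response over n independent queries,
# given a per-query success rate p.
def prob_at_least_one_failure(p_success: float, n_queries: int) -> float:
    return 1 - p_success ** n_queries

# 100 queries against a "99% accurate" model: roughly a 63% chance
# that at least one response is wrong.
print(round(prob_at_least_one_failure(0.99, 100), 3))

# 100 atypical queries at 9-in-10 accuracy: a failure is a
# near-certainty (probability > 0.9999).
print(round(prob_at_least_one_failure(0.90, 100), 5))
```

So a buyer who trusts every response and runs the system daily is not asking *whether* it will fail, only *when*.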

But the odds aren’t the problem. The problem is what happens when the LLM fails. Because you don’t know!

With traditional AI, you either got no response, an invalid response with low confidence, or a rare (compared to Gen-AI) invalid response with high confidence, where the responses were always from a fixed pool (if non-numeric) or fixed range (if numeric). You knew what the worst case scenario would be if something went wrong, how bad that would be, how likely that was to happen, and could even use this information to set bounds and tweak the confidence calculation on a result to minimize the chance of this ever happening in a real world scenario.
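The bounded-worst-case behaviour described above can be sketched in a few lines (the response pool, scores, and threshold are hypothetical): responses come from a fixed pool, each carries a confidence score, and a tunable threshold means the system returns no answer rather than a risky low-confidence guess.

```python
# Sketch of the "traditional AI" behaviour described above: every
# response comes from a fixed pool with a confidence score, and a
# tunable threshold bounds the worst case by refusing to answer
# instead of guessing. (Pool, scores, and threshold are hypothetical.)
FIXED_POOL = ["approve", "reject", "escalate"]

def classify(scores: dict, threshold: float = 0.8):
    best = max(scores, key=scores.get)
    if scores[best] < threshold:
        return None  # no response rather than a risky guess
    return best

print(classify({"approve": 0.95, "reject": 0.05}))  # approve
print(classify({"approve": 0.55, "reject": 0.45}))  # None: route to a human
```

Because the pool is fixed, the worst case is always one of three known answers, and tightening the threshold trades coverage for safety, which is precisely the knob an LLM does not give you.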

But with LLMs, you have no idea what it will return, how far off the mark the result will be, or how devastating it will be for your business when that (eventually) happens (which, as per Murphy’s law, will be after the vendor convinces you to have confidence in it and you stop watching it closely). Then, out of the blue, it decides you need 1,000 custom configurations of a high end MacBook Pro in inventory [because 10 new sales support professionals need to produce better graphics] in a potentially recoverable case, or it decides to change your currency hedge on a new contract to that of a troubled economy (like Greece, Brazil, etc.) because of a one-day run on the trading markets in a market heading for hyperinflation and a crash [and then you will need a wheelbarrow full of money to buy a loaf of bread; and for those who think it can’t happen, STUDY YOUR HISTORY: Weimar Germany in 1923, Zimbabwe in 2007, Venezuela in 2018, etc.]. You just don’t know! Because that’s what happens when you employ technology that randomly makes stuff up based on random inputs from you don’t know who or what (and the situation gets worse when developers [who likely don’t know the first thing about AI] decide the best way to train a new AI is to use the unreliable output of the old AI).

So, if you want to progress, like the monks, leave that Genizah Artificial Idiocy where it belongs — in the genizah (the repository for discarded, damaged, or defective books and papers), and go find real technology built on real optimization, analytics, machine learning, and AI that has been properly researched, developed, tested, and verified for industrial use.

Analytics Is NOT Reporting!

We’ve covered analytics, and spend analysis, a lot on this blog, but seeing the surge in articles on analytics as of late, and the large number that are missing the point, it seems we have to remind you again that Analytics is NOT Reporting. (Which, of course, would be clear if anyone bothered to pick up a dictionary anymore.)

As defined by the Oxford dictionary, analytics is the systematic computational analysis of data or statistics and a report is a written account of something that has been observed, heard, done, or investigated. In simple terms, analysis is what is done to identify useful information and reporting is the process of displaying that information in a fancy-shmancy graph. One is useful, one is, quite frankly, useless.

A key requirement of analysis is the ability to do arbitrary systematic computational analysis of data as needed to find the information that you need when you need it. Not just a small set of canned analyses on discrete data subsets that become completely and utterly useless once they are run the first time and you get the initial result, which will NEVER change if the analysis can’t change.

Nor is analysis a random AI application that applies a random statistical algorithm to bubble up, filter out, or generate a random “insight” that may or may not be useful from a Procurement viewpoint. Sometimes an outlier is indicative of fraud or a data error, and sometimes an outlier is just an outlier. Maybe the average weekly bill from a services firm is 15,000, which makes a 3,000 transaction an outlier, but it’s not fraud if the company only needed a cyber-security expert for one day to test a key system; in that case, the “insight” is useless.
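The weekly-bill example can be made concrete with a few lines of Python (the invoice values are hypothetical, built around the 15,000 average mentioned above): a naive statistical rule happily flags the outlier, but the flag alone says nothing about whether it is fraud.

```python
from statistics import mean, stdev

# Hypothetical weekly invoices from a services firm, averaging
# around 15,000, plus one 3,000 one-day engagement.
invoices = [15_000, 14_800, 15_200, 15_100, 14_900, 3_000]

# A naive statistical flag: anything more than two standard
# deviations from the mean is "an outlier"...
mu, sigma = mean(invoices), stdev(invoices)
outliers = [x for x in invoices if abs(x - mu) > 2 * sigma]
print(outliers)  # the 3,000 transaction is flagged

# ...but the flag alone cannot distinguish fraud from a data error
# from a perfectly legitimate one-day cyber-security engagement.
# That judgment requires business context, not a random "insight".
```

This is the whole point: the algorithm finds the anomaly; only context determines whether the anomaly means anything.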

As per our recent post on a true enterprise analytics solution, real analysis requires the ability to explore a hunch and find the answer to any question that pops up when it pops up. To build whatever cube is needed, on whatever dimensions are required, that rolls up data using whatever metrics are required to produce whatever insights are needed to determine if an opportunity is there and if it is worth being pursued. Quickly and cost-effectively in real-time. If you have to wait for a refresh, or spend days doing offline computation in Excel to answer a question that might only save you 20K, you’re not going to do it. (Three days and 6K of your time, from a company perspective, is not worth a 20K saving if that time, spent preparing for a negotiation on a 10M category, could save an extra 0.5%, which would equate to 50K. But if you can dynamically build a cube and get an answer in 30 minutes, those 30 minutes are definitely worth it if your hunch is right and you save 20K.)

Analysis is the ability to ask “what if” and pursue the answer. Now! Not tomorrow, next week, or next month on the cube refresh, or when the provider’s personnel can build that new report for you. Now! At any time you should be able to ask: What if we reclassify the categories so that the primary classification is based on primary material (“steel”) and not usage (“electrical equipment”)? What if the savings analysis is done by sourcing strategy (RFX, auction, re-negotiation, etc.) instead of contract value? What if the risk analysis is done by trade lane instead of supplier or category? Analysis is the process of asking a question, any question, and working the data to get the answer using whatever computations are required. It’s not a canned report.
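The “build whatever cube is needed, on whatever dimensions are required” idea can be sketched in plain Python (the spend records and field names below are illustrative, not from any particular tool): one small function rolls up the same records on any dimension, on demand, which is exactly what a canned report cannot do.

```python
from collections import defaultdict

# Hypothetical spend records with illustrative field names.
spend = [
    {"supplier": "A", "material": "steel",  "usage": "electrical equipment", "amount": 120_000},
    {"supplier": "B", "material": "steel",  "usage": "fasteners",            "amount": 80_000},
    {"supplier": "A", "material": "copper", "usage": "electrical equipment", "amount": 50_000},
]

def rollup(records, *dimensions):
    """Aggregate spend on any dimension(s), on demand -- no canned report."""
    cube = defaultdict(int)
    for r in records:
        cube[tuple(r[d] for d in dimensions)] += r["amount"]
    return dict(cube)

# "What if" we classify by primary material instead of usage?
print(rollup(spend, "material"))
# Or slice the same data by supplier AND usage, ten seconds later:
print(rollup(spend, "supplier", "usage"))
```

The report shows you one fixed rollup; the function answers whichever rollup your hunch demands next.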

Analytics is doing, not viewing. And the basics haven’t changed since SI started writing about it, or publishing guest posts by the Old Greybeard himself. (Analytics I, II, III, IV, V, and VI.)