Category Archives: Services

Gen-AI is Bad for Consulting Firms … But Even Worse For You When the Consulting Firms Blindly Use It!

A recent post on LinkedIn noted how there’s a wave of AI products flooding the consultancy and advisory space and how they are, frankly, mediocre, overpriced wrappers on public models with minimal innovation, if any.

This is sad, but true, and it’s not the worst of it. The worst of it is that some of the Big X firms are training tens of thousands of consultants and f6ckw@ds on these tools to generate hundred-page pitch decks and three-hundred-page strategy and implementation guides of standard, generic, meaningless drivel to deliver to you as “highly tailored guidance and expertise from their leading partners with 20 years’ experience delivering high-value projects” and charge you tens of thousands of dollars for the privilege.

This is especially egregious when you can use free/cheap tools (and I’m talking put-it-on-your-personal-credit-card cheap, because you won’t notice a fee that’s less than your monthly coffee shop tab) to build the exact same pitches, strategies, and implementation guides from the thousands of freely available documents on the web in a few hours with a few generic prompts over a Sunday morning coffee. (And then, when the coffee kicks in, realize it’s all a load of cr@p and put it in the bit bucket. But at least you will know what a load of cr@p looks like in pitch deck, strategy guide, and implementation plan form, will recognize it the next time an overpriced Big X tries to sell it to you at a ridiculous price tag, and will have learned something from the exercise.)

Now that there are companies selling overpriced “custom” products to these consultancies, the situation is only getting worse, especially when the “customization” is just a wrapper with some pre-engineered prompts that aren’t well tested, only work at a point in time, don’t really give the consultancies what they need, and sometimes translate mediocre inputs into outputs that are even worse. Moreover, when you consider the price is sometimes a 100X multiple of the products they are built on top of, it’s disgusting. Consultancies are paying more for less, and, in return, you are paying even more for even less!

Which makes no sense when the current publicly available LLM tech is being offered cheap (to try and hook you on it, even though, as we’ve repeatedly explained, the tech is not ready for prime time and will never deliver more than a fraction of what they are promising), and new implementations will get a lot cheaper. Just look at how DeepSeek undercuts the cost by a factor of 100 and delivers 90% of ChatGPT (as long as you don’t mind exposing all of your secrets to the CCP). LLMs are nothing more than fancy next-gen “deep learning” neural networks that construct responses vs. serving up canned responses (which is why hallucinations and lies are a core function, not an error that can be trained out). That gets us closer (but no cigar) to decent natural language processing (NLP) for the express purpose of generating desired outputs from inputs, but not all the way there (and now, in addition to all the false positives and false negatives we already had to deal with, we get to deal with hallucinations and lies as well). It’s not secret magic; it’s layers and layers of interconnected statistics and probabilities that no human can understand, in rather standard models that any Theoretical CS or Applied Math PhD can build, and implementations that are better and cheaper are going to keep appearing as time goes on.
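To make this concrete, here is a deliberately oversimplified Python sketch (with completely made-up numbers, and not any vendor’s actual implementation) of what “constructing a response” means: the model just keeps sampling the next token from a probability distribution, which is exactly why a plausible-but-wrong continuation can be selected at any step.

    import random

    # Toy next-token distributions with made-up probabilities. A real LLM
    # computes these with billions of parameters, but the generation loop
    # is conceptually this simple.
    next_token_probs = {
        ("the", "capital"): {"of": 1.0},
        ("capital", "of"): {"France": 0.6, "Atlantis": 0.4},  # plausible nonsense is always in the mix
    }

    def generate(context, steps):
        for _ in range(steps):
            dist = next_token_probs.get(tuple(context[-2:]), {})
            if not dist:
                break
            tokens, weights = zip(*dist.items())
            # Sampling, not lookup: there is no canned response, so a
            # confident-sounding wrong token can be chosen at any step.
            context.append(random.choices(tokens, weights=weights)[0])
        return " ".join(context)

    print(generate(["the", "capital"], steps=2))  # sometimes "... of France", sometimes "... of Atlantis"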

This means three things to any consultancy thinking about using these custom “AI” solutions:

  • you still have to be even more tech-savvy to use them to any degree of effectiveness
  • it’s not “the art of the prompt”, it’s the art of the training (even though they don’t really learn, because they are NOT intelligent), because that determines the maximum level of effectiveness you will ever reach with them (and you need to provide them with sufficient correct data, which needs to be in the high gigabytes at a minimum and, preferably, in the petabytes)
  • you don’t have to worry about when they are right (enough), which will happen between 90% and 95% of the time with proper training and proper prompting, or when they are obviously wrong, which will happen a very low percentage of the time (say 5% to 9%), but when they are oh so wrong but the response is constructed in a way that is oh so convincing that an above average person in intellect and experience wouldn’t know otherwise (that danger zone between obviously wrong and good enough that is likely only 1% to 2% of the time).

Now remember that your consultants aren’t that tech savvy, so you should know right off the bat that incorporating and using these is going to be difficult and time-consuming. (There’s a reason we are constantly advising you to be very careful about using Big X for tech selection and tech projects, and that’s because, even though they say it is, it’s NOT their forte. They weren’t built on tech, and they don’t have the best talent in tech — that talent goes to the big tech companies that can offer 500K salaries to leading devs or the wild-west startups that leading devs think are cool.)

You only have so much clean and complete data you can use for training. You can’t just throw in the 1000s of decks you’ve built, as you can’t share work you’ve explicitly created and sold to past clients, and the AI won’t anonymize the decks and suggestions (even though you think it will). It won’t know that “Ford” is the name of your client, might think that “Ford Data” is another term for shallow data, and might copy sections from that custom strategy straight into your pitch deck for General Motors (and chances are your overworked junior consultant won’t catch it when skimming that 200-page deck with only 2 hours to go before the meeting). And we know what happens then … (and it ends with the consultancy not keeping either client).

It will take a lot of analysis to identify those 1% to 2% of cases where it is very, very wrong but so convincingly right that you will miss some. What happens when you do and give your client advice that explodes in their faces? (We’ll let you answer that one.)

And for you as a consumer, if your consultancy is using this Bogus AI tech, it means that:

  • the situation that results from the delivered solution might be even worse than the situation you started with (as evidenced not just by the tech project failure rate that is approaching 92%, but by the fact that 42% of projects are being abandoned during implementation!)

A solution designed by Gen-AI is not a solution. A real solution is designed by human intelligence that uses real, augmented intelligence to research and validate that solution. Remember that if you are going to hire a consultant!

An Update on Promena: The Rich Caffeinated Citrusy Turkish Punch

In our last write-up of Promena in November of 2023, we introduced you to this two-decade-old mid-market Source-to-Contract player (with some e-Procurement capability) based in Türkiye that does over 3 Billion in annual transaction volume. While a lot of that volume is still in Türkiye, Promena is expanding throughout Europe and into North America with its multi-lingual solution that supports 13 languages.

In that update we covered the core of their:

  • RFX: which supports RFX survey forms, product selection (from the product library), supplier selection, bid management, response comparison (with lowest bid for each line item outlined), and award selection
  • e-Auctions: which supports English, Dutch, and Japanese auctions; allows parameters to be configured; suppliers to be invited; and the auction to be run
  • Contracts: when an award is selected, the system supports contract creation and indexing up to the signature (which needs to be wet, scanned, and uploaded) and acts as an integrated e-filing cabinet
  • Supplier Management: information, relationship, baseline (KPI) performance management, and onboarding with a supplier suitability score (that is assessed through supplier responses to the buyer’s form-based questions) and quick access to all events and corrective action requests the supplier is involved in, products the supplier offers, and contracts the supplier is bound to
  • Corrective Actions: buyer-created requests that are sent to suppliers, who need to respond; buyers can then accept or reject the response and can always search the complete history of requests
  • ESG Management: which was mainly a section for collecting surveys and centralizing KPIs related to ESG
  • Product Management: collects and stores the descriptions of the products and services the buying organization needs in the organization’s category hierarchy and the foundation of the catalogs supported by Promena
  • Catalogs: from which organizational users can self-serve purchase standard, approved, on-contract products and services
  • Purchase Orders: that can be generated off of a sourcing award, a contract, or a catalog item

Since then, Promena has made updates in the following areas:

  • Supplier Discovery: for identifying suppliers not in your database
  • Supplier Certifications: for centralization and tracking thereof
  • Contract Drafting: enhanced capability to pull in platform data
  • Enhanced ESG/Supplier Profiles: on organizational suppliers
  • Configurable Workflows: for platform processes and approvals
  • Completely Redesigned UX!

Supplier Discovery

Promena has built an internal supplier directory with default (public) profiles for all of the suppliers across its customer database which can be searched by any organization on product/service and location and, if relevant, added to their internal supplier database (and then invited to events, etc.). (They are investigating integrating with an external supplier discovery platform to assist global organizations in more extensive supplier discovery.)

Enhanced ESG/Supplier Profiles

The ESG profiles have been enhanced, with graphical displays for easy analysis and integrated news feeds with recent articles related to a supplier’s sustainability efforts/carbon footprint (in beta, and publicly available in the next release).

Configurable Workflows

This is one of the two major enhancements to the platform that we felt you should know about. In the new version of the platform, all of the RFX and auction, contract management, supplier onboarding, and approval workflows throughout the application can be individually configured for each buying organization. Furthermore, they are all exposed in the administration section and a sufficiently capable administrator can edit the core workflows themselves or work with Promena on implementation to customize the workflows to their liking.

In addition to having standard workflow definitions for each module/section of the platform, the platform uses these to generate wizard-like path-based walkthroughs in every section that make it clear (at the top of the screen) exactly where the user is in the process, what they just did, and what they will need to do next.

Completely Redesigned UX

Much of last year was spent updating the stack, building modern workflows, and redesigning the entire user experience to be easier, cleaner, and much more obvious so that users can get up and running with little to no experience, and they have done a great job in this regard. It’s cleaner and more streamlined, and the enhanced use of workflows and templates allows organizations to define the right process with the right template, which minimizes the work buyers have to do on a daily basis and makes it easy for users to find what they want in catalogs, follow along with events (as they can be invited in viewer roles), fill out surveys, and so on.

And while this did delay some of the planned functionality for the second half of 2024 (like line-item price breakdowns, now coming in the second half of this year), it was definitely worth it because, once implemented (configured to the organization’s process, populated with default forms and documents, and loaded with the existing catalog), most buyers will be able to sit down, dive in, and get an event live with no training.

Roadmap

As per above, items for this year include:

  • enhanced supplier discovery through Promena‘s supplier marketplace (and possibly a third party supplier discovery player)
  • enhanced ESG profiles with real-time news updates and AI summaries
  • line-item cost breakdowns for deeper cost insight (and comparisons in the platform)
  • Brazilian auctions
  • Auction Guidance, which will mathematically analyze the past 5 years of results (if available) on the product/category and advise you on which auction type will likely generate the best results

For a mid-market company, it’s a fairly extensive platform that’s easy to use, easy to customize, and easy to grow on. This makes Promena a platform that should definitely be considered for your short-list if you are looking for a modern S2C mid-market platform with purchase order support and services support if you need it (either directly from Promena, whose account teams manage over 5,000 sourcing events a year, or from 11 Global Partners who can deliver integration and support services).

Simplify Services Sourcing By NVELOPing Your Bid Packages!

Services sourcing in most organizations is a complex nightmare. It’s not simple like indirect sourcing where you identify a finished product need, send out an RFQ for a standard product spec, get some quotes, do a landed cost calc using your pre-negotiated or market spot-buy freight rates and current tariffs, and select the lowest cost bid. Easy-peasy. It’s not even straightforward like BoM (Bill of Material) or Program management in a direct sourcing application where you send out a quote package with a set of components, detailed drawings and specs on each component, detailed cost breakdown requests, anticipated production schedules, and compliance and regulatory requirement documents. (And yes, while this is a lot of work to put together even with the best platform, including platforms that can suck in the majority of requirements from the ERP and the PLM, it’s still relatively straightforward for an engineer.)

Services sourcing is complex. While services might have categories for the chart of accounts, and services professionals might have standard roles, and any subsequent request for the same service on the chart of accounts from someone in the same role with the same experience will be similar, they won’t be the same. Installing a cable line is not just installing a cable line. Is it a home line or a business line? Do you have to connect to a pole, a junction box, or a rack mount? Do you have to drill through walls? Are they wood or cement? Implementing an ERP is not implementing an ERP is not implementing an ERP, even if it is SAP or Oracle in all instances. What version? What cloud platform does it have to run on / integrate with? Which P2P and back office systems have to be connected? And so on.

On top of the basic project requirements, services projects require a lot of terms and conditions, NDAs, professional certifications and insurance requirements, key performance requirements, confidential information on current state and desired state, evaluation criteria, etc. In a typical organization, a bid package will consist of a huge stack of e-documents, hastily assembled (and riddled with errors due to the haste), zipped up, and sent off to bidders who, hopefully, when struggling to fill out the overly convoluted RFPs on a tight deadline, don’t miss any key requirements before sending it back. (When the organization is in a crunch, which it always is, a lot of this work will often be done by third party consultants who will be less familiar with the requirements than the overworked staff who don’t have time to do it, leading to oversights as well as errors.)

Nvelop was founded to solve these woes, namely:

  • the time requirement to put together the core of the RFP
  • the need to ensure that RFPs contain all the required bid / information fields
  • the effort to collect all the corresponding documents
  • the need to ensure the terms and conditions satisfy legal
  • the need to ensure suppliers see and respond to all mandatory terms and conditions
  • the need to communicate with vendors in a secure, trustable, method
  • etc.

We’re going to discuss their solution by addressing these points.

The RFP Core

Services RFPs are extensive and time consuming to put together. So Nvelop gets around this by jump-starting the process with LLMs that will create a starting RFP given:

  • project type (RFI, RFP Lite, RFP)
  • domain (Technology, IT Services, Facility Services, Legal Services, Marketing & Advertising, etc.)
  • expected issues (high competition, business criticality, environmental risk, etc.)
  • pricing model (fixed price, target price, time & materials, etc.)
  • engagement type (staff augmentation, system integration, software implementation, consulting & advisory, etc.)
  • business domain (Sales & Marketing, Finance, HR, Supply Chain, etc.)
  • technology [stack] (AWS, Google Cloud, Oracle, etc.)
  • enterprise software (SAP, Oracle, Salesforce, etc.)
  • business criticality (1 to 5 scale)
  • background material (any documents with relevant information that will not be given to bidders)
  • brief project description
  • meta-data for management and indexing (due dates, owner, team, categorization, etc.)

It uses an LLM trained on generic project documents that match the request, as well as historical projects from the organization, to generate a starting RFP that breaks the project requirements down into core processes and sub-processes, with supporting requirements for the project type for each sub-process. For example, for an ERP Application Maintenance project, it will define the core processes of:

  • application maintenance
  • ERP system management
  • integration services
  • reporting and analytics

and for application maintenance, for example, it will identify the core sub-processes of:

  • issue resolution
  • incident management
  • change management
  • performance monitoring
  • user support

with a detailed description of each sub-process and bidder requirements. For example, issue resolution might break down into:

  • vendor must allow users to log issues through a portal and issue confirmations and ticket numbers
  • vendor must investigate all issues within one business day, regardless of criticality
  • vendor must respond to critical issues within one hour with a resolution team and estimated timeframe for resolution
  • vendor must maintain a knowledge base updated bi-weekly with common issues and resolutions for self-help
  • etc.
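To illustrate the shape of what gets generated, here is a hypothetical Python sketch of the project / process / sub-process / requirement hierarchy described above (our reconstruction for illustration, not Nvelop’s actual schema), including the per-requirement status tracking discussed next:

    from dataclasses import dataclass, field
    from enum import Enum

    class Status(Enum):  # hypothetical status values, per the description below
        GENERATED = "generated"
        EDITED = "edited"
        APPROVED = "approved"

    @dataclass
    class Requirement:
        text: str
        status: Status = Status.GENERATED

    @dataclass
    class SubProcess:
        name: str
        requirements: list[Requirement] = field(default_factory=list)

    @dataclass
    class Process:
        name: str
        sub_processes: list[SubProcess] = field(default_factory=list)

    # Mirrors the ERP Application Maintenance example above.
    rfp = [Process("application maintenance", [
        SubProcess("issue resolution", [
            Requirement("vendor must allow users to log issues through a portal"),
        ]),
    ])]

    # Issuance can be blocked until every requirement is approved.
    ready = all(r.status is Status.APPROVED
                for p in rfp for sp in p.sub_processes for r in sp.requirements)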

Once the initial RFP has been auto-generated, the buying team can

  • add internal comments during team discussions and/or collectively prioritize requirements
  • manually add, edit, or remove any process, sub-process, requirement, or description, with status (generated, edited, etc.) tracked
  • approve when happy (and the platform can be configured to prevent issuance until all requirements are approved)

All Requirements Accounted For

Nvelop is not a new-age, rapidly developed ChatGPT wrapper parading as a Procurement solution when it really isn’t. It is a new startup founded by consultants who spent 20 years doing services projects while constantly thinking to themselves that there has to be a better way, who came together and defined what the requirements of such a sourcing platform should be, who built a real platform (that walks buyers through a 7-step process), and who only use LLMs for generating content where it makes sense.

The platform has a fairly extensive administration component where, for a project type, you can define:

  • the starting bid templates
  • core documentation requirements
  • mandatory terms & conditions

and the generation logic will ensure that all of these are included in the starting RFP (and associated package) that is generated, either through custom LLM instructions or forced overrides.

It also has a document library where you can store all of your standard documents on company profile, insurance requirements, compliance requirements, general service requirements for personnel on your site, etc. that can be pulled into all relevant projects.

Effort to Collect Corresponding Documentation

Since the platform, as described above,

  1. has a document library
  2. can privately store all documents relevant to RFP construction on a project basis

it’s very easy to automatically include all of the relevant documents in an RFP, as the majority will already be in the system, and the rest can simply be uploaded, with the majority of relevant content auto-extracted by the LLM during the initial generation process.

Terms and Conditions Satisfy Legal

Not only can you maintain a standard clause library in the administration section, but you can configure the application to use pre-approved legal clauses verbatim in requirements and draft agreement generation, and the LLM will only be used to generate the parts of the RFQ and draft agreement for which there is not a verbatim clause. In addition, if a clause needs to be adjusted for different geographies, categories, etc., you are able to configure the application to force the LLM to generate its response based on a library clause. In other words, if you already have something acceptable, you can be sure it makes it into the RFQ or draft agreement vs. just rolling the bones.
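In code terms, the clause-selection behavior described above works roughly like this (our illustrative Python sketch, with made-up clause names and a caller-supplied generation function, not Nvelop’s actual code):

    # Illustrative sketch of the clause-selection logic described above;
    # the library structure and function names are our assumptions.
    clause_library = {
        ("limitation_of_liability", "EU"): "Pre-approved EU limitation-of-liability clause ...",
        ("data_protection", "EU"): "Pre-approved EU data-protection clause ...",
    }

    def draft_clause(topic, geography, llm_generate):
        if (topic, geography) in clause_library:
            # Pre-approved clause: used verbatim, never paraphrased by the LLM.
            return clause_library[(topic, geography)]
        base = next((text for (t, _), text in clause_library.items() if t == topic), None)
        if base is not None:
            # Adjustment needed: the LLM is forced to ground its output in a library clause.
            return llm_generate(f"Adapt this pre-approved clause for {geography}:\n{base}")
        # No acceptable starting point exists: only now does the LLM draft from scratch.
        return llm_generate(f"Draft a {topic} clause for {geography}.")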

Forced Supplier Response on Mandatory Ts and Cs

RFI and RFP packages can be designed to force a supplier to respond to each mandatory and critical requirement, where they can simply select a Yes/No or Yes/Partial/No response with additional commentary, if required. This way a supplier can never say “they didn’t know” something was a requirement in the final stages of a negotiation, as they will have seen and responded to it during the initial bid, and requirements traceability is a core part of the solution platform.

Secure Communication

Since the RFP process now goes through a platform (where all documents can be securely downloaded and uploaded, all communications are securely maintained in their own auditable stream, and all required confidential documentation can be accessed at any time once the NDAs have been accepted), the platform solves the security issue that buyers and vendors have with sensitive documents and bids being sent back and forth through email or common FTP portals.

Solution Summary

As we hinted above, the solution will walk a buying team through a seven-step process that consists of:

  1. Planning – enter all the data you have and instruct it to generate a starting RFQ
  2. Requirements – edit and finalize the requirements listing
  3. RFQ – finalize the RFQ (by approving or editing), which will consist of
    • overview information (introduction, your client info, submission and evaluation process, technical landscape, services overview requirements, services timeline, and other relevant information sections)
    • Questions – the requirements you worked on last phase, which can be extended with other questions about the vendor not core to the services requirements
    • Pricing – where the vendor will submit the bid sheets in the appropriate format that is automatically identified based on procurement type and category (including fixed price bids, rate cards, etc.)
    • Evaluation Criteria – where you define the criteria (and weighting)
    • Attachments – automatically pulled in
  4. Tendering – where you can see the RFP Preview (as the vendors will see it), select the vendors, and handle the Q&A; and where you can resend if you have to make an update or do a subsequent round
  5. Responses – which captures the vendor responses
  6. Evaluation – where you evaluate the responses once the RFQ is closed, and can compare them side-by-side at a high level, drill down into the details, and have the system generate overall scores based on the evaluation criteria (and weighting) you define (see the weighted-scoring sketch after this list)
  7. Deal Room – where you kick off a negotiation process with one or more vendors, assisted by an automatically generated assessment of deviations from specifications or requirements that you will need to address (which will be based not just on clicked boxes but comments, likely intention, degree of deviation, summary of the deviation, assessed negotiation complexity, and likely relative importance)
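For step 6, the overall scores are presumably straightforward weighted sums; here is a minimal sketch of that arithmetic, with made-up criteria, weights, and scores:

    # Minimal weighted-scoring sketch (criteria, weights, and scores are made up).
    weights = {"technical fit": 0.4, "price": 0.35, "delivery": 0.25}

    vendor_scores = {
        "Vendor A": {"technical fit": 4.5, "price": 3.0, "delivery": 4.0},
        "Vendor B": {"technical fit": 3.0, "price": 5.0, "delivery": 4.0},
    }

    for vendor, scores in vendor_scores.items():
        overall = sum(weights[c] * s for c, s in scores.items())
        print(f"{vendor}: {overall:.2f}")
    # Vendor A: 0.4*4.5 + 0.35*3.0 + 0.25*4.0 = 3.85
    # Vendor B: 0.4*3.0 + 0.35*5.0 + 0.25*4.0 = 3.95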

Moreover, a supplier can easily access, and respond to, the RFQ through the vendor portal, which allows them to quickly access the relevant sections, check the boxes, provide their responses, and upload documents. They can also engage in Q&A and see the status of each project they have been invited to. The Q&A capability includes an LLM-powered chatbot that will search all of the available documentation and provide answers to questions already answered, including pointers to where to find that information in the RFQ package, and that will, when an answer is not found, direct the user to directly message the buyer for the answer.

Why SI is Covering and Recommending Nvelop for Shortlist Configuration

Those who follow SI will know that the doctor despises the rampant proliferation of untested Gen-AI and the random application of Gen-AI to every problem, even a problem so obviously far beyond what Gen-AI was suited for that even a complete idiot would abandon the tech once they saw how badly it worked. So why does the doctor recommend this Gen-AI powered platform?

  1. it’s not Gen-AI / LLM driven
    the core is a solid workflow app that follows a process that the founders, each of whom has over two decades of services sourcing experience, know works
  2. LLMs are being used for what they are good for
    massive document store summarization and document generation off of standard requirements and similar projects
  3. the LLM can be fine-tuned
    and you can direct it to (re)generate an entire package, single document, single section, single process, or single line-level requirement description with additional instructions
  4. everything can be overwritten or manually generated in the first place
    Gen-AI was never meant to be the solution, but a starting point that can get you 90% to 95% of the way there when you don’t have an out-of-the-box solution, so it is designed to generate content where it can get “close enough” and where it’s easier to manually edit the generated output than to even generate a starting document from scratch through cut-and-paste
  5. every single line MUST be manually approved
    and while you can click that “approve” button without reading the associated content, if something is issued wrong, then everyone knows who didn’t do their job and who is ultimately responsible

Moreover, it will get you through a complex sourcing project mostly correct and mostly complete in a matter of weeks, with little to no external (consulting) support required, even if you’ve never done that particular complex sourcing project before! And while no solution is perfect, we’d hazard a guess that even a neophyte would do a better job with this platform than a grizzled veteran who had to do everything manually under a severe time crunch. (While the grizzled veteran likely wouldn’t make any mistakes on anything they touched, they would be likely to miss something important in the virtual stacks of paperwork with more pages than a copy of “War and Peace” [Simon & Schuster edition], given the time crunch they are always under.)

Nvelop might be new, but it’s solid, which is what you get when you realize it is a solution built by veteran complex services sourcing professionals specifically to support the processes that complex services sourcing professionals use. So if you are in an industry with a lot of services sourcing requirements, and your current sourcing solution (designed for indirect and direct) is letting you down (and it is), then we recommend you at least give this solution a look. The responsible use of AI impressed the doctor, so we would hazard a guess that it should impress you too.

One of these things is not like the other — it’s the right choice!

This originally published on March 6, 2024. It is being reposted due to the criticality of the subject matter (and the fact that One Trillion was wasted on services last year).

Note the Sourcing Innovation Editorial Disclaimers, and note that this is a very opinionated rant! Your mileage will vary! (And it’s not about any firm in particular.)

Three bids for that spend analytics project from the three leading Big X firms come in at 1 Million. One bid for that spend analytics project from a specialized niche consultancy you pulled out of the hat for bid diversity comes in at 250 Thousand. Which one is right?

Those of you who only partially paid attention to the education Sesame Street was trying to impart upon you when you were growing up will simply remember the “one of these things is not like the other” song and think that the bids from the Big X firms are right and the niche consultancy’s bid is wrong because it’s different, and therefore must be thrown out because it’s too low, when, in fact, it’s just as likely that the three bids from the Big X firms are wrong and the bid from the niche consultancy is right.

Those of us who paid attention knew that Sesame Street was trying to show us how to detect underlying similarities so we could properly cluster objects for further analysis. What we should have learned is that the Big X bids were all the same, built on the same assumptions, and can be compared equally, and that the outlier bid needed further investigation — a further investigation that can only be undertaken against an appropriately sized sample set of bids from other specialized niche consultancies. And without that sample set of bids, you can’t properly evaluate the lower bid, which, the doctor can tell you, is just as likely to be closer to correct than what could be wildly overpriced Big X bids. (Newer firms often have newer tech and methods — and if these are the right methods and tech for your problem … )

As per our recent post, if you want to get analytics and AI right, most of these guys don’t have the breadth and depth of expertise they claim to have (as most don’t have the educational background to know just how broad, deep, and advanced AI and analytics can get, especially when you dig deep into the math and computer science and all of the various models and their strengths and weaknesses, and are instead trained on what is essentially marketing content from AI and analytics providers). In the group that sells you, there will be a leader who is a true expert (and worth his or her weight in platinum), a few handpicked lieutenants who are above average and run the projects, and a rafter of juniors straight out of private college with more training in how to dress, talk, and follow orders than training in actual analytics … and no guarantee they even have any real university-level mathematics beyond basic analysis in operations research (and thus a knowledge of what analytics is and isn’t and can and can’t do). And unless you know what you need, and why, you can’t judge the response. (Furthermore, you can’t expect them to figure out your problem and goals with only partial information!)

While there was a time big analytics projects were (multi) million dollar projects, that was twenty years ago, when Spend Analysis 1.0 was still hitting the market; when there were limited tools for data integration, mapping, cleansing, and enrichment; and when there weren’t a lot of statistics on average savings opportunities across internal and external spend categories. Now we have mature Spend Analysis 3.0 technologies (some taking steps towards Spend Analysis 4.0 technologies); advanced technologies for automatic data integration, mapping, cleansing, and even enrichment; deep databases on projects and results by vertical and industry size; extensive libraries for out-of-the-box analytics across categories and potential opportunities; and a whole toolkit for spend analysis that didn’t exist two decades ago. This new toolkit, built by best-of-breed vendors and used, and sometimes [co-]owned, by best-of-breed niche consultancies (that don’t try to do everything, and definitely don’t pretend they can), allows modern spend analysis projects to be done ten times as efficiently and effectively in the hands of a master — a master who isn’t necessarily on your project if you hire a Big X or Mid-Sized Consultancy without doing your homework, vetting the proposal, and vetting the people. [See when you should be using Big X.]

In contrast, a dedicated niche consultancy should have all these tools and only have masters on the project who do these projects day in and day out, compared to the bigger consultancies who don’t specialize in these projects, which will have a team of juniors using the manual playbook from the early 2000s and one lieutenant to guide them. That’s often why their project bids are five times as much — and why you should be inviting multiple niche best-of-breed consultancies to bid on your project as well as multiple Big X consultancies (including those that are truly focusing on analytics and AI, some of which you can identify by their recent acquisitions in the area) and be focusing just as much on the six-figure bids, for the one that provides the best value, as on the seven-figure Big X bids. (And, FYI, if you invite enough Big X, you might find some come in at six figures and not seven because they have acquired the newer tech, took the time to understand your request, and figured out how they could get you the same value for less cost, leaving you funds for the follow-on project where you should consider the Big X!)

(This is also the case for implementations. The Big X always have a rafter on the bench to assign to any project you give them, but there’s no guarantee any of them have ever implemented the system you chose before, or, if they did, no guarantee they’ve ever connected it to the systems you need to connect to. You need specialists if you want a new system implemented as cost effectively as possible, especially if it’s a narrowly focused specialist application and not a big enterprise application the Big X always implements. At the end of the day, it’s worth it even if you’re paying those specialists 500 or more an hour, because getting a system up in 2 months at 40K is considerably better than a small team of juniors taking 4 months at 200 an hour for a total cost of 80K. But again, mileage will vary — if the solution you select is a Big X partner, then the Big X will be best. If it’s a solution they never heard of, you will need to evaluate multiple bids from multiple parties.)

Remember, where any group of vendors on the same page are concerned, All of us is as dumb as One of us!

Don’t fall for the Collectivism MindF6ck! It is not true that if multiple parties agree on something, that’s the right answer! the doctor does NOT want to say it again, but since a month still doesn’t go by without him hearing about niche consultancies being thrown out for “being too cheap” or “obviously not understanding the problem” (which means the enterprise throwing them out is too uninformed to recognize that the Big X bids could just as likely be the outliers because they aren’t inviting enough expert consultancies to the table), apparently he has to keep writing (and screaming) this truth. (the doctor isn’t saying that you can’t get a million dollars of value from some of these consultancies, just that you won’t by giving them a project they are not suited for; again, see when you should use Big X to identify when that million-dollar project will generate a five-million ROI — it’s people doing these projects at the end of the day, and where are those people?)

Remember, most of these firms got big in management, or accounting and tax, or marketing and sales consulting, not technology consulting. The only reason these big consultancies started offering these services is the amount of money flowing into technology, money which they want; but while the best of the best of the best in more traditional accounting, management, and marketing fields flocked to them, the best of the best in technology flocked to startups and c00l big tech firms. Now, some of these firms doubled down, went and recruited those people, built small teams, learned, bought tech companies to expand the team, and now have great offerings in a number of areas. But we have tens of thousands of tech companies for a reason: not everyone can build every type of technology, and not everyone can be an expert in every type of technology. So while they will have expertise in some areas, they just can’t have expertise in all areas. No one can. Find the best provider for you. Sometimes it will be Big X. Sometimes Mid-Market. Sometimes Niche. It all depends on your problem at hand.

And yes, sometimes the niche vendor will be wrong and woefully undersize the project or your needs. But as per the above, if you don’t give them a chance, and deep dive into their bid, how will you know?


Did you ever try eating a mitten? the doctor bets some of those clients did! (He feels you’re not all there if you think glorified reporting projects should still cost One Million Dollars by default, and you might actually try to eat your mittens! [Joking, but you get the point.] Deep analytics projects that require the most advanced tech, especially AI tech, will cost a lot, but standard spend analysis, sales analysis, etc., where we have been iterating and improving on the technology for two decades, should not.)

SpendKey: Your Solution-Oriented Key to Spend Insights

Preamble:

As the doctor wrote on Spend Matters back in November of 2021, shortly after SpendKey‘s initial release, SpendKey was formed in 2020 by a senior team of Procurement and Spend Analysis professionals with experience at big consultancies (Deloitte, E&Y, etc.), big companies (Thomas Cook, Marks and Spencer, etc.), big banks and financial institutions (Barclays, London Stock Exchange, etc.), and Managed Service Providers (Cloudaeon, Zensar Technologies, etc.) who identified a market need for faster, more accurate data processing and better analytics across the board, as well as better expert advice and guidance to accompany those analytics, to help companies make quick and optimal decisions and get on the right track the first time around.

After less than a year and a half of development, their initial service-based offering was already sufficient for turn-key consultant led projects and their roadmap had them on track for a completely stand-alone SaaS offering by 2023, which they delivered to the market last year.

So where are they now and what do they do? That’s what we’ll dive into in this article.

Introduction:

SpendKey has evolved from a dashboard-driven spend analysis solution to a comprehensive spend, contract-tracking, and decision-intelligence platform with a mission to provide deep insight for sourcing and procurement.

SpendKey‘s unique selling proposition is its ability to index every part, product, service, and vendor with context. The product ontology and interoperability create relationships with any attribute, providing end-to-end visibility and a data foundation for autonomous workflows (on the roadmap), and can currently be used to power a client’s existing stack.

The SpendKey platform supports the creation of customized reports tailored to client-specific requirements. With a wide array of out-of-the-box dynamic dashboards, SpendKey offers standard insights into spend across categories and suppliers. These dashboards are augmented with advanced analysis tools like ABC analysis, trend analysis, Pareto analysis, Inside/Outside evaluations, order-to-actual correlations, and what-if scenarios, delivering a full-spectrum view of spending.

In addition to its customizable options, SpendKey provides a variety of standard reports to analyse spend, costs, goods, services, and information flows. The platform includes pre-defined reports that cover essential areas of spend analysis with customization for every client need.

SpendKey’s reporting suite has been expanded to include contract reports, budgeting reports, and dynamic MIS reports, offering a comprehensive toolkit for monitoring and optimising spend.

These tools were designed by procurement experts with decades of experience in spend analysis, ensuring that organizations can identify opportunities to not only reduce costs but also enhance overall efficiency and profitability.

SpendKey has an advanced spend-intake process that maps all of an organisation’s spend to any taxonomy (which can be theirs, yours, or a hybrid) using a multi-stage hybrid mapping process that uses known mappings, AI, human corrections, and overrides that feed back into the next mapping cycle. Once the client has worked with SpendKey to do the initial spend upload and mapping, the client can subscribe to incremental updates (that will be handled fully by SpendKey) or self-serve via file-based incremental uploads.

So, if you read the initial analysis, what’s new?

  • improved data intake pipeline (which increases auto-mapping completeness and shortens the intake cycle)
  • project tracking
  • budget approvals
  • document analytics (and contract tracking)
  • commitments, budgets, and actuals comparison capability
  • ability to index parts, products, and services
  • line item auditability and more security controls
  • more spend sources
  • new dashboards

And what hasn’t changed (much)?

  • still no DIY (do-it-yourself) report builder
  • limited mapping audit access through the front end

And we’ll talk about each of these in turn.

Data Intake Pipeline

The data-intake pipeline is multi-step and works something like this:

1. Upload a raw data file in CSV or Excel or integrate via API

2. Validate the file against column descriptions, data formats, and language requirements (auto-translating to English if required) and apply any necessary transformations and cleansing to create records for classification.

3. Run the current corpus of mapping rules.

3a. Push the mapped data into the live spend database.

3b. Package the unmapped transactions for web-processing.

4. Extract the supplier, product, and related information and use web-scraping (including Gen-AI models) to extract supplier and line of business information that can be used for classification.

5. Create suggested mappings where there is sufficient confidence for a human to review.

6. Push the verified mappings into the mapping rules and then retrain the machine learning on the new corpus of mapping rules to map the remaining unmapped spend and push through anything with sufficient confidence to the live system, having a human deal with the rest (or push it to an unclassified bucket).
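Sketched as code, the cycle looks roughly like this (our toy Python reconstruction of the described flow; all names and data are illustrative rather than SpendKey’s API, and the machine-learning retraining in step 6 is reduced to growing the rule set):

    def apply_rules(records, rules):
        """Step 3: map whatever the current rule corpus can handle."""
        mapped, unmapped = [], []
        for rec in records:
            category = rules.get(rec["supplier"].lower())
            (mapped if category else unmapped).append({**rec, "category": category})
        return mapped, unmapped

    def intake_cycle(records, rules, suggest, review, threshold=0.9):
        mapped, unmapped = apply_rules(records, rules)
        live = list(mapped)                                # step 3a: straight to the live cube
        for rec, (category, confidence) in zip(unmapped, suggest(unmapped)):
            # steps 4-5: enrichment yields a suggested mapping with a confidence score
            if confidence >= threshold and review(rec, category):
                rules[rec["supplier"].lower()] = category  # step 6: verified mapping becomes a rule
                live.append({**rec, "category": category})
            # anything unverified stays unclassified for the next cycle (or a human queue)
        return live, rules

    # Toy usage: a stand-in "classifier" and an auto-approving human reviewer.
    records = [{"supplier": "Acme Software"}, {"supplier": "Bolt Logistics"}]
    rules = {"bolt logistics": "Freight"}
    suggest = lambda recs: [("IT Software", 0.95) for _ in recs]
    live, rules = intake_cycle(records, rules, suggest, review=lambda rec, cat: True)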

By using multiple techniques, they are able to get to high accuracy very quickly and turn around the client’s spend cube much faster than most consultancies using traditional methodologies. Even for their largest clients, they are typically live with high mapping accuracy within 10 days.

Project Tracking

When an analyst or buyer identifies a potential savings project, they can record their find/proposal in the tool, get approval, track status, and keep stakeholders informed. All they need to do to define a project (for tracking) is to define the item or category, supplier(s), aggregated spend amount, project period, project type, and expected savings. They can add custom organizational tags or note key stakeholders if required, and then send it off for approval. Once approved, they just have to update the status and savings-to-date on a regular basis until the project is complete.

It’s not meant to be a project management tool, since most of the projects will be sourcing, procurement, contract, or other events or processes managed by other tools; it’s just a tracking tool to track usage of the platform, as well as approvals on projects, before buyers or analysts go off on their own savings goose chases.

Budget and Forecasting Management

Budgeting and forecasting are pivotal components of financial management that empower businesses to plan, manage resources effectively, and navigate toward strategic goals. The SpendKey platform offers advanced budgeting and forecasting tools for the financial year ahead, with predefined templates for easy budget setup, bulk data upload and download capabilities, and the option to assign specific budgets to each supplier.

SpendKey’s budget management module has specific processes for classification and mapping of the budget and spend data, aligning budget allocations with actual spend patterns. With comprehensive reports, a user-friendly interface, and the ability to create, manage, and analyse budgets, users can make well-informed financial decisions, optimise their budget allocations, monitor variances, and gain valuable insights for successful investment strategies.

Document Analytics

Spend Under Management is one of the ultimate keys to Procurement success, and this often requires a lot of Spend Under Contract to ensure supply and mitigate risk. This, in turn, requires understanding the spend under contract, which requires that the contract metadata be stored in the system, as well as the contract prices (to track agreed-upon to invoiced to paid).

But no one wants to enter metadata, so they built a machine learning and document analytics application that can automatically parse documents, identify key metadata, extract price tables, and present it all to a human for final verification before the data is stored in the system.

The analytics can also be used on POs and invoices for verification purposes, and the user can decide whether or not to store that data in the system (or associate it with contracts).

More Spend Sources

Not only do they now support contract meta-data and contracted prices, but they also support the upload of asset-based data (for an organization to analyze the current and future value of organizational assets), payroll data (since that’s a significant amount of organizational spend), contingent workforce management data (to track services / contingent worker spend), and PO data in addition to AP data (which is the typical data source analyzed by simple “analytics” applications). In addition, if available, they will also load ESG Ranking data.

Their goal is to allow a complete understanding of organizational spend from budget to commitment to ordered to received to paid to projection using both standard cash views as well as amortization, accrual, and projected spend views.

New Dashboards

There are a slew of new dashboards, which include, but are not limited to:

  • Incliner/Decliner: highlights suppliers with increased or decreased spend compared to a user-defined period
  • Contract Overview: provides analytics on the different contract document types, their expiry dates, and contract lengths
  • Contract Details: lets you navigate and review the summary data for each contract, with the ability to view the respective contract
  • End-to-End Visibility: connects data from spend, contract, budget, and other systems to provide end-to-end visibility, e.g. spend vs budget vs contracted spend
  • ESG Summary: provides insights into ESG scores by supplier and their relevant spend, including average ESG rating by industry and analytics on performance in each of the E, S, and G areas
  • ESG Supplier Ranking: provides insights into the ESG ranking for each individual supplier
  • Budget Overview: provides an overview of budget allocation and spending trends, highlighting key variances between actual spend and budget across different suppliers and categories
  • Budget by Category: shows the budget-by-category breakdown, displaying spend, budget, and variances across different levels of categories and suppliers
  • Budget by Suppliers: highlights spend, budget, and variance for key suppliers, along with an overall budget variance by category
  • Budget Distribution: shows the distribution of spend, budget, and variance across different transaction brackets, along with the corresponding transaction counts
  • Budget Detail: details supplier-specific budget, spend, and variance, including non-PO spend and transaction counts
  • Supplier Reclassification: allows you to reclassify supplier spend into a different taxonomy
  • Supplier Fragmentation: allows you to track the number of suppliers in any category or subcategory
  • Key Insights: presents key spend insights, highlighting potential savings, category spend, new suppliers, and contract renewal dates

Add these to the existing dashboards that include, but are not limited to:

  • Main Dashboard: provides an overview of the spend across all categories of spend
  • Category Breakdown: enables the user to drill deep into any category and sub-category of spend to get deeper insights
  • Contract Kanban View: summarizes contract expiry in a kanban view to help identify contracts and suppliers to prioritise for renegotiations
  • MIS Dashboard: provides the user the ability to create their own pivot-style reports by connecting different data sets to generate views that were not available before
  • PO vs Non-PO Analysis: provides an overview of spend compliant with purchase orders
  • Reseller Insights: provides insights to understand purchases of products from resellers
  • Savings Opportunity: provides the ability to get a quick, high-level business case on potential savings based on certain user-defined parameters
  • Spend Summary: provides a narrative on the spend
  • Spend By Country: provides a summary of spend by different geographies and the ability to drill further by country
  • Spend Distribution: provides insights on spend by different transaction brackets to help identify low-value, low-risk spend and suppliers
  • Spend Detail: provides a view of the raw data, and the enrichment from SpendKey to this raw data, at the individual transaction level
  • Spend by Category: provides insights for each category and the relevant sub-categories based on the defined taxonomy tree
  • Supplier Hierarchy: provides insights at the supplier level to help understand the parent and all the relevant child entities under that parent
  • Supplier Performance: provides a summary of the reduction in supplier count post data cleansing and supplier normalization
  • Supplier Segmentation: provides the ability to segment or tag a supplier based on user preferences
  • Tail Spend: provides insights and a summary into tail spend (bottom 20% to 40% of the spend)
  • What-If: gives the user the ability to try different permutations and combinations of parts/products/services to understand potential savings opportunities
  • IT OPEX Budget: provides the user with the ability to view budget at the supplier level or by category, cost centre, material code, etc.
  • Set Budget: provides the ability for a user to set and define a budget for a user-defined period
  • Forex Rate: gives the user the option to set the FX rates for various currencies for a defined date range / period to enable the platform to convert all transactions into the base currency based on your company’s defined FX rates
  • Key Management: provides the user with the ability to set distribution keys for spend allocation to business units, departments, functions, etc. to help calculate recharge
  • Project Tracker: provides the ability for the user to create projects, such as savings initiatives, and track them in the tool; also provides a workflow for approval of project milestones, such as delivering on your savings targets
  • User Management: allows the administrator to add new users and define their access control

And it’s a fairly extensive offering for an organization looking for a services-oriented solution to give them insights out of the box.

No DIY Report Builder

Now, companies looking for a services-oriented spend analysis solution aren’t looking for DIY initially, but as they mature in spend analysis, they will likely want the ability to modify the dashboards and reports on their own, which is baseline DIY. As they continue to mature, a few organizations will eventually want to start building their own reports and views, so it’s important that DIY is on the roadmap for any organization looking to mature its analytics capability over time.

Limited Mapping Audit Access through the Front End

In the backend, they keep a complete audit trail of how and why every transaction was mapped where it was mapped. In the front end, every single edit and amendment made by a user is logged, along with supporting commentary from the user. However, when a user goes to edit or amend a mapping in the front end, she doesn’t know whether a transaction was initially mapped by rule, by SpendKey‘s home-grown self-trained AI, or by Gen-AI, or whether there was ever a human in the loop.

It’s critical that this data be pushed through to the front end because, among other things,

  1. there will always be someone who questions a mapping,
  2. when that happens, you need to know how it was mapped, and
  3. you need to know the ratio of human vs AI mapping in a category for confidence.
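Concretely, that means a provenance field on every mapping surfaced in the UI; here is a minimal sketch of what such a record might carry (our assumption for illustration, not SpendKey’s schema):

    from dataclasses import dataclass
    from enum import Enum

    class MappedBy(Enum):  # hypothetical provenance values, per the discussion above
        RULE = "rule"
        SELF_TRAINED_AI = "self-trained AI"
        GEN_AI = "Gen-AI"
        HUMAN = "human"

    @dataclass
    class MappingAudit:
        transaction_id: str
        category: str
        mapped_by: MappedBy
        rationale: str        # the "why": rule matched, model confidence, reviewer note
        human_verified: bool

    def human_vs_ai_ratio(audits):
        """Share of a category's mappings that a human created or verified."""
        touched = sum(1 for a in audits if a.human_verified or a.mapped_by is MappedBy.HUMAN)
        return touched / len(audits) if audits else 0.0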

As of now, users can reclassify transactions within the tool, so if there is an error, they can push that to the admin or a “parking lot” for review, where, if the admin agrees, it can be pushed straight to the back end.

Showing who, or what, (initially) mapped the data, and why, in the front end is on the roadmap, and hopefully it appears sooner rather than later.

Summary

All-in-all, SpendKey is definitely a solution you should be looking at if you are a mid-market (plus) organization in the UK/Western Europe looking for a services-oriented spend analysis solution to help you analyze your spending and come up with strategies to get it under control.