Category Archives: SaaS

The Sourcing Innovation Source-to-Pay+ Mega Map!

Now slightly less useless than every other logo map that clogs your feeds!

1. Every vendor verified to still be operating as of 4 days ago!
Compare that to the maps that often have vendors / solutions that haven’t been in business / operating as a standalone entity in months on the day of release! (Or “best-of” lists that sometimes have vendors that haven’t existed in 4 years! the doctor has seen both — this year!)

2. Every vendor logo is clickable!
the doctor doesn’t know about you, but he finds it incredibly useless when all you get is a strange symbol with no explanation or a font so small that you would need an electron microscope to read it. So, to fix that, every logo is clickable so you can go to the site and at least figure out who the vendor is.

3. Every vendor is mapped to the closest standard category/categories!
Furthermore, every category has the standard definitions used by Sourcing Innovation and Spend Matters!
the doctor can't make sense of random categories like "specialists" or "collaborative" or "innovative", despises when maps follow this new-age analyst/consultancy award trend and give you labels you just can't use, and gets red in the face when two very distinct categories (like e-Sourcing and Marketplaces, or Expenses and AP) are merged into one. Now, the doctor will also readily admit that this means that not all vendors in a category are necessarily comparable on an apples-to-apples basis, but that was never the case anyway as most solutions in a category break down into subcategories and, for example, in Supplier Management (SXM) alone, you have a CORNED QUIP mash of solutions that could be focused on just a small subset of the (at least) ten different (primary) capabilities. (See the link on the sidebar that takes you to a post that indexes 90+ Supplier Management vendors across 10 key capabilities.)

Secure Download the PDF!  (or, use HTTP) [HTML]
(5.3M; Note that the Free Adobe Reader might choke on it; Preview on Mac or a Pro PDF application on Windows will work just fine)

SaaS Procurement for S2P+ Goes Beyond Basic Buying Etiquette for IT Procurement

Medium recently posted an article from ArmourZero, a cyber-security platform provider*, on IT Procurement Etiquette for User and Vendor, which I guess goes to show the lack of knowledge on how to buy among some organizations. It doesn’t go nearly far enough on what S2P buyers need to know, but it does provide basics we can build on.

The advice it provides for users is:

  1. Do Your Homework (Create a Proper SoW): take the time to provide a proper Scope of Work (and don’t just take a vendor’s sample SoW, edit it slightly, and send it out, especially to the vendor you took it from)
  2. Professional: be neutral and don’t favour any specific vendor
  3. Transparent: be clear about the process, and if all bids exceed the budget and a reduced bid is required, be clear about the reason for going back and any modifications to the SoW to allow vendors to be within a budget range
  4. Fair: stick to the rules; not even incumbents get to submit late; if you have a minimum number of bids in by the deadline, you work with those; you weight on the same scales; etc.
  5. No Personal Interest: don’t accept gifts; don’t vote on the bid where you have a relationship; etc.

However, in our space, you have to start with:

  • Do Your Tech Market Research: make sure you understand the different types of solutions in the market, what the baselines are, and what the standard terminology is (sourcing != procurement)
  • Do Your Deep Dive Tech Market Research: once you figure out the major area, figure out the right sub area — a Strategic Sourcing Solution is not a Strategic Sourcing Solution is not a Strategic Sourcing Solution; a CLM (Contract Lifecycle Management) is not a CLM is not a CLM; and an SXM is definitely not an SXM which is definitely not an SXM; in the case of Strategic Sourcing, do you mean RFX? e-Auction? or optimization-backed sourcing? in the case of CLM, do you mean Negotiation, Analysis, or Governance? in the third case, which element(s) of the CORNED QUIP mash are you looking for: compliance? orchestration? relationship? network? enablement? discovery? quality? uncertainty? information? performance? No vendor does more than half of these, and those vendors will only do a couple of areas really deep and more-or-less fake the rest!
  • Write a Process and Results Oriented RFP (& SoW): it’s not features or functions (beyond the foundational functions all applications in the class need to support) it’s the processes you need to support, the systems you need to integrate with, and the results you need to get — let the vendors describe how they will solve them, not just check meaningless yes/no boxes … they might have a more efficient way to support your process, a faster way to get results, etc.; the same goes for any implementations, integrations, services, etc. — make sure it focusses on what you need to accomplish, not meaningless check-the-box exercises
  • Do Your Due Diligence Vendor Research: once you have figured out the solutions you need and the primary capabilities you are looking for, make sure the vendors you invite not only offer the type of solution, but have (most of) the foundations of the capabilities you are looking for; use analyst firms, maps, tech matches, and expert analyst consultants to build your short-list of mandarin to tangerine to orange vendors vs random google searches that, if you are lucky, will give you apples to oranges, and if you are not, will give you rutabagas to oranges to tofu vendor matches

Then apply the rest of the advice in the linked article by ArmourZero.

You'll have better success in your RFP, negotiations, and your implementation if you do all of your homework first, even though it is a lot more extensive than you want it to be. (But remember, there are expert analyst consultants who can help you. No one says you can't hire an expert tutor! And the reality is that you should spend five figures before making a six to seven figure investment (as there will be implementation, integration, and support costs on top of that six-figure-plus license fee), and maybe even do a six-figure deep dive process and technical maturity assessment, market scan, and custom RFP/SoW generation project with an expert analyst consultant before signing a recurring [high] seven figure suite deal.)

* A CyberSecurity firm is the last vendor you’d expect to be authoring such a post (given the massive increase in CyberAttacks since 2019), but I guess it shows just how bad buying can be if they felt the need to write on this vs. a SaaS Management Vendor

The B2B Software Marketplaces Will Rise. Then the Hammer will Fall!

Thanks to Apple, every consumer thinks there’s an app for that. And for most consumer desires, there probably is. Especially since Apple’s App Commerce climbed to 1.1 Trillion in 2022. Yes, that’s 1,100,000,000,000 US Dollars! That’s a lot of money, especially when most apps are being sold for a few bucks.

When you consider:

  • consumer app marketplaces are now a Trillion dollar business
  • enterprises are buying more SaaS than ever, as every employee in every department wants an app(lication) to support every task they do
  • enterprises pay 10X to 100X what individuals pay per user license, and, thus, the opportunity of enterprise app marketplaces is in the tens (to hundreds) of Trillions
  • enterprises want easy, centralized, acquisition to limit the number of vendors they need to deal with / handle subscription invoices from

It's easy to see why all the big software / cloud vendors are opening their own app marketplaces. A recent article on IoT Analytics trumpeted the rise of the B2B software marketplaces, quoting their B2B Technology Marketplaces Market Report (2024-2030), which noted that:

  • they are the fastest growing procurement channel (for software)
  • dedicated platform providers are seeing success
  • some sellers make Billions

And they will continue to grow for a few years. But then, the hammer will fall.

What one has to remember is the following:

  • many of these marketplaces are taking a big cut, like 30% or more, which is what a sales partner would have taken to compensate its employee(s) that actively sold the product, but they are doing NOTHING but creating a listing, making it searchable, taking an order, collecting a payment, and providing a license key … even when you consider cloud fees, payment processor fees, and platform maintenance fees, they could be very profitable at 13% (remember that recent article on how roughly half a trillion dollars will be wasted on SaaS spend this year … well, this is only going to increase that as you're paying almost 20% more than you need to for the licenses you do need and use); the rough math is sketched just below this list
  • apps, licenses, and overspend are going to proliferate rapidly as "approved" app stores make it easy for every employee with a p-card to buy what they want, when they want
  • those SaaS audits and rationalizations that identify 33%+ overspend are only going to reclaim at most 20% of that, if you’re lucky, because, even if the software developer is willing to refund unused licenses, they’re not going to refund that 30%+ they already paid the marketplace … and that’s if they’ll even talk to you because you acquired the license through a third party
  • there’s no real negotiation opportunity when you buy from a marketplace
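
To make the first bullet concrete, here is a rough back-of-the-envelope sketch in Python. The 30% and 13% takes come from the point above; the $100K list price is purely an assumed figure for illustration.

    # Back-of-the-envelope illustration of the marketplace-cut point above.
    # The $100K list price is an assumption for illustration; the 30% and 13%
    # figures come from the bullet itself.
    list_price = 100_000          # what the buyer pays through the marketplace
    marketplace_cut = 0.30        # typical marketplace take per the post
    sufficient_cut = 0.13         # take at which the marketplace could still be very profitable

    vendor_net = list_price * (1 - marketplace_cut)              # what the vendor actually keeps
    price_at_sufficient_cut = vendor_net / (1 - sufficient_cut)  # same net to the vendor at a 13% take

    excess = list_price - price_at_sufficient_cut
    print(f"Vendor keeps:              ${vendor_net:,.0f}")
    print(f"Price needed at a 13% cut: ${price_at_sufficient_cut:,.0f}")
    print(f"Excess paid:               ${excess:,.0f} (~{excess / list_price:.0%} of the price)")
    # -> roughly $19.5K of excess on a $100K license, i.e. the "almost 20%" above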

So as businesses race to digitization, they will embrace the marketplace as it will help them get part of the way there very quickly, but then, when they realize just how much they are spending on app(lication)s and turn Procurement loose on strategic procurement of SaaS, the first thing to go will be the app marketplace purchases … and then … it will be time for the hammer to fall.

Spendata: The Power Tool for the Power Spend Analyst — Now Usable By Apprentices as Well!

We haven’t covered Spendata much on Sourcing Innovation (SI), as it was only founded in 2015 and the doctor did a deep dive review on Spend Matters in 2018 when it launched (Part I and Part II, ContentHub subscription required), as well as a brief update here on SI where we said Don’t Throw Away that Old Spend Cube, Spendata Will Recover It For You!. the doctor did pen a 2020 follow up on Spend Matters on how Spendata was Rewriting Spend Analysis from the Ground Up, and that was the last major coverage. And even though the media has been a bit quiet, Spendata has been diligently working as hard on platform improvement over the last four years as they were the first four years and just released Version 2.2 (with a few new enhancements in the queue that they will roll out later this year). (Unlike some players which like to tack on a whole new version number after each minor update, or mini-module inclusion, Spendata only does a major version update when they do considerable revamping and expansion, recognizing that the reality is that most vendors only rewrite their solution from the ground up to be better, faster, and more powerful once a decade, and every other release is just an iteration, and incremental improvement of, the last one.)

So what’s new in Spendata V 2.2? A fair amount, but before we get to that, let’s quickly catch you up (and refer you to the linked articles above for a deep dive).

Spendata was built upon a post-modern view of spend analysis where a practitioner should be able to take immediate action on any data she can get her hands on, whenever she can get her hands on it, and derive whatever insights she can for process (or spend) improvement. You never have perfect data, and waiting until Duey, Clutterbuck, and Howell1 get all your records in order to even run your first report (when you have a dozen different systems to integrate data from, multiple data formats to map, millions of records to classify, cleanse, and enrich, and third party data feeds to integrate) will take many months, if not a year, and during that year, while you quest for the mythical perfect cube, you will continue to lose 5% due to process waste, abuse, and fraud, and 3% to 15% (or more) across spend categories where you don't have good management but could stem the flow simply by identifying them and putting in place a few simple rules or processes. And you can identify some of these opportunities simply by analyzing one system, one category, and one set of suppliers. And then moving on to the next one. And, in the process, Spendata automatically creates and maintains the underlying schema as you slowly build up the dimensions, the mapping, cleansing, and categorization rules, and the basic reports and metrics you need to monitor spend and processes. And maybe you can only do 60% to 80% piecemeal, but during that "piecemeal year", you can identify over half of your process and cost savings opportunities and start saving now, versus waiting a year to even start the effort. When it comes to spend (related) data analysis, no adage is more true than "don't put off until tomorrow what you can do today" with Spendata because, especially when you start, you don't need complete or perfect data … you'd be amazed how much insight you can get with 90% of a system or category mapped, and then, if the data is inconclusive, you can keep drilling and mapping until you get into the 95% to 98% accuracy range.

Spendata was also designed from the ground up to run locally and entirely in the browser, because no one wants to wait for an overburdened server across a slow internet connection, and to do so in real time … and by that we mean do real analysis in real time. Spendata can process millions of records a minute in the browser, which allows for real time data loads, cube definitions, category re-mappings, dynamically derived dimensions, roll-ups, and drill downs in real-time on any well-defined data set of interest. (Since most analysis should be department level, category level, regional, etc., and over a relevant time span, which should not include every transaction for the last 10 years because, beyond a few years, only the quarter over quarter or year over year totals remain relevant, most data sets for meaningful analysis, even for large companies, are under a few million transactions.) The goal was to overcome the limitations of the first two generations of spend analysis solutions, where the user was limited to drilling around in, and deriving summaries of, fixed (R)OLAP cubes, and instead allow a user to define the segmentations they wanted, the way they wanted, on existing or newly loaded (or enriched, federated) data in real time. Analysis is NOT a fixed report, it is the ability to look at data in various ways until you uncover an inefficiency or an opportunity. (Nor is it simply throwing a suite of AI tools against a data set — these tools can discover patterns and outliers, but still require a human to judge whether a process improvement can be made or a better contract secured.)

Spendata was built as a third generation spend analysis solution where

  • data can be loaded and processed at any point of the analysis
  • the schema is developed and modified on the fly
  • derived dimensions can be created instantly based on any combination of raw and previously defined derived dimensions
  • additional datasets from internal or external sources can be loaded as their own cubes, which can then be federated and (jointly) drilled for additional insight
  • new dimensions can be built and mapped across these federations that allow for meaningful linkages (such as commodities to cost drivers, savings results to contracts and purchasing projects, opportunities by size, complexity, or ABC analysis, etc.)
  • all existing objects — dimensions, dashboards, views (think dynamic reports that update with the data), and even workspaces can be cloned for easy experimentation
  • filters, which can define views, are their own objects, can be managed as their own objects, and can be, through Spendata‘s novel filter coin implementation, dragged between objects (and even used for easy multi-dimensional mapping)
  • all derivations are defined by rules and formula, and are automatically rederived when any of the underlying data changes
  • cubes can be defined as instances of other cubes, and automatically update when the source cube updates
  • infinite scrolling crosstabs with easy Excel workbook generation on any view and data subset for those who insist on looking at the data old school (as well as "walk downs" from a high-level "view" to a low-level drill-down that demonstrate precisely how an insight was found)
  • functional widgets which are not just static or semi-dynamic reporting views, but programmable containers that can dynamically inject data into pre-defined analysis and dimension derivations that a user can use to generate what-if scenarios and custom views with a few quick clicks of the mouse
  • offline spend analysis is also available, in the browser (cached) or on Electron.js (where the latter is preferred for Enterprise data analysis clients)

Furthermore, with reference to all of the above, analyst changes to the workspace, including new datasets, new dashboards and views, new dimensions, and so on are preserved across refresh, which is Spendata’s “inheritance” capability that allows individual analysts to create their own analyses and have them automatically updated with new data, without losing their work …

… and this was all in the initial release. (Which, FYI, no other vendor has yet caught up to. NONE of them have full inheritance or Spendata‘s security model. And this was the foundation for all of the advanced features Spendata has been building since its release six years ago.)

After that, as per our updates in 2018 and 2020, Spendata extended their platform with:

  • Unparalleled Security — as the Spendata server is designed to download ONLY the application to the browser, or Spendata‘s demo cubes and knowledge bases, it has no access to your enterprise data;
  • Cube subclassing & auto-rationalization — power users can securely setup derived cubes and sub-cubes off of the organizational master data cubes for the different types of organizational analysis that are required, and each of these sub-cubes can make changes to the default schema/taxonomy, mappings, and (derived) dimensions, and all auto-update when the master cube, or any parent cube in the hierarchy, is updated
  • AI-Based Mapping Rule Identification from Cube Reverse Engineering — Spendata can analyze your current cube (or even a report of vendor by commodity from your old consultant) and derive the rules that were used for mapping, which you can accept, edit, or reject — we all know black box mapping doesn't work (no matter how much retraining you do, as every "fix" all of a sudden causes an older transaction to be misclassified); but generating the right rules that can be human understood and human maintained guarantees 100% correct classification 100% of the time (a generic illustration of what such human-readable rules look like follows this list)
  • API access to all functions, including creating and building workspaces, adding datasets, building dimensions, filtering, and data export. All Spendata functions are scriptable and automatable (as opposed to BI tools with limited or nonexistent API support for key functions around building, distributing, and maintaining cubes).
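
As promised above, here is a generic illustration of what human-readable mapping rules look like in practice. To be clear, this is NOT Spendata's rule syntax or API (we make no claims about either); it is a minimal Python sketch, with made-up field names, patterns, and categories, of why ordered, plain-text rules are easy to audit, edit, and re-run.

    # Generic sketch of rule-based spend classification -- not Spendata's own
    # rule syntax or API. Field names, patterns, and categories are made up.
    import re
    import pandas as pd

    # Ordered rules: first match wins. Each rule is something a human can read and edit.
    RULES = [
        (r"(?i)staples|office\s*depot",  "Office Supplies"),
        (r"(?i)aws|amazon web services", "Cloud / Hosting"),
        (r"(?i)fedex|ups|dhl",           "Freight & Courier"),
    ]

    def classify(vendor: str, description: str) -> str:
        text = f"{vendor} {description}"
        for pattern, category in RULES:
            if re.search(pattern, text):
                return category
        return "Unclassified"  # anything left over gets reviewed and a new rule added

    transactions = pd.DataFrame({
        "vendor":      ["Staples Inc", "Amazon Web Services", "Acme Corp"],
        "description": ["toner",       "EC2 usage",           "widgets"],
    })
    transactions["category"] = [
        classify(v, d) for v, d in zip(transactions["vendor"], transactions["description"])
    ]
    print(transactions)

When a transaction is misclassified, you fix (or reorder) the offending rule and re-run; nothing else silently shifts, which is exactly the maintainability argument made above.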

However, as we noted in our introduction, even though this put Spendata leagues beyond the competition (as we still haven’t seen another solution with this level of security; cube subclassing with full inheritance; dynamic workspace, cube, and view creation; etc.), they didn’t stop there. In the rest of this article, we’ll discuss what’s new from the viewpoint of Spendata Competitors:

Spendata Competitors: 7 Things I Hate About You

Cue the Miley Cyrus, because if competitors weren't scared of Spendata before, if they understand ANY of this, they'll be scared now (as Spendata is a literal wrecking ball in analytic power). Spendata is now incredibly close to negating entire product lines of not just its competitors, but some of the biggest software enterprises on the planet, and 3.0 may trigger a seismic shift in how people define entire classes of applications. But that's a post for a later day (though it should cue you up for the post that will follow this one on just precisely what Spendata 2.2 really is and can do for you). For now, we're just going to discuss seven (7) of the most significant enhancements since our last coverage of Spendata.

Dynamic Mapping

Filters can now be used for mapping — and as these filters update, the mapping updates dynamically. You can reclassify on the fly, in real time, in a derived cube using any filter coin, including one dragged out of a drill down in a view. Analysis is now a truly continuous process, as you never have to go back and change a rule, reload data, and rebuild a cube to make a correction or see what happens under a reclassification.

View-Based Measures

Integrate any rolled up result back into the base cube on the base transactions as a derived dimension. While this could be done using scripts in earlier versions, it required sophisticated coding skills. Now, it’s almost as easy as a drag-and-drop of a filter coin.
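
Spendata does this with a drag of a filter coin; for readers who think in dataframes, here is a rough analogue of the idea in generic Python/pandas (illustrative only, not Spendata's mechanism): roll spend up by supplier, write the rolled-up value back onto every base transaction, and then derive a new dimension from it.

    # Generic pandas analogue of a "view-based measure": roll up by supplier,
    # push the result back down onto each base transaction, then derive a
    # dimension from it. Illustrative only; not how Spendata implements it.
    import pandas as pd

    tx = pd.DataFrame({
        "supplier": ["Acme", "Acme", "Beta", "Gamma"],
        "amount":   [120_000, 80_000, 15_000, 2_500],
    })

    # The rolled-up measure, written back onto every underlying transaction
    tx["supplier_total"] = tx.groupby("supplier")["amount"].transform("sum")

    # A derived dimension built from the view-based measure
    tx["supplier_band"] = pd.cut(
        tx["supplier_total"],
        bins=[0, 10_000, 100_000, float("inf")],
        labels=["Tail", "Mid", "Strategic"],
    )
    print(tx)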

Hierarchical Dashboard Menus

Not only can you organize your dashboards in menus and submenus and sub-sub menus as needed, but you can easily bookmark drill downs and add them under a hierarchical menu — makes it super easy to create point-based walkthroughs that tell a story — and then output them all into a workbook using Spendata‘s capability to output any view, dashboard, or entire workspace as desired.

Search via Excel

While Spendata eliminates the need for Excel for data analysis, the reality is that Excel is where most organizational data is (unfortunately) stored, how most data is submitted by vendors to Procurement, and where most Procurement professionals are the most comfortable. Thus, in the latest version of Spendata, you can drag and drop groups of cells from Excel into Spendata, and if you drop them into the search field, it auto-creates a RegEx "OR" that maintains the inputs exactly and finds all matches in the cube you are searching against.
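
Conceptually, that search behaves like the following Python sketch (an illustration of the idea, not Spendata's implementation): take the pasted cell values verbatim, escape them so they match exactly as typed, and join them into a single "OR" pattern that is run against the cube.

    # Conceptual sketch of turning a pasted block of Excel cells into a single
    # "OR" regular expression -- an illustration of the idea, not Spendata's code.
    import re

    pasted_cells = ["ACME CORP", "Beta Industries (EU)", "Gamma+Co"]

    # Escape each value so it is matched exactly as typed, then OR them together
    pattern = re.compile("|".join(re.escape(v) for v in pasted_cells), re.IGNORECASE)

    vendors = ["Acme Corp", "Delta LLC", "gamma+co", "Beta Industries (EU)"]
    print([v for v in vendors if pattern.search(v)])
    # -> ['Acme Corp', 'gamma+co', 'Beta Industries (EU)']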

Perfect Star Schema Output

Even though Spendata can do everything any BI tool on the market can do, the reality is that many executives are used to their pretty PowerBI graphs and charts and want to see their (mostly static) reports in PowerBI. So, in order to appease the consultancies that have to support these executives who are (at least) a generation behind on analytics, Spendata added the ability to output an entire workspace to a perfect star schema (where all keys are unique and numeric) that is so good that many users see PowerBI speed up by a factor of almost 10. (As any analyst forced to use PowerBI will tell you, when you give PowerBI any data that is NOT in a perfect star schema, it may not even be able to load the data, and its ability to work with non-numeric keys at a speed faster than you remember on an 8088 is nonexistent.)
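
For readers who haven't had to build one, a "perfect star schema" just means the fact table carries only numeric surrogate keys and measures, with every text attribute stored once in a small dimension table. A minimal, generic pandas sketch of the shape (not Spendata's export code, and with made-up fields) looks like this:

    # Generic sketch of a star schema export: facts keep only numeric keys and
    # measures; text attributes live once in dimension tables. Not Spendata's code.
    import pandas as pd

    tx = pd.DataFrame({
        "supplier": ["Acme", "Beta", "Acme"],
        "category": ["MRO", "IT", "MRO"],
        "amount":   [1200.0, 900.0, 450.0],
    })

    def make_dimension(df: pd.DataFrame, column: str, key: str) -> pd.DataFrame:
        dim = df[[column]].drop_duplicates().reset_index(drop=True)
        dim[key] = dim.index + 1  # unique numeric surrogate key
        return dim

    dim_supplier = make_dimension(tx, "supplier", "supplier_key")
    dim_category = make_dimension(tx, "category", "category_key")

    fact = (
        tx.merge(dim_supplier, on="supplier")
          .merge(dim_category, on="category")
          [["supplier_key", "category_key", "amount"]]  # facts: keys + measures only
    )
    print(dim_supplier, dim_category, fact, sep="\n\n")

This is exactly the layout that lets a tool like PowerBI join on small integer keys instead of scanning long text fields, which is where the speed-up comes from.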

Power Tags

You might be thinking "tags, so what?". And if you are equating tags with a hashtag or a dynamically defined user attribute, then we understand. However, Spendata has completely redefined what a tag is and what you can do with it. The best way to understand it is as a Microsoft Excel cell on steroids. It can be a label. It can be a replica of a value in any view (that dynamically updates if the field in the view updates). It can be a button that links to another dashboard (or a bookmark to any drill-down filtered view in that dashboard). Or all of this. Or, in the next Spendata release, a value that forms the foundation for new derivations and measures in the workspace, just like you can reference a random cell in an Excel function. In fact, using tags and the seventh new capability of Spendata, you can already build, on the fly, the kind of sophisticated what-if analyses that many providers have to custom build in their core solutions (taking weeks, if not months, to do so), and usually do it in hours (at most).

Embedded Applications

In the latest version of Spendata, you can embed custom applications into your workspace. These applications can contain custom scripts, functions, views, dashboards, and even entire datasets that can be used to instantly augment the workspace with new analytic capability, and if the appropriate core columns exist, even automatically federate data across the application datasets and the native workspace.

Need a custom set of preconfigured views and segments for that ABC Analysis? No sweat, just import the ABC Analysis application. Need to do a price variance analysis across products and geographies, along with category summaries? No problem. Just import the Price Variance and Category Analysis application. Need to identify opportunities for renegotiation post M&A, cost reduction through supply base consolidation, and new potential tail spend suppliers? No problem, just import the M&A Analysis app into the workspace for the company under consideration and let it do a company A vs B comparison by supplier, category, and product; generate the views where consolidation would more than double supplier spend, or where switching a product from a current supplier to a lower cost supplier would save more than 100K; and surface opportunities for bringing on new tail spend suppliers based upon potential cost reductions. All with one click. Not sure just what the applications can do? Start with the demo workspaces and apps, define your needs, and if the apps don't exist in the Spendata library, a partner can quickly configure a custom app for you.
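
We can't show you the inside of a Spendata application here, but the ABC analysis one of them packages is a standard technique, so a minimal, generic pandas sketch of the underlying segmentation (rank suppliers by spend, take the cumulative share, and band at roughly 80% / 95%) gives a feel for what the app automates. The supplier names, numbers, and thresholds below are illustrative assumptions only.

    # Minimal sketch of ABC segmentation -- the standard technique an "ABC
    # Analysis" app would package. Generic pandas; not Spendata's application code.
    import pandas as pd

    spend = pd.DataFrame({
        "supplier": ["Acme", "Beta", "Gamma", "Delta", "Epsilon"],
        "amount":   [500_000, 250_000, 150_000, 75_000, 25_000],
    })

    by_supplier = (
        spend.groupby("supplier", as_index=False)["amount"].sum()
             .sort_values("amount", ascending=False)
    )
    by_supplier["cum_share"] = by_supplier["amount"].cumsum() / by_supplier["amount"].sum()
    by_supplier["abc_class"] = pd.cut(
        by_supplier["cum_share"],
        bins=[0, 0.80, 0.95, 1.0],
        labels=["A", "B", "C"],
    )
    print(by_supplier)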

And this is just the beginning of what you can do with Spendata. Because Spendata is NOT a Spend Analysis tool. That's just something it happens to do better than any other analysis tool on the market (in the hands of an analyst willing to truly understand what it does and how to use it — although with apps, drag-and-drop, and easy formula definition through wizardly pop-ups, it's really not hard to learn how to do more with Spendata than any other analysis tool).

But more on this in our next article. For The Times They Are a-Changin’.

1 Duey, Clutterbuck, and Howell keeps Dewey, Cheatem, and Howe on retainer … it’s the only way they can make sure you pay the inflated invoices if you ever wake up and realize how much you’ve been fleeced for …

The Best Way Procurement Chiefs Can Create a Solid Foundation to Capitalize on AI

As per our recent post on how I want to be Gen AI Free, the best way to capitalize on Gen-AI is to avoid it entirely. That being said, the last thing you should avoid is the acquisition of modern technology, including traditional ML-AI that has been tried and tested and proven to work extremely well in the right situation.

With that in mind, if you ignore the reference to Gen-AI, a recent article on Acceleration Economy on 5 Ways Procurement Chiefs Can Create a Solid Foundation had some good tips on how to go about adopting ML-AI with success.

The five foundations were quite appropriate.

1. Organize

You need a plan for:

  1. exactly where the solution will be deployed,
  2. what use cases it will be deployed for,
  3. how valid use cases will be identified, and
  4. how the solution is expected to perform on them.

There's no solution, even AI, that can do everything. Even limited to a domain, no AI will work for all situations that may arise. As a result, you need a methodology to identify the valid use cases and the invalid use cases and ensure that only the valid use cases are processed. You also need to ensure you know the expected ranges of the answers that will be provided. Then you need to implement checks to ensure not only that valid situations are the only ones processed, but also that only output in an expected range is accepted in any automated process, and that, if anything is outside the expected norms anywhere, a human with appropriate education and training is brought into the loop.
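
A minimal sketch of what such guardrails might look like in code follows, with the approved-use-case filter, the expected output range, and the human hand-off all made explicit. Every name, threshold, and range below is a made-up assumption for illustration, not a reference to any particular product or API.

    # Minimal sketch of the guardrails described above: only approved use cases
    # reach the model, outputs outside the expected range go to a human, and
    # everything else flows through. All names and thresholds are illustrative.
    APPROVED_USE_CASES = {"invoice_price_check", "demand_forecast"}
    EXPECTED_RANGE = {  # sane output bounds per use case
        "invoice_price_check": (0.0, 1_000_000.0),
        "demand_forecast":     (0.0, 50_000.0),
    }

    def run_with_guardrails(use_case: str, inputs: dict, model) -> dict:
        if use_case not in APPROVED_USE_CASES:
            return {"status": "rejected", "reason": "use case not approved for AI"}
        prediction = model(inputs)  # the ML-AI model itself
        low, high = EXPECTED_RANGE[use_case]
        if not (low <= prediction <= high):
            return {"status": "needs_human_review", "prediction": prediction}
        return {"status": "auto_approved", "prediction": prediction}

    # A toy "model" that clearly overshoots, so it gets routed to a human
    print(run_with_guardrails("demand_forecast", {"sku": "A-123"}, lambda x: 75_000.0))
    # -> {'status': 'needs_human_review', 'prediction': 75000.0}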

2. Create a Policy

No technology should be deployed in critical situations without a policy dictating valid, and invalid, use. Moreover, any technology definitely shouldn’t be used by people who aren’t trained in both the job they need to do and proper use of the tool. Even though most AI is not as dangerous as Gen-AI, any AI, if improperly used, can be dangerous. It’s critical to remember that computers cannot think, and only thunk on the data they are given (performing millions of calculations in the time it takes an average person to perform two). As such, the quality of output is limited both to the quality of data input and the knowledge built into the model used. Neither will be complete or perfect, and there will always be external factors not considered, which, even if normally not relevant, could be relevant — and only an educated and experienced human will know that. (Moreover, that human needs to be involved in the policy creation to ensure the technology is only used where, when, and how appropriate.)

3. Understand Your Platform(s) of Choice

Just like there are a plethora of Gen-AI applications, a lot of different vendors offer AI applications, and even if most are similar, not all are created equal. It’s important to understand the similarities and differences between them and select the one that is right for your business. (Consider the algorithms and models used, the extent of human validated training available, typical accuracy / results, and the vendor’s experience in your use case in particular when evaluating an AI solution.)

4. Practice

Introducing new tools requires process changes. Before introducing the tool, make sure you can execute the associated process changes, first by executing training exercises on the different types of output you might get and then, possibly by way of a third party who uses a tool on your behalf, by using real inputs and associated outputs. While the AI may automate more of the process, it's even more critical that you respond appropriately to parts of the process that cannot be automated or where the application throws an exception because the situation is not appropriate to either the use of AI or the use of the AI output. (And if you don't get any exceptions, question the AI … it's likely not working right! And if you get too many exceptions, it's not the right AI for you.)

5. ALWAYS Ask Yourself: “Does that Make Sense?”

Just like Gen-AI hallucinates, traditional AI, even tried-and-true AI that is highly predictable, will sometimes give wrong results. This will usually happen if bad data slips in, if the use case is on the boundary of expected use cases, or if the external situation has changed considerably since the last time the use case arose. Thus, it's always important to ask yourself if the output makes sense. For tried-and-true AI where the confidence is high, it will make sense the vast majority of the time, but there will still be the occasional exception. Human confirmation is, thus, always required!

With proper use, AI, unlike Gen-AI (which fails regularly and sometimes hallucinates so convincingly that even an expert has a hard time identifying false results), will give great results the majority of the time — so you should seek it out and implement it. Just also implement checks and balances to catch those rare situations where it doesn't, and put a human in the loop when that happens. Because traditional use cases are more constrained, and predictable, it's a lot easier to identify and implement these checks and balances. So do it … and see great success!