Category Archives: Vendor Review

Vendor Coverage on Sourcing Innovation

This page will be regularly maintained and link to all current vendor coverage on Sourcing Innovation, where “current” means the vendor’s most recent coverage falls within the last three years.

Note that, as of December 3, 2025, all vendor coverage is sponsored. Any coverage dated prior to December 3, 2025 is included because the vendor has sponsored new or upcoming coverage (and the older coverage is deemed to still be valid). All vendors covered adhere to Sourcing Innovation’s policies and review requirements as defined in the FAQ.

Company: Spendata
Primary Offering(s): Analytics
Coverage:
  • The Power Tool for the Power Analyst (2024 Mar 21)
  • A True Enterprise Analytics Solution (2024 Jul 01)
  • New coverage coming soon! (2026)

Spendata: A True Enterprise Analytics Solution

As we indicated in our last article, while Spendata is the absolute best at spend analysis, it’s not just a spend analysis platform. It’s a general-purpose data analytics platform that can be used for much more than spend analysis.

The current end-state vision for business data analytics is a “data lake” database with a BI front end. The Big X consultancies (aided and abetted by your IT department, which is only too eager to implement another big system) will try to convince you of the data paradise you’ll have if you dump all of your business data into a data lake. Unfortunately, reality doesn’t support the vision, because organizational data is created only to the extent necessary, never verified, riddled with errors from day one, and left to decay over time as it’s never updated. The data lake is ultimately a data cesspool.

Pointing a BI tool at the (dirty) lake will spice up the data with bars, pies, waves, scatters, multi-coloured geometric shapes, and so on, but you won’t find much insight other than the realization that your data is, in fact, dirty. Worse, a published BI dashboard is like a spreadsheet you can’t modify. Try mapping new dimensions, creating new measures, adding new data, or performing even the simplest modification of an existing dimension or hierarchy, and you’ll understand why this author likes to point out that BI should actually stand for Bullsh!t Images, not Business Intelligence.

So how does a spend analysis platform like Spendata end up being a general-purpose data analytics tool? The answer is that the mechanisms and procedures associated with spend analysis and spend analysis databases, specifically data mapping and dimension derivation, can be taken to the next level — extended, generalized, and moved into real time. Once those key architectural steps are taken, the system can be further extended with view-based measures, shared cubes where custom modifications are retained across refreshes, and spreadsheet-like dependencies and recalculation at database scale.
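To make “spreadsheet-like dependencies and recalculation” concrete, here is a minimal Python sketch (invented field names and formulas; not Spendata’s actual engine, which is proprietary) of derived dimensions being rederived in dependency order whenever an input changes:

```python
# Minimal sketch of spreadsheet-style dependency recalculation over derived
# fields. Illustrative only: field names and formulas are invented, and
# Spendata's real engine is proprietary.
from graphlib import TopologicalSorter

# derived field -> (inputs, formula over a record dict)
derivations = {
    "unit_price": (("amount", "quantity"),
                   lambda r: r["amount"] / r["quantity"]),
    "discount":   (("unit_price", "list_price"),
                   lambda r: 1 - r["unit_price"] / r["list_price"]),
}

def recalc(record):
    """Rederive every derived field in dependency order."""
    graph = {name: set(inputs) for name, (inputs, _) in derivations.items()}
    for name in TopologicalSorter(graph).static_order():
        if name in derivations:                 # skip raw input fields
            inputs, formula = derivations[name]
            record[name] = formula(record)
    return record

row = {"amount": 1000.0, "quantity": 8, "list_price": 500.0}
print(recalc(row))  # unit_price=125.0, discount=0.75
```

Change `amount` and call `recalc` again, and `unit_price` and `discount` update together; the platform described above does this at database scale, across views and cubes, automatically.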

The result is an analysis system that can be adapted not only to any of the common spend analysis problems, such as AP/PO analysis or commodity-specific cubes with item-level price × quantity data, but also to savings tracking and to sourcing and implementation plans. Extending the system to domains beyond spend analysis is simple: just load different data.
The bottom line is that to do real data analysis, no matter what the domain, you need:

  • the ability to extend the schema at any time
  • the ability to add new derived dimensions at any time
  • the ability to change mappings at any time
  • the ability to build derivations, data views, and mappings that are dependent on other derivations, mappings, views, inputs, linked datasets, and so on, with real-time “recalc”
  • the ability to create new views and reports relevant to the question you have … without dumping the data to Excel
  • … and preserve all of the above on cube data refreshes
  • … in your own copy of the cube so you don’t have to wait for anyone to agree
  • … and get an answer today, not on the next refresh next month when you’ve forgotten why you even had the question in the first place

You don’t get any of that from a spend analysis solution, or a BI solution, or a database pointing at a data lake. You only get that in a modern data analysis solution — which supports all of the above, and more, for any kind of data. A data analysis system works equally well across all types of numeric or set-valued data, including, but not limited to, sales data, service data, warranty data, process data, and so on.

As Spendata is a real data analysis solution, it supports all of these analyses with a solution that’s easier and friendlier to use than the spreadsheet you use every day. Let’s walk through some examples so you can understand what a data analysis solution really can do.

SALES ANALYSIS

Spend data consists of numerical amounts that represent the price, tax, duty, shipping, etc. paid for items purchased. Sales data consists of numerical amounts that represent the price, tax, duty, shipping, etc. paid for items sold.

They are basically the inverse of each other. For every purchase, there is a sale. For every sale, there is a purchase. So there’s absolutely no reason that you shouldn’t be able to apply the exact same analysis (possibly in reverse) to sales data as you apply to spend data. That is, IF you have a proper data analysis tool. The latter part is the big IF, because if you’re using a custom tool that needs to map all data to a schema with fixed semantics, it won’t understand the data and you’re SOL.

However, since Spendata is a general-purpose data analysis tool that builds and maintains its schema on the fly, it doesn’t care if the dataset is spend data or sales data; it’s still transactional data and it’s happy to analyze away. If you need the handholding of a workflow-oriented UI, that can also be configured out of the box using Spendata’s new “app” capability.

Here are three types of sales analysis that Spendata supports better than CRM/Sales Forecasting systems, and that can’t be done at all with a data lake and a BI tool.

Sales Discount Variation Analysis Over Time By Salesperson … and Client Type

You run a sales team. Are your different salespeople giving the same mix of discounts by product type to the same types of customers by customer size and average sales size?

Sounds easy, right? Can’t you simply plot the product/price ratio by month by salesperson in a bubble chart (where sales volume correlates to bubble size) against the average trend line and calculate which salespeople are off the most (in the wrong direction)? Sure, but how do you handle client type? You could add a “color” dimension, but when the bubbles overlap and blur, can you see it visually? Not likely. And how do you account for a low-sales-volume customer that is a strategic partner, and so has a special deal? Theoretically you could add another column to the table “Salesperson, Product/Price Ratio, Client Type, Over/Under Average”, and that would work as long as you could pre-compute the average discount by Product/Price Ratio and Client Type.

And then you realize that unless you group by category, you have entirely different products in the same product/price ratio and your multi-stage analysis is worthless, so you have to go back and start again, only to find out that the bubble chart is merely pseudo-useful (you can’t really figure it out visually, because what is that shade of pink from the multiple overlapping red and white bubbles (Fuchsia, Bright, or Barbie?) and what does it mean) and that you will have to focus on the fixed table to extract any value at all from the analysis.

But then you’ll realize that you still need to see monthly variations in the chart, meaning you want the ability to drag a slider or change the month and have the bubble chart update. Uh-oh, you forgot to individually compute all the amounts by month or select the slider graph! Back to square one, doing it all over again by month. Then you notice some customers have long-term, fixed prices on some products, which messes up the average discount on these products, as the prices for these customers are not changing over time. You redo the work for the third (or is it the fourth?) time, and then you realize that your definitions of the client types “large, medium, and small” are slightly off, as a client that should be in large is in medium and two that should be in small were made medium. Aaarrrggghhh!!!

But with Spendata, you simply create or modify dimensions in the cube to segment the data (customer type, product groups, etc.). You leverage a dynamic view-based measure by customer type to set the average prices per time period (used to calculate the discount). You then use filters to define the time range of interest, another view with filters to click through the months over time, a derived view to see the performance by quarter, and another by year. If you change the definition of client type (which customers belong to which client type), which products have fixed prices for which customers, which SKUs are the same type, the time range of interest, etc., you simply remap them and the entire analysis auto-updates.
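The underlying arithmetic is easy to see outside any tool. Here is a hypothetical pandas sketch (column names and data invented) of the core computation: a per-group average discount used as a dynamic measure, and each salesperson’s deviation from it:

```python
# Hypothetical sketch of the discount-variance computation described above.
# Not Spendata code; just the math on invented columns.
import pandas as pd

sales = pd.DataFrame({
    "salesperson": ["Ann", "Ann", "Bob", "Bob"],
    "client_type": ["small", "large", "small", "small"],
    "month":       ["2024-01", "2024-01", "2024-01", "2024-02"],
    "list_price":  [100.0, 100.0, 100.0, 100.0],
    "paid_price":  [95.0, 80.0, 85.0, 88.0],
})
sales["discount"] = 1 - sales["paid_price"] / sales["list_price"]

# Average discount per client type and month: the "view-based measure".
peer_avg = sales.groupby(["client_type", "month"])["discount"].transform("mean")

# Each salesperson's deviation from the peer average.
sales["over_under"] = sales["discount"] - peer_avg
print(sales.groupby("salesperson")["over_under"].mean().sort_values())
```

The point of the platform capabilities described above is that the grouping definitions (client types, product groups, fixed-price exclusions) are live objects, so changing any of them reruns this entire chain automatically instead of forcing a restart.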

This flexibility and power (with no wasted effort) gives you a very deep analysis capability NOT available in any other data analysis platform. For example, you can find out with a few clicks that your “best” salesperson in terms of giving the lowest average discount is actually costing you the most. Turns out, he’s not serving any large customers (who get good discounts) and has several fixed-price contracts (which mess up the average discounts). So the discounts he’s giving the small clients, while less than what large customers get, are significantly more than what other salespeople provide to other small customers. This is something you’d never know without the power of Spendata, as your data consultant would have given up on the variance analysis at the global level because the salesman’s overall ratio looked good.

Post-Merger White-Space Analysis

White-space sales analysis looks for spaces in the market where you should be selling but are not. For example, if you sell to restaurants, you could look at your sales by geography, normalized by the number of establishments by type, or by the sales of the restaurants by type, in that geography. In a merger, you could measure your penetration at each customer for each of the original companies. You can find white space by looking at each customer (or customer segment) and measuring revenue per customer employee across the two companies. Where is one more effective than the other?

You might think this is no big deal because this was theoretically done during the due diligence and the opportunity for overlap was deemed to be there, as well as the opportunity for whitespace, and whatever was done was good enough. The reality couldn’t be further from the truth.

If the whitespace analysis was done with a standard analytics tool, it has all the following problems:

  • matching vendors were missed due to different name entries and missing IDs
  • vendors were not familied by parent (within industry, geography, etc.)
  • the improperly merged vendors were only compared against a target file that was built by the consultants and that misses vendors
  • in short, it’s poor, but no worse than you’d do with a traditional analytics tool

But with Spendata, these problems would be at least minimized, if not eliminated because:

  • Spendata comes with auto-matching capability
  • … that can be used to enrich the suppliers with NAICS categorization (for example)
  • Spendata comes with auto-familying capability so parent-child relationships aren’t missed
  • Spendata can load all of the companies from a firmographic database with their NAICS codes in a separate cube …
  • … and then federation can be used to match the suppliers in use with the suppliers in the appropriate NAICS category for the white space analysis

It’s thus trivial to (see the minimal pandas sketch after this list):

  1. load up a cube with organization A’s sales by supplier (which can be the output from a view on a transaction database), and run it through a view that embeds a normalization routine so that all records that actually correspond to the same supplier (or parent-child where only the parent is relevant) are grouped into one line
  2. load up a cube with organization B’s sales by supplier and do the same … and now you know you have exact matches between supplier names
  3. load up the NAICS code database – which is a list of possible customers
  4. build a view that pulls in, for each supplier in the NAICS category of interest, Org A spend, Org B Spend, and Total Spend
  5. create a filter to only show zero spend suppliers — and there’s the whitespace … 100% complete. Now send your sales teams after these.
  6. create a filter to show where your sales are less than expected (e.g., compared to comparable other customers, or to Org A or Org B); this is additional whitespace where upselling or further customer penetration is appropriate
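For illustration only, here is roughly what steps 1 through 5 compute, as a hypothetical pandas sketch (the normalization routine and column names are invented stand-ins for Spendata’s built-in matching and familying):

```python
# Illustrative sketch of the white-space computation in steps 1-5 above.
# The normalize() stand-in fakes what real supplier matching/familying does.
import pandas as pd

org_a = pd.DataFrame({"customer": ["Acme Corp", "Blue Ltd"], "sales": [500.0, 120.0]})
org_b = pd.DataFrame({"customer": ["ACME Corporation"], "sales": [300.0]})
naics = pd.DataFrame({"customer": ["Acme Corp", "Blue Ltd", "Crest Inc"]})  # all candidates

def normalize(name: str) -> str:
    # Toy stand-in for real entity matching.
    return name.lower().replace("corporation", "corp").split(" corp")[0].strip()

for df in (org_a, org_b, naics):
    df["key"] = df["customer"].map(normalize)

a_sum = org_a.groupby("key")["sales"].sum().rename("org_a").reset_index()
b_sum = org_b.groupby("key")["sales"].sum().rename("org_b").reset_index()

merged = (naics[["key", "customer"]]
          .merge(a_sum, on="key", how="left")
          .merge(b_sum, on="key", how="left")
          .fillna(0.0))
merged["total"] = merged["org_a"] + merged["org_b"]
print(merged[merged["total"] == 0])  # the white space: Crest Inc
```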

Bill Rate Analysis

A smart company doesn’t just analyze their (total) spend by service provider, they analyze by service role, and against the service role average, when different divisions/locations are contracting for the same service that should be fulfilled by a professional with roughly the same skills and experience level. Why? Because if you’re paying, on average, 150/hr for an intermediate DBA across 80% of locations and 250/hr across the remaining 20%, you’re paying as much as 66% too much at those remaining locations, the exception being San Francisco or New York, where your service provider has to pay their locals a cost-of-living top-up just so they can afford to live there.

By the same token, a smart service company is analyzing what they are getting by role, location, and customer and trying to identify the customers that are (the most) profitable and those that are the least (or unprofitable when you take contract size or support requirements into account), so they can focus on those customers that are profitable, and, hopefully, keep them happy with their better talent (and not just the newest turkey on the rafter).

However, just like sales discount variation analysis over time by client type, this is tough, as it’s essentially a variation of that analysis, except you are looking at services instead of products, roles instead of client types, and customers instead of sales reps … and then, for your problem clients, looking at which service reps are responsible … so after you do the base analysis (using dynamic view-based measures), you’re creating new views with new measures and filters to group by service rep and filter to those too far beyond a threshold. In any other tool, it would be nigh impossible for even an expert analyst. In Spendata, it’s a matter of minutes. Literally.
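As a toy illustration of the base bill-rate check (invented numbers and column names; any real analysis would segment by role, location, and experience level as described above):

```python
# Hypothetical sketch: flag locations paying well over the role average.
import pandas as pd

bills = pd.DataFrame({
    "role":     ["DBA-intermediate"] * 5,
    "location": ["Austin", "Denver", "Tampa", "Boise", "San Francisco"],
    "rate":     [150.0, 150.0, 150.0, 150.0, 250.0],
})

role_avg = bills.groupby("role")["rate"].transform("mean")   # 170/hr here
bills["pct_over_avg"] = (bills["rate"] / role_avg - 1) * 100
print(bills[bills["pct_over_avg"] > 20])  # San Francisco: ~47% over the average
```

From there, the service-rep drill-down described above is just another group-and-filter pass over the same data.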

And this is just the tip of the iceberg in terms of what Spendata can do. In a future article, we’ll dive into a few more areas of analysis that require very specialized tools in different domains, but which can be done with ease in Spendata. Stay tuned!

Spendata: The Power Tool for the Power Spend Analyst — Now Usable By Apprentices as Well!

We haven’t covered Spendata much on Sourcing Innovation (SI), as it was only founded in 2015 and the doctor did a deep dive review on Spend Matters in 2018 when it launched (Part I and Part II, ContentHub subscription required), as well as a brief update here on SI where we said Don’t Throw Away that Old Spend Cube, Spendata Will Recover It For You! the doctor did pen a 2020 follow-up on Spend Matters on how Spendata was Rewriting Spend Analysis from the Ground Up, and that was the last major coverage. And even though the media has been a bit quiet, Spendata has been working as diligently on platform improvement over the last four years as they were the first four, and just released Version 2.2 (with a few new enhancements in the queue that they will roll out later this year). (Unlike some players, which like to tack on a whole new version number after each minor update or mini-module inclusion, Spendata only does a major version update when they do considerable revamping and expansion, recognizing that the reality is that most vendors only rewrite their solution from the ground up to be better, faster, and more powerful once a decade, and every other release is just an iteration of, and incremental improvement on, the last one.)

So what’s new in Spendata V 2.2? A fair amount, but before we get to that, let’s quickly catch you up (and refer you to the linked articles above for a deep dive).

Spendata was built upon a post-modern view of spend analysis where a practitioner should be able to take immediate action on any data she can get her hands on, whenever she can get her hands on it, and derive whatever insights she can for process (or spend) improvement. You never have perfect data, and waiting until Duey, Clutterbuck, and Howell1 get all your records in order to even run your first report (when you have a dozen different systems to integrate data from, multiple data formats to map, millions of records to classify, cleanse, and enrich, and third-party data feeds to integrate) will take many months, if not a year, and during that year in which you quest for the mythical perfect cube, you will continue to lose 5% to process waste, abuse, and fraud, and 3% to 15% (or more) across spend categories where you don’t have good management but could stem the flow simply by identifying them and putting in place a few simple rules or processes. And you can identify some of these opportunities simply by analyzing one system, one category, and one set of suppliers. And then moving on to the next one. And, in the process, Spendata automatically creates and maintains the underlying schema as you slowly build up the dimensions; the mapping, cleansing, and categorization rules; and the basic reports and metrics you need to monitor spend and processes. And maybe you can only do 60% to 80% piecemeal, but during that “piecemeal year” you can identify over half of your process and cost savings opportunities and start saving now, versus waiting a year to even start the effort. When it comes to spend (related) data analysis, no adage is more true than “don’t put off until tomorrow what you can do today” with Spendata because, especially when you start, you don’t need complete or perfect data … you’d be amazed how much insight you can get with 90% in a system or category, and then, if the data is inconclusive, you can keep drilling and mapping until you get into the 95% to 98% accuracy range.

Spendata was also designed from the ground up to run locally and entirely in the browser, because no one wants to wait for an overburdened server across a slow internet connection, and to do so in real time … and by that we mean do real analysis in real time. Spendata can process millions of records a minute in the browser, which allows for real-time data loads, cube definitions, category re-mappings, dynamically derived dimensions, roll-ups, and drill-downs on any well-defined data set of interest. (Since most analysis should be department-level, category-level, regional, etc., and over a relevant time span, which should not include every transaction for the last 10 years because beyond a few years only the quarter-over-quarter or year-over-year totals remain relevant, most relevant data sets for meaningful analysis, even for large companies, are under a few million transactions.) The goal was to overcome the limitations of the first two generations of spend analysis solutions, where the user was limited to drilling around in, and deriving summaries of, fixed (R)OLAP cubes, and instead allow a user to define the segmentations they wanted, the way they wanted, on existing or newly loaded (or enriched federated) data in real time. Analysis is NOT a fixed report; it is the ability to look at data in various ways until you uncover an inefficiency or an opportunity. (Nor is it simply throwing a suite of AI tools against a data set — these tools can discover patterns and outliers, but still require a human to judge whether a process improvement can be made or a better contract secured.)

Spendata was built as a third generation spend analysis solution where

  • data can be loaded and processed at any point of the analysis
  • the schema is developed and modified on the fly
  • derived dimensions can be created instantly based on any combination of raw and previously defined derived dimensions
  • additional datasets from internal or external sources can be loaded as their own cubes, which can then be federated and (jointly) drilled for additional insight
  • new dimensions can be built and mapped across these federations that allow for meaningful linkages (such as commodities to cost drivers, savings results to contracts and purchasing projects, opportunities by size, complexity, or ABS analysis, etc.)
  • all existing objects — dimensions, dashboards, views (think dynamic reports that update with the data), and even workspaces can be cloned for easy experimentation
  • filters, which can define views, are their own objects, can be managed as their own objects, and can, through Spendata’s novel filter coin implementation, be dragged between objects (and even used for easy multi-dimensional mapping)
  • all derivations are defined by rules and formulas, and are automatically rederived when any of the underlying data changes
  • cubes can be defined as instances of other cubes, and automatically update when the source cube updates
  • infinite scrolling crosstabs with easy Excel workbook generation on any view and data subset for those who insist on looking at the data old school (as well as “walk downs” from a high-level “view” to a low-level drill-down that demonstrate precisely how an insight was found)
  • functional widgets which are not just static or semi-dynamic reporting views, but programmable containers that can dynamically inject data into pre-defined analysis and dimension derivations that a user can use to generate what-if scenarios and custom views with a few quick clicks of the mouse
  • offline spend analysis is also available, in the browser (cached) or on Electron.js (where the latter is preferred for Enterprise data analysis clients)

Furthermore, with reference to all of the above, analyst changes to the workspace, including new datasets, new dashboards and views, new dimensions, and so on are preserved across refresh, which is Spendata’s “inheritance” capability that allows individual analysts to create their own analyses and have them automatically updated with new data, without losing their work …

… and this was all in the initial release. (Which, FYI, no other vendor has yet caught up to. NONE of them have full inheritance or Spendata’s security model. And this was the foundation for all of the advanced features Spendata has been building since its release six years ago.)

After that, as per our updates in 2018 and 2020, Spendata extended their platform with:

  • Unparalleled Security — as the Spendata server is designed to download ONLY the application to the browser, or Spendata’s demo cubes and knowledge bases, it has no access to your enterprise data;
  • Cube subclassing & auto-rationalization — power users can securely setup derived cubes and sub-cubes off of the organizational master data cubes for the different types of organizational analysis that are required, and each of these sub-cubes can make changes to the default schema/taxonomy, mappings, and (derived) dimensions, and all auto-update when the master cube, or any parent cube in the hierarchy, is updated
  • AI-Based Mapping Rule Identification from Cube Reverse Engineering — Spendata can analyze your current cube (or even a report of vendor by commodity from your old consultant) and derive the rules that were used for mapping, which you can accept, edit, or reject — we all know black-box mapping doesn’t work (no matter how much retraining you do, as every “fix” all of a sudden causes an older transaction to be misclassified), but generating the right rules that can be human-understood and human-maintained guarantees 100% correct classification 100% of the time
  • API access to all functions, including creating and building workspaces, adding datasets, building dimensions, filtering, and data export. All Spendata functions are scriptable and automatable (as opposed to BI tools with limited or nonexistent API support for key functions around building, distributing, and maintaining cubes).

However, as we noted in our introduction, even though this put Spendata leagues beyond the competition (as we still haven’t seen another solution with this level of security; cube subclassing with full inheritance; dynamic workspace, cube, and view creation; etc.), they didn’t stop there. In the rest of this article, we’ll discuss what’s new from the viewpoint of Spendata Competitors:

Spendata Competitors: 7 Things I Hate About You

Cue the Miley Cyrus, because if competitors weren’t scared of Spendata before, if they understand ANY of this, they’ll be scared now (as Spendata is a literal wrecking ball in analytic power). Spendata is now incredibly close to negating entire product lines of not just its competitors, but some of the biggest software enterprises on the planet, and 3.0 may trigger a seismic shift in how people define entire classes of applications. But that’s a post for a later day (though it should cue you up for the post that will follow this one on just precisely what Spendata 2.2 really is and can do for you). For now, we’re just going to discuss seven (7) of the most significant enhancements since our last coverage of Spendata.

Dynamic Mapping

Filters can now be used for mapping — and as these filters update, the mapping updates dynamically. You can reclassify on the fly, in real time, in a derived cube using any filter coin, including one dragged out of a drill-down in a view. Analysis is now a truly continuous process, as you never have to go back and change a rule, reload data, and rebuild a cube just to make a correction or see what happens under a reclassification.
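Conceptually (and only conceptually; filter coins are Spendata’s own construct), a filter doubling as a mapping rule looks like this in pandas terms:

```python
# Conceptual analogue of dynamic mapping: a saved filter acts as the mapping
# rule, so reclassification is just re-applying the filter. Invented data;
# this is not Spendata's API.
import pandas as pd

txns = pd.DataFrame({"vendor":   ["Dell", "Staples", "Dell"],
                     "gl_code":  ["6100", "6200", "6200"],
                     "category": ["Unmapped"] * 3})

# The "filter coin": a reusable predicate over transactions.
it_filter = (txns["vendor"] == "Dell") & (txns["gl_code"] == "6100")

txns.loc[it_filter, "category"] = "IT Hardware"  # mapping via the filter
print(txns)
```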

View-Based Measures

Integrate any rolled up result back into the base cube on the base transactions as a derived dimension. While this could be done using scripts in earlier versions, it required sophisticated coding skills. Now, it’s almost as easy as a drag-and-drop of a filter coin.

Hierarchical Dashboard Menus

Not only can you organize your dashboards in menus and submenus and sub-sub menus as needed, but you can easily bookmark drill-downs and add them under a hierarchical menu — this makes it super easy to create point-based walkthroughs that tell a story — and then output them all into a workbook using Spendata’s capability to output any view, dashboard, or entire workspace as desired.

Search via Excel

While Spendata eliminates the need for Excel for data analysis, the reality is that Excel is where most organizational data is (unfortunately) stored, how most data is submitted by vendors to Procurement, and where most Procurement professionals are the most comfortable. Thus, in the latest version of Spendata, you can drag and drop groups of cells from Excel into Spendata, and if you drag and drop them into the search field, it auto-creates a RegEx “OR” that maintains the inputs exactly and finds all matches in the cube you are searching against.
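The described behavior is straightforward to sketch (this is an assumed reconstruction of the mechanics, not Spendata’s source): tab- and newline-separated cells become an escaped, exact-match alternation:

```python
# Minimal sketch of turning pasted Excel cells into an exact-match "OR" regex.
import re

pasted = "ACME CORP\tACME INC\nBLUE WIDGETS LLC"   # tab/newline-separated cells
cells = [c for c in re.split(r"[\t\n]", pasted) if c.strip()]

# re.escape keeps each input literal, so special characters can't misfire.
pattern = "|".join(re.escape(c) for c in cells)
print(pattern)  # ACME CORP|ACME INC|BLUE WIDGETS LLC

print(bool(re.search(pattern, "Vendor: ACME INC (invoice)")))  # True
```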

Perfect Star Schema Output

Even though Spendata can do everything any BI tool on the market can do, the reality is that many executives are used to their pretty PowerBI graphs and charts and want to see their (mostly static) reports in PowerBI. So, in order to appease the consultancies that have to support these executives who are (at least) a generation behind on analytics, Spendata added the ability to output an entire workspace to a perfect star schema (where all keys are unique and numeric) that is so good that many users see PowerBI speed up by a factor of almost 10. (As any analyst forced to use PowerBI will tell you, when you give PowerBI data that is NOT in a perfect star schema, it may not even be able to load the data, and its ability to work with non-numeric keys at any speed faster than you remember from an 8088 is nonexistent.)
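For readers who want the gist of a “perfect star schema” in code, here is a toy pandas sketch (invented table; the real export handles full workspaces) that factors each dimension into its own table with unique numeric surrogate keys:

```python
# Illustrative star-schema split: numeric keys in the fact table, one
# dimension table per attribute. Toy data, not the actual export code.
import pandas as pd

flat = pd.DataFrame({
    "vendor":   ["Acme", "Blue", "Acme"],
    "category": ["IT", "MRO", "IT"],
    "amount":   [10.0, 5.0, 7.0],
})

fact, dims = flat.copy(), {}
for col in ("vendor", "category"):
    codes, uniques = pd.factorize(flat[col])
    dims[col] = pd.DataFrame({f"{col}_id": range(len(uniques)), col: uniques})
    fact[f"{col}_id"] = codes          # unique numeric surrogate key
    fact = fact.drop(columns=[col])    # fact table keeps keys + measures only

print(fact)            # amount, vendor_id, category_id
print(dims["vendor"])  # vendor_id -> vendor
```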

Power Tags

You might be thinking “tags, so what?”. And if you are equating tags with a hashtag or a dynamically defined user attribute, then we understand. However, Spendata has completely redefined what a tag is and what you can do with it. The best way to understand it is as a Microsoft Excel cell on steroids. It can be a label. It can be a replica of a value in any view (that dynamically updates if the field in the view updates). It can be a button that links to another dashboard (or a bookmark to any drill-down filtered view in that dashboard). Or all of this. Or, in the next Spendata release, a value that forms the foundation for new derivations and measures in the workspace, just as you can reference an arbitrary cell in an Excel function. In fact, using tags together with the seventh new capability of Spendata (below), you can already build, usually in hours (at most), the very sophisticated what-if analyses on the fly that many providers have to custom build in their core solutions (taking weeks, if not months, to do so).

Embedded Applications

In the latest version of Spendata, you can embed custom applications into your workspace. These applications can contain custom scripts, functions, views, dashboards, and even entire datasets that can be used to instantly augment the workspace with new analytic capability, and if the appropriate core columns exist, even automatically federate data across the application datasets and the native workspace.

Need a custom set of preconfigured views and segments for that ABC Analysis? No sweat, just import the ABC Analysis application. Need to do a price variance analysis across products and geographies, along with category summaries? No problem. Just import the Price Variance and Category Analysis application. Need to identify opportunities for renegotiation post M&A, cost reduction through supply base consolidation, and new potential tail spend suppliers? No problem, just import the M&A Analysis app into the workspace for the company under consideration and let it do a company A vs. B comparison by supplier, category, and product; generate the views where consolidation would more than double supplier spend, or where switching a product from a current supplier to a lower cost supplier would save more than 100K; and surface opportunities for bringing on new tail spend suppliers based upon potential cost reductions. All with one click. Not sure just what the applications can do? Start with the demo workspaces and apps, define your needs, and if the apps don’t exist in the Spendata library, a partner can quickly configure a custom app for you.

And this is just the beginning of what you can do with Spendata. Because Spendata is NOT a Spend Analysis tool. That’s just something it happens to do better than any other analysis tool on the market (in the hands of an analyst willing to truly understand what it does and how to use it — although with apps, drag-and-drop, and easy formula definition through wizardly pop-ups, it’s really not hard to learn how to do more with Spendata than any other analysis tool).

But more on this in our next article. For The Times They Are a-Changin’.

1 Duey, Clutterbuck, and Howell keeps Dewey, Cheatem, and Howe on retainer … it’s the only way they can make sure you pay the inflated invoices if you ever wake up and realize how much you’ve been fleeced for …

Algorhythm: Twenty Years Later and the Optimization Rhythm Has Not Missed a Beat

It’s been almost a decade since we covered Algorhythm (Part I and Part II), and that’s because the last time the doctor caught up with them mid-decade, they were deep into creating their new accelerated cloud-native rapid application development platform, called AppliFire, with native mobile-first development support capabilities. And while it was very interesting, it was not Supply Chain focussed at the time, and not the core of what SI covers.

But fast forward about five years, and Algorhythm has re-built their entire Supply Chain Planning, Optimization, and Execution Management platform on top of this new development platform and now has one of the most modern cloud-native suites on the market — one that not only has the capabilities of big-name peers like Kinaxis, E2 Open, and Infor, but also the ability to run on any mobile platform with seamless integration across modules and platforms.

And their optimization capabilities are still among the best on the market, possibly rivaled only by Coupa Sourcing Optimization (powered by their Trade Extensions acquisition) — as demonstrated by the fact that, whether you are dealing with a demand plan, manufacturing plan, production plan, supply plan, logistics plan, route plan, or any other plan supported by the system, their system can find the optimal solution no matter how many demand locations, plants, sites, suppliers, products, lanes, etc. are involved — and can do so rapidly if the user doesn’t overload the scenario with unnecessary constraints. (Even without constraints, these models can get huge, as the doctor knows all too well, and yet they solve rather rapidly in the Algorhythm platform.)
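To give a flavour of the class of problem these planners solve (Algorhythm’s solvers and models are proprietary; this is just a textbook transportation problem in scipy), consider shipping from two plants to three demand locations at minimum lane cost:

```python
# Toy transportation problem: 2 plants x 3 demand locations, minimize
# total lane cost subject to plant capacity and location demand.
from scipy.optimize import linprog

cost = [4, 6, 9,          # plant 1 -> locations 1..3
        5, 3, 8]          # plant 2 -> locations 1..3
supply = [60, 50]         # plant capacities
demand = [30, 40, 40]     # location demands

A_ub = [[1, 1, 1, 0, 0, 0],   # plant 1 ships at most its supply
        [0, 0, 0, 1, 1, 1]]   # plant 2 ships at most its supply
A_eq = [[1, 0, 0, 1, 0, 0],   # each location receives exactly its demand
        [0, 1, 0, 0, 1, 0],
        [0, 0, 1, 0, 0, 1]]

res = linprog(cost, A_ub=A_ub, b_ub=supply, A_eq=A_eq, b_eq=demand,
              bounds=[(0, None)] * 6, method="highs")
print(res.x.reshape(2, 3))  # optimal quantity on each lane
print(res.fun)              # total cost
```

Real demand, production, and route plans add integer variables, time periods, and far more constraints, which is where the solver quality and model-size handling described above start to matter.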

The Algorhythm suite of twelve (12) integrated Supply Chain Planning, Optimization, and Execution Management Modules is not only one of the most complete end-to-end suites on the market, but one of the most seamlessly integrated as well. It’s very easy to take the output of the “Demand Planner” (which allows the entire organization to collaborate on forecasts) and pump it into the “Manufacturing Network” (which integrates with the “Distribution Network” and “Inventory Planner”) to create a manufacturing (site) plan, then pump that into the “Production Planner” to create a manufacturing schedule by site, then push that into the “Logistics Planner” to determine the best logistics plan, and then push that output into the “Route Planner” to optimize lanes, and so on. The suite also includes a “Supply Planner” to optimize individual shipments for JIT manufacturing; an S&OP planner to help sales and operations balance demand vs. supply; a “Manufacturing Execution System” to break PDI (Production Parameters) down, fetch actual production data, and validate results; a “Distributor Ordering” Management module to automatically create distributor orders across thousands of distributors; and a “Beat Planner” to optimize last-mile delivery for the outbound supply chain for distributors or CPG companies in geographies — like Asia — where the last mile is difficult (due to the inability to send large trucks, the need to restock daily, etc.). With the exception of strategic sourcing and initial supplier selection, they basically have inbound demand to outbound supply covered in terms of supply chain optimization and management once you know the suppliers you are going to buy from and the products that are acceptable to you.

The UI is homogeneous across the suite, and modern web-based components such as drill-down menus, buttons, pop-ups, and so on make the suite easy to use — especially when it comes to tables and reports. The application supports built-in dynamic Excel-like grids and tables, which can be altered dynamically on the fly, with built-in pagination to keep navigation and view control manageable, especially on tablets (for users on the go). It also supports standard (Excel-like) charts and graphs with drill-down, as well as modern calendar and interactive Google Map components. Navigation is easy, with bread-crumb trails so a user doesn’t get lost, and response time is great. It’s powerful and usable, which is exactly what you need to manage your supply chain on the go.

There’s a reason they have some of the biggest names in the F500 as clients, and that reason is their unique combination of

  1. power,
  2. ease of use, and
  3. an understanding of Asian supply chain needs (especially around last-mile delivery).

The last point is especially relevant, as many of the big-name American (and even German) supply chain companies don’t really understand the unique complexities of (last-mile) supply chains in India and Asia. However, Algorhythm’s unique capability, combined with their understanding, has made their platform a force to be reckoned with in a market that is one of the hardest in the world. And as a result, they have built a platform that is more than sufficient for every other market as well. the doctor is looking forward to seeing more of Algorhythm outside of the Asian market as, at least in his view, the supply chain market in general needs a good kick in the pants, as innovation therein has considerably lagged the Source-to-Pay market that we primarily cover here on SI.

So if you need a good Supply Chain Orchestration solution, the doctor strongly encourages you to check out Algorhythm … you won’t be disappointed.

Don’t Throw Away That Old Spend Cube, Spendata Will Recover It For You!

And if you act fast, to prove they can do it, they’ll recover it for free. All you have to do is provide them 12 months of data from your old cube. More on this at the end of the post, but first …

As per our article yesterday, many organizations, often through no fault of their own, end up with a spend cube (filled with their IP) that they spent a lot of money to acquire, but which they can’t maintain — either because it was built by experts using a third party system, built by experts who did manual re-mappings with no explanations (or repeatable rules), built by a vendor that used AI “pattern matching”, or built by a vendor that ceased supporting the cube (and simply provided it to the company without any of the rules that were used to accomplish the categorization).

Such a cube is unusable, and unless maintainable rules can be recovered, it’s money down the drain. But, as per yesterday’s post, it doesn’t have to be.

  1. It’s possible to build the vast majority of spend cubes on the largest data sets in a matter of days using the classic secret sauce described in our last post.
  2. All mappings leave evidence, and that evidence can be used to reconstruct a new and maintainable rules set.

Spendata has figured out that it’s possible to reverse engineer old spend cubes by deriving new rules by inference, based on the existing mappings. This is possible because the majority of such (lost) cubes are indirect spending cubes (where most organizations find the most bang for their buck). These can often be mapped to 95% or better accuracy using just Vendor and General Ledger code, with outliers mapped (if necessary) by Item Description.

And it doesn’t matter how your original cube was mapped — keyword matching algorithms, the deep neural net du jour, or Elves from Rivendell — because supplier, GL-code, and supplier-and-GL-code patterns can be deduced from the original mappings, and then poked at with intelligent (AI) algorithms to find and address the exceptions.
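The inference idea can be sketched in a few lines (a toy reconstruction of the general approach, not Spendata’s algorithm): treat the majority category for each Vendor + GL-code pair as the recovered rule, and surface the disagreements as exceptions:

```python
# Toy sketch of rule recovery by inference from an already-mapped cube.
import pandas as pd

old_cube = pd.DataFrame({
    "vendor":   ["Dell", "Dell", "Staples", "Dell"],
    "gl_code":  ["6100", "6100", "6200", "6100"],
    "category": ["IT Hardware", "IT Hardware", "Office", "Misc"],  # one outlier
})

# For each (vendor, gl_code) pair, the majority category becomes the rule.
rules = (old_cube.groupby(["vendor", "gl_code"])["category"]
         .agg(lambda s: s.mode().iloc[0]))
print(rules)  # (Dell, 6100) -> IT Hardware; (Staples, 6200) -> Office

# Transactions the recovered rules disagree with are the exceptions to review.
check = old_cube.merge(rules.rename("rule_cat").reset_index(),
                       on=["vendor", "gl_code"])
print(check[check["category"] != check["rule_cat"]])  # the "Misc" outlier
```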

In fact, Spendata is so confident of its reverse-engineering that — for at least the first 10 volunteers who contact them (at the number here) — they’ll take your old spend cube and use Spendata (at no charge) to reverse-engineer its rules, returning a cube to you so you can see the results (as well as the reverse-engineering algorithms that were applied) and the sequenced plain-English rules that can be used (and modified) to maintain it going forward.

Note that there’s a big advantage to rules-based mapping that is not found in black-box AI solutions — you can easily see any new items at refresh time that are unmapped, and define rules to handle them. This delivers two benefits.

  1. You can see if you are spending where you are supposed to be spending against your contracts and policies.
  2. You can see how fast new suppliers, products, and human errors are entering your system. [And you can speak with the offending personnel in the latter case to prevent these errors in the future.]

And mapping this new data is not a significant effort. If you think about it, how many new suppliers with meaningful spending does your company add in one month? Is it five? Ten? Twenty? It’s not many, and you should know who they are. The same goes for products. Chances are you’ll be able to keep up with the necessary rule additions and changes in an hour a month. That’s not much effort for having a spend cube you can fully understand and manage and that helps you identify what’s new or changed month over month.

If you’re interested in doing this, the doctor is interested in the results, so let SI know what happens and we’ll publish a follow-up article.

And if you take Spendata up on the offer:

  1. take a view of the old cube with 13 consecutive months of data
  2. give Spendata the first 12 consecutive months, and get the new cube back
  3. then add the 13th month of data to the new cube to see what the reverse-engineered rules miss.
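In sketch form, that validation protocol is just a train/test split on months (toy data; rule keys as in the inference sketch above):

```python
# Toy sketch of the 12-months-train / 13th-month-test check.
import pandas as pd

data = pd.DataFrame({
    "month":   [1, 2, 3, 13, 13],
    "vendor":  ["Dell", "Dell", "Staples", "Dell", "NewCo"],
    "gl_code": ["6100", "6100", "6200", "6100", "7000"],
})

train, test = data[data["month"] <= 12], data[data["month"] == 13]
rule_keys = set(zip(train["vendor"], train["gl_code"]))  # recovered rule keys

covered = test.apply(lambda r: (r["vendor"], r["gl_code"]) in rule_keys, axis=1)
print(f"month-13 coverage: {covered.mean():.0%}")  # 50%: only NewCo is unmapped
```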

You will likely find that the new rules catch almost all of the month 13 spending, showing that the maintenance effort is minimal, and that you can update the spend cube yourself without dependence on a third party.