Category Archives: Spend Analysis

The Future of Spend Analysis

In this post, we welcome back Eric Strovink of BIQ [acquired by Opera Solutions, rebranded ElectrifAI].

At the incessant prodding of Michael Lamoureux, here are some prognostications for Spend Analysis in 2008.

  1. The distinction between data-warehouse-based spend analysis tools and Business Intelligence tools will erode, as it is already doing. The only requirements on the BI side are a mapping engine that can handle spend transactions reasonably, and an improved ability to import data from sources other than the ERP system. A third party may well step up to the plate and offer these (and perhaps other) add-ons for a particular BI platform, creating instant (and possibly fatal) competition for existing spend data warehouse suppliers.
     The consequences of this erosion will be more decisions in 2008 to use existing BI tools within the enterprise for spend visibility, rather than to acquire procurement-centric tools from an e-sourcing vendor. These decisions may well be made by IT or by Finance, perhaps overruling or ignoring Procurement completely. This erosion will mark the beginning of the end for big-ticket data warehouse spend analysis implementations.
  2. The distinction between spend analysis tools and spend data warehouse tools will widen. It is impossible to perform ad hoc analysis with a fixed-schema data warehouse, especially in a space like Procurement where many different views of the data are required before useful insight can be obtained. BI solutions to spend analysis will suffer from this problem as well.
     This will mean additional Procurement business in 2008 for data analysis vendors such as SAS. However, the real keys to capturing the spend analysis business will be the flexibility of the analysis tool, its approachability by business users, and its ability to quickly create and manipulate large datasets. Data analysis providers (as well as decision optimization providers) will remain niche players until business users can operate their products without assistance.
  3. OLAP technology is awkward, and presents many limitations for spend analysis. This will become increasingly apparent as more general purpose datasets (datasets which may seem simple, but whose organization is far more complex than mere A/P spend cubes) are considered by spend analysts. Spend analysis providers will need to provide simpler and easier ways to build datasets that overcome (or shield) business users from the limitations of the underlying OLAP data organization.
     Thus, we will see in 2008 the beginnings of an ability to cross-link loosely-related datasets, to build and control those linkages quickly and easily, and to create cross-dataset analyses and reports from a unified perspective. Although these advanced capabilities are already present from enterprise database vendors (and are featured prominently in their marketing literature), from a spend analysis perspective they are just laboratory curiosities given the huge effort, expertise, and expense required to set them up and maintain them. The breakthrough for spend analysis will come when ordinary business users with no IT skills can build and explore disparate and loosely-coupled datasets just as easily as they can link spreadsheets together.
  4. Invoice analysis and commodity-specific analysis, such as that being performed by The Buying Triangle and Opera Solutions, will dominate the “what’s new” frontier for spend analysis practitioners during 2008. Invoice analysis carries with it the promise of immediate refunds or other consideration from incumbent suppliers, and the consequent ability to fund entire spend management efforts through careful analysis of past contract compliance.
     Invoice analysis means negotiating with incumbent suppliers from a position of knowledge and strength, while the relationship is good, rather than dismissing the supplier outright, or blind-siding the supplier with an RFP. Even if the ultimate decision is to go to the open market, or to dismiss the supplier at the end of the current contract, there is money on the table to be recovered now. There is no reason not to go after it.

How To Get The Most From Your Spend Analysis System

Simply put, systematize the tactical and free up your power sourcer(er)s to focus on the strategic. You should automate everything you can from an extraction, classification, categorization, amalgamation, enrichment, and standard financial reporting perspective so that your team can spend the bulk of their time slicing, dicing, refining, dimensionalizing, and re-classifying your spend data in new and creative ways using a true spend analysis tool in search of that next big opportunity. Sometimes the gold nuggets are there for the picking in the shallow stream, but more often you have to mine deep into the mountain to find the vein.

Focus on streamlining the following:

  • Extraction from the External Data Systems
    This will likely require your IT or accounting team to write scripts that automate the extraction of relevant data from each ERP or other data system.
  • Cleansing, Classification, and Import
    Your central repository should cleanse and classify new transactions automatically, based on the classification rules that you (or your vendor or services partner, on your behalf) have created. You will need to review the results of this classification for new and existing spending, and you will need to update your vendor and GL masters, but you should ensure that these processes are managed consistently and smoothly.
  • Baseline Reports and Spend Reports
    Your spend analysis tool should be set up to create the reports and summaries that your finance teams and executives will want to see on every data refresh.
  • “Low Hanging Fruit” Opportunity Analysis
    Your spend analysis tool should be configurable to run “low hanging fruit” opportunity analysis reports that look for variances between actual spend and estimated spend based on external benchmark results. Those external benchmarks must be managed and updated on a regular basis to keep these reports useful. (A minimal sketch of such a variance report follows this list.)
  • Integrate with your Contract Repository
    Maverick spending can’t be positively identified until you’ve eliminated spend which might be on contract. Integrating contracts with your spend analysis system will enable you to definitively identify off-contract spend (although be careful: the inverse is almost certainly NOT true, since true compliance requires much greater insight than is available from the A/P level).
  • Run Maverick Spending Reports
    In line with the above, make your life easy by running maverick spend reports on every refresh.
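
As promised above, here’s a minimal sketch (in Python with pandas) of the kind of automated variance report behind the “low hanging fruit” bullet. The column names, benchmark prices, and the 5% flag threshold are illustrative assumptions, not a prescription:

```python
# A minimal "low hanging fruit" variance report: compare actual spend per
# commodity against what external benchmark unit prices say it should be.
# The transactions and benchmarks below are made-up examples.
import pandas as pd

transactions = pd.DataFrame([
    # commodity,        vendor,        qty, total_spend
    ("Office Supplies", "Acme Supply", 100, 1450.00),
    ("Office Supplies", "Acme Supply", 200, 2600.00),
    ("Copy Paper",      "PaperCo",     500, 2900.00),
], columns=["commodity", "vendor", "qty", "total_spend"])

# External benchmark unit prices -- these must be refreshed regularly,
# or the report quietly goes stale.
benchmarks = {"Office Supplies": 12.00, "Copy Paper": 5.00}

report = transactions.groupby("commodity").agg(
    qty=("qty", "sum"), actual=("total_spend", "sum"))
report["expected"] = report["qty"] * report.index.map(benchmarks)
report["variance"] = report["actual"] - report["expected"]
report["variance_pct"] = 100 * report["variance"] / report["expected"]

# Flag commodities running more than 5% over benchmark, biggest gap first
print(report[report["variance_pct"] > 5].sort_values("variance", ascending=False))
```

Run something like this on every refresh and the obvious variances surface themselves; the same skeleton, filtered on an approved-vendor list instead of benchmarks, gives you the maverick spend report.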

Once the tactical grunt work is out of the way, make sure your senior sourcers have access to a leading analysis tool like BIQ [acquired by Opera Solutions, rebranded ElectrifAI] (which is also used by Iasta [acquired by Selectica, merged with b-Pack, rebranded Determine, acquired by Corcentric] as part of their end-to-end sourcing suite), and let them dive into the data and find savings opportunities you never even knew you had, like:

  • Overspending on Computers and Peripherals
    Many sellers and resellers love to give you “best price” guarantees because they know that, as long as they don’t raise the price during the contract term, you’ll probably never notice when they charge you $995 in 6 months for the same system you paid $1,000 for today. Given that electronics typically depreciates 3% (or more) a month, compounding, it’s easy to calculate that you probably shouldn’t be paying more than about $833 for the same configuration in six months.
    You do this by loading in historical market pricing for the last year and comparing what you’ve spent to what you should have spent. If you have a “best price guarantee”, you use the generated report to go to your vendor and demand a refund. (A toy version of this check follows this list.)
    This happens so often, and many big companies overspend so much, that there are some boutique consultancies that pretty much make their living just finding overcharges on commodities such as electronics equipment and office supplies.
  • Fraud Reduction
    This could take the form of spending on a commodity in a department that should never be buying such a commodity (such as Xboxes by accounting) or spending on non-approved or banned suppliers (such as to a company owned by a friend of an employee or, worse, the employee himself). It could also take the form of finding fraudulent charges made by your employees (like the sales rep who submitted the same dinner receipt for $484 six months in a row under “client entertainment” or the executive who likes to charge his weekly lap dances to his corporate credit card).
  • Loss Prevention
    Did you know that sometimes it’s more profitable to let your customers keep old equipment for which leases have expired or which has broken down and is still under warranty? If the cost of reclaiming it, inventorying it, and then re-selling it or auctioning it is more than what you will realize, it’s cheaper to let the customer keep it. At least one insurance firm is saving a fortune by using their spend analysis tool to determine when it’s more cost effective to let the customer keep the damaged car. If it’s going to cost $1,000 to transport and process but only sell for an average of $500 at auction, what’s the point of taking it?
  • Invoice Analysis or Compliance, Compliance, Compliance
    How many spend cubes should you build? The answer is, “many.” That’s because invoice analysis (required for true compliance) requires analyzing invoice data that can vary in format and content between suppliers.
    As Jack Welch once asked, “How do you know you’re getting the pricing you contracted for?” If this question reduces your procurement staff to incoherent splutters, as it usually does, you’ll understand the need for invoice analysis!
  • On Beyond Compliance
    Suppose that you find that the price for 5 pound express mail packages was correctly calculated by the freight vendor in every instance. But are you correctly estimating how many 5 pound express mail packages you are sending? Is the loading dock staff perhaps forgetting to fill in the weight fields on one-pound packages?
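
And, as promised in the first bullet above, here’s a toy version of the best-price-guarantee check, assuming the 3% monthly depreciation compounds. The figures are the ones from the example; the function names are mine:

```python
# A toy best-price-guarantee check under compounding monthly depreciation.
MONTHLY_DEPRECIATION = 0.03

def fair_price(price_at_signing: float, months_elapsed: int) -> float:
    """Market price implied by compounding monthly depreciation."""
    return price_at_signing * (1 - MONTHLY_DEPRECIATION) ** months_elapsed

def refund_due(price_paid: float, price_at_signing: float, months: int) -> float:
    """What to claim under a best-price guarantee (0 if nothing is due)."""
    return max(0.0, price_paid - fair_price(price_at_signing, months))

# The $1,000 system from the text: six months on it should cost about $833,
# so a $995 invoice leaves roughly $162 on the table.
print(f"fair price after 6 months: ${fair_price(1000, 6):.2f}")
print(f"refund to demand:          ${refund_due(995, 1000, 6):.2f}")
```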

Up Next: Contract Management Integration – It’s Easier Than You Think


YOU WILL COMPLY!
Seven of Nine

So You Want To Do Spend Analysis?

Now that you’ve read my pieces on The Future of Sourcing and Spend Analysis Today, you know that spend analysis is key to your continued success when it comes to year-over-year savings. You want to get on with it, but you’re not sure where, or how, to start. In this post, I’ll attempt to answer that question by providing you with a step-by-step process you can use to get your spend analysis effort under way and keep more of those corporate dollars in the corporate coffers, where they belong.

Before I continue, I’d like to point out that this blog isn’t your only source of great information on spend analysis (even though it does have over 20 posts on the subject). There’s also the “Spend Analysis and Opportunity Assessment: There’s Gold in Them There Hills … Of Data” wiki-paper over on the eSourcing Wiki [WayBackMachine] (which, in full disclosure, I should point out has yours truly and Eric Strovink (of BIQ) among the co-authors), as well as the references in the bibliography it maintains (which, in the spirit of openness, includes links to public white papers by Ketera [acquired by Deem] and Zycus, among others).

Step 1: Locate Your Data
The first thing you need to do is figure out where all of your procurement data resides. This is harder than you think, even if you have an ERP system, because chances are that a significant quantity of the data you need is NOT in the ERP system. Some of it will be in the ERP system, some of it will be in other accounting systems (most large organizations have more than one ERP system, or at least more than one instance – a lot more in some cases), some in the AR system, some in your P-card systems, some in your T&E systems, etc. … you get the point. If you don’t know where your data is, you can’t extract what you need – and without the right data, your effort will fail.

Step 2: Adopt a Taxonomy
Once you have located your data, you need to figure out how you are going to integrate it. The commodity structure that is going to be the foundation of your spend analysis efforts should be based on a standard taxonomy. This can be a universal standard such as UNSPSC or your own custom taxonomy (there is considerable debate as to which is preferable, but there is general agreement that the taxonomy should be modified to your particular considerations and needs, not the other way around). As long as you can easily map all of the data in your disparate systems to the taxonomy, and as long as the taxonomy doesn’t interfere with the proper grouping of spend for sourcing and procurement purposes (which is sometimes the case with unmodified UNSPSC), that’s what counts.

[Note: Steps 3, 4, and 5 — and usually most of step 2 as well — are often performed by Spend Analysis vendors on your behalf. You would be wise not to overpay for those services; and if you do avail yourself of them, you should understand what is being done, even if you don’t do it yourself.]

Step 3: Centralize the Data in a Single Repository

Step 3A: Define a Master Transaction Record
Across all the various systems that you are integrating into your spend cube, there are some data fields that are similar, and some that have no equivalents. You’ll need to define a “master” transaction format that can accommodate the data from all of your disparate systems. It will have some fields to which most of the feeds will contribute; but it will have other fields to which only one feed contributes. Thus, there will be many more fields in your “master” record than in any of the individual transaction sources that you are including. Note that when you combine “like” fields together from disparate systems, you have to ensure that their values are unique — which means concatenating a system ID to the field. For example, GL code #37 in System A might mean something entirely different in System B; thus, the System A GL code should be changed to “SysA-37”, and the System B GL code should be changed to “SysB-37”. That way, the two (different) 37 codes won’t be erroneously grouped together.

Note that this effort requires a sophisticated “data translation” tool — the “T” of the (in)famous “ETL” that everyone always talks about. The data translation tool should be capable of column (field) manipulation, computation of new columns as a function of existing columns, addition and deletion of columns, and so on. And, the translation tool needs to produce a script that can be re-run again and again, since this translation will need to occur on every data refresh.

Note also that the Master Transaction Record will contain an interesting new field that isn’t present in any of the data feeds — a “source” field. This will enable you to dimensionalize (slice) the data by a single system, or by all the source systems, or by any subset of the systems.
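
To make 3A concrete, here’s a minimal sketch of such a translation script in plain Python. The feed layouts and field names are hypothetical, but it illustrates the three essentials: a master record wider than any one feed, the system-ID prefix that keeps the two GL code 37s apart, and the new “source” field. And because it’s a script, it can be re-run verbatim on every refresh:

```python
# A minimal translation ("T") step: normalize each feed onto a master
# record, prefix shared codes with a system ID so "GL 37" from two systems
# can't collide, and stamp each row with its source system.
import csv

MASTER_FIELDS = ["source", "gl_code", "vendor", "amount", "invoice_no", "pcard_ref"]

def translate(rows, system_id, field_map):
    """Map a feed's native columns onto the master record."""
    for row in rows:
        master = dict.fromkeys(MASTER_FIELDS, "")  # fields this feed lacks stay blank
        master["source"] = system_id               # new field: slice by source system
        for native, target in field_map.items():
            master[target] = row[native]
        master["gl_code"] = f"{system_id}-{master['gl_code']}"  # e.g. "SysA-37"
        yield master

# Each source system contributes only the fields it actually has.
sys_a = [{"GLCode": "37", "Vendor": "Acme", "Amt": "120.00", "Inv": "A-881"}]
sys_b = [{"gl": "37", "payee": "Acme Inc", "total": "75.50"}]

merged = list(translate(sys_a, "SysA", {"GLCode": "gl_code", "Vendor": "vendor",
                                        "Amt": "amount", "Inv": "invoice_no"}))
merged += translate(sys_b, "SysB", {"gl": "gl_code", "payee": "vendor",
                                    "total": "amount"})

with open("master_transactions.csv", "w", newline="") as f:
    writer = csv.DictWriter(f, fieldnames=MASTER_FIELDS)
    writer.writeheader()
    writer.writerows(merged)
```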

Step 3B: Collect Related Information
This is typically a Vendor Master, or a GL description table, or a Cost Center breakdown, either maintained by one or more of the ERP systems, or maintained independently somewhere else in the enterprise. Just as with the Master Transaction Record, duplicate tables from multiple sources must be merged into a “master” table using the same methodology as in 3A.

Step 3C: Build the Initial Cube
In this step, all of the data that you’ve assembled and translated is loaded into the Spend Analysis system. Files containing related information are linked to the Master Transaction Record. One or more groups of Master Transaction Records are loaded into the system. Then, data dimensions (columns) within your data are defined, and hierarchies (implicit and explicit) created. Measures (quantities that are rolled up in the cube) are also identified. Finally, the cube is put together in some initial fashion (this varies by spend analysis vendor; sometimes this initial cube is available to you immediately for preliminary analysis, sometimes it is not).
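
As a toy illustration of the rollup idea (a real spend analysis system does far more), here “amount” is the measure, and source and GL code are the dimensions it gets rolled up along; the data is made up:

```python
# A toy cube: the measure ("amount") rolled up along two dimensions
# ("gl_code" and "source"), with grand totals on both axes.
import pandas as pd

master = pd.DataFrame([
    ("SysA", "SysA-37", "Acme Corp", 120.00),
    ("SysB", "SysB-37", "Acme Inc",   75.50),
    ("SysA", "SysA-12", "PaperCo",   310.00),
], columns=["source", "gl_code", "vendor", "amount"])

cube = pd.pivot_table(master, values="amount", index="gl_code",
                      columns="source", aggfunc="sum",
                      margins=True, margins_name="Total")
print(cube)
```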

The tools made available to users for step 3 processes vary; it is fairly unusual for all of them to be accessible to business users, although that is an absolute requirement for power users (see Step 7).

Step 4: Family the Data
If there are multiple ERP systems being combined, then the “GL” column in the Master Transaction Record (as well as others) will contain multiple instantiations of pretty much the same thing — for example, several different varieties of “office supplies” or “contract labor” and so on. It’s necessary, therefore, to create an “uber” GL — a grouping of like GL codes into logical categories, so as to avoid redundancies later. Similarly, the Vendor dimension must also be “familied” so that the (typically many) entries made across all the ERP systems for a particular vendor are grouped together. Familying should be done for any dimension that needs it; cost center is another candidate.
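
Conceptually, familying is just a maintained mapping from raw values to family values. A minimal sketch, with invented vendors and codes:

```python
# Familying: collapse the many spellings of one vendor, and the
# per-system GL codes, into single logical families. The maps are
# invented here; in practice they are built once and then maintained.

vendor_family = {
    "Acme Corp": "Acme", "Acme Inc": "Acme", "ACME CORPORATION": "Acme",
    "PaperCo": "PaperCo", "Paper Co LLC": "PaperCo",
}

gl_family = {  # the "uber" GL: like codes from different systems grouped
    "SysA-37": "Office Supplies", "SysB-37": "Office Supplies",
    "SysA-12": "Contract Labor",
}

def family(txn: dict) -> dict:
    """Attach family-level dimensions. Unknowns land in 'Unfamilied'
    so they surface for review rather than silently disappearing."""
    txn["vendor_family"] = vendor_family.get(txn["vendor"], "Unfamilied")
    txn["gl_family"] = gl_family.get(txn["gl_code"], "Unfamilied")
    return txn

print(family({"vendor": "Acme Inc", "gl_code": "SysB-37", "amount": 75.50}))
```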

Step 5: Map the Data
Once the key dimensions are familied, it’s time to map spending to the taxonomy you chose in Step 2. The result of the mapping phase is a set of “mapping rules” which constitute a knowledge base for how to assign spending to the taxonomy. Once the mapping rules are created, when new spend is added to the dataset, the rules are applied to each new transaction, and that transaction is moved to the appropriate taxonomy bucket automatically. Spend Analysis vendors vary on their approaches to creating the mapping rules; some sport automated rules generation systems commingled with manual correction, and perform this service for their customers; others allow customers or third parties to build their own rules. In any event, the end result is a spend cube in which it is finally possible to determine how much was really spent in a particular commodity area (a key piece of information that is not available from ERP systems, and certainly not available when there are multiple ERP systems in the enterprise).
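
In its simplest form, such a knowledge base is an ordered list of rules, each tried against every incoming transaction until one matches; whatever falls through queues up for a human to review and turn into new rules. A toy sketch, with invented rules and taxonomy buckets:

```python
# Mapping rules as data: (field, regex, taxonomy bucket), first match wins.
# The rules and buckets are illustrative only.
import re

RULES = [
    ("vendor_family", r"^Acme$",            "Facilities > MRO Supplies"),
    ("gl_family",     r"^Office Supplies$", "Indirect > Office Supplies"),
    ("description",   r"laptop|notebook",   "IT > Hardware > Computers"),
]

def map_transaction(txn: dict) -> str:
    for field, pattern, bucket in RULES:
        if re.search(pattern, txn.get(field, ""), re.IGNORECASE):
            return bucket
    return "Unmapped"  # exception queue: a human turns these into new rules

new_spend = [
    {"vendor_family": "Acme", "gl_family": "Office Supplies"},
    {"vendor_family": "Unfamilied", "description": "Dell laptop 14in"},
    {"vendor_family": "Unfamilied", "description": "consulting retainer"},
]
for txn in new_spend:
    print(map_transaction(txn), "<-", txn)
```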

Step 6: Pick the Low Hanging Fruit
These days, everyone wants a quick win – and there’s no quicker way to get a win than to drill around a spend cube for the very first time and look for obvious indications of trouble. These include incorrect vendor density (too many suppliers, or too few suppliers for a particular commodity); high spend rates for certain commodities (such as office supplies) compared to similar companies; and so-called “bypass” spending — that is, spending that is not with approved vendors, or that is clearly off-contract. Some Spend Analysis vendors and consultants provide standard reports to assist with this process.

Step 7: Acquire a Real Spend Analysis Tool and Let Your Power Users Loose!
As I pointed out in Spend Analysis Today, a real spend analysis tool is one that truly gives the user the ability to “play” with the underlying OLAP database. For starters, a user should be able to define their own cubes that consist of any dimensions they want, re-order and re-structure the dimensions of the cube at any time, dynamically create their own reports, analyze multiple dimensions simultaneously using multidimensional extracts, define and re-define the classification rules dynamically, and populate their own models with data from the spend cube.

Step X: Get a Good Consultant
At any point during the process, if you are unsure about what to do, you should find a consultant who specializes in spend analysis for a living to help you out. This doesn’t mean calling up your favorite Big-5 consulting firm and asking them to send over their best guy. Rather, it means seeking out the boutiques that do it for a living day-in and day-out and have truly mastered the process (like The Buying Triangle). The real masters will be able to analyze your spend, compare it to current contracts and benchmarks, and find money owed to you – money that you are eligible for right now!

Up Next: How To Get The Most From Your Spend Analysis System


It’s your data. Use it!

Spend Analysis Today

The state of spend analysis today, despite all the frenzied M&A activity between 2003 and 2005, is still a fractured and confusing one. There are about thirty vendors selling solutions in the marketplace, and only a handful are reselling or repackaging someone else’s solution. However, the big difference between today and a few years back is that there are only a couple of true stand-alone vendors left on the market, notably BIQ and Zycus, as most of the stand-alone vendors were swallowed up by the Big 6.

What makes it even more confusing is that, even though most of the vendors have their own solutions, it sounds like the vast majority are selling the same type of solution. Furthermore, based on all of the overlapping marketing, it appears that most of the vendors are trying to differentiate themselves based upon either the “intelligence” in their data classification algorithms, or the number of canned reports their application comes with – not on any obviously unique capabilities.

It is true that it’s impossible to do spend analysis without good data. Your data needs to be as complete and as accurate as possible (at least 90%, but preferably as close to 99% as possible), which means that any good spend analysis solution needs good ETL (extract, transform, load) and automated classification capabilities. But it’s also true that spend analysis is more than just classification and baseline reporting. Spend Analysis is about uncovering previously unknown savings opportunities. Such opportunities are not likely to be found with standard reports, since obvious opportunities are likely to have already been found and addressed by in-house analysts using basic SQL queries and simple reports. Thus, spend analysis must go beyond what a simple reporting engine can do, or what your average analyst can do with SQL, in order to be truly useful and find genuinely new savings opportunities.

In other words, creating an OLAP database on cleansed spend transactions will be a worthwhile effort the first time you do it, because you will be able to identify most of the obvious savings opportunities by way of variance and non-compliance. However, once you have addressed those “low hanging fruit” opportunities, there will be little residual value to the effort as it will simply report the success you have already achieved and fail to identify any new opportunities. In order to realize the true power of spend analysis, a user needs the ability to “play” with the OLAP database the same way she can currently “play” with the standard reports in Excel spreadsheets. It’s not about pivoting around the standard cube, but being able to create your own cube with your own data and your own dimensions and slice and dice those dimensions in any way you can dream up in your quest for that next savings opportunity.

When you find that opportunity, it’s about capturing the process used to derive it and re-applying that process, in an automated fashion, to similar commodities and categories (and to the same commodity and category again in the future) to make sure that the identified improvements get implemented and stay implemented, so that you realize the savings. This certainly requires that you have an instance of the application running a standard cube that is integrated with your contract management system and your procurement system to make sure you are continually buying on contract – but it also requires that you have the ability to build multiple cubes to address commodity-specific analyses and to address datasets that originate from sources other than the ERP system.

When you get right down to it, only two solutions on the market stand out – Zycus and BIQ [acquired by Opera Solutions, rebranded ElectrifAI] – the two last independent players. Zycus stands out because, in addition to the advanced extraction, cleansing, aggregation, and enrichment capabilities that you will find in the other Big 4 players (Emptoris [acquired by IBM, sunset in 2017], Ketera [acquired by Deem], Procuri [acquired by Ariba, acquired by SAP]), it has built a first-generation opportunity finder that goes beyond pre-packaged standard reports, integrating variance analysis and market intelligence to automatically identify your “low hanging fruit” opportunities, and it includes a pipeline-based workflow management process to attack and manage your initiatives.

BIQ (also available as part of Iasta [acquired by Selectica, merged with b-Pack, rebranded Determine, acquired by Corcentric] Smart Analytics) stands out because it is the only product on the marketplace that truly gives the user the ability to “play” with the OLAP database. In BIQ, each user has the ability to define their own cube, either on any of the “standard” dimensions in the centralized data warehouse (which could be another spend analysis platform or an ERP system) or on any dimension they want to define using BIQ’s capability to define new dimensions in near real time. Plus, the user can re-order the dimensions of the cube for reporting purposes at any time, dynamically create their own reports, and even analyze multiple dimensions simultaneously using treemaps (based on Shneiderman diagrams) and multidimensional extract capabilities. Finally, there’s BIQ’s unique ability to allow users to define and re-define the classification rules dynamically using a very powerful rules engine, and its forthcoming meta-rollup capability (programmatic rollup of rolled-up data).

In short, spend analysis is about the analysis, and, with the exception of BIQ, that’s a point the current (leading) vendors are failing to grasp. Like Aberdeen, they’re Lost in the Trees. Now it’s true that BIQ does not come with built-in facilities that will automatically classify 95%-plus of your spend, but spend classification is fundamentally not that hard to do. The secret sauce has been known by your leading consulting firms for years: map the vendors, map the GL codes, map the vendor and GL code combinations, and then create exception-based rules for whatever is left or whatever doesn’t map properly (a toy version of this cascade is sketched below). This is something that can be done by a tactical procurement agent or accounting clerk in a short time frame in even the largest of Fortune 100 companies – at a very reasonable cost. (So why are you paying hundreds of thousands of dollars for technology to do it for you?)
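
As a toy illustration of that recipe (all mappings invented), with the most specific match taking precedence:

```python
# The vendor / GL / combination cascade: most specific match first,
# then exception rules for whatever is left. All mappings invented.

combo_map  = {("Acme", "SysA-37"): "Facilities > MRO Supplies"}
vendor_map = {"Acme": "Indirect > Office Supplies"}
gl_map     = {"SysA-12": "Services > Contract Labor"}

def exception_rules(txn: dict):
    # Hand-written catch-alls for whatever doesn't map cleanly
    if txn.get("amount", 0) < 0:
        return "Adjustments > Credits"
    return None

def classify(txn: dict) -> str:
    return (combo_map.get((txn.get("vendor"), txn.get("gl_code")))
            or vendor_map.get(txn.get("vendor"))
            or gl_map.get(txn.get("gl_code"))
            or exception_rules(txn)
            or "Unmapped")

print(classify({"vendor": "Acme",  "gl_code": "SysA-37"}))  # combination wins
print(classify({"vendor": "Acme",  "gl_code": "SysB-99"}))  # falls to vendor
print(classify({"vendor": "NewCo", "gl_code": "SysA-12"}))  # falls to GL code
```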

There’s no excuse not to look at any tool that gives you better analysis capabilities when even a basic ETL tool can do what you need to do with a little bit of elbow grease up front. Especially since I’ve heard good arguments that automated classification does not really exist. After all, what are automated classifiers doing? They’re applying rules. Where did those rules come from? A human being. The only real difference between the big solutions with enhanced classification capabilities and the little solutions with basic classification capabilities is that the big solutions have rules that have been defined, or in the case of automatically derived rules, checked by experts based on years of doing manual spend analysis projects for their clients. (Furthermore, I haven’t seen a classifier yet that has not required heavy human intervention on the back end to correct mistakes – especially during the initial implementation. And, despite what the sales people would have you believe, this often takes just as much effort, if not more, than simply having a knowledgeable human define the rules in the first place.) The underlying technology, fundamentally speaking, is not that different. It’s true that some of the algorithms employed by the big players are a lot more advanced, but they are still based on rules and knowledge derived originally from a human. As I’ve said before, computers are not intelligent. They are just very good at doing the calculations they’ve been programmed to do.

Furthermore, even if you have a spend analysis tool already, there’s nothing stopping you from employing a dual-tool approach – a standard Big 4 (or Big 6 when you throw Ariba [acquired by SAP] and CGI into the mix) solution to automatically extract, cleanse, classify, amalgamate, and track all of your spend data from your various data sources in a standard cube setup and then a BIQ (-like) solution that can be delivered on-demand to the members of your strategic sourcing team to help them find the next great savings opportunity. The standard solution will be able to automatically create all of the reports that finance and the executive team want to see while the BIQ (-like) solution will give the power users on your strategic sourcing team the tool they need to uncover the next level of savings opportunities. Plus, a BIQ (-like) solution is on-demand and relatively inexpensive (for example, BIQ only runs your average organization between $3K and $6K a month for the sourcing team), especially compared to the realizable savings it will identify. It’s certainly something that should be considered.

Up Next: So You Want To Do Spend Analysis (7 Starting Steps)


Note that this isn’t to say that suite vendors like Ariba, Emptoris, Ketera, and Procuri (etc.) don’t have valuable solutions – they do (and I’ve even written about some of them on this blog in the past). It’s just that, on their own, their spend analysis solutions don’t deliver the analysis your sourcing team needs to go beyond the low hanging fruit – which is the key to achieving year-over-year savings (even if these systems do work great the first year). From a finance perspective, they are pretty good – centralized cube, standard reports, automated feeds, automated classification, etc. etc. – they just don’t have everything the power hitters on your sourcing team need today!


I have an opinion. How ’bout you?

The 2nd Sourcing Innovation Series – Let’s Get Analytical!

Spend Analysis. Decision Optimization. Cost Modeling. Almost since the beginning, these have been the six dirty words of strategic sourcing. Study after study has found that these techniques easily save 8% to 15% for just about any organization that spends more than $500M a year, and yet, on average, less than one fifth of companies out there have tried these technologies, and less than one tenth are using them. It’s like they’re taboo. Well, in the not-too-far-off future, the tables are going to turn, and instead of being the six dirty words, Spend Analysis Based Cost Modeling Decision Optimization are going to be the seven words of saving grace for tomorrow’s sourcing organization that wants to survive beyond the next decade. But the technology of tomorrow is not going to be the technology of today. First, though …

Why? There are numerous reasons this will happen, including the negative returns early adopters are seeing from reverse auctions, the forthcoming fall-out of the majority of first-generation supplier networks and marketplaces that still remain, and the eventual realization that contract management is not the holy grail if you don’t have a good contract in the first place. But the primary reason this will happen is the G-Word: Globalization. The effects we’re starting to see now are nothing like what’s going to come, especially since the majority of companies are unprepared!

Tactical job loss to outsourcing, rampant inflation in raw materials due to skyrocketing demand from developing countries, quality issues, and CSR (Corporate Social Responsibility), or should I say CSI (Corporate Social Irresponsibility), issues are only going to compound in the coming years. And, left unchecked, this is only going to push costs, as they say, through the roof of the nearest skyscraper!

The only way companies are going to be able to maintain costs, let alone achieve savings, is by getting a firm handle on costs and, more importantly, by identifying and achieving savings opportunities not previously explored. This is going to require an improved understanding of the cost drivers of what you are buying (cost modeling), an understanding of where variability exists, either within past buys or against market indices (spend analysis), and an understanding of what the best award scenarios are (optimization).

But it won’t be three applications at three different stages of the sourcing process, it will be one, and it will be at the beginning, center, and end of the sourcing process. Think about what CoExprise is doing for the management of contract manufacturing – integrating the important PLM, Sourcing, and Procurement aspects of complex assembly sourcing – it will be something like that. But instead of an Aravo-Iasta*1-Ketera*2 union for a specific domain, it will be an AprioriCombineNetBIQAkoya union for the generic product domain. And it will look like nothing you’ve seen before. Sourcing tomorrow will be quite different than sourcing today. The only question is, who are the brave souls that are going to lead the way?

*1 Iasta was acquired by Selectica, merged with b-Pack, rebranded Determine, acquired by Corcentric
*2 Ketera was acquired by Deem


The future’s coming hard and fast … and I’m gonna be on the freight train that meets it head on!