Why You Should Not Build Your Own e-Sourcing System, Part II: Spend Analysis

In Part I, we noted that Mr. Smith was right in his recent post, “thinking of building your own esourcing system please don’t”, over on Spend Matters UK, where he pleaded with organizations, and particularly those in the public sector, who thought they could build their own e-Sourcing system not to. We gave a host of reasons why only organizations whose core business is software development and delivery, specializing in Sourcing or Source-to-Pay, should even consider the possibility. We also agreed with Mr. Smith that any other organization that even considered the possibility was going to

  • waste OUR money building it,
  • waste exorbitant amounts of money keeping the system up to date and compliant with ever-shifting legislation, and
  • only feed those dangerous delusions at best (and possibly create an epic disaster worse than the Smug cloud that ruined South Park because, of the 11 greatest supply chain disasters of all time, 8 were caused by technology failures and 6 by software platform failures!)

But we know this isn’t enough to dissuade the smuggest and most deluded from considering the notion. So we’re going to dive in and address some of the difficulties that will have to be conquered, one primary module at a time, starting with spend analysis.

Even though every vendor and their dog thinks they can deliver a spend analysis system these days, the reality is that most vendors, including those with a lot of database and reporting experience, can’t. If vendors with significant experience in data(base) management and reporting can’t build a decent spend analysis system, what makes you think your organization can?

A spend analysis solution must be:

  • Powerful
    and support multiple spend analysis cubes, with derived and range dimensions, stored in public and private workspaces;
  • Flexible
    and support multiple categorization schemes, vendor and offering families, and user-defined filters and drill-downs;
  • Manageable
    with user-defined data mappings and cleansing rules, hierarchical rule priorities, and easy enrichment;
  • Open
    and easy to get data in, out, and mapped; and
  • Informative
    with built-in report libraries, a powerful report builder, and an intuitive report customization feature.

This is not easy. Let’s start with flexibility. Most vendors probably have their goods and services mapped against UNSPSC, which your buyers of domestic goods are familiar with. Globally traded goods, however, are probably mapped against HTS, which is what your tax division wants, and your organization probably has its own GL codings that are required to keep Accounts Payable happy. None of these categorization schemas is suitable for real spend analysis, so you will probably need to maintain at least four separate categorization schemas (for buyers, traders, accounts payable, and real analysts). If you think you can easily achieve this by slapping a report builder on top of an open source relational database, think again.
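
To make that concrete, here is a minimal sketch in Python, with entirely hypothetical field and schema names, of what “one transaction, four parallel classification schemes” looks like. Even this toy version shows that a single category column cannot serve the buyers, the trade group, Accounts Payable, and the analysts at the same time.

```python
# Minimal sketch (hypothetical names): one transaction, four parallel
# classification schemes, each owned by a different group and each changing
# on its own cycle. A single "category" column cannot serve them all.
from dataclasses import dataclass

@dataclass
class Transaction:
    supplier: str
    description: str
    amount: float
    unspsc: str = ""        # what the buyers of domestic goods recognize
    hts: str = ""           # what the tax/trade division needs for imports
    gl_code: str = ""       # what Accounts Payable reconciles against
    sourcing_cat: str = ""  # the analyst-defined category that drives sourcing

def classify(txn: Transaction, schema_maps: dict[str, dict[str, str]]) -> Transaction:
    """Look the transaction up in every schema; a miss in any one of them
    becomes an exception that someone has to resolve by hand."""
    key = txn.description.lower().strip()
    txn.unspsc = schema_maps["unspsc"].get(key, "UNMAPPED")
    txn.hts = schema_maps["hts"].get(key, "UNMAPPED")
    txn.gl_code = schema_maps["gl"].get(key, "UNMAPPED")
    txn.sourcing_cat = schema_maps["sourcing"].get(key, "UNMAPPED")
    return txn
```

And that is before any of the four schemas changes, which each of them will, on its own schedule.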

Let’s move on to power. One cube is never enough. If you’re an organization of reasonable size doing year-over-year spend analysis over a reasonable time frame, you’re looking at millions, if not tens of millions, of transactions. If you believe you can dump all of that into one cube and make sense of it, assuming you can even design a system that builds that cube without crashing (it’s big data, remember), you’re probably living in ImaginationLand (which is a very dangerous place to be).
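
A quick back-of-the-envelope calculation, with invented but plausible dimension sizes, shows why the single giant cube collapses under its own weight:

```python
# Back-of-the-envelope sketch: the potential cell count of an all-dimension
# cube is the product of the dimension cardinalities (figures are invented).
dimensions = {
    "supplier": 20_000,
    "category": 3_000,
    "cost_center": 800,
    "month": 60,      # five years of history
    "region": 25,
}

potential_cells = 1
for cardinality in dimensions.values():
    potential_cells *= cardinality

print(f"Potential cells in one all-dimension cube: {potential_cells:,}")
# ~72,000,000,000,000 cells -- orders of magnitude more cells than transactions,
# far beyond what a generic reporting layer can materialize or a user can browse.
```

This is why a usable tool lets analysts spin up multiple focused cubes (a single category family over a fiscal year, say) in their own workspaces instead of one monolith.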

We cannot forget about openness. The data you need will not live in the Spend Analysis system. It will live in the ERP (Enterprise Resource Planning) system. It will live in the Accounts Payable system. It will live in the TMS (Transportation Management System). It will live in the WIMS (Warehouse Inventory Management System). It will live in the VMS (Vendor Management System). And so on. Every one of these systems will have a different schema, its own data master, and, probably, duplicate vendor and product entries with variant spellings of names, locations, and so on.
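
As a small illustration of just the vendor-name part of that problem, here is a sketch using only the Python standard library; the supplier names are made up, and difflib is a crude stand-in for the entity resolution a real tool needs:

```python
# Minimal sketch of the cross-system vendor problem, standard library only.
# The same supplier arrives under different names from the ERP and AP feeds;
# difflib's ratio is a crude stand-in for real entity resolution.
from difflib import SequenceMatcher

erp_vendors = ["Acme Industrial Supply Inc.", "Globex Corporation"]
ap_vendors  = ["ACME INDUST SUPPLY", "Globex Corp (Chicago)", "Acme Ind. Supply, Inc"]

def normalize(name: str) -> str:
    """Cheap normalization: lowercase, strip punctuation and legal suffixes."""
    cleaned = "".join(ch for ch in name.lower() if ch.isalnum() or ch.isspace())
    stop = {"inc", "corp", "corporation", "llc", "ltd"}
    return " ".join(tok for tok in cleaned.split() if tok not in stop)

for ap_name in ap_vendors:
    best = max(erp_vendors,
               key=lambda v: SequenceMatcher(None, normalize(v), normalize(ap_name)).ratio())
    score = SequenceMatcher(None, normalize(best), normalize(ap_name)).ratio()
    # Anything below a confident threshold still needs a human decision.
    print(f"{ap_name!r} -> {best!r} (similarity {score:.2f})")
```

Multiply that by products, locations, contracts, and payment terms across five or six source systems and the scale of the integration effort becomes clear.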

Nor can we forget about manageability. It must be easy to map, normalize, cleanse, and enrich all of the data that is pushed into the system, by hand. AI doesn’t work. Every organization uses its own classification and shorthand, every department uses its own variations on the theme, and no system can figure out every error a human can make. All AI systems do is pile on rules until there are more collisions than correct exception mappings. That’s why a spend analysis system not only has to support multi-level rules, but also has to help the user define appropriate multi-level rules and understand, when a transaction is mis-mapped, which rule did the mapping, what exception rule is required, and how broad that rule should be.
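
Here is a deliberately tiny sketch, with hypothetical suppliers and categories, of what priority-ordered rules with provenance look like; the point is that the narrower exception rule must win, and the output must record which rule fired so a mis-mapping can be traced back to its source:

```python
# Minimal sketch of priority-ordered mapping rules with provenance.
# A broad rule catches most transactions; a narrower, higher-priority
# exception overrides it; every result records which rule fired.
from dataclasses import dataclass
from typing import Callable, Optional

@dataclass
class Rule:
    name: str
    priority: int                      # lower number = evaluated first
    matches: Callable[[dict], bool]
    category: str

rules = [
    Rule("exception: Acme toner is MRO, not IT", 10,
         lambda t: t["supplier"] == "Acme" and "toner" in t["description"],
         "MRO Consumables"),
    Rule("general: Acme is IT hardware", 100,
         lambda t: t["supplier"] == "Acme",
         "IT Hardware"),
]

def map_transaction(txn: dict) -> tuple[str, Optional[str]]:
    """Return (category, rule_name); the rule name is the audit trail."""
    for rule in sorted(rules, key=lambda r: r.priority):
        if rule.matches(txn):
            return rule.category, rule.name
    return "UNMAPPED", None

print(map_transaction({"supplier": "Acme", "description": "laser toner cartridge"}))
# -> ('MRO Consumables', 'exception: Acme toner is MRO, not IT')
print(map_transaction({"supplier": "Acme", "description": "14in laptop"}))
# -> ('IT Hardware', 'general: Acme is IT hardware')
```

Now imagine thousands of rules, maintained by multiple users, over years of history, and the need for hierarchy, priorities, and rule-level traceability stops looking optional.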

This leaves us with the need to extract useful information that can be used to identify real savings or value generation opportunities. No canned set of reports can do this. Standard reports can indicate where to begin looking, but simply knowing that a spend is high, or higher than market average, does not tell you why (locked-in prices, bundled-in services, quality guarantees, maverick spend, supplier overcharges) or which factors, if addressed, would decrease that spend.
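
For illustration, here is what a typical canned report boils down to, with invented spend and benchmark figures; note that its output stops at “investigate”, which is exactly where the real analysis begins:

```python
# Sketch of what a canned report can (and cannot) tell you. It flags that a
# category is above a benchmark; it says nothing about *why* (locked-in prices,
# bundled services, maverick spend, overcharges). All figures are invented.
spend_by_category = {"Office Supplies": 1_450_000, "Temp Labor": 8_900_000, "Freight": 3_100_000}
benchmark = {"Office Supplies": 1_200_000, "Temp Labor": 9_200_000, "Freight": 2_400_000}

for category, spend in sorted(spend_by_category.items(), key=lambda kv: kv[1], reverse=True):
    delta = spend - benchmark[category]
    flag = "INVESTIGATE" if delta > 0 else "ok"
    print(f"{category:<16} spend ${spend:>12,.0f}  vs benchmark ${benchmark[category]:>12,.0f}  [{flag}]")
# The report ends at "INVESTIGATE"; identifying the root cause still requires
# drilling into the underlying transactions with ad hoc dimensions and filters.
```

That drill-down capability, not the report library, is where the value of a real spend analysis tool lives.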

And while this is just a high-level overview of the challenges, the hope is that it is sufficient to convince you that building your own spend analysis solution is not an easy task, and not something the average organization should remotely entertain.