Category Archives: Spend Analysis

Screwing up the Screw-Ups in BI

Today I’d like to welcome back Eric Strovink of BIQ [acquired by Opera Solutions, rebranded ElectrifAI].

Baseline recently put a slide show on their site illustrating “5 Ways Companies Screw Up Business Intelligence — And How To Avoid The Same Mistakes,” with data drawn from CIO Insight. The slides are an excellent example of how mainstream IT thinking misses the essential problems of business data analysis.

Let’s take the “screw-ups” one at a time:

  1. Spreadsheet proliferation (97% of IT leaders say spreadsheets are still their most widely used BI tool.) Spreadsheets are one of the most valuable business modeling tools available, and IT might as well accept that they're not going away. The problem arises when spreadsheets (and offline tools like Access) are used inappropriately, to manipulate transactional data rather than to draw it, in the right format, from a flexible store. The solution provided by Baseline is to "cleanse and validate your data, then migrate the information to a central server/database that can be the backbone of any BI strategy." Bzzt! Sorry, a central database won't solve the analysis problem, and at the end of the day you'll have just as many spreadsheets as before. That's because a fixed-schema data warehouse is a lousy analysis tool, and might as well be on planet Neptune as far as usability for the business analyst is concerned. There's nothing wrong with a reference dataset, but business analysts need to be able to manipulate its structure as easily as a spreadsheet, or they will simply extract the raw data from it and manipulate the data offline, with the same slow, expensive, and uncertain results as today.
  2. Systems can’t talk to each other (64% of IT leaders say integration and interoperability of BI software with other key systems such as CRM and ERP pose a problem for their companies.) Right! Except that the Holy Grail of trying to extend a “centralized” database umbrella over completely disparate systems is both incredibly expensive and nearly impossible. Baseline suggests “[partnering] with a reputable systems integrator.” Good for them — at least they dodge this bullet rather than getting the answer completely wrong. The right answer is that business analysts should be able to construct BI datasets on their own, as needed, from whatever data sources are useful/appropriate, and it shouldn’t be difficult for them to do so. Concentrating all of the information under one umbrella isn’t necessary; many umbrellas can do the job, and if they’re easy to deploy, they’re both inexpensive and provide a better and more flexible answer.
  3. No centralized BI program (61% say they don't have a center of excellence or the equivalent for BI.) And they'd be well advised to tread carefully, because BI systems have a track record of poor performance and poor customer satisfaction. Why? Because the analyses you can do with a fixed data warehouse are limited to the views set up a priori by IT or by the vendor, and those views are largely immutable. Baseline dodges this one, too, suggesting the "[creation of] a data governance and data stewardship program." Can't argue with that in principle, but a governance and stewardship program doesn't actually put any meat on the table. How about putting tools into analysts' hands that they can actually use? Right now?
  4. Data lacks integrity (57% say poor data quality significantly diminishes the value of their BI initiatives.) Hmmm, I wonder why the data are of such poor quality. Could it be that the BI system doesn’t really provide much insight? Could it be that the fixed schemas set up by IT or by the vendor don’t have any applicability to day-to-day questions? Could it be that the inability of the BI system to re-organize and map data on the fly causes errors to persist over time? Baseline recommends spending more money on data cleansing, which might make a cleansing vendor quite wealthy, but won’t help much. It typically isn’t cleansing that’s the problem, it’s (1) the fixed organization of the data, which is guaranteed to be inappropriate for any analysis that hasn’t been anticipated a priori, (2) the ad hoc reporting on it, which has to be easy to accomplish, as opposed to requiring IT resources (see below), and (3) the fact that cleansing can’t be accomplished on-the-fly (as it should be) by the business analysts themselves.
  5. Managers don't know what to do with results (58% say most users misunderstand or ignore data produced by BI tools because they don't know how to analyze it.) Even when BI is in place, nobody knows what to do with it. Baseline recommends that "IT staffers… should work closely and regularly with business managers to ensure that measurement, reporting, and analysis tools are supporting business goals." But this is precisely the problem. For business analysts, BI systems are difficult to use and set up, it is difficult to create ad hoc reports, and it is impossible to change the dataset organization. It is also politically impossible to change the dataset organization if it is being shared by hundreds or thousands of users. How are you going to get them all into the same room to agree on the changes?

    So, Baseline is proposing (in essence) that IT resources sit cheek-by-jowl with business users, to ensure that they can get value out of a system that they otherwise could not use. This is certainly a “solution” of sorts, but it’s not practical. Either business analysts can use the system on their own, or the system will be of marginal value to them. It’s that simple.

The 12 Days of X-emplification: Day 2 – Spend Analysis

As you have probably figured out by the large number of postings to date on spend analysis, a strong understanding of the data behind spending is pivotal to the proper identification and management of your spend initiatives. Here are some key questions to ask, and answers you should be expecting.

1. How much flexibility do I have in spend cube creation?

Spend analysis is more than just one A/P level spend cube. It's many cubes. It's multiple cubes by supplier by commodity for contract compliance. It's cubes for purposes outside "normal" spend analysis. For example, this could include an analysis of transactional data not necessarily related to spend per se, such as cell phone usage patterns or help desk support calls: any data where an improved understanding can lead to process improvements that impact organizational spend. It's cubes for throw-away analysis, cubes that are derivative of other cubes, and so on.

So, you need to be able to build your own cubes, modify your own cubes, and, in many cases, map your own cubes*. This needs to be easy and fast, so that your analysts spend their time analyzing the data, not wrestling with the data for days or weeks before the analysis can even be started.

* The exception is when you know the data has been properly mapped by an expert in the data source you are using. However, this is not the normal situation at most companies, especially in commodity and indirect spend, and you will find that you have to do mapping the majority of the time.

2. How should I deploy spend analysis?

It doesn’t make sense to share a spend cube between your analysts, unless you want to set up a UFC Grudge Match in the hallway to decide who gets to implement his/her changes next. Unlike the central data warehouse, analysis cubes need to be private, not public (although sharing public cubes can be useful for casual informational purposes). The popular notion of a central data warehouse as the basic ingredient for detailed data analysis is wrong, and that’s why there’s so much unhappiness among data analysts with both BI systems and with the majority of spend analysis systems today.

3. What if I am resource constrained, and I need to outsource spend cube services?

Make sure that if you outsource cube development to the vendor, or to a third party that works with the vendor, you do it in such a way that your own people get trained along the way, so they can take over at any time. With the exception of the more complex direct spend categories, which require complex bill-of-materials and engineering-specific knowledge to properly map, most spend isn't that hard to map, and this is especially true in the purchase of commodities. Furthermore, you need to make sure there's more than one source for services with the spend analysis system you select. This is not only because you might want to throw the services business out to bid, but because you don't want to be waiting on a resource-constrained vendor every time you have spend to map and need help. (Let's face it, just because you should be able to do mapping on your own, this doesn't mean you'll have the skills or insight to be able to do it the first time without some help.)

4. How much reporting flexibility do I have?

Reporting is where the rubber meets the road, and despite marketing noise to the contrary, no set of static reports will get you past the first corner. If your analysts are downloading transactions to their desktops in order to construct a report or conduct an analysis, that should be the first clue that your spend “analysis” system isn’t an “analysis” system at all.

It should be possible for your analysts to construct new reports and models easily and quickly, and they shouldn’t have to be IT experts. After all, that’s the whole point of spend analysis!

5. What should I know about data cleansing?

Cleansing is a term that covers two tasks: "classifying" like items together (for example, multiple entries of "IBM" in the vendor master) and "mapping" spending to a useful sourcing commodity hierarchy. Classification is mostly the elimination of redundant vendor entries, although when collecting spend from multiple sources, it can include the creation of over-arching General Ledger and Cost Center categories. The spend analysis vendor should provide hierarchy classification tools that make the classification process simple.

Some spend analysis vendors make a big deal about classification, but 95-97% of the problem is redundant entries, not issues such as "Lotus" being owned by "IBM." You'll find that in most cases your own commodity managers are well aware of who owns whom, and don't need any help in this area; but if you're still doubtful, there are third-party vendors who will create a who-owns-whom hierarchy out of your Vendor Master for $0.10 to $0.15 per line item.
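To see how mechanical most classification really is, here's a minimal Python sketch of collapsing redundant Vendor Master entries by normalization. The suffix list and sample vendor names are illustrative assumptions, not any particular vendor's algorithm:

```python
import re

# Hypothetical sketch: most "classification" is collapsing redundant
# Vendor Master entries. Normalize case, punctuation, and common
# corporate suffixes, then group identical normalized names.
SUFFIXES = {"inc", "corp", "co", "ltd", "llc"}

def normalize(name):
    s = name.lower().replace(".", "")            # I.B.M. -> ibm
    tokens = re.sub(r"[^a-z0-9 ]", " ", s).split()
    return " ".join(t for t in tokens if t not in SUFFIXES)

def group_vendors(names):
    groups = {}
    for name in names:
        groups.setdefault(normalize(name), []).append(name)
    return groups

vendors = ["IBM", "I.B.M.", "IBM Corp", "Kelly Services Inc", "KELLY SERVICES"]
print(group_vendors(vendors))
# {'ibm': ['IBM', 'I.B.M.', 'IBM Corp'],
#  'kelly services': ['Kelly Services Inc', 'KELLY SERVICES']}
```

A real tool adds fuzzy matching and review workflow, but the core of the job is exactly this kind of grouping.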

Spend analysis vendors also make a big deal about mapping, but that process is also straightforward in the majority of commodity and indirect spend categories. (Direct spend categories, where you have to create hierarchical bill of materials that allow you to determine the impacts of raw material or labor cost increases and to perform make-vs-buy analysis, can be quite involved, but unless you are a manufacturing organization, this is not the norm.)

Make sure your spend analysis system supports an overlay-type mapping scheme that allows you to prioritize mapping rules or mapping rule groups in a reasonable way. Prioritizing rules is important, because it allows you to apply basic engineering principles (the famous 80-20 rule) to mapping your spending. The idea is to organize your rules such that each successive rule group maps more and more specifically, but also so that each successive rule group can focus on a smaller and smaller number of transactions. Using simple techniques that are widely published and well known, you can be up and running with a 90% spending map in just a few days. (And this is usually more than enough to allow you to perform initial analysis to find key categories to focus on as part of your first set of spend management initiatives after acquiring a real spend analysis system.)
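To make the overlay idea concrete, here's a minimal Python sketch of prioritized rule groups: groups run in priority order, and a later (more specific) group overrides any mapping made by an earlier (more general) one. The transaction fields, GL codes, and commodity names are illustrative assumptions:

```python
# Hypothetical sketch of overlay mapping: rule groups are applied in
# priority order, and a later (more specific) group overrides the
# mapping made by an earlier (more general) one.
def map_transactions(transactions, rule_groups):
    mapped = {}
    for group in rule_groups:                    # lowest to highest priority
        for txn_id, txn in transactions.items():
            for predicate, commodity in group:
                if predicate(txn):
                    mapped[txn_id] = commodity   # override earlier mapping
                    break
    return mapped

txns = {
    1: {"vendor": "KELLY SERVICES", "gl": "6100"},
    2: {"vendor": "KELLY SERVICES", "gl": "6200"},
    3: {"vendor": "STAPLES", "gl": "6100"},
}
rule_groups = [
    # group 1: broad vendor-level rules (the "80" of the 80-20 rule)
    [(lambda t: t["vendor"] == "KELLY SERVICES", "HR>Temps")],
    # group 2: narrower vendor+GL overrides for specific transactions
    [(lambda t: t["vendor"] == "KELLY SERVICES" and t["gl"] == "6200",
      "HR>Recruiting")],
]
print(map_transactions(txns, rule_groups))
# {1: 'HR>Temps', 2: 'HR>Recruiting'}
```

Note that transaction 3 stays unmapped, which is exactly what you want: the residue tells you where the next, more specific rule group should focus.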

You should also ensure that the vendor provides a way to map free-form text descriptions. These can be helpful when there is little or no useful supplier or GL coding in commodity and indirect categories, or when engineering classification codes are missing in direct categories.

6. Does the tool support derived and ranged dimensions?

A good tool will support the standard time periods, such as day, week, month, quarter, and year, as well as period-over-period analyses, such as month-over-month, quarter-over-quarter, and year-over-year. It should also support other types of ranged dimensions, such as spend size (which allows you to bucket the suppliers for a commodity into small, medium, and large spending buckets by dollar volume) and risk level (which allows you to group your suppliers into low-, medium-, and high-risk buckets based on derived risk factors).
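A ranged dimension like spend size is simple to derive from the data itself; here's a minimal Python sketch, with hypothetical thresholds and supplier totals:

```python
# Hypothetical sketch of a ranged "spend size" dimension: bucket the
# suppliers for a commodity into small/medium/large by dollar volume.
# The thresholds and supplier totals are illustrative assumptions.
def spend_size_dimension(spend_by_supplier, small_max=10_000, medium_max=100_000):
    buckets = {}
    for supplier, total in spend_by_supplier.items():
        if total <= small_max:
            buckets[supplier] = "Small"
        elif total <= medium_max:
            buckets[supplier] = "Medium"
        else:
            buckets[supplier] = "Large"
    return buckets

spend = {"Acme": 5_000, "Globex": 50_000, "Initech": 500_000}
print(spend_size_dimension(spend))
# {'Acme': 'Small', 'Globex': 'Medium', 'Initech': 'Large'}
```

A risk-level dimension works the same way, with derived risk scores in place of dollar totals.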

7. Can the user fix any set of filters they choose while pivoting and drilling down into reports?

This might not sound that important, but figuring out why a certain spend category is $2M over last year, when an initiative expected to reduce costs by 10% was undertaken, can be very difficult if you can't isolate the key source of the problem. For example, let's say you, as the telecommunications sourcing professional at a large national organization with hundreds of locations, decided to switch long distance carriers. If all divisions and business units implemented the change, then costs should be lower, not higher (unless everyone is calling significantly more than expected). However, let's say that IT and HR didn't switch at ten of your largest locations. With a dozen divisions and hundreds of locations, it could be difficult to determine this unless you can drill into the data, fixing divisions and units at each step, and find that 30 intersections of division and business unit are spending more than last year. Then, drilling into each, you find that 15 of these are still paying, and thus using, the wrong carrier. However, if you can't fix multiple dimensions, or apply filters that achieve the same effect, you might only be able to figure out that IT is spending more, and then you might have to call 50 locations to figure out which ones haven't switched. Flexibility in the analysis and reports is key!
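The drill-down described above amounts to fixing several dimensions at once and re-aggregating on what's left; here's a minimal Python sketch, with hypothetical transaction fields:

```python
from collections import defaultdict

# Hypothetical sketch: fix several dimensions at once (division AND
# business unit), then re-aggregate by year to find the intersections
# that are spending more than last year.
def drill(transactions, **fixed):
    """Return only the transactions matching every fixed dimension."""
    return [t for t in transactions
            if all(t[k] == v for k, v in fixed.items())]

def spend_by(transactions, *dims):
    """Roll up spend along the given dimensions."""
    totals = defaultdict(float)
    for t in transactions:
        totals[tuple(t[d] for d in dims)] += t["amount"]
    return dict(totals)

txns = [
    {"division": "IT", "unit": "East", "year": 2007, "amount": 100.0},
    {"division": "IT", "unit": "East", "year": 2008, "amount": 180.0},
    {"division": "IT", "unit": "West", "year": 2008, "amount": 90.0},
]
it_east = drill(txns, division="IT", unit="East")
print(spend_by(it_east, "year"))
# {(2007,): 100.0, (2008,): 180.0}
```

If a tool can only fix one dimension at a time, you can never isolate the division-and-unit intersection driving the overage, and you're back to phoning locations.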

8. What if I have multiple accounting systems?

This is actually excellent news, because you are likely to have huge opportunities for savings, given that those systems probably haven’t been combined in any reasonable way before for spend and procurement analysis.

The key for spend analysis is to ensure that the vendor provides an effective tool for translating (the “T” in the “ETL” acronym) files from one format to another. As with the other spend analysis system tools, this tool must also be accessible to, and usable by, your business analysts. You should never let yourself be at the mercy of IT or a spend analysis vendor when trying to analyze new data sources, or when merging new data sources into an existing cube.
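A minimal Python sketch of that "T" step, assuming a hypothetical source layout and canonical column names; a declarative column map is the sort of thing a business analyst can maintain without IT:

```python
import csv, io

# Hypothetical sketch of the "T" in ETL: translate a source extract's
# column layout into the cube's canonical layout via a declarative map.
# Column names on both sides are illustrative assumptions.
COLUMN_MAP = {"VendorName": "supplier", "GLAcct": "gl_code", "Amt": "amount"}

def translate(raw_csv):
    rows = []
    for row in csv.DictReader(io.StringIO(raw_csv)):
        rows.append({COLUMN_MAP[k]: v for k, v in row.items()
                     if k in COLUMN_MAP})
    return rows

extract = "VendorName,GLAcct,Amt\nKelly Services,6100,1250.00\n"
print(translate(extract))
# [{'supplier': 'Kelly Services', 'gl_code': '6100', 'amount': '1250.00'}]
```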

With independence comes power; and ensuring that your analysts can control their own data processing and reporting is the key to spend analysis success. This brings us to our last question.

9. How easy is it to get data in and out of the system?

Importing data should be a piece of cake. It should simply be a matter of pointing the system at the appropriate file or URL connection, specifying the dimensions of the records to import, and pressing “import” to get the data in. Then, as pointed out in the last question, transformation and mapping should be easy for even a junior business analyst.
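What "pressing import" should amount to can be sketched in a few lines of Python: read the records, name the dimensions, and roll them up into a cube. Field names here are illustrative assumptions:

```python
import csv, io
from collections import defaultdict

# Hypothetical sketch of a minimal import: point at a file (or URL),
# specify the dimensions, and roll the records up into a cube keyed
# by those dimensions.
def build_cube(records, dimensions, measure="amount"):
    cube = defaultdict(float)
    for r in records:
        cube[tuple(r[d] for d in dimensions)] += float(r[measure])
    return dict(cube)

data = io.StringIO("supplier,commodity,amount\nKelly,Temps,100\nKelly,Temps,50\n")
records = list(csv.DictReader(data))
print(build_cube(records, ("supplier", "commodity")))
# {('Kelly', 'Temps'): 150.0}
```

If the import step in a product you're evaluating takes materially more effort than this sketch suggests, that's a warning sign.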

In addition, since the goal of spend analysis is to identify spend reduction projects, it should be easy to get out the data you need, not only to create a spend project in your sourcing system and track historical costs, but also to justify the project's creation.

Real Analysis Solutions Uncover Actionable Data

Supply & Demand Chain Executive recently published an article by Kari Dwyer titled "Paving The Way For Continual Performance Improvement" that stated that, through the availability of actionable data, supply chain visibility solutions become an invaluable asset in providing continual performance improvement.

The article pointed out that actionable data are the precursor for effective change, since isolating the root causes of specific performance measurements, and providing a tactical approach to resolving them, is the fastest and most effective way to gain performance improvements. It also submitted that one effective and powerful way to obtain actionable data is through supply chain visibility solutions. This is because visibility solutions have the infrastructure in place to prominently display data that need attention, whether through alerts, dashboards, reports, e-mails, hand-held devices, or text pages, and to direct the information to the right people.

The author then states that visibility solutions allow the presentation of higher-level metrics with the ability to drill into the supporting detail, often through multiple layers, to get to the detail that drives action. The author also notes that robust visibility solutions build metrics from the bottom up, using the most granular level of detail available to build a solid foundation as a basis for all higher-level metrics. Up to this point, I agree wholeheartedly!

The issue that I have is that the author appears to be implying that a visibility system is enough to turn data into actionable data. A visibility system is absolutely necessary, but it may not always be sufficient. Just because a dashboard turns red does not mean you have enough information to fix the problem! The author correctly notes that:

    • for the majority of supply chains, corporate-level metrics and performance numbers are often comprised of results from different systems,
    • information [only] becomes actionable once the data can be analyzed in such a way that a decision can be made to effect a desired outcome, and
    • knowledge of pertinent information is essential to effecting change that will lead to cost savings.

However, the author then implies that a visibility solution alone will meet all of these requirements. Let's analyze the example provided, which states that knowing that vendor compliance averaged only 98% last quarter is not actionable in its truest form. If you take out the two worst-performing vendors, it could be that compliance was 99%, with the two worst vendors performing at 95% and 93%, respectively. In this situation, it would be wrong to chastise all of your vendors and tell them they have to do better. The correct solution would be to sit down with the two poor performers and jointly work out the reasons for their poor performance, along with the required resolutions, if they were willing, and to terminate the relationship if they were not. However, just being able to drill down into the performance metric and find out that you have two vendors with poor rankings is not enough. It does not tell you why the rankings were poor.

What if the rankings were poor because the supplier was consistently one day late with its shipments? Would the visibility system tell you this? Presumably you would be able to drill down on the supplier's overall performance metric and determine that it was its delivery metric that led to its poor performance, with quality, reliability, and other rankings at A+. But if the visibility is based on a reporting system, it might only contain this scorecard data, and you might not be able to figure out that the supplier was only one day late whenever it was late. Although the visibility system has identified the source of the problem, the data is still not actionable. Unless you know how late the shipments are on average, and the reason for the lateness, you cannot act upon the data to effect a desired outcome. What could be happening is that the truck is showing up at 10 am when it's supposed to be there at 8 am. Because the warehouse inventory system only tracks lateness in terms of days, each time the truck shows up at 10 am, it is recorded as one day late in the metrics column being sucked in by the visibility system, which ignores the arrival time column showing that the truck arrived on the correct day, but two hours late. Furthermore, this could all have been the result of a simple miscommunication by a junior member of the procurement team, who told the supplier that the warehouse needed the truck there by 10 am to have it unloaded on time, when in reality the warehouse needed it there by 8 am. And as far as the supplier was concerned, it was compliant on delivery terms at least 99% of the time.
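The granularity trap in that story is easy to state in code. Here's a minimal Python sketch, using the promised and actual times from the example (the rounding rule attributed to the warehouse system is an illustrative assumption):

```python
import math
from datetime import datetime

# Hypothetical sketch of the granularity trap: the warehouse system
# rounds any lateness up to whole days, so a truck that is two hours
# late on the correct day is recorded as one full day late.
def days_late(promised, actual):
    hours = (actual - promised).total_seconds() / 3600
    return max(0, math.ceil(hours / 24))

def hours_late(promised, actual):
    return max(0.0, (actual - promised).total_seconds() / 3600)

promised = datetime(2008, 3, 10, 8, 0)   # truck due at 8 am
actual = datetime(2008, 3, 10, 10, 0)    # truck arrives at 10 am
print(days_late(promised, actual), hours_late(promised, actual))
# 1 2.0  -- "one day late" by the coarse metric, 2 hours late in reality
```

Only an analytics tool that can reach past the scorecard to the raw arrival timestamps will surface the two-hour reality behind the one-day metric.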

The fact of the matter is that most visibility solutions today are simply reporting solutions on top of traditional data warehouses, which suck data into a static cube and run roll-up reports on that cube to produce metrics and KPIs. Although this is often a great solution for identifying where you have a known problem, it doesn't always give you enough information to figure out why you have the problem and what you need to do, at a detailed level, to correct it. Chances are you'll need to augment it with a sophisticated analytics (or business intelligence) tool that can not only do a deep dive into the data in the solution and the data in the systems it was amalgamated from, but can also build cubes on the spot and slice and dice them in dozens of different ways until you find the data you need to identify the source of the problem.

Furthermore, visibility tools can only tell you when you have a known problem type; they can't tell you about unknown problem types. For example, let's consider small package freight. A visibility solution would only generate an alert or turn a dashboard red if you were not being charged at contracted rates. It wouldn't detect that 90% of your packages were going out as 5 pounds when, in fact, 80% of these packages should be going out as 2 pounds or less because most are simply short documents and contracts! And it definitely wouldn't detect when you are sending Federal Express packages between neighboring buildings or, even worse, between two floors in the same building! (It happens!) Thus, a visibility solution on its own will never be enough. You need to constantly be applying analytics to find missed exception cases, which translate into new rules that need to be added to the visibility system.
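These unknown problem types are exactly what ad hoc analytics catches. Here's a minimal Python sketch of two such checks; the shipment fields and thresholds are illustrative assumptions:

```python
# Hypothetical sketch of finding unknown problem types: flag shipments
# billed at 5 lbs whose contents weigh 2 lbs or less, and shipments
# whose origin and destination are at the same site. Field names and
# thresholds are illustrative assumptions.
def freight_exceptions(shipments):
    exceptions = []
    for s in shipments:
        if s["billed_lbs"] >= 5 and s["actual_lbs"] <= 2:
            exceptions.append(("overweight_billing", s["id"]))
        if s["origin_site"] == s["dest_site"]:
            exceptions.append(("intra_site_shipment", s["id"]))
    return exceptions

shipments = [
    {"id": 1, "billed_lbs": 5, "actual_lbs": 1,
     "origin_site": "HQ", "dest_site": "NY"},
    {"id": 2, "billed_lbs": 2, "actual_lbs": 2,
     "origin_site": "HQ", "dest_site": "HQ"},
]
print(freight_exceptions(shipments))
# [('overweight_billing', 1), ('intra_site_shipment', 2)]
```

Once an analysis like this surfaces a new exception case, it can be promoted into a standing rule in the visibility system.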

Integrating Contract Management and Spend Analysis

Today I’d like to welcome back Eric Strovink of BIQ [acquired by Opera Solutions, rebranded ElectrifAI] to Sourcing Innovation. In this post, Eric tackles the contract management – spend analysis integration issue that the sales and marketing representatives of a number of suite vendors often make a lot of fuss about.

If your company is like most, your contracts are a hodge-podge of dense language resulting from hundreds of negotiations, whether you have a Contract Management (CM) system or not. If you already have a CM system, chances are good that most of your contracts aren’t written with the templates and standard language that some of them offer. In fact, most companies use CM systems simply to organize existing unstructured contracts for better searching, reporting, accessibility, and tracking – with the promise, in some CM systems, of proactive alerts.

So when an e-sourcing vendor claims to “integrate” Contract Management with Spend Analysis, exactly what does this mean? Well, as it turns out, it isn’t even necessary to have a CM system in order to integrate your contracts into your Spend Analysis (SA) system.

Let’s imagine that there’s a stack of contracts on the corner of your desk. The “stack” can be a “virtual” stack that’s held in a CM system, or it can be a physical stack of documents; it’s not important which. Each contract represents an ability to buy a commodity or a group of commodities from a specific vendor, over a specific period of time, perhaps additionally limited to a geographical region or a business unit.

Let’s walk through the process of integrating a contract into the SA system.

1) In the SA system, we create a data dimension called “Contract.” It is a simple list of contract names or other identifying information. An entry is defined for each of the contracts in our stack.

2) Using the SA system's mapping rules, we map potential spending to each contract in turn. The spending on a contract is typically a function of Supplier, Date Range, and Commodity. For example, if contract C174-KELLY was for temp labor, and it was valid between February 2001 and October 2001, and it was with Kelly Services, then we map the combination of Commodity, Time, and Vendor to the contract:

[Figure: the mapping rule in the SA system]

After applying this rule, if we then filter (“drill”) the SA system on the HR>Recruiting>Temps commodity, we see these amounts in the Contract dimension:

[Figure: amounts in the Contract dimension]

Does this mean that all of the 95,996 in Kelly spending was on contract? Absolutely not, since we cannot know (1) whether Kelly charged us the correct contract price, (2) whether someone used Kelly without realizing that we had a contract, or (3) whether anyone ever used the Kelly contract at all when doing business with Kelly. Which is why talking about "compliance" at this level of analysis is silly. But we do know, if we've entered all our contracts this way, that the "Other" spending was definitely not on contract. That's valuable information, and it's better than half-measures for finding bypass spend, such as a "preferred vendor" dimension.
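Step 2's mapping rule is essentially a predicate over Supplier, Date, and Commodity. Here's a minimal Python sketch using the C174-KELLY example from above (the contract field layout is an illustrative assumption, not BIQ's actual rule syntax):

```python
from datetime import date

# Hypothetical sketch of a contract mapping rule: a contract matches a
# transaction on Supplier, Date range, and Commodity; spending that
# matches no contract falls into "Other" (definitely off-contract).
CONTRACTS = {
    "C174-KELLY": {
        "supplier": "Kelly Services",
        "start": date(2001, 2, 1),
        "end": date(2001, 10, 31),
        "commodity": "HR>Recruiting>Temps",
    },
}

def contract_for(txn):
    for name, c in CONTRACTS.items():
        if (txn["supplier"] == c["supplier"]
                and c["start"] <= txn["date"] <= c["end"]
                and txn["commodity"] == c["commodity"]):
            return name
    return "Other"

txn = {"supplier": "Kelly Services", "date": date(2001, 5, 15),
       "commodity": "HR>Recruiting>Temps"}
print(contract_for(txn))  # C174-KELLY
```

With one such rule per contract in the stack, everything left in "Other" is, by construction, spend that no contract covers.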

Now, what was the difficult part of the above? Well, it was figuring out what "Commodity" the contract was for, from the perspective of the SA system. Building the Contract dimension is easy (perhaps a vendor's "integration" logic performed these few minutes of work for you), but building the rule that maps the contract into the spend cube requires reading the contract and deciding what SA commodity should be referenced. The final work to add the appropriate rule to the SA system? 20 seconds, tops.

Bottom line: It’s easy to integrate contracts information into your SA system. And, with some SA systems, you can embed an HTML link to the contract document itself, directly from the Contracts dimension, to establish a useful reverse linkage.

All without a CM system at all!