Buying Spend Analysis Systems: Test Drive Case Study

Today’s guest post is from Bernard Gunther of Lexington Analytics who recently brought you Buying Spend Analysis Systems: Taking a Test Drive. He can be reached at bgunther <at> lexingtonanalytics <dot> com.

A client who read my recent SI post, “Taking a Test Drive”, thought that relating the experience of their own test drive might help other readers who are investigating spend analysis approaches.

In this case, the company already had a spend analysis system, but the contract was about to expire. The test drive was intended either to provide ammunition for switching to another system that had been identified as an alternative, or to provide justification for renewing the contract for the existing system. The company wanted to evaluate whether the alternative system offered advantages, and whether it could improve performance for users whose buy-in was essential. A financial case either for making a change or for maintaining the status quo was also a deliverable.

As a result of the test drive, the company ended up changing systems, and believes that user needs are better met because of that decision. Data are cleaner, because vendor groupings and commodity mapping are more accurate, and analysis capability has improved greatly. The company reports spending less time supporting the new system. The company also added external consulting resources to work with their users each month to help extract additional value. Best of all, the monthly expenditure for the system — including the cost of the incremental external resources — dropped by more than 25%.

The Test Drive Process

To perform their test drive, the company focused on how each system would:

  1. Meet the existing user needs: “must haves”
  2. Deliver on known needs that are not met today: "wants"
  3. Deliver additional value that may not be understood today: “didn’t know I wanted, but after seeing, can’t live without”

The business case needed to describe how each system would deliver on these three items at a lower cost or, if its costs were higher, how the selected system would deliver an incremental return on investment.

The test drive for the new system occupied a few days over a three-week period. Since the evaluation team understood their current system thoroughly, they focused on learning where additional value might be delivered, as follows:

  1. Understand the current users of the system.
    The team interviewed users to see what they valued and what they were currently doing with the existing system, e.g. whether there were features or data they would like to see in the system, whether they understood the value they were currently getting from it, and whether they knew what they wanted it to deliver in the future.
  2. Understand the “non-users” of the system.
    Individuals were identified who were not current users of the system, but who the team felt could or should be users of the system. The team worked to understand what these potential users would need to see, and the value that they would receive.
  3. Provide a sample of current data and reports to the supplier of the new system.
    Since the core data required for the demonstration was already available in the existing system, the supplier was able to produce a working spend cube for review with minimal effort (a simplified sketch of what a spend cube looks like follows this list).
  4. Review with the suppliers how they would meet all the “must haves”, “wants”, and “future wants”.
    The team evaluated the suppliers’ offerings to determine how each element would generate savings and/or add value. Users were involved in this part of the evaluation, as they were considered the best judges of how a new feature compared to an existing capability.
  5. Put together the business case.
    The test drive showed that the new offering would both reduce costs and increase value, so it was not difficult to reach internal agreement on the decision to move forward.
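
For readers who have not worked with one, the sketch below shows the basic idea behind a spend cube: accounts-payable transactions summarized along the dimensions users slice by. It is only an illustration: the column names, vendors, and amounts are hypothetical, and a real spend analysis system layers vendor grouping, commodity mapping, and drill-down on top of this.

```python
import pandas as pd

# Hypothetical AP extract; a real feed carries many more fields
# (GL codes, cost centers, invoice numbers, and so on).
ap_extract = pd.DataFrame([
    {"vendor_group": "Acme Corp",      "commodity": "Office Supplies", "period": "P1", "amount": 12500},
    {"vendor_group": "Acme Corp",      "commodity": "Office Supplies", "period": "P2", "amount": 9800},
    {"vendor_group": "Global Freight", "commodity": "Logistics",       "period": "P1", "amount": 48200},
    {"vendor_group": "Global Freight", "commodity": "Logistics",       "period": "P2", "amount": 51700},
])

# The "cube": total spend summarized by vendor group and commodity
# down the rows, with periods across the columns.
spend_cube = ap_extract.pivot_table(
    index=["vendor_group", "commodity"],
    columns="period",
    values="amount",
    aggfunc="sum",
    fill_value=0,
)
print(spend_cube)
```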

Survey Results

  • User Must Haves
    • Users said that they obtained the most value from basic visibility into the spend data. However, other than the advantage of having all the AP spend data in one place, most users felt the existing system was just a “warehouse of data” that didn’t really help them do their job much better than data extracts pulled directly from AP. They were also unhappy with the vendor grouping and commodity mapping.
    • Users had the basic ability to filter data via point-and-click interfaces, but were unhappy with the speed and with the limited complexity of the filters that were supported.
  • User Wants
    • Users expressed a desire for the ability to create and modify reports inside the system, without external support. The existing system had limited reporting, so to create anything beyond basic data extracts, users had to dump raw data to their desktops and build custom reports and models outside the system.
    • Users were unhappy with vendor grouping and commodity mapping in the existing system. Getting changes made to groups and maps was awkward, required committee decisions, and took a long time. Users wanted to make changes to the commodity structure, commodity mapping, or vendor grouping, and immediately see the results (a simple illustration of grouping and mapping follows this list).
  • New Features
    • Users wanted the ability to make private and arbitrary changes to a spend dataset, to see if a change in data organization could improve their understanding of the data.
    • Users wanted the ability to build new data sets from scratch, on their own, as well as the ability to analyze many different kinds of data, such as commodity-specific invoice-level data.
    • Users wanted the ability to build complex reports inside the system.
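
To make the grouping and mapping issues concrete, the sketch below illustrates what vendor grouping and commodity mapping do to raw AP lines. The vendor names, GL descriptions, and taxonomy are hypothetical, and commercial tools use rules engines and far larger taxonomies, but the principle is the same: editing either map and re-running the rollup shows the effect immediately, which is the turnaround users were asking for.

```python
import pandas as pd

# Hypothetical AP lines as they arrive from the feed.
ap = pd.DataFrame([
    {"vendor_name": "ACME CORP",        "gl_desc": "copy paper", "amount": 5000},
    {"vendor_name": "Acme Corporation", "gl_desc": "toner",      "amount": 3200},
    {"vendor_name": "ACME CORP (UK)",   "gl_desc": "courier",    "amount": 1100},
])

# Vendor grouping: map name variants to a single parent vendor.
vendor_groups = {
    "ACME CORP": "Acme",
    "Acme Corporation": "Acme",
    "ACME CORP (UK)": "Acme",
}

# Commodity mapping: assign each line to a node in the commodity tree.
commodity_map = {
    "copy paper": "Office Supplies > Paper",
    "toner":      "Office Supplies > Consumables",
    "courier":    "Logistics > Courier",
}

ap["vendor_group"] = ap["vendor_name"].map(vendor_groups).fillna("Unmapped")
ap["commodity"] = ap["gl_desc"].map(commodity_map).fillna("Unmapped")

# Changing a group or map entry and re-running this rollup shows the
# new picture of spend right away.
print(ap.groupby(["vendor_group", "commodity"])["amount"].sum())
```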

In summary, the client believes that the test drive process was very useful. The value delivered by the spend analysis system has been increased, user satisfaction with the data and the system has gone up, and the cost of the system has gone down. The client also believes that if a decision had been taken to stay with the incumbent vendor, the test drive would have provided significant leverage for renegotiation.