Category Archives: Spend Analysis

BIQ: Alive and Well in the Opera House! Part II

Yesterday we noted that BIQ, born in the sleepy little town of Southborough and acquired by Opera Solutions in 2012, is not only alive and well in the Opera House, but has been continually improved since its acquisition, and the new version, 5(.05), even has a capability no other spend analytics product on the market has.

So what is this new capability? We’ll get to that. First, we want to note that a number of improvements have been made since we last covered BIQ, and we’ll cover those.

Secondly, we want to note that the core engine is as powerful as ever. Since both the engine and the data it operates on live entirely in memory, it can process 1M transactions per second. Need to add a dimension? Change a measure? Recalculate a report? It’s instantaneous on data sets of 1M transactions or less, and essentially real-time on data sets of 10M transactions. Try getting that performance from your database or OLAP engine. Just try it.
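
To give a sense of why an all-in-memory engine can feel instantaneous, here is a minimal, purely illustrative numpy sketch (not BIQ’s API; the column names and data are invented) that re-aggregates a million in-memory transactions in a single pass:

```python
import numpy as np

# Hypothetical in-memory "cube": 1M transactions with a spend amount and a
# category code, all held in numpy arrays (nothing touches disk).
rng = np.random.default_rng(42)
amounts = rng.lognormal(mean=6.0, sigma=1.5, size=1_000_000)   # spend per transaction
categories = rng.integers(0, 200, size=1_000_000)              # 200 spend categories

# Recomputing a "measure" (total spend per category) is a single pass over
# memory, which is why recalculation feels instantaneous in an in-memory engine.
totals = np.bincount(categories, weights=amounts, minlength=200)

top5 = np.argsort(totals)[::-1][:5]
for cat in top5:
    print(f"category {cat}: {totals[cat]:,.2f}")
```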

One of the first big changes they made was a complete separation of the engine from the viewer. This allowed them to do two things: one, create a minimal engine footprint (for in-memory execution) with a fully exposed API; and two, build a full web-based SaaS version as well as an improved desktop application on top of that API, exposing the full power of the BIQ engine to either front end.

They used QlikView for the web interface, and through this interface have created a collection of CIQ (category intelligence) and PIQ (performance intelligence) dashboards for just about every indirect category and standard performance category (supplier, operations, finance, etc.), in addition to a standard spend dashboard with reports and insights that rivals any competitor’s dashboard. They have also exposed all of the dimensions in the underlying data, along with the measures that have been programmed, so a user can create not only ad-hoc reports, but ad-hoc cross-tabs and pivot tables on the fly.

And they redesigned the desktop interface to look like a modern analytics front-end that was built this decade. As those who saw it know, the old BIQ looked like a Windows 98 application, even though Microsoft never built anything with that amount of power. The new interface is streamlined, slick, and quick. It has all of the functionality of the old interface, plus modern widgets that are easy to rearrange, expand, minimize, and deploy.

One of the best improvements is the new data loader. It’s still file based, but it supports a plethora of file formats, can transform data from one format to another, and can merge files into a single file or cube, picking up some or all of the data. It’s quick, easy, and user friendly, it can process massive amounts of data quickly, and it flags errors or issues that need attention almost immediately.
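
As a rough sketch of what a file-based load-merge-validate step looks like (this is generic pandas, not BIQ’s loader; the file names and columns are hypothetical):

```python
import pandas as pd

# Hypothetical file names and required columns -- a sketch of the kind of
# merge-and-validate step a file-based loader performs, not BIQ's actual loader.
SOURCES = ["ap_extract.csv", "pcard_extract.csv"]
REQUIRED = ["invoice_id", "supplier", "amount", "date"]

frames = []
for path in SOURCES:
    df = pd.read_csv(path)
    missing = [c for c in REQUIRED if c not in df.columns]
    if missing:
        # Surface structural problems immediately instead of after a long load.
        raise ValueError(f"{path}: missing columns {missing}")
    df["source_file"] = path
    frames.append(df[REQUIRED + ["source_file"]])

cube = pd.concat(frames, ignore_index=True)

# Flag rows with obvious data issues (unparseable dates, non-numeric amounts).
cube["date"] = pd.to_datetime(cube["date"], errors="coerce")
cube["amount"] = pd.to_numeric(cube["amount"], errors="coerce")
issues = cube[cube["date"].isna() | cube["amount"].isna()]
print(f"loaded {len(cube)} rows, {len(issues)} flagged for review")
```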

Another great feature is the new anomaly detection engine that can be run in parallel with BIQ, built on the best of BIQ and Signal Hub technology. Right now, they only have an instance fine-tuned to T&E spend in the procurement space, but you can bet more instances will be coming soon. But this is a great start. T&E spend is plentiful, made up of a lot of small transactions, which makes it hard to find those needles that represent off-policy spend, off-contract spend, and, more importantly, fraudulent spend. Using the new anomaly detection feature, you can quickly identify when an employee is flying business instead of coach, using an off-contract airline, or, and this is key, charging pet kennels as lodging or strip club bills as executive dinners.
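
To make the idea concrete, here is a toy rule-based version of the kind of T&E checks such an engine automates and tunes statistically; it is not the BIQ/Signal Hub implementation, and the column names, expense types, and contracted airlines are invented:

```python
import pandas as pd

# Illustrative only: hand-written rules of the kind an anomaly engine automates.
CONTRACTED_AIRLINES = {"Acme Air", "Baseline Airways"}

def flag_tande(expenses: pd.DataFrame) -> pd.DataFrame:
    flags = pd.DataFrame(index=expenses.index)
    # Flying business (or better) instead of economy.
    flags["off_policy_cabin"] = (
        (expenses["expense_type"] == "airfare") & (expenses["cabin"] != "economy")
    )
    # Booking an airline that is not under contract.
    flags["off_contract_airline"] = (
        (expenses["expense_type"] == "airfare")
        & (~expenses["merchant"].isin(CONTRACTED_AIRLINES))
    )
    # Lodging charges whose merchant category doesn't look like a hotel at all.
    flags["suspicious_lodging"] = (
        (expenses["expense_type"] == "lodging")
        & (~expenses["merchant_category"].str.contains("hotel", case=False, na=False))
    )
    flags["any_flag"] = flags.any(axis=1)
    return expenses.join(flags)
```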

But this isn’t the best new feature. The best new feature is the new Open Extract capability that provides true open access to Python-based analytics in BIQ. The new version of the BIQ engine, which runs 100% in memory, includes the Python runtime and a fully integrated IDE. Any analyst or data scientist who can script Python can access and manipulate the data in the BIQ engine in real time, using constructs built specifically for this purpose. And these custom-built scripts run just as fast as the built-in scripts, as they run natively in the engine. For example, you can run a Benford’s Law analysis on 1M transactions in less than a second. And because it’s built on Python, and the Anaconda distribution in particular, any of the open source analytics packages from Continuum Analytics can be used. There’s nothing else like it on the market. It takes spend analysis to a whole new level.
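
For the curious, a Benford’s Law first-digit test is only a few lines of generic numpy; this is not the Open Extract constructs themselves, just the kind of analysis the text mentions, run on invented stand-in data:

```python
import numpy as np

def benford_deviation(amounts: np.ndarray) -> np.ndarray:
    """Compare observed leading-digit frequencies against Benford's Law."""
    amounts = np.abs(amounts[amounts != 0])
    # Leading digit = first digit of the base-10 mantissa.
    leading = (amounts / 10 ** np.floor(np.log10(amounts))).astype(int)
    observed = np.bincount(leading, minlength=10)[1:10] / leading.size
    expected = np.log10(1 + 1 / np.arange(1, 10))
    return observed - expected

rng = np.random.default_rng(0)
sample = rng.lognormal(mean=7.0, sigma=2.0, size=1_000_000)  # stand-in spend data
print(np.round(benford_deviation(sample), 4))  # large deviations suggest manipulated figures
```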

BIQ: Alive and Well in the Opera House! Part I

Fourteen years ago, in the sleepy little town of Southborough, Massachusetts, a tiny start-up called BIQ was created. Its mission was to give business analysts the powerful transactional data analysis tool that they needed to do their own analysis and get their own insight. Less than two years later, it released that tool, called BIQ, and it totally changed the spend analysis market. For the first time, power analysts could do everything themselves in a market where spend analysis was primarily offered as a service, and they could do it at a price point that was at least an order of magnitude less than what the big providers were charging them. With licenses starting at 36K a year, an analyst could do the same analysis that he was paying a suite provider 360K for and a best-of-breed provider 1M for. Now, it required a lot of knowledge, aesthetic blindness, elbow grease, and overtime, but it could be done.

And when we say everything, we mean everything. You could load any flat files you wanted in a standard format (such as CSV) into the data loader. You could combine them into any cubes you wanted by defining the overlapping dimensions. You could define ranged and derived dimensions using simple formulas or built-in definitions. You could drill down in real time, filter on what you wanted, and export subsets of records. You could define any categorization you wanted against any schema, with any mapping rules you wanted; the rules were organized into priority groups, given a priority order, and run most specific to least specific, so you never got a collision or random mapping like you might in a tool where non-prioritized rules go into a database and often get applied in random order. You could define supplier families that could be reused. You could build your own cross-tab reports. It was the Swiss Army knife of analytics, at a price every organization could afford.
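
The most-specific-first rule idea is easy to sketch; the following is a generic illustration, not BIQ’s actual rule syntax, with invented fields and rules:

```python
# Prioritized mapping rules applied most-specific to least-specific, so a
# transaction always hits the narrowest rule that matches (illustrative only).
RULES = [
    # (priority, predicate, category) -- lower number = more specific, runs first
    (1, lambda t: t["supplier"] == "Acme Office" and "toner" in t["description"].lower(),
        "Office Supplies > Printer Consumables"),
    (2, lambda t: t["supplier"] == "Acme Office",
        "Office Supplies"),
    (3, lambda t: t["gl_code"].startswith("64"),
        "Indirect > General"),
]

def categorize(txn: dict) -> str:
    for _, predicate, category in sorted(RULES, key=lambda r: r[0]):
        if predicate(txn):
            return category
    return "Unclassified"

print(categorize({"supplier": "Acme Office", "description": "Toner cartridge",
                  "gl_code": "6410"}))  # -> Office Supplies > Printer Consumables
```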

This quickly made BIQ a favourite not just among mid-market companies that couldn’t afford, and big companies that didn’t want to afford, high-priced services, but also niche consultancies that could now do power-house analytics projects on their own, including firms like Lexington Analytics and Power Advocate. This, along with some really smart marketing, pushed BIQ into the mainstream of spend analytics providers, making it a de-facto shortlist candidate for any company that wanted do-it-yourself spend analysis. This, of course, got the attention of many providers, who were afraid of the threat, in awe of the technology, or both.

One of these providers was Opera Solutions, which acquired BIQ in 2012 and, shortly after, Lexington Analytics. Once the two providers were merged, Opera Solutions instantly had a complete spend analysis software and services solution for the indirect space. And they have steadily improved this offering since the acquisition. The new version comes packed with some big enhancements, including one capability that is not only market leading, but unique among all the spend analysis providers we have covered to date.

What is that? Come back tomorrow!

The UX One Should Expect from Best-in-Class Spend Analysis … Part V

In this post we wrap up our deep dive into spend analysis and what is required for a great user experience. We take our vertical torpedo as far as it can go and wrap the series up with insights beyond what you’re likely to find anywhere else. We’ve described necessary capabilities that go well beyond the capabilities of many of the vendors on the market, and more will fall by the wayside today. But that’s okay. The best will get up, brush off the dirt, and keep moving forward. (And the rest will be eaten by the vultures.)

And forward momentum is absolutely necessary. One of the keys to Procurement’s survival (unless it really wants to meet its end in the Procurement Wasteland we described in bitter detail last week) is an ability to continually identify value in excess of 10% year-over-year. Regardless of what eventually comes to pass, the individuals who are capable of always identifying value will survive in the organizations of the future.

But if this level of value is to be identified, buyers are going to need powerful, usable analytics — much more powerful and usable than what the average buyer has today. Much more.

As per our series to date, this requires over a dozen key usability features, many of which are not found in your average first-generation, and even second-generation, “reporting” and “business intelligence” analytics tool. In our brief overview series to date here on SI (on The UX One Should Expect from Best-in-Class Spend Analysis … Part I, Part II, Part III, and Part IV) we’ve covered four key features:

  • real, true dynamic dashboards,
  • simultaneous support for multiple cubes,
  • real-time idiot-proof data categorization, and
  • descriptive, predictive, and prescriptive analytics

And deep details on each were provided in the linked posts. But even prescriptive analytics, which, for many vendors, is really pushing the envelope, is not enough. Great solutions really push the envelope. For example, the most advanced solutions will also offer permissive analytics. As the doctor has recently explained in his two-part series (Are We About to Enter the Age of Permissive Analytics and When Selecting Your Prescriptive, and Future, Permissive, Analytics System), a great spend analysis system goes beyond prescriptive and uses AI and a rules engine to enable a permissive system that will not only prescribe opportunities to find value but also initiate action on those opportunities.

For example, if the opportunity is a tail-spend opportunity best captured by a spot auction, there are approved products that fit the bill, and there are approved suppliers that can provide them, the system will automatically set up the auction and invite those suppliers, and, if the total spend is within an acceptable amount, automatically offer an award (subject to pre-defined standard terms and conditions).
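
In pseudo-Python, the permissive pattern boils down to “act on the prescription, but only within pre-approved bounds”; everything here — the spend ceiling, the auction service, and its methods — is hypothetical:

```python
# A toy illustration of the permissive pattern described above: the system may
# act on a prescribed opportunity only within pre-approved limits. All names,
# thresholds, and helper objects are hypothetical.
AUTO_AWARD_LIMIT = 50_000  # spend ceiling below which an award can be auto-offered

def handle_tail_spend_opportunity(opportunity, approved_suppliers, auction_service):
    if opportunity["type"] != "tail_spend":
        return "refer_to_buyer"
    # Set up the spot auction and invite the pre-approved suppliers automatically.
    auction = auction_service.create_spot_auction(
        items=opportunity["items"], suppliers=approved_suppliers
    )
    if opportunity["estimated_spend"] <= AUTO_AWARD_LIMIT:
        # Within the pre-approved ceiling: offer the award automatically,
        # subject to standard terms and conditions.
        auction.enable_auto_award(terms="standard")
    return auction
```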

And that’s just the tip of the iceberg. For more insight into just how much a permissive analytics platform can offer, check out the doctor and the prophet’s fifth and final instalment on What To Expect from Best-in-Class Spend Analysis Technology and User Design (Part V) over on Spend Matters Pro (membership required). It’s worth it. And maybe, just maybe, when you identify, and adopt, the right solution, you won’t end up wandering the Procurement Wasteland.

The UX One Should Expect from Best-in-Class Spend Analysis … Part IV

As per our last post, in this series we are diving into spend analysis. Deep into spend analysis. So deep that we’re taking a vertical torpedo to the bottom of the abyss. And if you think this series has been insightful so far, wait until we take you to the bottom. By the end of it, there will be more than a handful of vendors shaking and quaking in their boots when they realize just how far they have to go if they want to deliver on each and every promise of next generation opportunity identification they’ve been selling you on for years.

We’re giving you this series so that you can use it to make sure they deliver. Because, as we have repeatedly pointed out, you only have two technologies at your disposal to achieve year-over-year savings of 10% or more: optimization (covered in our last four-part series; see Part I, Part II, Part III, and Part IV), which can capture the value, and spend analytics, which can identify the value.

But, as we will keep repeating, it has to be true spend analytics that goes well beyond the standard Top N report templates to allow a user to cube, slice, dice, and re-cube quickly and efficiently in meaningful ways and then visualize that data in a manner that allows the potential opportunities, or lack thereof, to be almost instantly identified.

But, as per our last two posts, this requires truly extreme usability. Since not everyone has an advanced computer science or quantitative analysis degree, not everyone can use the first generation tools. This means that, in organizations without highly trained analysts, the first generation tools would sit on the shelf, unused. And that is not how value is found.

However, creating the right UX is not easy. That’s why it takes a five-part series just to outline the core requirements (and when we say core, we mean core — there are a lot more requirements to master to deliver the whole enchilada). But it’s needed, because we are in a time where there seems to be a near-universal playbook for spend analysis solution providers when it comes to positioning the capabilities they deliver, and many vendors sound interchangeable when, in fact, they are not.

In each part of the series to date (What To Expect from Best-in-Class Spend Analysis Technology and User Design Part I, Part II, and Part III), over on Spend Matters Pro [membership required], the doctor and the prophet have explored three to four key requirements of a best-in-class spend analytics system that are essential for a good user experience. Here on SI, we’ve covered three of these to whet your appetite for the knowledge that is being kept from you.

In The UX One Should Expect from Best-in-Class Spend Analysis … Part I we discussed the need for real, true, dynamic dashboards. Unlike the first generation dashboards that were dangerous, dysfunctional, and sometimes even deadly to the business, true next generation dynamic dashboards are actually useful and even beneficial. Their ability to provide quick entry points through integrated drill down to key, potentially problematic, data sets can make sharing and exploring data faster, and the customization capabilities that allow buyers to continually eliminate those green lights that lull one into a false sense of security is one of the keys to true analytics success.

In The UX One Should Expect from Best-in-Class Spend Analysis, Part II, we pointed out that one cube will NEVER be enough. NEVER, NEVER, NEVER! And that’s why procurement users need the ability to create as many cubes as necessary, on the fly, in real time. This is required to test any and every hypothesis until the user gets to the one that yields the value generation gold mine. Unless every hypothesis can be tested, it is likely that the best opportunity will never be identified. If we knew where the biggest opportunity was, we’d source it. But the best opportunities are, by definition, hidden, and we don’t know where. Success requires cubes, cubes, and more cubes with views, views, and more views. But this is just the foundation.

Then, in The UX One Should Expect from Best-in-Class Spend Analysis, Part III, we indicated that success requires appropriately classified and categorized data. But good data categorization is not always easy, especially for the average user. That’s why the third key requirement is real-time idiot-proof data categorization, which, while a mouthful, is a lot easier said than done. (For details, check out the articles.)

But, as you’ve probably guessed by now, more is required. Much more. In What To Expect from Best-in-Class Spend Analysis Technology and User Design (Part IV) over on Spend Matters Pro [membership required], the doctor and the prophet dive deep into a couple of additional key requirements for a best-in-class spend analytics solution. And, like the previous requirements, these are intensive. Quite intensive.

The one we are focussing on today is support for descriptive, predictive, and prescriptive analytics. First generation solutions stopped at descriptive. They simply reported on what happened in the past, and stopped there. And usually the description of the past was so far behind that the reports were not always that useful. So the next generation moved on to predictive, and computed trends, taking into account historical sales data and current market data to describe opportunities so that, even if the data was a bit outdated, at least the analyst had a good idea of direction.

And as platforms got faster, and more powerful, and more real-time, the predictive power got better, and more useful. And organizations realized more value … but not nearly what they should realize. Because it’s not always enough to know that there may be an opportunity; to realize that opportunity, one needs an idea of how to capture it. And if one’s not a category or market expert, one can be completely lost. But if the system supports prescriptive analytics, then the analyst has an idea where to start. And that is key to a great user experience.
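
The descriptive-to-predictive step can be illustrated in a few lines; the monthly figures below are invented, and the trend model is deliberately the simplest possible one:

```python
import numpy as np

# Descriptive: monthly spend observed for a category over the past year (invented data).
monthly_spend = np.array([210, 215, 230, 228, 244, 251, 263, 270, 281, 290, 305, 318.0])
months = np.arange(len(monthly_spend))

# Predictive: fit a simple linear trend and project the next quarter.
slope, intercept = np.polyfit(months, monthly_spend, deg=1)
forecast = intercept + slope * np.arange(len(monthly_spend), len(monthly_spend) + 3)
print(f"trend: {slope:+.1f}/month; next three months ~ {np.round(forecast, 1)}")

# Prescriptive logic would sit on top: e.g. if the trend outpaces the contracted
# rate increase, recommend a re-source or renegotiation for the category.
```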

But is that everything the system needs for a great user experience? Nope. And we’ll continue our overview in the next, and final, part of this initial series. (We’ve written the first few chapters, but believe us when we say the book has not been written yet.)

The UX One Should Expect from Best-in-Class Spend Analysis … Part III

In previous posts, we took a deep dive into e-Sourcing (Part I and Part II), e-Auctions (Part I and Part II), and Optimization (Part I, Part II, Part III, and Part IV). But in this series we are diving into spend analysis. And this time we’re taking the vertical torpedo to the bottom of the deep. If you thought our last series was insightful, wait until you finish plowing through this one. By the end of it, there will be more than a handful of vendors shaking in their boots when they realize just how far they have to go if they want to deliver on all those promises of next generation opportunity identification they’ve been selling you on for years! But we digress …

We’ve said it multiple times, but we are going to repeat it again. The key point to remember here is that there are only two advanced sourcing technologies that can identify value (savings, additional revenue opportunity, overhead cost reductions, etc.) in excess of 10% year-over-year-over-year. One of these is optimization (provided it’s done right, usable, and capable of supporting — and solving — the right models; see our last series). The other is spend analytics. True spend analytics that goes well beyond the standard Top N reports and templates to allow a user to cube, slice, dice, and re-cube quickly and efficiently in meaningful ways and then visualize that data in a manner that allows the potential opportunities, or lack thereof, to be almost instantly identified.

But, as per our last two posts, this requires truly extreme usability. Since not everyone has an advanced computer science or quantitative analysis degree, not everyone can use the first generation tools. This limits these users to the built-in Top N reports. And as we have indicated many times, once all of the categories in the Top N have been sourced and all of the Top N suppliers have been put under contract, there is no more value to be found from a fixed set of Top N reports. At this point, the first generation tools would sit on the shelf, unused. And that’s not how value is found.

However, creating the right UX is not easy. It’s not just a set of fancy reports (as static reports have been proven to be useless for over a decade), but a powerful set of capabilities that allow users to cube, slice, dice, and re-cube seven ways from Sunday quickly, easily, and repeatedly until they find the hidden value. It’s innovative new reporting and display techniques that make outlier identification and opportunity analysis quicker, easier, and simpler than it’s ever been. It’s real-time data validation and verification tools that ensure a user doesn’t spend a week building a business case around data where one of the import files was shifted by a factor of 100 because of missing decimal points, destroying the entire business case in 4 clicks. And so on. And that’s why the doctor and the prophet are bringing you a very in-depth look at what makes a good User eXperience for spend analysis that goes far, far deeper than anyone has done before.
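
One such validation check is easy to sketch: compare the order of magnitude of a newly loaded file against the cube’s baseline. This is a generic heuristic with invented thresholds and data, not any vendor’s implementation:

```python
import numpy as np

def magnitude_shift_check(new_amounts, baseline_amounts, tolerance=0.7):
    """Warn if a newly loaded file's values differ from the baseline by a roughly
    constant power of ten (e.g. missing decimal points). Illustrative heuristic
    only -- the tolerance is an invented threshold."""
    new_med = np.median(np.abs(new_amounts))
    base_med = np.median(np.abs(baseline_amounts))
    shift = np.log10(new_med / base_med)
    if abs(shift) >= tolerance:
        return f"WARNING: values appear shifted by ~10^{shift:.1f} vs. baseline"
    return "OK"

baseline = np.array([120.50, 89.99, 1500.00, 47.25])
suspect = baseline * 100            # decimal point dropped on import
print(magnitude_shift_check(suspect, baseline))
```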

In a time where there seems to be a near-universal playbook for spend analysis solution providers when it comes to positioning the capability they deliver, when many vendors sound interchangeable, and when many vendors are fungible in a way that is not necessarily negative, this insight is needed more than ever. And if a few dozen vendors quake in their boots when this series is over, so be it.

In the first part of our series, we explored a few key capabilities that must be present from the get go, including, as we dove into here on SI in our first post on The UX One Should Expect from Best-in-Class Spend Analysis … Part I, dynamic dashboards. Unlike the first generation dashboards that were dangerous, dysfunctional, and sometimes even deadly to the business, true next generation dynamic dashboards are actually useful and even beneficial. Their ability to provide quick entry points through integrated drill down to key, potentially problematic, data sets can make sharing and exploring data faster, and the customization capabilities that allow buyers to continually eliminate those green lights that lull one into a false sense of security is one of the keys to true analytics success. (For more details, see the doctor and the prophet‘s first deep dive on What To Expect from Best-in-Class Spend Analysis Technology and User Design (Part I) over on Spend Matters Pro [membership required]).

In the second part of our series we explored a few more key capabilities, four to be precise, that include dynamic cube and view creation “on the fly”. Given that:

  • a cube will never have all available (current and future) data dimensions;
  • not all data dimensions are important;
  • some of the essential data (referenced in the previous point) will be third-party data updated at different time intervals;
  • a user never needs to analyze all data at once when doing a detailed analysis; and
  • we have not (yet) encountered a system that will have enough memory to fit enough of a true “mega cube” in memory for real-time analysis

One cube will NEVER be enough. NEVER, NEVER, NEVER! That’s why procurement users need the ability to create as many cubes as necessary, on the fly, in real time. This is required to test any and every hypothesis until the user gets to the one that yields the value generation gold mine. Unless every hypothesis can be tested, it is likely that the best opportunity will never be identified. If we knew where the biggest opportunity was, we’d source it. But the best opportunities are, by definition, hidden, and we don’t know where. Success requires cubes, cubes, and more cubes with views, views, and more views. (For more detail, or information on the other capabilities we didn’t cover in our post on The UX One Should Expect from Best-in-Class Spend Analysis … Part II, see the doctor and the prophet‘s second deep dive on What To Expect from Best-in-Class Spend Analysis Technology and User Design (Part II) over on Spend Matters Pro [membership required].)
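
In practice, “another cube” is just another aggregation over a different choice of dimensions; here is a minimal pandas illustration with invented transaction data (real engines differ, but the idea is the same):

```python
import pandas as pd

# Invented transaction data standing in for a loaded spend dataset.
txns = pd.DataFrame({
    "category": ["IT", "IT", "MRO", "MRO", "Travel"],
    "supplier": ["A", "B", "A", "C", "D"],
    "region":   ["EU", "NA", "EU", "NA", "EU"],
    "amount":   [12_000, 8_500, 3_200, 4_100, 900],
})

# "Another cube" is simply another aggregation over a different set of dimensions --
# the point is being able to build these in seconds, not once per quarterly refresh.
by_category_region = pd.pivot_table(txns, values="amount",
                                    index="category", columns="region",
                                    aggfunc="sum", fill_value=0)
by_supplier = txns.groupby("supplier")["amount"].sum().sort_values(ascending=False)
print(by_category_region, by_supplier, sep="\n\n")
```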

But much, much more is required. That’s why the doctor and the prophet recently published their third deep dive on What To Expect from Best-in-Class Spend Analysis Technology and User Design over on Spend Matters Pro [membership required] on the breadth of requirements for a good Spend Analysis User Experience. In this piece, we dive deep into three more absolute requirements (which, like the previous requirements, are so critical that the absence of any should strike a vendor from your list), including real-time idiot-proof data categorization.

Just about every solution has categorization, and most allow end users to at least over-ride that categorization, but, in our view, relatively few solutions can claim to (even approach) idiot-proofness.

So what is an idiot-proof solution? Before we define this, let us note that the approach a provider takes to classification is secondary. It doesn’t matter whether the methodology provided is fully automated (and based on leading machine learning techniques), hybrid (where the machine learning can be overridden by the analyst with simple rules), or fully manual (where the user can classify data using free-form rules created in any order they want on any fields they want).

This means that the system must provide a simple and effective methodology for classifying, and re-classifying, data in an almost idiot-proof manner. So, if the engine uses AI, it should be easy for the user to view, and alter, the domain knowledge models used by the algorithms. If it uses rules-based approaches, it should be easy to review, visualize, and modify rules using a simple language and, wherever possible, visual techniques. And if the solution uses a hybrid approach, the user should be able to quickly analyze the AI, determine the reason for a mis-map, and then define appropriate over-ride rules that correct any errors the user discovers so the error never materializes again.
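
A minimal sketch of the hybrid idea — user-defined over-ride rules that are checked before the statistical classifier, so a correction made once keeps winning — might look like this (the rule format and the ml_model object are hypothetical placeholders):

```python
# Illustrative hybrid classification: over-rides first, then fall back to the
# machine-learning prediction. Rules, fields, and the ml_model are hypothetical.
OVERRIDES = [
    # (field, match value, corrected category)
    ("supplier", "Northwind Couriers", "Logistics > Parcel"),
    ("description_contains", "toner", "Office Supplies > Printer Consumables"),
]

def classify(txn: dict, ml_model) -> str:
    for field, value, category in OVERRIDES:
        if field == "supplier" and txn.get("supplier") == value:
            return category
        if field == "description_contains" and value in txn.get("description", "").lower():
            return category
    # No over-ride applies: defer to the statistical classifier.
    return ml_model.predict(txn)
```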

In other words, success requires cubes, cubes, and more cubes on correctly mapped and classified data that can be accessed through views, views, and more views. With any data the user requires, from any location, in any format. But more on this in upcoming posts. In the interim, for additional insight on a few more key requirements of a spend analytics product for a good user experience, check out the doctor and the prophet’s third deep dive on What To Expect from Best-in-Class Spend Analysis Technology and User Design (Part III) over on Spend Matters Pro [membership required]. As per the past two parts of the series, it’s worth the read. And stay tuned for the next two parts of the series. That’s right! Two more parts. We told you this one was a doozy!