Category Archives: Spend Analysis

Analytics Gotchas To Watch Out For!

Companies win or lose in the modern marketplace based upon the actionable insights they can derive from their data. We’re in the age of information warfare (which is used against us every day, especially in political elections, but that’s a post for a different blog), and companies are competing with, or attacking, each other based on the quality of their data. This is why big data science is taking off and why many companies are engaging third-party “experts” to help them get started. But not all of these experts are truly experts. Here are some questions to ask, and gotchas to watch out for, when considering the pitches from supposed experts.


What’s in the 10% to 20% that wasn’t mapped?

While it’s true you can get a lot of insight when 80% of the spend is mapped, often enough to get an idea of where to dig in, there are a few things to watch out for when the data “experts” come back and say they’ve mapped 80%. First of all, is it 80% of spend, 80% of transactions, or 80% of the supply base? Be very careful to understand which 80% was mapped. If it was spend, chances are it’s the big-value transactions, and the tail spend is unmapped. If the tail spend contains small but critical components to production (such as control chips for expensive electro-mechanical systems), this could be problematic if that spend is increasing year over year, or perfectly fine if it isn’t. If it’s 80% of transactions, this could leave the largest-value transactions unmapped, which could completely skew the opportunity analysis. And if it’s 80% of the supply base, the riskiest suppliers could go unmapped, and the risk analysis could be skewed.
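
To make the distinction concrete, here is a minimal pandas sketch (the data and column names are hypothetical, purely for illustration) showing how the same transaction file can yield three very different “percent mapped” figures:

```python
import pandas as pd

# Hypothetical transaction data: one row per invoice line, with
# "category" left empty where the mapping effort gave up.
df = pd.DataFrame({
    "supplier": ["A", "A", "B", "C", "D", "E"],
    "amount":   [900_000, 50_000, 30_000, 12_000, 5_000, 3_000],
    "category": ["IT", "IT", "MRO", None, None, None],
})
mapped = df["category"].notna()

# The same "80% mapped" claim can hide three very different realities:
pct_of_spend = df.loc[mapped, "amount"].sum() / df["amount"].sum()
pct_of_transactions = mapped.mean()
pct_of_suppliers = df.loc[mapped, "supplier"].nunique() / df["supplier"].nunique()

print(f"spend mapped:        {pct_of_spend:.1%}")        # 98.0%
print(f"transactions mapped: {pct_of_transactions:.1%}") # 50.0%
print(f"suppliers mapped:    {pct_of_suppliers:.1%}")    # 40.0%
```

The same mapping effort is simultaneously 98% of spend, 50% of transactions, and 40% of the supply base, which is exactly why you must ask which figure is being quoted.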


How many of the recommendations are backed up by your data and not just industry benchmarks?

If the “expert” says that their benchmarks indicate huge opportunities in specific categories, make sure the benchmarks are based on your data, and not data gathered from your competitors (and used in lieu of doing a detailed analysis on your data). Make sure the “experts” are not taking shortcuts (because your data was dirtier than they expected and they didn’t want to make the effort to clean it).

Remember what Sir Arthur Conan Doyle said: “It is a capital mistake to theorize before one has data. Insensibly one begins to twist facts to suit theories, instead of theories to suit facts.” And, specifically, before one has their own data, not just any data!


Never forget there are lies, damn lies, and statistics!

In the best case, statistics are used to support arguments rather than to lead them. In the worst case, they are used to plug holes in the preliminary analysis that the “expert” would rather not tell you about. (Experts rarely want to admit your data is so dirty and incomplete that they couldn’t do their preliminary analysis to the promised level of accuracy in the time given. They’d rather discover that during the project, after they have it, and, if necessary, put in a change order to clean it for you, and take more of your money.)

Remember that statistics can be used to skew arguments just about any way you want, especially if you are willing to use +/- with 90% confidence …


Not everything that can be counted counts!

Remember what Einstein said, because in this age of data overload it’s never been more true. Detailed analysis of certain types of trend data, social media reviews, and segmented consumer purchase patterns doesn’t always yield any insight, especially when the goal is spend reduction or demand optimization. It all comes down to what Deming said: if you do not know how to ask the right question, the one that will help you focus on the right data, then you discover nothing.


There’s no such thing as an alternative fact!

While most consultants in our space won’t try to sell you alternative facts, they may try to sell you alternative interpretations to the ones the data suggest. This is almost as bad. Always remember that Aldous Huxley once said that facts do not cease to exist because they are ignored. If only we merely had to deal with experts and consultants ignoring facts. These consultants, especially in the political arena, who try to sell alternative facts or alternative interpretations are selling what has to be the biggest crock of bullsh!t they have come up with yet.

Finally, it’s not only opportunities that multiply as they are seized (Sun Tzu); it is also misfortunes that come from making bad decisions based on bad or incomplete analyses.

And yes, someone has to be the gnashnab!

When Selecting Your Prescriptive, and Future Permissive, Analytics System …

Please remember what Aaron Levenstein, Business Professor at Baruch College, said about statistics:

Statistics are like bikinis. What they reveal is suggestive, but what they conceal is vital.

Why? Because a large number of predictive / forecasting / trending algorithms are statistics-based. While good statistics, applied to good, sufficiently sizeable data sets, can reach a very high, calculable, probability of accuracy a statistically high percentage of the time, if a result is only accurate to within 5% with 95% confidence, then the right answer is only obtained 95% of the time (19 times out of 20). That means one time out of twenty, the true value falls outside the predicted range entirely, and the answer is completely wrong. It’s not that one time out of twenty the prediction is off by a little more than 5%; it’s that the prediction may not even be close.
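
To see the “19 times out of 20” argument in numbers, here is a minimal simulation sketch (all figures are illustrative, not drawn from any real model) of a predictor that is within 5% of the truth 95% of the time and essentially arbitrary the remaining 5%:

```python
import numpy as np

rng = np.random.default_rng(7)
n = 20_000
actual = 100.0

# A model whose predictions land within 5% of the truth 95% of the
# time, and are essentially arbitrary the other 5% (hypothetical
# numbers, mirroring the "19 times out of 20" argument above).
good = actual * (1 + rng.uniform(-0.05, 0.05, size=n))
bad = actual * (1 + rng.uniform(-1.0, 1.0, size=n))  # completely wrong
is_good = rng.random(n) < 0.95
predictions = np.where(is_good, good, bad)

errors = np.abs(predictions - actual) / actual
print(f"within 5%:                {(errors <= 0.05).mean():.1%}")
print(f"average error when wrong: {errors[~is_good].mean():.1%}")
print(f"worst error:              {errors.max():.1%}")
```

The headline accuracy looks fine; the one-in-twenty misses average roughly 50% error, which is the part the headline conceals.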

And if these algorithms are being used to automatically conduct sourcing events and make large-scale purchases on behalf of the organization, do you really want something going wrong one time in twenty? That one error could end up costing the organization more than it saved the other nineteen times. Suppose the system was primarily sourcing categories that were increasing with inflation, or decreasing according to standard burn rates as demand dropped on outdated product offerings, but one such category was misidentified.

Instead of identifying the category as about to be in high demand, and about to sky-rocket in cost due to its reliance on scarce rare earth metals (about to get scarcer as the result of a mine closure), the system identified it as low-demand, with continually dropping costs over the next year, and chose a monthly spot-buy auction. If costs then increased 10% month over month, a 12M category could, over the course of a year, actually cost 21.4M (1M + 1.1M + 1.21M + …), almost double! If the savings on the other 19, similarly valued, categories was only 3%, the roughly 6.8M the permissive analytics system saved would be dwarfed by the 9.4M loss! Dwarfed!
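
The arithmetic is easy to verify; a few lines (reproducing the figures above, with the other 19 categories assumed to be 12M each) show just how badly the compounding hurts:

```python
# Reproducing the arithmetic above: a category budgeted at 1M/month
# (12M/year) that instead escalates 10% month over month.
monthly = [1.0 * 1.1**k for k in range(12)]  # in millions
total = sum(monthly)                          # ~21.38M
loss = total - 12.0                           # ~9.4M overspend

# versus 3% savings on the other 19 similarly valued (12M) categories
savings = 19 * 12.0 * 0.03                    # ~6.84M

print(f"runaway category cost: {total:.1f}M (loss {loss:.1f}M)")
print(f"savings elsewhere:     {savings:.2f}M")
```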

That’s why it’s very important to select a system that keeps a record not only of every recommendation and action, but also of its reasoning, so that each can be reviewed, evaluated, and overruled by a wise and experienced Sourcing professional. And, hopefully, a system capable of allowing that professional to indicate why a recommendation was overruled and of expanding its knowledge model accordingly, so that one in twenty eventually becomes one in fifty on the road to one in one hundred. Over time, more and more non-critical buying and automation tasks can then be put on the system, leaving the buyer to focus on high-value categories, which will always require true brain power, and not whatever vendors try to pass off as non-existent “artificial intelligence” (as there is no such thing, just very advanced machine-learning-based automated reasoning).
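
As a rough illustration, the audit trail argued for above might look something like this minimal sketch (class and field names are invented for illustration, not any vendor’s API): every recommendation carries its reasoning, and an overrule carries the professional’s reasoning too, so the knowledge model has something to learn from.

```python
from dataclasses import dataclass, field
from datetime import datetime

@dataclass
class Recommendation:
    category: str
    action: str                        # e.g. "monthly spot-buy auction"
    reasoning: list[str]               # the signals behind the action
    made_at: datetime = field(default_factory=datetime.utcnow)
    overruled_by: str | None = None
    overrule_reason: str | None = None

    def overrule(self, who: str, why: str) -> None:
        """Record the overrule so the knowledge model can learn from it."""
        self.overruled_by = who
        self.overrule_reason = why

rec = Recommendation(
    category="control chips",
    action="monthly spot-buy auction",
    reasoning=["demand trending down", "unit cost trending down"],
)
rec.overrule("j.smith", "mine closure will constrain rare earth supply")
```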

Are We About to Enter the Age of Permissive Analytics?

Right now most of the leading analytics vendors are rolling out, or considering the roll-out of, prescriptive analytics, which goes one step beyond predictive analytics and assigns meaning to those predictions in the form of actionable insights the organization could act on to take advantage of the likely situation suggested by the predictive analytics.

But this won’t be the end. Once a few vendors have decent prescriptive analytics solutions, one vendor is going to try to get an edge and start rolling out the next generation of analytics and, in particular, permissive analytics. What are permissive analytics, you ask? Before we define them, let’s take a step back.

In the beginning, there were descriptive analytics. Solutions analyzed your spend and / or metrics and gave you clear insight into your performance.

Then there were predictive analytics. Solutions analyzed your spend and / or metrics and used time-series, statistical, or other algorithms to predict likely future spend and / or metrics based on current and historical data, presenting the likely outcomes to you in order to help you make better decisions.
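
As a trivial example of this class of algorithm, here is a minimal sketch (the spend figures are made up) that fits a linear trend to monthly category spend and projects it forward:

```python
import numpy as np

# 24 months of hypothetical category spend, in thousands.
spend = np.array([100, 102, 101, 105, 107, 106, 110, 112, 111, 115,
                  118, 117, 121, 124, 123, 127, 130, 129, 133, 136,
                  135, 139, 142, 141], dtype=float)
months = np.arange(len(spend))

# Fit a straight line and extrapolate the next six months.
slope, intercept = np.polyfit(months, spend, deg=1)
future = np.arange(len(spend), len(spend) + 6)
forecast = slope * future + intercept

print(f"trend: +{slope:.1f}K/month")
print("next 6 months:", np.round(forecast, 1))
```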

Predictive analytics was great as long as you knew how to interpret the data, what the available actions were, and which actions were most likely to achieve the best business outcomes given the likely future trend in the spend and / or metrics. But if you didn’t know how to interpret the data, what your options were, or how to choose the one most in line with the business objectives, the predictions alone didn’t get you very far.

The answer was, of course, prescriptive analytics, which combined the predictive analytics with expert knowledge that not only prescribed a course of action but indicated why that course of action was prescribed. For example, if the system detected rising demand within the organization and predicted rising cost due to increasing market demand, the recommendation would be to negotiate for, and lock in, supply as soon as possible using an (optimization-backed) RFX, an auction, or negotiation with incumbents, depending upon which option was best suited to the current situation.

But what if the system detected that organizational demand was falling, but market demand was falling faster, there would be a surplus of supply, and the best course of action was an immediate auction with pre-approved suppliers (which were more than sufficient to create competition and satisfy demand)? And what if the auction could be automatically configured, suppliers automatically invited, ceilings automatically set, and the auction automatically launched? What if nothing needed to be done except approve, sit back, watch, and auto-award to the lowest bidder? Why would the buyer need to do anything at all? Why shouldn’t the system just go?

If the system was set up with rules that defined the behaviours the buyer allowed the system to take automatically, then the system could auto-source on behalf of the buyer and the buying organization. The permissive analytics would not only allow the system to automate non-strategic sourcing and procurement activities, but do so using leading prescriptive analytics combined with rules defined by the buying organization and the buyer. And if the prescriptive analytics included a machine-learning engine at the core, the system could learn buyer preferences for automated vs. manual vs. semi-automated events and even suggest permissive rules (that could, for example, allow a category to be re-sourced annually as long as the right conditions held).
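
A permissive rule of this kind could be as simple as the following sketch (the thresholds and signal names are hypothetical), which whitelists the conditions under which the system may launch an auction without waiting for the buyer:

```python
from dataclasses import dataclass

@dataclass
class PermissiveRule:
    max_annual_spend: float      # never auto-source above this value
    max_demand_growth: float     # only while demand is stable or falling
    min_approved_suppliers: int  # enough incumbents to ensure competition

    def allows_auto_auction(self, spend: float, demand_growth: float,
                            approved_suppliers: int) -> bool:
        return (spend <= self.max_annual_spend
                and demand_growth <= self.max_demand_growth
                and approved_suppliers >= self.min_approved_suppliers)

rule = PermissiveRule(max_annual_spend=250_000,
                      max_demand_growth=0.0,
                      min_approved_suppliers=3)

# Auto-launch only if the rule permits; otherwise queue for the buyer.
if rule.allows_auto_auction(spend=180_000, demand_growth=-0.02,
                            approved_suppliers=4):
    print("launch auction automatically")
else:
    print("queue for buyer review")
```

The point of the structure is that the buyer, not the vendor, defines the envelope within which automation is permitted.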

In other words, the next generation of analytics vendors are going to add machine learning, flexible and dynamic rule definition, and automation to their prescriptive analytics and their integrated sourcing platforms, and take automated buying and supply chain management to the next level.

But will it be the right level? Hard to say. The odds are they’ll make significantly fewer bad choices than the average sourcing professional (as the odds will increase to 98% over time), but, unlike experienced and wise sourcing professionals, they won’t detect when an event happens in left field that totally changes the dynamics and makes a former best-practice sourcing strategy moot. They’ll detect and navigate individual black swan attacks but will have no hope of detecting a coordinated black swan volley. However, if the organization also employs risk management solutions with real-time event monitoring and alerts, ties the risk management system to the automation, and forces user review of higher-spend / higher-risk categories put through automation, it might just work.

Time will tell.

UNSuitable Procurement Spend Classification!

Brian Seipel of Source One Management Services recently shared his Pros and Cons of using UNSPSC for spend classification, indicating that the best taxonomy for you, UNSPSC or otherwise, is determined by your primary goal.

According to Brian, if your goal was to hit the ground running fast and base analysis on a tried-and-true standard, then UNSPSC was a great start because, as a standard, it is:

  • pre-developed and ready-to-use,
  • capable of expressing a good degree of granularity, and
  • widely available from vendors, with a significant number of data enrichment options.

And this sounds great, but any services vendor with a spend analysis offering (Insight Sourcing Group – SpendHQ, Spendency, Sievo, etc.):

  • has one or more standard taxonomies designed for Procurement that it has been using for years and years (refined across dozens, if not hundreds, of clients) and with which it regularly achieves great results,
  • offers taxonomies that are highly granular, usually to at least four levels of detail, and sometimes more, and
  • can enrich them from dozens of sources using pre-defined mappings that the expert spend services group has ready to go.

And when you look at it this way, there are really no benefits. (Well, there is one benefit to UNSPSC, and that is easy H(T)S code mapping, but that’s a Finance/AP benefit, not a Procurement one!)

However, the benefits of a custom Procurement taxonomy:

  • alignment to organizational Procurement/Sourcing needs
  • flexibility and capability to be re-organized on the fly
  • ability to support different levels of granularity in different categories (so that drill down is only available where it makes sense)

cannot be found in UNSPSC. It’s one rigid, unaligned structure. It can’t be remapped and re-organized as needed to support changing spend responsibility (such as department-specific IT services being taken out of IT spending and mapped to the appropriate departments). And the granularity cannot be altered, allowing spend, in some cases, to be drilled down to nonsensical levels.
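
To see why this matters, here is a minimal sketch (the structure and names are illustrative) of a custom taxonomy as a simple tree, where a category can be re-parented on the fly as spend responsibility changes and each branch can stop at whatever depth makes sense, which is exactly what a fixed standard does not permit:

```python
class Node:
    def __init__(self, name: str, parent: "Node | None" = None):
        self.name = name
        self.parent = parent
        self.children: list["Node"] = []
        if parent:
            parent.children.append(self)

    def reparent(self, new_parent: "Node") -> None:
        """Move a category, e.g. department-specific IT services out of IT."""
        self.parent.children.remove(self)
        new_parent.children.append(self)
        self.parent = new_parent

root = Node("Spend")
it = Node("IT", root)
marketing = Node("Marketing", root)
mkt_it = Node("Marketing IT Services", it)  # initially sits under IT

# Spend responsibility changes: remap the node, not the transactions.
mkt_it.reparent(marketing)
```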

So while it may be standard and universally supported (and even useful from a Finance/AP point of view), it really is an UNSuitable Procurement Spend Classification. So, when it comes time to do spend analysis, do NOT use it. (Select a system that supports multi-classification, and Finance can have their UNSPSC pound-cake while you have your feathery soufflé.) Are we clear?

(And yes, if asked, even consultants who do not like UNSPSC will say it’s a reasonable option because they are told to never directly contradict a client who signs the cheque, and if the CFO who signed the PO wants it, for whatever half-baked reason, guess what is all of a sudden a viable option … )

AnyData: Another Analytics Arriviste from Across the Atlantic

Maybe some good is coming of all the gross incompetence in public sector spending, unreasonably long payment terms, and multi-nationalization of contemporary British companies … the last few years have seen more analytics companies start up in the UK than in the rest of the English-speaking world. AnyData, founded in May 2013, is one of a long list of UK-based spend analysis providers that have been receiving coverage here on SI, and over on SM, over the past year or so.

It’s one of the more unique offerings as, in some ways, it has more in common with Agiloft, a BPM (Business Process Management) vendor which recently forayed into Contract Management, building its first application in a matter of days using its visual development environment.

Like Agiloft, and unlike many other vendors in analytics, AnyData started out by building a visual development framework upon which it built its spend analysis offering. This gives it a number of advantages which include, but are not limited to, rapid configuration, rapid report and dashboard construction, rapid visualization (on par with, or faster than, Tableau, QlikView, PowerPivot, Birst, and the other platforms it is typically compared against), and rapid development of workflows to support additional data collection.

The analysis platform is centered around powerful dashboard-driven analytics that can be customized per client from dozens of dashboard templates, which include historic, strategic, geographic, vendor, company, office, cost-center, and chart-of-account overviews as well as savings opportunities, invoice opportunities, and category opportunities.

The categorization is quite powerful, currently second only to Sievo in functionality on the market. Sievo’s unique multi-pivot drill-down approach allows users to classify data in chunks, defined any way they want and in any order, in a very collaborative fashion that is unique on the market today. And while the AnyData approach is not as collaborative, it is just as powerful, as you can define chunks not on pivots and values but on queries, which can then be replayed, in the order of your choosing, as data is reloaded. So instead of having to define a three-level drill-down to select a specific group of transactions for a category, it’s a simple query, which allows for much faster categorization if you are a power user good at creating SQL queries. Much faster.
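
To illustrate the idea (this is a sketch of query-based categorization in general, not AnyData’s actual implementation, and the rule syntax and column names are invented), each rule can be expressed as a query, and the rules replayed in order, first match wins, whenever data is reloaded:

```python
import pandas as pd

# Ordered rules: (query, category). Replayed top to bottom on reload;
# a transaction keeps the first category whose query matches it.
rules = [
    ("SUPPLIER.str.contains('ACME') and AMOUNT > 1000", "MRO > Components"),
    ("GL_CODE == 6100", "IT > Software"),
    ("DESCRIPTION.str.lower().str.contains('freight')", "Logistics"),
]

def categorize(df: pd.DataFrame) -> pd.DataFrame:
    out = df.copy()
    out["CATEGORY"] = None
    for query, category in rules:
        unmapped = out["CATEGORY"].isna()
        hits = out[unmapped].query(query, engine="python").index
        out.loc[hits, "CATEGORY"] = category
    return out

df = pd.DataFrame({
    "SUPPLIER": ["ACME Corp", "Globex", "Initech"],
    "AMOUNT": [5000, 200, 900],
    "GL_CODE": [4000, 6100, 5200],
    "DESCRIPTION": ["control chips", "licences", "Freight to DC-7"],
})
print(categorize(df)[["SUPPLIER", "CATEGORY"]])
```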

And, rather uniquely, it has a very powerful data intelligence feature that allows an analyst to query and inspect the data and meta-data of a recently imported data source for the purpose of validating its accuracy and completeness, an activity that should go well beyond just validating the basic check-sums (against the annual financial reports). With AnyData’s platform, you can quickly identify sums, trends, and outliers for any time period of interest, use sliders to zoom in and out on potentially anomalous data, use filters to restrict to dimensions (and even facts) of interest, and understand the characterization of the data you are importing. Not only does this help immensely in cleansing, but it helps you pinpoint errors in cleansing and classification that standard techniques would miss.
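
As a rough illustration of import-time validation in general (the data, column names, and control total below are hypothetical, not AnyData’s feature itself), a few lines can check the batch against a known control figure and flag transactions far from their supplier’s norm:

```python
import pandas as pd

# Toy import batch: supplier A has one suspicious line item.
df = pd.DataFrame({
    "supplier": ["A"] * 5 + ["B"] * 5,
    "amount": [100, 102, 98, 101, 5000, 200, 195, 205, 198, 202],
})

# Basic check-sum against a control figure, e.g. from the annual report.
control_total = 6_401
print(f"check-sum delta: {df['amount'].sum() - control_total:,}")

# Flag per-supplier statistical outliers (threshold chosen for this tiny
# sample; on real volumes a cutoff like |z| > 3 would be typical).
grp = df.groupby("supplier")["amount"]
z = (df["amount"] - grp.transform("mean")) / grp.transform("std")
print("potential outliers:")
print(df[z.abs() > 1.7])
```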

It has additional strengths and, of course, weaknesses compared to other tools on the market, which can be explored in depth in the Spend Matters Pro series co-authored by the doctor and the prophet [membership required] (Part I), but this should give you a good introduction to, and flavour for, what AnyData is.