What the Hell is Automated Spend Analysis?

While reading a “must-read” post on “next generation spend analysis” (which shall not be named or linked to because it was not must-read and contained no useful information on spend analysis, and definitely did not contain anything that would make it next generation), the doctor encountered the claim that automated spend analytics yields spend intelligence. Now, despite claims to the contrary, there aren’t that many technologies in the Supply Management world that truly deliver spend intelligence (and that’s probably why there are only two advanced sourcing technologies that have been found to deliver year-over-year returns above 10%, namely decision optimization and spend analysis). Moreover, nothing about these technologies is automated — they require a skilled user to define the models, do the analysis, and extract the insights.

So if someone is claiming a technology offers spend intelligence, that perks up the doctor’s ears. And if someone is claiming it offers spend intelligence and is automated, that really gets his attention because if it’s real, it deserves to be shouted from the rooftops, and, if it’s not, shenanigans must be called on the charlatans. And even though calling shenanigans on the charlatans won’t stop them, as proven by the fact that repeated exposés have been done over the years on mediums (who claim to talk to the dead, but really don’t) and televangelists (who claim God is telling them to raise money for personal jets even though they aren’t even religious, and if you don’t believe the doctor, then please feel free to donate to Our Lady of Perpetual Exemption), at least the truth will be out there for those willing to look for it.

the doctor knows what automated spend reporting is, what automated spend refresh is, what automated spend cleansing and enhancement is, and what automated spend insights are, but what the hell is automated spend analytics? And how does it provide spend intelligence?

So, the doctor did some research. According to a meritalk blog post, which defines it as a must for every Federal agency and which appears to be using Spikes Cavell’s spend analysis technology, it is the automated data collection, cleansing, classification, enrichment, redaction, collation, and reporting through cloud-based systems. This makes sense, but it isn’t spend intelligence. This process will turn data into a collection of facts that provide the analyst with knowledge, and maybe even actionable insight, but not without human intervention.
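
To make the distinction concrete, here is a minimal sketch of the kind of automated classification and collation step such a platform performs. The transactions, keyword rules, and category names are all invented for illustration; the point is that the output is categorized facts ready for reporting, and nothing in the pipeline interprets those facts.

```python
# Hypothetical illustration of automated classification and collation of raw
# spend transactions into categorized facts. All data and rules are invented.
from collections import defaultdict

# Raw transactions as they might arrive from an ERP / AP extract
transactions = [
    {"supplier": "Cotswell's Cosmic Cogs", "description": "cog, 2cm", "amount": 1500.00},
    {"supplier": "Planet Express",         "description": "freight",  "amount": 320.00},
    {"supplier": "Cotswell's Cosmic Cogs", "description": "cog, 5cm", "amount": 2600.00},
]

# The "algorithms which tell the software how the data should be mapped":
# here, just keyword rules mapping a line description to a spend category.
rules = [("cog", "Components - Cogs"), ("freight", "Logistics - Freight")]

def classify(description):
    for keyword, category in rules:
        if keyword in description.lower():
            return category
    return "Unclassified"   # a human still has to resolve anything left here

# Collate: total spend by category and supplier.
spend_by_category = defaultdict(float)
for t in transactions:
    spend_by_category[(classify(t["description"]), t["supplier"])] += t["amount"]

# Collated facts, ready for a report -- but still facts, not intelligence.
for (category, supplier), total in sorted(spend_by_category.items()):
    print(f"{category:22s} {supplier:25s} ${total:,.2f}")
```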

A human will have to look at the reports and identify which opportunities are real and which are not. Simply knowing how much is spent by Engineering, spent on cogs, spent with Cotswell’s Cosmic Cogs, and shipped by Planet Express is not providing an analyst with any real intelligence. Knowledge on its own is not intelligence. Knowing that the average price paid per cog was $1.50 when the market price for the same cog appears to be $1.30 is not intelligence. Intelligence is knowing that the price of steel is projected to continue to drop due to an influx of new supply and a fall in current construction projects, that in a month the price is expected to be $1.20, that the best time to lock in a long-term contract will be in six to eight weeks, just before the steel price hits the expected low point, and how to go about sourcing that contract to get a long-term price at or below $1.20.
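
To put rough numbers on that cog example, the difference between the fact an automated report surfaces and the intelligence a human adds looks something like this. The prices come from the scenario above; the annual volume is invented for illustration.

```python
# Hypothetical worked example of the cog scenario above; volume is invented.
annual_volume   = 100_000   # cogs purchased per year (assumption)
price_paid      = 1.50      # average price paid per cog
market_price    = 1.30      # current market price
projected_price = 1.20      # expected price at the forecast low in 6-8 weeks

# The "fact" an automated report surfaces: you are paying over market today.
overpayment_today = (price_paid - market_price) * annual_volume

# The "intelligence" a human adds: wait for the forecast low, then lock in
# a long-term contract at or below the projected price.
savings_if_locked_at_low = (price_paid - projected_price) * annual_volume

print(f"Overpaying vs. today's market:            ${overpayment_today:,.0f}/yr")
print(f"Savings if locked in at the forecast low: ${savings_if_locked_at_low:,.0f}/yr")
```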

Rosslyn Analytics, who claimed to launch the “world’s first, and fastest, fully automated cloud-based spend data integration service”, defines its platform as a web-based automated spend analytics platform, but defines spend intelligence as an 8-step process that starts with planning and includes a detailed data analysis phase, both of which require human intelligence to complete.

Further searching turns up a post titled “can we ever fully automate spend analysis or do we need” on Capgemini’s Procurement Transformation Blog from 2013 that clearly states that, on their own, the analytics tools cannot interpret the data, so the tools must be programmed and algorithms developed which “tell” the software how the data should be mapped, and that even though we have now reached a level where human interaction with a data analysis tool is diminishing … human intervention is still required to tell the software what can be learned.

These are just three examples where bloggers, consultants, and solution providers all agree that while much of the spend analysis process can be automated, human intervention is still required to extract intelligence out of the facts that the tool identifies.

There is no automated spend intelligence, and any claims to the contrary are false. the doctor sincerely hopes that this is the last time he sees this phrase, because if he ever sees it again, a rant of epic proportions is sure to follow (and fingers will be pointed)!

Screwing up the Screw-Ups in BI (Repost)

Back in January of 2008, SI ran this now classic post by Eric Strovink, formerly of Opera Solutions, BIQ (acquired by Opera Solutions), and Zeborg (acquired by Emptoris, which was acquired by IBM). While writing tomorrow’s rant, the doctor was reminded of this classic post and how most companies and people screw up the basics of BI and spend analysis. Since this will put you in the right frame of mind to understand tomorrow’s post, the doctor has decided to repost it.

Baseline recently put a slide show on their site illustrating “5 Ways Companies Screw Up Business Intelligence — And How To Avoid The Same Mistakes,” with data drawn from CIO Insight. The slides are an excellent example of how mainstream IT thinking misses the essential problems of business data analysis.

Let’s take the “screw-ups” one at a time:

  1. Spreadsheet proliferation (97% of IT leaders say spreadsheets are still their most widely used BI tool.)
    Spreadsheets are one of the most valuable business modeling tools available, and IT might as well understand that they’re not going away. The problem is when spreadsheets (and offline tools like Access) are used inappropriately, to manipulate transactional data rather than drawing it in the right format from a flexible store. The solution provided by Baseline is to “cleanse and validate your data, then migrate the information to a central server/database that can be the backbone of any BI strategy.” Bzzt! Sorry, a central database won’t solve the analysis problem, and at the end of the day you’ll have just as many spreadsheets as before. That’s because a fixed schema data warehouse is a lousy analysis tool, and might as well be on planet Neptune as far as usability for the business analyst is concerned. There’s nothing wrong with a reference dataset, but business analysts need to be able to manipulate its structure as easily as a spreadsheet, or they will simply extract the raw data from it and manipulate the data offline, with the same slow, expensive, and uncertain results as today.
  2. Systems can’t talk to each other (64% of IT leaders say integration and interoperability of BI software with other key systems such as CRM and ERP pose a problem for their companies.)
    Right! Except that the Holy Grail of trying to extend a “centralized” database umbrella over completely disparate systems is both incredibly expensive and nearly impossible. Baseline suggests “[partnering] with a reputable systems integrator.” Good for them — at least they dodge this bullet rather than getting the answer completely wrong. The right answer is that business analysts should be able to construct BI datasets on their own, as needed, from whatever data sources are useful/appropriate, and it shouldn’t be difficult for them to do so (a minimal sketch of what this can look like in practice follows below). Concentrating all of the information under one umbrella isn’t necessary; many umbrellas can do the job, and if they’re easy to deploy, they’re both inexpensive and provide a better and more flexible answer.
  3. No centralized BI program (61% say they don’t have a center of excellence or the equivalent for BI.)
    And they’d be well advised to tread carefully, because BI systems have a track record of poor performance and poor customer satisfaction. Why? Because the analyses you can do with a fixed data warehouse are limited to the views set up a priori by IT or by the vendor, and those views are largely immutable. Baseline dodges this one, too, suggesting the “[creation of] a data governance and data stewardship program.” Can’t argue with that in principle, but a governance and stewardship program doesn’t actually put any meat on the table. How about putting tools into analysts’ hands that they can actually use? Right now?
  4. Data lacks integrity (57% say poor data quality significantly diminishes the value of their BI initiatives.)
    Hmmm, I wonder why the data are of such poor quality. Could it be that the BI system doesn’t really provide much insight? Could it be that the fixed schemas set up by IT or by the vendor don’t have any applicability to day-to-day questions? Could it be that the inability of the BI system to re-organize and map data on the fly causes errors to persist over time? Baseline recommends spending more money on data cleansing, which might make a cleansing vendor quite wealthy, but won’t help much. It typically isn’t cleansing that’s the problem, it’s (1) the fixed organization of the data, which is guaranteed to be inappropriate for any analysis that hasn’t been anticipated a priori, (2) the ad hoc reporting on it, which has to be easy to accomplish, as opposed to requiring IT resources (see below), and (3) the fact that cleansing can’t be accomplished on-the-fly (as it should be) by the business analysts themselves.
  5. Managers don’t know what to do with results (58% say most users misunderstand or ignore data produced by BI tools because they don’t know how to analyze it.)
    Even when BI is in place, nobody knows what to do with it. Baseline recommends that “IT staffers… should work closely and regularly with business managers to ensure that measurement, reporting, and analysis tools are supporting business goals.” But this is precisely the problem. For business analysts, BI systems are difficult to use and set up, it is difficult to create ad hoc reports, and it is impossible to change the dataset organization. It is also politically impossible to change the dataset organization if it is being shared by hundreds or thousands of users. How are you going to get them into the same room to agree on the changes?

So, Baseline is proposing (in essence) that IT resources sit cheek-by-jowl with business users, to ensure that they can get value out of a system that they otherwise could not use. This is certainly a “solution” of sorts, but it’s not practical. Either business analysts can use the system on their own, or the system will be of marginal value to them. It’s that simple.
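
To make the “use the system on their own” point concrete, here is a minimal sketch of analyst-driven dataset construction and on-the-fly remapping. Everything in it is hypothetical (invented extracts, invented GL codes), and pandas simply stands in for whatever flexible analysis tool the analyst actually has; the point is that the analyst builds the dataset, defines the mapping, and changes that mapping minutes later without an IT ticket or a schema-change committee.

```python
# Hypothetical sketch: an analyst builds an ad hoc dataset from two systems
# and remaps it on the fly -- no fixed warehouse schema, no IT ticket.
import pandas as pd

# Extracts from two disparate systems (invented data and GL codes).
erp = pd.DataFrame({"supplier": ["Acme", "Globex"], "gl_code": ["6100", "6200"],
                    "amount": [12000.0, 8000.0]})
pcard = pd.DataFrame({"supplier": ["Initech", "Acme"], "gl_code": ["6100", "6300"],
                      "amount": [1500.0, 900.0]})

transactions = pd.concat([erp, pcard], ignore_index=True)

# The analyst's own mapping of GL codes to spend categories...
mapping = {"6100": "Office Supplies", "6200": "IT Hardware", "6300": "Travel"}
transactions["category"] = transactions["gl_code"].map(mapping)
print(transactions.groupby("category")["amount"].sum())

# ...and a restructure five minutes later, when the question changes,
# without renegotiating a shared warehouse schema with anyone.
mapping["6300"] = "T&E"
transactions["category"] = transactions["gl_code"].map(mapping)
print(transactions.groupby("category")["amount"].sum())
```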

Economies of Anti-Scale

Late last month, Mr. Smith reposted a couple of great posts on why Bigger Procurement is Not Always Better Procurement (Part I and Part II) over on SpendMatters UK. While sometimes bigger spend equals bigger discount, this is typically only true for the acquisition of consumables where there is a predictable economy of scale that kicks in at higher volumes — such as the production of identical goods on a production line or the sale of multiple software licenses.

However, in some markets, as Peter points out, increasing volume does not decrease costs, and can even increase them. Let’s review his examples to understand why.

Short-Term Contingent Labour

Let’s say you want 100 additional workers for 20 days to help you stuff boxes for the Christmas rush. You might think that you should get a much better rate than you got for the 10 workers you hired for your annual 4th of July promotion, but this is not likely to be the case. First of all, lots of organizations in your position will want extra workers for the Christmas rush, and the contingent labour organization will only have so many. Secondly, even if your demand is there, why would the contingent labour provider want to take on additional resources that will then be on the bench until at least February (for the mini-Valentine’s Day rush)?

Bulk Pre-Paid Hotel Room Rates for a Year

Hotels, like airlines, make their money during peak travel and peak conference / festival seasons, and during these times, when they can charge the full amount allowed under law, the last thing they want to do is give you a room at 30% off of the normal rate, when that normal rate is typically less than 35% of what they could be charging at peak. As a result, the best deal you’re going to get is the average price they got for a room last year, as they will be hedging their bets.
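
To put rough numbers on this, here is what the hotel gives up on a peak-season night. The peak rate is invented; the percentages are the ones above.

```python
# Rough numbers for the hotel example above; the peak rate is hypothetical.
peak_rate = 400.00               # what the room fetches in peak season (assumption)
normal_rate = 0.35 * peak_rate   # normal rate: at most ~35% of peak (per the text)
bulk_rate = 0.70 * normal_rate   # your pre-paid bulk rate: 30% off the normal rate

print(f"Bulk rate: ${bulk_rate:.2f} vs. peak rate ${peak_rate:.2f}")
print(f"Revenue given up on a peak night: {1 - bulk_rate / peak_rate:.0%}")  # ~75%
```

That roughly 75% revenue hit on peak nights is in line with the “cuts profit by 60% or more” figure cited in the summary below.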

Energy

Most energy plants still rely on oil, coal, and natural gas and, as a result, energy costs are dependent on the somewhat unpredictable prices for these limited resources. Plus, these energy companies can always get the maximum allowable price from consumers and small to midsize businesses with no negotiation power. So if they are selling most of the energy they are producing, and giving you a big contract might require them to occasionally buy energy from the spot market during peak (heating / air conditioning) season, your big contract, which requires a big discount, is not attractive to them.

These are all economies of anti-scale. An economy where volume increases uncertainty (such as the short-term contract for contingent labour, who could be benched for long periods of time if hired by the contingent labour provider, or the energy example), decreases profitability (such as the hotel example where having to give you a room during peak seasons cuts profit by 60% or more), or increases overhead cost per unit beyond a baseline is anti-scale. This last case includes any situation where customization or limited-edition runs are required, as is common in jewelry or toys and games. Customization requires labour, and beyond a certain number of customized units, the provider would have to hire more labour who could not be fully utilized at the time of contract signing. This adds risk, and cost. Similarly, it does not matter how many limited-edition collector cowbell* orders of 1,000 you put in if each requires a specialized mould to be produced. Each mould and line set-up requires a fixed amount of time, and that fixed overhead does not scale with more custom orders.
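
In other words, the set-up overhead recurs with every custom run, so it never amortizes across orders. A minimal sketch with invented cost figures (only the run size of 1,000 comes from the example above):

```python
# Hypothetical cost model for the limited-edition cowbell example above.
mould_and_setup = 5000.00   # fixed cost incurred for every custom run (assumption)
unit_cost = 2.00            # variable production cost per cowbell (assumption)
run_size = 1000             # each limited-edition order is 1,000 units

def cost_per_unit(orders: int) -> float:
    """Per-unit cost across `orders` separate limited-edition runs."""
    total_cost = orders * (mould_and_setup + unit_cost * run_size)
    return total_cost / (orders * run_size)

# Ten orders cost exactly as much per unit as one order: the fixed overhead
# recurs with every run, so extra volume buys no discount.
print(cost_per_unit(1), cost_per_unit(10))   # 7.0 7.0
```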

Thus, when sizing a spend opportunity, it’s important to first identify if you are dealing with a scale or anti-scale economy. There will typically be a much larger savings potential with an economy of scale than with an economy of anti-scale. Thus, if your organization is being measured primarily on savings, it’s important to identify those economies of scale categories as soon as possible.

* Even if you will always need more cowbell!

101 Procurement Damnations – We’re Almost Halfway There

Our last post chronicled our 50th Procurement Damnation that you, as a Procurement professional, have to deal with on a regular, if not daily, basis. If only these were the only 50 damnations clouding your mind and getting in your way. There are still 50 more damnations, just as pervasive, that seemingly exist only to pester you on a daily basis and that we have not yet discussed!

However, before we get to the next 50, we thought it would be a good idea to summarize the list to date so that you can go back and review any posts in the series that you might have missed. This is SI’s biggest and most aggressive series to date: longer than the 15-part “Future” of Procurement series and the 33-part “Future” Trends Exposé series (that followed) combined, and double the length of the maverick’s 50 Shades of Pay series (assuming it gets completed), which, to date, only has 10 parts up and available for your reading pleasure.

There’s more that could be said, but much has been said in the 50 posts published to date and much more will be said in the next 50 posts, so, without further ado, here’s the first 50 for your reviewing pleasure.

Introductory Posts

Economic Damnations

Infrastructure Damnations

Environmental Damnations

Geopolitical Damnations

Regulatory Damnations

Societal Damnations

Organizational Damnations

Authoritative Damnations

Provider Damnations

Consumer Damnations

Technological Damnations

Influential Damnations

Bonus Posts!

Technological Damnation 91: Proprietary Madness

It’s bad enough that we have to deal with IP & Patent Madness, as chronicled in our post on the 89th (Technology) Damnation, but proprietary madness is likely to drive us all mad (and may someday push the doctor over the edge, into the land of the crackpot, where at least one blogger in the space is already dwelling).

Just what is proprietary madness? It’s mega-corporations, especially in software and electronics, taking the rights of ownership to the extreme. Started, and continued, by the current and former Technology heavyweights, including the likes of IBM, Microsoft, and SAP, it’s not only the creation of company-specific standards for software and hardware interfaces, it’s the restriction of the specification of those interfaces to approved partners and suppliers, limiting the supply of support services and related products to a handful of vendors. This not only drives up the price of those products and services to well above the market average price for support services for software and products with open and published specifications, but can make it difficult, if not impossible, to get support when demand is high, or to get related products if one of the few vendors who can produce them shuts down.

Those of you with SAP know exactly what we’re talking about. Unlike Oracle, which publishes its core schema, and does not change it between minor versions, SAP does not publish its core schema and does not guarantee any stability between bug updates or minor versions, such as between 4.7.1 and 4.7.2, instead requiring you to go through its proprietary NetWeaver interface, which you will, of course, have to acquire to actually support any customizations (and likely build applications in the Portal). And learning the Portal is no easy task. One of the most complete books on it is 700 pages alone! Then you need to find the documentation on the data stored in each R3 module you are interested in and how to get it out. There’s a reason that not every shop does SAP support — and that’s because, even though SAP now has a lot of documentation on their website, you need weeks of expensive training just to learn the basics of Portal development, the R3 interface, and the core data types and record types used to pull the data you need out of R3 and push modified data back in. Getting to the point where you are effective at developing and integrating custom supply chain applications requires months of training and mentoring and years of experience. As a result, it’s typically only SAP partners who can provide this support. In contrast, with an open schema, as found in Oracle and MySQL, all you need is SQL experience and the interface library for whatever language you are using (whereas NetWeaver limits you to Java) — which makes it much easier (and cheaper) to not only find support resources, but vendors with best-of-breed software modules and platforms that can plug and play with Oracle right out of the box!
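
For contrast, this is roughly what “all you need is SQL experience and an interface library” looks like against an open, published schema. It is a hypothetical sketch: the table and column names are invented, and sqlite3 stands in for the real database purely so the example is self-contained; against Oracle or MySQL you would swap in the appropriate driver and connection string.

```python
# Hypothetical sketch: pulling spend data out of an open, published schema
# with plain SQL. Table and column names are invented; sqlite3 stands in
# for the real database driver so the example runs on its own.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE po_lines (supplier TEXT, material TEXT, qty INTEGER, unit_price REAL);
    INSERT INTO po_lines VALUES
        ('Cotswell''s Cosmic Cogs', 'cog',     1000, 1.50),
        ('Cotswell''s Cosmic Cogs', 'cog',      500, 1.45),
        ('Planet Express',          'freight',    1, 320.00);
""")

# Because the schema is documented and stable across versions, this query is
# all the "integration" a third-party module or in-house analyst needs.
for supplier, material, spend in conn.execute("""
        SELECT supplier, material, SUM(qty * unit_price)
        FROM po_lines
        GROUP BY supplier, material"""):
    print(f"{supplier:25s} {material:10s} ${spend:,.2f}")
```

The same pull against a closed schema has to go through the vendor’s proprietary access layer and approved middleware instead, which is exactly the cost and lock-in described above.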

But it’s not just software vendors that create proprietary technologies, it’s hardware vendors too. Dell, IBM, HP, etc. all have custom control and administrative solutions for their server platforms. Want a third-party virtualization platform to work out of the box on a new server configuration and take full advantage of its capabilities? Forget it! You’ll probably have to wait six months to a year or more before third parties, like VMWare, are optimized and configured for those platforms (assuming that the full specifications are published upon technology release and licenses for custom drivers aren’t required), making the hardware vendor’s own administrative software a must if you want to upgrade to the latest technology, and not to technology that was outdated a year ago.

But it’s not just IT companies that have proprietary technologies and interfaces. Big electronics companies do this too for most of their consumer (and even enterprise) electronics, including companies like Samsung (and its new Mobile AP core) and Sony (and its new ultra high definition TV technology).

And while there is nothing wrong with proprietary technology per se, as a company needs some assets in order to survive, the lengths to which some companies go to keep it secret and protect it, in a world where data needs to be shared and products need to work with other products, make development (and the supply chains that rely on that development) a nightmare.

We need open standards and open interfaces. The sheer existence of IE alone should make that clear. (There’s a reason that many new IT start-ups simply won’t support it anymore, and that’s because they can write stuff that runs almost flawlessly in Chrome, Firefox, and a dozen other browsers, or that runs almost flawlessly in a single version of IE on Windows platforms only, but not both. Since Chrome and Firefox and similar clone browsers run on all major platforms, and IE doesn’t, and since Chrome and Firefox almost fully support the open standards, whereas IE supports the Microsoft standards and only those portions of the open standards it feels like, and, to top it off, [older versions of] IE allow case-insensitive JavaScript!) Restrictive proprietary standards and interfaces just make life unnecessarily difficult.

But too many companies are too big and powerful, so it’s not going to happen and we’ll be forever wasting countless hours checking interface requirements, versions, and support availability instead of focusing on whether or not the technology meets our needs and will help us get our work done. It’s more daily damnation for all of us.