Category Archives: Spend Analysis

Decideware: Taking Marketing Magic to a Whole New Level!

When we last briefed you on Decideware, they were Taking Agency Expense Management to the Next Level! Their Production module had just entered beta, and they had the facility to track not only quotes but actual costs down to the lowest level of detail, and to associate them with tasks, budgets, providers, and even individual resources.

In the production module, clients define jobs in detail, associate team members, define workflow, assign work to vendors, break down costs, and go. The definition of a job can be quite detailed: name, scope, lead, budget and budget period, type, geography, org unit, and so on. It can be as detailed as necessary, supporting everything from the creation of a simple banner advertisement to a full-scale shoot of an extended infomercial, with costs ranging from $10 thousand to $10 million.

Costs can be broken up by phase, then broken down by expense type, and even by resource. The module can track estimated and actual costs, and will then compute the variance automatically by line item, task type, and phase. This may simply sound like an enhanced version of their scope-of-work capability, but the breakdown is much more detailed and the ability to capture data much more refined. This is important, because it supports their new dashboard module.
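To make the estimated-versus-actual roll-up concrete, here is a minimal sketch of the computation. This is not Decideware's implementation; the column names and figures are purely illustrative.

```python
import pandas as pd

# Illustrative job cost lines: one row per line item, tagged with its
# phase and task type, with estimated and actual cost.
lines = pd.DataFrame({
    "phase":     ["Pre-Production", "Pre-Production", "Shoot", "Shoot", "Post"],
    "task_type": ["Storyboard", "Casting", "Crew", "Location", "Editing"],
    "estimated": [12000, 8000, 45000, 20000, 15000],
    "actual":    [11500, 9500, 52000, 19000, 16500],
})

# Variance per line item, then rolled up by phase (the same groupby
# works for task_type or any other dimension).
lines["variance"] = lines["actual"] - lines["estimated"]
print(lines.groupby("phase")[["estimated", "actual", "variance"]].sum())
```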

Their dashboard module, which needs a better name, is not a dashboard at all, but the beta release of their new deep BI capability. Decideware have recently integrated Tableau and can finally bring Marketing the deep insight into spend and performance that Marketing has until this point lacked.

Using Tableau, they have developed custom level 1 and level 2 dashboards for over a dozen big clients and are providing marketing spend insights that go light years beyond what Marketing has ever seen, with the deep drill-down you’d expect from a standard spend analysis tool.

At level 1, clients can see how much they are spending by agency, project type, phase, task, or resource, drill down on any available dimension, and, once and for all, see average costs for resources, tasks, projects and other deliverables. They can see when the average cost per hour for banner ad creation and management is $75 and one firm is charging them $125.

This is great, but the real value comes when you start importing performance data and contrasting it against cost. Nowhere is it more true than in marketing that “it’s not what you spend, it’s the impact you make”. It’s not how much more or less than the average you pay for social media campaign marketing, it’s how many impressions you make and clicks you get. If the average campaign that costs $5,000 yields 500 impressions and 15 click-throughs, then paying a company $10,000 for a campaign that yields 2,000 impressions and 100 click-throughs is a great deal, as you are paying $100 per lead vs. $333. And while most good marketers will get this data from a focused campaign, how many can integrate it with the cost of campaign (banner ad) creation, how many can contrast it against similar campaigns, and how many can do that against normalized costs around the globe? None. But now they can.
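The arithmetic is simple enough to sketch. The figures are the ones from the example above, and “lead” here is equated with a click-through:

```python
# Cost per lead (click-through) for the two campaigns described above.
campaigns = {
    "average": {"cost": 5000, "impressions": 500, "clicks": 15},
    "premium": {"cost": 10000, "impressions": 2000, "clicks": 100},
}

for name, c in campaigns.items():
    print(f"{name}: ${c['cost'] / c['clicks']:,.0f} per lead")

# average: $333 per lead
# premium: $100 per lead
```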

With their latest development, Decideware have not only taken (Marketing) Agency Management to a whole new level, they have also taken insight into ROI to a whole new level. Which creative genius is worth the $500/hour (as his contribution can now be compared to end results across all his projects, and his cost per effect normalized and compared against the other creative geniuses at the other agencies)? And which one isn’t even as productive as a $50 grunt doing stock art? With the new Decideware platform, Marketing can win not only the Agency Management battle, but the cost management war.

What the Hell is Automated Spend Analysis?

While reading a “must-read” post on “next generation spend analysis” (which shall not be named or linked to because it was not must-read, contained no useful information on spend analysis, and definitely did not contain anything that would make it next generation), the doctor encountered the claim that automated spend analytics yields spend intelligence. Now, despite claims to the contrary, there aren’t that many technologies in the Supply Management world that truly deliver spend intelligence (and that’s probably why there are only two advanced sourcing technologies that have been found to deliver year-over-year returns above 10%, namely decision optimization and spend analysis). Moreover, nothing about these technologies is automated: they require a skilled user to define the models, do the analysis, and extract the insights.

So if someone is claiming a technology offers spend intelligence, that perks up the doctor’s ears. And if someone is claiming it offers spend intelligence and is automated, that really gets his attention, because if it’s real, it deserves to be shouted from the rooftops, and, if it’s not, shenanigans must be called on the charlatans. And even though calling shenanigans on the charlatans won’t stop them, as proven by the repeated exposés that have been done over the years on mediums (who claim to talk to the dead, but really don’t) and televangelists (who claim God is telling them to raise money for personal jets even though they aren’t even religious, and if you don’t believe the doctor, then please feel free to donate to Our Lady of Perpetual Exemption), at least the truth will be out there for those willing to look for it.

the doctor knows what automated spend reporting is, what automated spend refresh is, what automated spend cleansing and enhancement is, and what automated spend insights are, but what the hell is automated spend analytics? And how does it provide spend intelligence?

So, the doctor did some research. According to a MeriTalk blog post, which defines it as a must for every Federal agency and which appears to be describing Spikes Cavell’s spend analysis technology, it is the automated data collection, cleansing, classification, enrichment, redaction, collation, and reporting through cloud-based systems. This makes sense, but it isn’t spend intelligence. This process will turn data into a collection of facts that provide the analyst with knowledge, and maybe even actionable insight, but not without human intervention.

A human will have to look at the reports and identify which opportunities are real and which are not. Simply knowing how much is spent by Engineering, spent on cogs, spent with Cotswell’s Cosmic Cogs, and shipped by Planet Express is not providing an analyst with any real intelligence. Knowledge on its own is not intelligence. Knowing that the average price paid per cog was $1.50 when the market price for the same cog appears to be $1.30 is not intelligence. Intelligence is knowing that the price of steel is projected to continue to drop due to an influx of new supply and a fall in current construction projects, that in a month the price is expected to be $1.20, that the best time to lock in a long-term contract will be in six to eight weeks, just before the steel price hits its expected low point, and how to go about sourcing that contract to get a long-term price at or below $1.20.

Rosslyn Analytics, who claimed to launch the “world’s first, and fastest, fully automated cloud-based spend data integration service”, defines its platform as a web-based automated spend analytics platform, but defines spend intelligence as an 8-step process that starts with planning and includes a detailed data analysis phase, both of which require human intelligence to complete.

Further searching turns up a post titled “can we ever fully automate spend analysis or do we need” on Capgemini’s Procurement Transformation Blog from 2013 that clearly states that, on their own, the analytics tools cannot interpret the data, so the tools must be programmed and algorithms developed which “tell” the software how the data should be mapped, and that even though we have now reached a level where human interaction with a data analysis tool is diminishing … human intervention is still required to tell the software what can be learned.

These are just three examples where bloggers, consultants, and solution providers all agree that while much of the spend analysis process can be automated, human intervention is still required to extract intelligence out of the facts that the tool identifies.

There is no automated spend intelligence, and any claims to the contrary are false. the doctor sincerely hopes that this is the last time he sees this phrase, because if he ever sees it again, a rant of epic proportions is sure to follow (and fingers will be pointed)!

Screwing up the Screw-Ups in BI (Repost)

Back in January of 2008, SI ran this now-classic post by Eric Strovink, formerly of Opera Solutions, BIQ (acquired by Opera Solutions), and Zeborg (acquired by Emptoris, which was acquired by IBM). While writing tomorrow’s rant, the doctor was reminded of this classic post, and of how most companies and people screw up the basics of BI and spend analysis. Since it will put you in the right frame of mind to understand tomorrow’s post, the doctor has decided to repost it.

Baseline recently put a slide show on their site illustrating “5 Ways Companies Screw Up Business Intelligence — And How To Avoid The Same Mistakes,” with data drawn from CIO Insight. The slides are an excellent example of how mainstream IT thinking misses the essential problems of business data analysis.

Let’s take the “screw-ups” one at a time:

  1. Spreadsheet proliferation (97% of IT leaders say spreadsheets are still their most widely used BI tool.)
    Spreadsheets are one of the most valuable business modeling tools available, and IT might as well understand that they’re not going away. The problem is when spreadsheets (and offline tools like Access) are used inappropriately, to manipulate transactional data rather than drawing it in the right format from a flexible store. The solution provided by Baseline is to “cleanse and validate your data, then migrate the information to a central server/database that can be the backbone of any BI strategy.” Bzzt! Sorry, a central database won’t solve the analysis problem, and at the end of the day you’ll have just as many spreadsheets as before. That’s because a fixed schema data warehouse is a lousy analysis tool, and might as well be on planet Neptune as far as usability for the business analyst is concerned. There’s nothing wrong with a reference dataset, but business analysts need to be able to manipulate its structure as easily as a spreadsheet, or they will simply extract the raw data from it and manipulate the data offline, with the same slow, expensive, and uncertain results as today.
  2. Systems can’t talk to each other (64% of IT leaders say integration and interoperability of BI software with other key systems such as CRM and ERP pose a problem for their companies.)
    Right! Except that the Holy Grail of trying to extend a “centralized” database umbrella over completely disparate systems is both incredibly expensive and nearly impossible. Baseline suggests “[partnering] with a reputable systems integrator.” Good for them — at least they dodge this bullet rather than getting the answer completely wrong. The right answer is that business analysts should be able to construct BI datasets on their own, as needed, from whatever data sources are useful/appropriate, and it shouldn’t be difficult for them to do so. Concentrating all of the information under one umbrella isn’t necessary; many umbrellas can do the job, and if they’re easy to deploy, they’re both inexpensive and provide a better and more flexible answer.
  3. No centralized BI program (61% say they don’t have a BI center of excellence or the equivalent.)
    And they’d be well advised to tread carefully, because BI systems have a track record of poor performance and poor customer satisfaction. Why? Because the analyses you can do with a fixed data warehouse are limited to the views set up a priori by IT or by the vendor, and those views are largely immutable. Baseline dodges this one, too, suggesting the “[creation of] a data governance and data stewardship program.” Can’t argue with that in principle, but a governance and stewardship program doesn’t actually put any meat on the table. How about putting tools into analysts’ hands that they can actually use? Right now?
  4. Data lacks integrity (57% say poor data quality significantly diminishes the value of their BI initiatives.)
    Hmmm, I wonder why the data are of such poor quality. Could it be that the BI system doesn’t really provide much insight? Could it be that the fixed schemas set up by IT or by the vendor don’t have any applicability to day-to-day questions? Could it be that the inability of the BI system to re-organize and map data on the fly causes errors to persist over time? Baseline recommends spending more money on data cleansing, which might make a cleansing vendor quite wealthy, but won’t help much. It typically isn’t cleansing that’s the problem, it’s (1) the fixed organization of the data, which is guaranteed to be inappropriate for any analysis that hasn’t been anticipated a priori, (2) the ad hoc reporting on it, which has to be easy to accomplish, as opposed to requiring IT resources (see below), and (3) the fact that cleansing can’t be accomplished on-the-fly (as it should be) by the business analysts themselves.
  5. Managers don’t know what to do with results (58% say most users misunderstand or ignore data produced by BI tools because they don’t know how to analyze it.)
    Even when BI is in place, nobody knows what to do with it. Baseline recommends that “IT staffers… should work closely and regularly with business managers to ensure that measurement, reporting, and analysis tools are supporting business goals.” But this is precisely the problem. For business analysts, BI systems are difficult to use and set up, it is difficult to create ad hoc reports, and it is impossible to change the dataset organization. It is also politically impossible to change the dataset organization if it is being shared by hundreds or thousands of users. How are you going to get them into the same room to agree on the changes?

So, Baseline is proposing (in essence) that IT resources sit cheek-by-jowl with business users, to ensure that they can get value out of a system that they otherwise could not use. This is certainly a “solution” of sorts, but it’s not practical. Either business analysts can use the system on their own, or the system will be of marginal value to them. It’s that simple.

Data Analytics is Big Money, But

Last Friday, Palantir raised $450 Million in a new round of funding, at a valuation of almost $20 Billion, making it the fourth most highly valued “startup” to date, with almost $1 Billion in total funding from investors including Founders Fund, Tiger Global Management, and In-Q-Tel, the CIA’s investment arm.

But it’s not just big data that generates big money (for the software provider) and big value (for the organization that has [access to] it). It’s big analytic power. And, as SI has indicated repeatedly, the data set doesn’t necessarily need to be that big to identify considerable savings opportunities.

A million transactions might not be more insightful than 1,200 transactions. If the transactions are for 10 different products from 10 different suppliers over the course of a year, a single summary transaction for each month for each supplier-product pair, capturing the lowest price paid, the average price paid, the highest price paid, and the total paid, is just as informative from a spend analysis perspective. Given this data, the buyer can see, for each product, how much money would have been saved by always buying at the lowest price, how the price is trending, and how much could be saved by using a contract to lock the product in at a price below the current market price. The other 998,800 transactions are not needed.
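As a sketch of that summarization, assuming a flat transaction table with hypothetical column names:

```python
import pandas as pd

# Hypothetical raw transaction table: one row per purchase, with
# supplier, product, date, unit price, and amount paid.
txns = pd.read_csv("transactions.csv", parse_dates=["date"])

# Collapse to one summary row per supplier-product pair per month:
# lowest, average, and highest unit price paid, plus total spend.
txns["month"] = txns["date"].dt.to_period("M")
summary = (
    txns.groupby(["supplier", "product", "month"])
        .agg(low=("unit_price", "min"),
             avg=("unit_price", "mean"),
             high=("unit_price", "max"),
             total=("amount", "sum"))
        .reset_index()
)
# 10 suppliers x 10 products x 12 months = at most 1,200 summary rows,
# whether the raw table held ten thousand transactions or ten million.
```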

In other words, while you need large spend cubes to find value opportunities, which will often depend on redefining categories, redefining shipping lanes, redefining delivery schedules, and so on, you can often get away with cubes of, at most, hundreds of thousands of well-defined (summary) transactions (for the right time period). Millions of transactions are typically not necessary, and that’s why you can do enterprise-wide spend analysis on a laptop with the right spend analysis tool (like busiq.com), as you can generally define a transaction set of just a few million transactions that covers the last three years and fits in memory!

Why You Should Not Build Your Own e-Sourcing System, Part II: Spend Analysis

In Part I, we noted that Mr. Smith was right in his recent post, “thinking of building your own esourcing system please don’t”, over on Spend Matters UK, where he pleaded with those organizations, particularly those in the public sector, who thought they could build their own e-Sourcing system not to. We gave a host of reasons why only those organizations whose core business is software development and delivery, specializing in Sourcing or Source-to-Pay, should even consider the possibility. We also agreed with Mr. Smith that any other organization that even considered the possibility was

  • going to waste OUR money building it,
  • waste exorbitant amounts of money keeping the system up to date and compliant with ever-shifting legislation, and
  • only feed those dangerous delusions at best (and possibly create an epic disaster worse than the Smug cloud that ruined South Park because, of the 11 greatest supply chain disasters of all time, 8 were caused by technology failures and 6 by software platform failures!)

But we know this isn’t enough to dissuade the smuggest and most deluded from considering the notion. So we’re going to dive in and address some of the difficulties that will have to be conquered, one primary module at a time, starting with spend analysis.

Even though every vendor and their dog thinks they can deliver a spend analysis system these days, the reality is that most vendors, including those with a lot of database and reporting experience, can’t. If vendors with significant experience in data(base) management and reporting can’t build a decent spend analysis system, what makes you think your organization can?

A spend analysis solution must be:

  • Powerful
    and support multiple spend analysis cubes, with derived and range dimensions, stored in public and private workspaces;
  • Flexible
    and support multiple categorization schemes, vendor and offering families, and user-defined filters and drill-downs;
  • Manageable
    with user-defined data mappings and cleansing rules, hierarchical rule priorities, and easy enrichment;
  • Open
    and easy to get data in, out, and mapped; and
  • Informative
    with built-in report libraries, a powerful report builder, and an intuitive report customization feature.

This is not easy. Let’s start with flexibility. Most vendors probably have their goods and services mapped against UNSPSC, which your buyers of domestic goods are familiar with. But globally traded goods are probably mapped against HTS, which your tax division wants; your organization probably has its own GL codes, which are required to keep Accounts Payable happy; and none of these categorization schemas is suitable for real spend analysis. As a result, you probably need to maintain at least four separate categorization schemas (for buyers, traders, accounts payable, and real analysts). If you think you can easily achieve this by slapping a report builder on top of an open source relational database, think again.
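To see why, consider that every transaction must carry several parallel categorizations at once, each belonging to its own hierarchy. A minimal sketch, with purely illustrative codes:

```python
# One transaction, four parallel categorization schemas. A single
# "category" column in a relational table cannot represent this; each
# schema is its own hierarchy that must be separately mapped, maintained,
# and drilled into. (All codes below are illustrative, not authoritative.)
transaction = {
    "description":       "hex-head steel bolts, 10mm",
    "unspsc":            "31161600",               # what the buyers know
    "hts":               "7318.15",                # what the tax division wants
    "gl_code":           "5020-MRO",               # what Accounts Payable needs
    "analysis_category": "Indirect/MRO/Fasteners", # what the analyst needs
}
```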

Let’s move on to power. One cube is never enough. If you’re an organization of reasonable size doing year-over-year spend analysis over a reasonable time frame, you’re looking at millions, if not tens of millions, of transactions. If you believe that you can dump all of that in one cube and make sense of it, assuming you can even design a system that can build that cube without crashing (it’s big data, remember), you’re probably living in ImaginationLand (which is a very dangerous place to be).

We cannot forget about openness. The data you need will not live in the spend analysis system. It will live in the ERP (Enterprise Resource Planning) system. It will live in the Accounts Payable system. It will live in the TMS (Transportation Management System). It will live in the WIMS (Warehouse Inventory Management System). It will live in the VMS (Vendor Management System). And so on. Every one of these systems will have a different schema, its own data master, and, probably, duplicate vendor and product entries with various spellings of names, locations, and so on.
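As an illustration of the duplicate-entry problem, here is a crude sketch of vendor normalization. Real matching needs fuzzy logic, a curated alias table, and human review; the names below are just examples.

```python
import re

# The same vendor arrives from different systems under different spellings
# and must be mapped to a single master record.
raw_vendors = ["IBM Corp.", "I.B.M.", "Intl Business Machines", "IBM"]

def normalize(name: str) -> str:
    """Crude canonicalization: lowercase, strip punctuation and suffixes."""
    name = re.sub(r"[^a-z0-9 ]", "", name.lower())
    return name.replace("corp", "").strip()

# No simple string rule catches "Intl Business Machines", so a curated
# alias table is still required on top of the normalization.
aliases = {"intl business machines": "ibm"}
master = {v: aliases.get(normalize(v), normalize(v)) for v in raw_vendors}
print(master)  # all four variants resolve to "ibm"
```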

Nor can we forget about manageability. It must be easy to map, normalize, and cleanse all of the data that is pushed into the system by hand, because AI doesn’t work. Every organization uses its own classification and shorthand, every department uses its own variations on the theme, and no system can figure out every error a human can make. All AI systems do is pile on rules until there are more collisions than correct exception mappings. That’s why a spend analysis system not only has to support multi-level rules, but must help the user define appropriate multi-level rules and understand, when a transaction is mis-mapped, which rule did the mapping, what exception rule is required, and how broad that rule should be.
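A sketch of the multi-level rule idea, simplified to first-match-wins with explicit priorities; a real system also needs rule provenance, audit trails, and tooling to surface which rule fired. The vendor and categories below are illustrative, borrowed from the earlier cog example.

```python
# Rules are tried from most specific (lowest priority number) to least
# specific; the first match wins. Returning which rule fired is what lets
# a user write a narrow exception rule when a transaction is mis-mapped.
rules = [
    (1, "Cotswell's cogs are direct spend",
        lambda t: t["vendor"] == "Cotswell's Cosmic Cogs", "Direct/Cogs"),
    (2, "other cogs are MRO parts",
        lambda t: "cog" in t["desc"], "MRO/Parts"),
    (9, "catch-all",
        lambda t: True, "Unclassified"),
]

def classify(txn: dict):
    for priority, name, predicate, category in sorted(rules, key=lambda r: r[0]):
        if predicate(txn):
            return category, name  # category plus the rule that fired

txn = {"vendor": "Cotswell's Cosmic Cogs", "desc": "cosmic cog, model 7"}
print(classify(txn))  # ('Direct/Cogs', "Cotswell's cogs are direct spend")
```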

This leaves us with the need to extract useful information that can be used to identify real savings or value generation opportunities. No canned set of reports can do this. Standard reports can indicate where to begin looking, but simply knowing that a spend is high, or higher than the market average, does not indicate why (locked-in prices, bundled-in services, quality guarantees, maverick spend, supplier overcharges) or what factors, if addressed, would decrease spend.

And while this is just a high-level overview of the challenges, the hope is that it is sufficient to convince you that development is not an easy task, and not something the average organization should remotely entertain.