Category Archives: rants

Where’s the Beef?

Contrary to what you might expect, this isn’t a post about the beef supply chain, or the purity of beef that you source, but a post about modern media. I’m borrowing Wendy’s classic catch-phrase because, well, it’s what we should be asking anytime we watch the news or read an article that, simply put, does nothing more than summarize press releases and coverage from other sources.

Why is the doctor ranting about this now? Well, the day he’s writing this is just a little over a year since the inauguration of Donald Trump, who, according to The Washington Post, made 2,140 false or misleading claims in his first year. But this isn’t what set the doctor off.

It’s the behaviour of media in the last year, and their repeated spreading of fake news, which is real, and, typically, not the fake news that the politically leaning media enterprises are rallying against (which each have their own set of alternative facts). And how this all popped to the forefront of his mind as he was scrolling through his archives and stumbled upon this classic post from January 2011 on why you have to Think!.

In this classic post, where he covered an awesome article by Atlantic Business of the same name, he started off by quoting the author who worries [that] we seem to have forgotten or dismissed the value of careful and considered thought because common sense seems to be in very short supply. And pondering on this, and the author’s statement that we always want an instant response or immediate gratification, he noted how it was becoming common for a journalist, or blogger, [who] doesn’t cover a “breaking” story the minute it happens, to feel that he’ll miss the boat.

And this, as the doctor noted, is a problem. We’ve gone from a world where a company would make a big announcement in a press conference and it would be a headline the next day — after the journalist had time to verify the statement, think about the impact, talk to experts, verify statements with references and customers, and so on — to a world where the press release goes up and 5 minutes later there are two dozen online sites offering “deep and complete coverage”. How “deep and complete” can a story be if someone spent 5 minutes of research on a press release and a couple of websites? The answer is NOT VERY.

But if we were still in that world, that would be almost acceptable. Today, when a company releases a press release about its new product, if it’s a phone or accessory, the journalists talk about whether the color will match your latest outfit based on what’s in style. An executive makes a statement about the importance of sustainability and how the government should create regulations and laws, and instead he gets unrelated backlash about how the additional cost will result in job loss because companies will just move to a locale where there are no regulations. The Prime Minister of Canada goes to the World Economic Forum to discuss important global trade issues and all the journalists care about is what pair of socks he wore.
Who the F*ck cares?

Reporting is supposed to be about facts, issues, and deep information we can’t dig up on our own or deep thought and analysis. It’s supposed to be about the beef, not the bun, the sesame seeds, or the fancy box it came in.

And the worst thing is that our willingness to accept this as news is leading to our willingness to accept press releases as product tech sheets and scientific fact without any analysis whatsoever. At a time when we need to Think! the most, we are now, often, thinking the least when we should be echoing Dave Thomas and asking Where’s the Beef?!

A Visual Metaphor …

Is the following a visual metaphor for:

  1. Supply Chain Bloggers at the latest M&A press conference,
  2. Spend Analysis Vendors at Data Mapping and Cleansing,
  3. Sourcing Consultants at Indirect Spend, or
  4. All of the Above?

You Decide!

Why Do We Still have First Generation ERP/Data Warehouse BI?

the doctor was recently asked by a senior consultant if a CFO was right when he said “why should I use Spend Analysis if my ERP has BI functionality that allows me to do ‘any’ analytics and generate reports … and I only have one ERP instance as the company is relatively small (< $100M)?”

How is the CFO wrong? Can we even count the ways? First of all, let’s go back ten (long) years to when SI published this great post from the spend master himself, Eric Strovink, on screwing up the screw-ups in BI, where he noted that Baseline, in their efforts to defend BI, were simply pointing out more holes in the process. In this post, the spend master noted:

1. A central database won’t solve the analysis problem, and at the end of the day you’ll have just as many spreadsheets as before … which, as every CFO and CIO knows, is way too many.

2. Business analysts should be able to construct BI datasets on their own, as needed, from whatever data sources are useful/appropriate, and it shouldn’t be difficult for them to do so … but most BI tools only make it easy to construct datasets and reports on data in the ERP. And you NEVER, EVER, EVER have all the data you need in the ERP. Some is in the AP. Some is in the sourcing and procurement systems. Some is in the WIMS. And then there are market data feeds that can provide insight, which aren’t in the ERP either.

3. While BI is said to be the cornerstone of a governance program, a governance and stewardship program doesn’t actually put any meat on the table … whereas modern spend analysis systems do.

4. While BI can support data integrity, it typically isn’t cleansing that’s the problem, it’s (1) the fixed organization of the data, which is guaranteed to be inappropriate for any analysis that hasn’t been anticipated a priori, (2) the ad hoc reporting on it, which has to be easy to accomplish, as opposed to requiring IT resources (see below), and (3) the fact that cleansing can’t be accomplished on-the-fly (as it should be) by the business analysts themselves.

5. BI systems are difficult to use and set up, it is difficult to create ad hoc reports, and it is impossible to change the dataset organization … especially compared to spend analysis.

And this doesn’t even begin to address the facts that

6. BI reports are pretty generic, and not fine tuned to Sourcing, Procurement, or Finance. Modern SA systems, built by Sourcing, Procurement, and Finance professionals, have out of the box reporting fine-tuned to the needs of sourcing, procurement, and finance professionals that report on spend by category, supplier and metrics by category and supplier with easy drill down and segmentation by department, category, etc.

7. BI engines work on one schema — the ERP schema. And this is not always appropriate for Sourcing and Procurement, who need to manage by category and then do what-if analysis against different re-categorizations to try and find the best way to source and procure for the organization. Modern SA tools allow for the creation of different schemas, different cubes on those schemas, and different views on those cubes. Power not found in the BI.

8. BI engines expect the data in the ERP. SA systems don’t. They can import data from multiple systems, flat files, market feeds, etc. — put it in private, or public workspaces, reclassify and modify and augment the data as needed, and create true intelligence on a category or a supplier — not just a summary of last year’s spend by product or supplier.

9. The ability of first (and even second generation) BI engines to create arbitrary reports is considerably overstated. Most of them limit the facts and dimensions that can be used in reports to those in defined tables, and limit the self-service reporting to modification of pre-defined templates. Not the freeform capability of a modern Tableau or Qlik solution, and definitely not the freeform capability in a best in class Spend Analysis solution that can allow any dimensions or facts to be used and reports and dashboards to be created using any defined report components in an easy drag and drop manner.
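The kind of self-service dataset construction described above can be sketched in a few lines. The following is a minimal illustration (pure Python, with hypothetical system extracts, GL codes, and supplier names) of pulling records from multiple sources beyond the ERP, applying an analyst-defined categorization on the fly, and rolling up by category:

```python
from collections import defaultdict

# Hypothetical extracts from three different systems -- not just the ERP.
erp_spend = [
    {"supplier": "Acme", "gl_code": "5100", "amount": 120_000},
    {"supplier": "Globex", "gl_code": "5200", "amount": 80_000},
]
ap_invoices = [
    {"supplier": "Acme", "gl_code": "5100", "amount": 15_000},
]
procurement_pos = [
    {"supplier": "Initech", "gl_code": "5300", "amount": 40_000},
]

# Analyst-defined mapping, changeable on the fly -- no IT ticket required.
gl_to_category = {"5100": "Packaging", "5200": "Logistics", "5300": "IT Services"}

def build_cube(*sources):
    """Combine records from any number of sources and roll up spend by category."""
    cube = defaultdict(float)
    for source in sources:
        for rec in source:
            category = gl_to_category.get(rec["gl_code"], "Uncategorized")
            cube[category] += rec["amount"]
    return dict(cube)

print(build_cube(erp_spend, ap_invoices, procurement_pos))
# -> {'Packaging': 135000.0, 'Logistics': 80000.0, 'IT Services': 40000.0}
```

Swap in a different `gl_to_category` mapping and you have a re-categorization for what-if analysis — the operation a fixed-schema BI tool makes painful and a modern SA tool makes trivial.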

And so on. Hopefully by now you get the point — especially when there are SA solutions out there that start at only a few thousand a month and provide at least 10 times the value of that outdated BI solution the ERP company should be paying you to use. (The technical debt they owe you on this is huge!)

Going Digital. Digitization. Digital Transformation.

Do you know what any of these terms mean? Are you sure? I’m not sure I am, and I’ve been “digital” for three and a half decades — and I’m not sure I’m whatever “digital” is when it’s spoken by someone who hasn’t been digital since before it was cool. I had a TRS-80 which, supposedly, understood the BASIC programming language (it did, but even if you think you know BASIC, unless you had the joy of Level I BASIC, you don’t … especially if you haven’t experienced the joy of only three error messages … which, I will admit, was better than the one error message I got on the VAX) and that was followed by an 8088 … remember that? Probably not … it was long before it was all about the pentiums … but, like the TRS-80, it was digital. (And if you don’t understand any of this, all you need to know is I’m the One That’s Cool.)

The thing is, if you’re using a computer, you’ve already gone digital. It works on bits, not analog signals. So what does it mean to go digital? Since there isn’t a business these days that doesn’t use computers, there isn’t a business that’s not digital. The only question is how much is done on the computer. And, more importantly, how much is done entirely on the computer. “Going digital” is, simply put, a meaningless bullshit phrase.

Now let’s talk about digitization. Technically, this is just the process of converting an analog signal to a digital one (by selecting a sampling frequency, such as every second, tenth of a second, hundredth of a second, etc.). Or, more generally, converting something in the analog world to the digital world. In the back office, this usually means converting paper documents to digital ones. But this can just be the process of scanning paper documents to image files, which cannot be searched, automatically indexed on key meta-data, etc. That requires (advanced) OCR, and machine learning to take corrections and improve the OCR so that future digitization of faxed documents (sent as images) or PDF documents can be automatically converted into searchable, indexed, text formats with accuracy. So while this isn’t as much of a bullshit term as going digital is, it’s a pretty ambiguous one. A software provider doesn’t have to do very much to honestly say they support digitization given the multitude of (rather weak) definitions that exist.
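The technical (signal) sense of digitization mentioned above is easy to make concrete. A minimal sketch, with an arbitrary 1 Hz sine wave and sampling rate chosen purely for illustration:

```python
import math

def digitize(signal, duration_s, sampling_hz):
    """Sample a continuous (analog) signal at a fixed frequency,
    producing the discrete sequence of values a digital system can store."""
    n_samples = int(duration_s * sampling_hz)
    return [signal(i / sampling_hz) for i in range(n_samples)]

# A 1 Hz sine wave sampled at 8 Hz for one second -> 8 discrete values.
samples = digitize(lambda t: math.sin(2 * math.pi * t), 1.0, 8)
print(len(samples))  # 8
```

The choice of `sampling_hz` is exactly the “every second, tenth of a second, hundredth of a second” decision in the paragraph above: sample too slowly and you lose information, sample faster and you store more bits for a more faithful digital copy.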

So this takes us to digital transformation. This is supposed to imply that you dramatically improve all of your business processes through the implementation of a new platform that transforms the way you do business to a process that is faster, better, cheaper … and delivers more value. But if you think about this, pretty much any platform you implement is going to transform the way you do business … but is it going to be for the better?

So before you fall into the “digital” craze, think about what it really means!

Just like infinite scroll websites aren’t new (that’s what we started with before we could frame and tab and paginate, for those that don’t remember), neither is digitization!

Why are We Still Hyping Big Data When We Haven’t Mastered Small Data?

The end of the year is coming, everyone is looking for next year’s tech, and everyone wants cognitive / AI that works on Big Data. But there are two real problems with this:

1. With current hard drive and memory capacities, our typical back-office functions don’t generate enough data to fill a single machine (at least with efficient encodings) or to produce unbearable computation times on a quad-core (with efficient algorithm implementations).

2. When it comes to learning, the biggest sets we have for training are typically quite small!

We’ll start with the second point first. Consider spend analytics, where the primary task is to map transactions to a categorization hierarchy. If you want to use a “deep learning” AI, then you need a big data set to train that AI. But how many transactions will the average organization have that have been mapped and verified individually by a human? Maybe 10K or 20K. Even a spend analysis provider will typically have only verified a few hundred thousand or maybe a couple of million compared to the tens or hundreds of millions of transactions its big customers will throw at it a year.

The situation is even worse for contract analytics. A large multi-national might have 20K contracts, but how many have been properly indexed with meta data at the clause and term level? If you have 2K you’ve struck gold. A contracts analytics provider might struggle to cobble together a data set of 20K contracts. This is even smaller data.

And then moving on to the first point. Even though most first (and even second) generation spend analytics applications are slow, (Opera) BIQ has been able to process and re-categorize a million transactions on a dual-core or better laptop with 8GB of memory in under a minute for almost ten years, and their most recent version on a modern quad-core laptop with 16GB of memory can handle a million transactions in a little over a second! In fact, it can handle ten million transactions in less time than it takes you to enjoy two sips of your coffee. And when you consider that most analytics are only on a set of related categories for a relatively short time period (at most 3 years), the number of transactions is typically only a few hundred thousand for a large company and in the tens of thousands for a mid-size company. That’s not only small data, but data that can, these days, even be processed in your browser (as a new analytics offering will prove next year).
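If the timings above sound implausible, consider that re-categorizing an in-memory transaction is essentially one dictionary lookup per row. A rough sketch (with synthetic GL codes and categories) that you can run on any laptop:

```python
import random
import time

# Synthetic data: 200 hypothetical GL codes mapped to categories,
# and a million transactions referencing them.
gl_to_category = {str(5000 + i): f"Category-{i}" for i in range(200)}
codes = list(gl_to_category)
transactions = [random.choice(codes) for _ in range(1_000_000)]

# Re-categorization is one dictionary lookup per transaction.
start = time.perf_counter()
categorized = [gl_to_category[code] for code in transactions]
elapsed = time.perf_counter() - start

print(f"re-categorized {len(categorized):,} rows in {elapsed:.2f}s")
```

Even unoptimized interpreted Python gets through a million rows in well under a minute on commodity hardware; a compiled, purpose-built engine doing the same in a second or two is entirely believable. That’s the point: this is small data.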

So before you go goo-goo-ga-ga over big data, understand how big your data really is and get the application that works best for your data, which will more likely be at a cost point that works best for Finance as well.