Category Archives: Guest Author

Procurement – Why we really matter!

Today’s guest post is from David Furth, VP of Marketing at Hiperos. David has been in Procurement for over 20 years and has held senior positions at Perfect Commerce, BasWare, RightWorks/i2, and Deloitte Consulting.

Procurement is on the verge of experiencing its next major transformation. During the past ten years, the emphasis has been on optimization – leveraging spend, improving the sourcing process, and becoming more efficient across all aspects of the P2P and Order-to-Cash value stream.

As a result of these improvements, companies now rely on suppliers, outsourcers, and other third parties more than ever, a fact now recognized by C-level executives, boards of directors, and regulators alike. Why the concern? Because this increased reliance has grown without the same level of control or visibility that was in place when the work was performed internally. The result is increased risk to company performance and brand reputation.

In response, forward-looking procurement leaders are transforming their organizations. They still maintain the same obligation to keep costs down, but they have added the responsibility to continuously assess risk, pre- and post-award, and to introduce integrated processes and controls across their companies to mitigate that risk by working closely with other functional areas, business lines, and geographies. During the next few years, procurement will be looked to for guidance on how key external contributors to their companies’ value chains are managed.

This is why more and more procurement executives are stepping forward to introduce a consistent method for managing providers across a wider breadth of their extended enterprise. These executives recognize that even if the contract assigns responsibility and liability for just about “everything”, that does not absolve their companies of the responsibility of ensuring each provider lives up to all contractual obligations. This requires implementing management control programs that actively monitor both performance and compliance to help ensure suppliers are meeting all their obligations.

This is an enormous responsibility that requires consolidating requirements across a large number of stakeholders, communicating expectations to all providers, collecting information and documentation about current status, and collaborating with providers to remedy issues when shortfalls are identified. To be successful requires a new attitude, a thoughtful approach, buy-in from key stakeholders, and the appropriate technology. Despite the best of efforts, neither responsibility nor risk can be entirely outsourced.

So, when you consider the consequences of suppliers failing to meet their obligations, regulators handing out fines for poor oversight of third parties, and investors losing confidence in your brand, it is not surprising to see real action taking place. The past few years have made it abundantly clear that it is not a good strategy to expect that a great contract will get you great results, ensure providers follow the law, or prevent them from acting unethically. Therefore, it is imperative to have controls appropriate to the level of risk. This has not been the traditional way of thinking, but that is rapidly changing.

Thanks, David.

Analytics V: Spend “Analysis”

Today’s post is by Eric Strovink of BIQ.

As an engineer who originally entered the supply management space in 2001 to build a new spend analysis system, over the last 9 years I’ve watched marketing departments consistently “dumb down” the broad and exciting definition of spend analysis that I remember from those days into something really quite ordinary. For example, here are the steps required for classic data warehousing:

  1. Define a database schema and a set of standard reports (once, or rarely)
  2. Gather and transform data such that it matches the schema
  3. Load the transformed data into the database
  4. Publish to the user base
  5. Repeat steps 2-4 for life of warehouse
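A minimal sketch of what steps 2 and 3 above look like in practice, assuming for illustration a SQLite warehouse, CSV extracts, and invented file and column names:

    # Illustrative only: gather/transform/load (steps 2-3) against a
    # hypothetical SQLite warehouse; file and column names are invented.
    import csv
    import sqlite3

    conn = sqlite3.connect("warehouse.db")
    conn.execute("""
        CREATE TABLE IF NOT EXISTS ap_transactions (
            invoice_id TEXT, supplier TEXT, gl_code TEXT,
            amount REAL, invoice_date TEXT
        )
    """)

    def load_extract(path):
        """Transform one CSV extract to match the schema, then load it."""
        with open(path, newline="") as f:
            rows = [
                (r["InvoiceID"], r["Vendor"].strip().upper(),
                 r["GL"], float(r["Amount"]), r["Date"])
                for r in csv.DictReader(f)
            ]
        conn.executemany(
            "INSERT INTO ap_transactions VALUES (?, ?, ?, ?, ?)", rows)
        conn.commit()

    # Repeated each period (step 5), for the life of the warehouse.
    load_extract("ap_extract_2024_q1.csv")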

And here are the steps required for what has come to be termed “spend analysis”:

  1. Define a database schema and a set of standard reports (once, or rarely)
  2. Gather and transform data such that it matches the schema
  3. Load the transformed data into the database
  4. Group and map the data via a rules engine
  5. Publish to the user base
  6. Repeat steps 2-5 for life of warehouse

Not much difference.
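The one addition is step 4, the grouping and mapping rules engine. To make that step concrete, here is a toy sketch, with rules, field names, and categories invented purely for illustration (real engines run thousands of such rules):

    # Toy rules engine for step 4: map raw transactions to a commodity
    # taxonomy. First matching rule wins; everything else is "Unmapped".
    RULES = [
        # (field, substring to match, mapped category)
        ("supplier", "STAPLES",   "Office Supplies"),
        ("gl_code",  "6450",      "Travel"),
        ("supplier", "ACME TEMP", "Contingent Labor"),
    ]

    def map_transaction(txn, rules=RULES, default="Unmapped"):
        """Return the category of the first rule that matches the transaction."""
        for field, pattern, category in rules:
            if pattern in str(txn.get(field, "")).upper():
                return category
        return default

    txn = {"supplier": "Staples Inc", "gl_code": "7210", "amount": 142.50}
    print(map_transaction(txn))  # -> Office Supplies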

You might ask, how can spend analysis vendors compete with each other, when the steps are so simple, and when commodity technologies such as commercial OLAP databases, commercial OLAP viewers, and commercial OLAP reporting engines can be brought to bear on any data warehouse? Well, it’s been tough, and it’s especially tough now that ERP vendors are joining the fun, but they compete in several ways:

  • Our step 4 is better [than those other guys’ step 4].
  • [briefly, until it failed the laugh test] Our static reports are so insightful that you don’t even need anyone on staff any more.
  • [suite vendors’ (tired) mantra] “Integration” with other modules
  • “Enrichment” of the spend dataset with MWBE data, supplier scoring on various criteria, and any other checklist feature that might broaden analysts’ interest in the spend dataset beyond simple visibility.

It’s all very discouraging, but the doctor and I will continue to point out that spend analysis is not just A/P analysis; it can’t be done with just one dataset; and it’s not a set of static reports or a dopey dashboard, even though some vendors and IT departments would like to think it is. Spend analysis is a data analysis problem just like any other data analysis problem, and it requires extensible and user-friendly tools that empower people to explore their data for opportunities without third-party assistance. Those data come from multiple sources, not just the A/P system; many datasets will need to be built and analyzed; and from them, hugely important lessons will be learned.

The above notwithstanding, building a single A/P spend cube is a useful exercise. If you’ve never done it before, you will find things that will save you money. But that’s just the tip of the iceberg.
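To make the multiple-dataset point concrete, here is a toy sketch, with invented SKUs, prices, and field names, of a question a single A/P cube cannot answer on its own: pairing A/P invoice lines with a second dataset (contracted prices) to spot price leakage.

    # Illustrative only: a second dataset (hypothetical contract prices)
    # paired with A/P invoice lines surfaces off-contract price leakage.
    contract_price = {"SKU-100": 4.10, "SKU-200": 17.50}   # from the contracts system
    invoices = [                                           # from the A/P system
        {"sku": "SKU-100", "qty": 500, "unit_price": 4.45},
        {"sku": "SKU-200", "qty": 120, "unit_price": 17.50},
    ]

    for line in invoices:
        agreed = contract_price.get(line["sku"])
        if agreed is not None and line["unit_price"] > agreed:
            leakage = (line["unit_price"] - agreed) * line["qty"]
            print(f'{line["sku"]}: paid {line["unit_price"]:.2f} vs {agreed:.2f} '
                  f'agreed, leakage {leakage:.2f}')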

Previous: Analytics IV: OLAP: The Imperfect Answer

Next: Analytics VI: Conclusion

The Time for One Vision is … Not Any Time Soon

Today’s guest post is from Eric Strovink of BIQ.

the doctor has asked “Has the time for one vision arrived?” The point of his post is contained in the last paragraph [1], and boils down to whether “Best of Breed” solutions in the supply chain space are “good” or “bad” from an “integrated data” perspective.

It is certainly the case that there are a lot of software solutions that boil down to nothing more than a custom database fronted by a UI and a report writer. Such systems are rather easy to build; in theory, they can be built (as the doctor has pointed out in jest previously) using a VBA programmer and a Microsoft Access database. Jesting aside, home-brew solutions can exceed both the functionality and usability of so-called “enterprise” solutions. For example, Access-based 1990s-era spend analysis leave-behinds from consulting organizations such as the old Mitchell Madison Group are still running in some large companies today, and, I daresay, are still superior to many current solutions.

Since there’s a pretty low technical bar to producing YAS (Yet Another Solution), there are grounds for hand-wringing when trying to keep track of them all, and of all the disparate data they are managing. But it really doesn’t matter whether a software product is built by in-house resources using Access and VBA, or by an international team of professional programmers using J2EE/Flex/Silverlight/Ajax/etc. and delivered via the browser, because the answer to the doctor’s question is simple:

Until there is a major, earthshaking change in the technology of database systems, the notion of an “integrated” enterprise-wide data store is pure fantasy.

Why? Because, as the post points out, “each data source [is using] a different coding and indexing scheme, [and] there is no common framework that connects the applications.” And that’s all she wrote, folks. You can’t store eggnog in a fruit basket. It’s just not going to work.

Now, kudos to Coupa and others for “opening their API” (meaningful for programmers, not so much for ordinary humans) and so forth, but there is at present no way to integrate disparate, unrelated data into some centralized data store, without losing all the detail in the process. I don’t care if it’s all “spend” data, either. Slapping a label on something doesn’t make it homogeneous. I’m a bit of an expert on spend data, and I can assure you that spend data comes in all shapes and sizes and is certainly not homogeneous, whether you run an e-procurement system or you do not. [2]

And, of course, both old and new database vendors have been claiming for years to be able to integrate disparate data sources across the enterprise. Sure, if you want to join a few records across disparate databases that share common keys, there’s demo-ware that they can show you. It works great. But try a multi-way join across millions of records across disparate databases, and I’ll join you for a beer in the year 2025 when the query finishes.

So, I’ll steal the thunder from the “future post” mentioned by the doctor and jump right to the conclusion:

  1. Don’t worry about “integrating” data, because it’s not going to work the way you hope it will. At best, you will end up with inadequate compromises and uselessly generic data, like a design-by-committee spend cube that is shelf-ware after six months.
  2. Do worry about being able to move data easily in and out of the systems that you have. Don’t allow vendors to “lock up” your data; you should be able to change platforms easily, whenever you want to.
  3. Do worry about flexibility and adaptability in your analysis system. You should be able to operate it yourself, for example. If your data is locked up behind some SQL database that only IT drones can access, it isn’t doing you any good at all.
  4. Do worry about being able to move data from [anywhere] to your analysis system, quickly and easily. [3]

Let’s see what the doctor thinks, when he gets around to it.

[1] Apparently the doctor has never taken Journalism 101. But we can forgive him, since he doesn’t pretend to be a journalist.

[2] There are new ideas like “semantic database systems”; but a quick glance at recent history will show how well that works out in practice (Jason Busch over at Spend Matters, for example, made the mistake of drinking the semantic search Kool Aid with the now-defunct Spend Matters Navigator).

[3] Also, it’s important to challenge the notion that “real time” access to data is required for procurement decisions. No procurement decision needs to be made in real time. Is this a Hollywood science fiction movie where we need to dodge laser blasts from TIE fighters zooming in from all angles? No, it’s the real world, and decisions can and should be made thoughtfully and carefully. When the doctor says “real time,” I would hope that he means that there should be access to the data and answers to questions without waiting a week or a month for some analyst to write software.

Analytics IV: OLAP: The Imperfect Answer

Today’s post is by Eric Strovink of BIQ.

When relational database technology breaks down, as it does on any sizeable transaction-oriented dataset when multi-way joins on millions of records are required, the answer is, essentially, to “cheat”. At the risk of dumbing down some pretty complex technology (and the work of some extremely smart people), the usual idea [1] is to pre-aggregate totals in advance of the query, so that most of the work of the multi-way joins has been done in advance. This is called “OLAP” — an acronym that is unfortunate at best (“OnLine Analytical Processing”).
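A stripped-down sketch of that pre-aggregation idea, with invented suppliers and amounts, and a plain Python dictionary standing in for a real OLAP engine:

    # Sketch of pre-aggregation: roll up totals once at load time, so a
    # query becomes a lookup in a small summary instead of a scan over
    # (and a join across) millions of raw transaction rows.
    from collections import defaultdict

    transactions = [
        {"supplier": "ACME",   "month": "2024-01", "amount": 120.0},
        {"supplier": "ACME",   "month": "2024-01", "amount": 80.0},
        {"supplier": "GLOBEX", "month": "2024-02", "amount": 300.0},
    ]

    # Done once, when the "cube" is built:
    cube = defaultdict(float)
    for t in transactions:
        cube[(t["supplier"], t["month"])] += t["amount"]

    # Done at query time: a lookup, not a scan.
    print(cube[("ACME", "2024-01")])  # -> 200.0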

OLAP databases solve some data analysis problems, in particular slow joins, but only for certain columns in the dataset, and only for datasets that contain transactional data. So, many of the intrinsic problems of the data warehouse are exacerbated in an OLAP database, because the OLAP database is even more special purpose, and its schema very rigorously constrains the queries that can be expected to work efficiently.
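Continuing the same toy sketch: if the cube is keyed only on supplier and month, a question along a column that was left out (say, a GL code, invented here for the example) cannot be answered from the summary at all and falls back to scanning the raw transactions.

    # Toy illustration of the rigidity: a summary keyed on (supplier, month)
    # discards GL code, so a GL-level question must re-scan every raw row.
    from collections import defaultdict

    transactions = [
        {"supplier": "ACME", "month": "2024-01", "gl_code": "6450", "amount": 120.0},
        {"supplier": "ACME", "month": "2024-01", "gl_code": "7210", "amount": 80.0},
    ]

    cube = defaultdict(float)
    for t in transactions:
        cube[(t["supplier"], t["month"])] += t["amount"]   # GL code is lost here

    # "How much did we spend against GL 6450?" is not in the cube:
    print(sum(t["amount"] for t in transactions if t["gl_code"] == "6450"))  # 120.0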

Building OLAP databases is therefore harder than building general-purpose relational databases, and thinking in OLAP terms is also harder than thinking relationally. Deciding which columns are “interesting” is challenging as well, and also time-consuming; if, once the OLAP dataset is built, you decide a column is “uninteresting” after all, you have wasted considerable effort.

But OLAP datasets do provide one major advantage, and that’s the ability to “slice and dice” data rapidly, with visual impact and in human-understandable terms. OLAP viewers give users great visibility into data relationships, and enable exploration of large datasets without any need for IT expertise. That’s primarily what “Business Intelligence” or “BI” tools bring to the party: the ability to navigate OLAP datasets for insight.

So, OLAP solves some problems, but fails to solve others. Here is a short (and incomplete) list of significant issues:

  • A dependence (typically) on a(nother) fixed database schema
  • Another level of schema complexity to manage, in addition to the underlying database schema
  • Another level of inflexibility, in that changing the OLAP database organization is often even more difficult than changing the underlying database schema
  • Another level of query complexity: a multidimensional query language (MDX) that is much harder to comprehend than ordinary SQL

In the procurement space, OLAP databases are often used for “spend analysis,” but more on that topic in part V.

Previous: Analytics III: The Data Expert and His Warehouse

Next: Analytics V: Spend “Analysis”

[1] There are many approaches to OLAP.

Analytics III: The Data Expert and His Warehouse

Today’s post is by Eric Strovink of BIQ.

Nothing is potentially more dangerous to an enterprise than the “data expert” and his data warehouse. In the data expert’s opinion, every question can be answered with an SQL query; any custom report can be written easily; and any change to the database schema can be accomplished as necessary. Everything of interest can be added to the warehouse; the warehouse will become the source of all knowledge and wisdom for the company; life will be good.

How many times have we heard this, and how many times has this approach failed to live up to expectations? The problem is, business managers usually feel that they don’t have the background or experience to challenge IT claims. There’s an easy way to tell if you’re being led down the rose-petaled path by a data analysis initiative, and it’s this: if your “gut feel” tells you that the initiative’s claims are impossibly optimistic, or if common sense tells you that what you’re hearing can’t possibly be true (because, for example, if it were that easy, why isn’t everyone else doing it?), then go with your gut.

Sometimes the reflexive response of management to an IT claim is to say, “OK, prove it“. Unfortunately, challenging a data expert to perform a particular analysis is pointless, because any problem can be solved with sufficient effort and time. I recall an incident at a large financial institution, where an analyst (working for an outsourcer who shall remain nameless) made the claim that he could produce a particular complex analysis using (let’s be charitable and not mention this name, either) the XYZ data warehouse. So, sure enough, he went away for a week and came back triumphantly waving the report.

Fortunately for the enterprise, the business manager who issued the challenge was prepared for that outcome. He immediately said, “OK, now give me the same analysis by …“, and he outlined a number of complex criteria. The analyst admitted that he’d need to go away for another week for each variant, and so he was sent packing.

It’s not really the data expert’s fault. Most computer science curricula include “Introduction to Database Systems” or some analog thereof; and in this class, the wonders and joys of relational database technology are employed to tackle one or more example problems. Everything works as advertised; joins between tables are lickety-split; and the new graduate sallies forth into the job market full of confidence that the answer to every data analysis problem is a database system.

In so many applications this is exactly the wrong answer. The lickety-split join that worked so well on the sample database during “Introduction to Database Systems” turns, in the real world, into a multi-hour operation that can bring a massive server to its knees. The report that “only” takes “a few minutes” may turn out to need many pages of output, each one a variant of the original; so the “few minutes” turns into hours.

Consider the humble cash register at your local restaurant. Is it storing transactions in a database, and then running a report on those transactions to figure out how to cash out the servers? No, of course it isn’t. Because if it did, the servers would be standing in line at the end of the night, waiting for the report to be generated. A minute or two per report — not an unreasonable delay for a database system chewing through transactional data on a slow processor — means an unacceptable wait. That’s why that humble restaurant cash register is employing some pretty advanced technology: carefully “bucketizing” totals by server, on the fly, so that it can spit out the report at the end of the night in zero time.
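A minimal sketch of that on-the-fly bucketizing, with invented servers and amounts, and a dictionary standing in for the register’s memory:

    # Sketch of "bucketizing": update a running total per server at sale
    # time, so the end-of-night cash-out is a read of precomputed buckets,
    # not a report run over the whole transaction log.
    from collections import defaultdict

    server_totals = defaultdict(float)   # the "buckets"

    def ring_up(server, amount):
        """Add a sale to the server's bucket as it happens."""
        server_totals[server] += amount

    ring_up("Alice", 42.75)
    ring_up("Bob", 18.20)
    ring_up("Alice", 63.10)

    # End of night: no scan, no wait.
    for server, total in sorted(server_totals.items()):
        print(f"{server}: {total:.2f}")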

We’ll talk about “bucketizing” — otherwise known as “OLAP” — in part IV.

Previous: Analytics II: What is Analysis?

Next: Analytics IV: OLAP: The Imperfect Answer
