Monthly Archives: April 2011

When It Comes to Tech, Sometimes I Think Analysts Should Get Out of the Game

Especially if they don’t have a degree in technology! Even with 30 years in the tech industry, at some fundamental level they just don’t get it, and they ultimately end up making a suggestion that not only makes everything more complicated than it has to be but also confuses the heck out of the average person.

So why am I ranting again? Supply Chain Brain republished an article by a Gartner analyst on Suite Versus Best of Breed: The Argument Rages On that, to be honest, impressed the hell out of me until I got to the second-to-last paragraph. The author nailed the pros and cons of enterprise suites, beautifully exposed the advantages and disadvantages of best-of-breed with the precision of a master craftsman, and then concluded, with deft clarity, that best-of-breed vs. integrated suites is not a good basis to guide application selection (a reality that not all technology analysts seem to be aware of). But then, just when I was about to applaud Gartner for publishing such a fine piece, the author not only claims that the solution is a “new model” (which is scary in itself, as most analysts have no idea what a real “model” is, or that there’s a big difference between a “framework” and a “model”), but goes on to say that the model is something called a pace-layered application strategy. WTF?!?

I’m a PhD in Computer Science with fifteen years of designing, building, leading, and consulting on the design, architecture, integration, and implementation of enterprise software systems, with expertise in algorithms, data structures, computational geometry, optimization, mathematical modeling, relational databases, automated reasoning, and some areas of semantic technology … and I didn’t have a sweet clue what he was talking about. (So how is an average non-technical person supposed to know what this means?)

So I made the mistake of looking it up. Of course, the first result from a Google search is a Gartner page for a locked article that describes pace layering as a “new methodology for categorizing applications and developing a differentiated management and governance process that reflects how the applications are used and their rate of change”. Buzzword Bingo, anyone? The next few results are no better — all buzzword summaries of this “great new thing” that you apparently can’t get any real information about unless you’re a Gartner client (surprise, surprise) [unless you’re really good with Google].

So I decided to take a step back and look up pace layering before I dove deeper into the Gartner grief. According to this post by James Governor over on RedMonk on why applications are like fish and data is like wine, pace-layering is an idea from Stewart Brand in which complex systems can be decomposed into multiple layers that change at different rates. The “fast layers” learn, absorb shocks and get attention; the “slow layers” remember, constrain and have power. One of the implications of this model is that information architects can do what they have always done — slow, deep, rich work — while tagging spins madly on the surface. This is a good way to build systems, and it embodies the best practices of a hybrid agile development model where one team iterates rapidly through a UI and the business logic, through regular interaction with the end users, to hammer out what it is that the system really needs to do while another team slowly builds a powerful, flexible, scalable, and robust back-end that can accommodate an evolving business landscape. But there is a big difference between best practices for building a system and best practices for selecting a system.

First of all, you can’t implement an enterprise system in a couple of weeks, test it out for a few weeks, and then throw it away if it doesn’t work. Implementations (and integrations) take considerable time and investment. Secondly, there are no “fast” systems in the average enterprise. Once you implement something, you typically have it for years, either because it takes that long to see value or because it takes that long for the enterprise to agree on something new. Thirdly, the hybrid agile development approach that pace layering describes does not care if you are developing a system of record, a system of differentiation, or a system of innovation, whereas Gartner’s pace-layering application strategy relies on a company being able to make this distinction because each has characteristics that apparently suggest ERP / Suite vs. Standalone Module / Best of Breed vs. Modified Best of Breed / Custom App.
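As best I can reconstruct it, the whole “strategy” boils down to a lookup table from Gartner’s three categories to a solution type. The category names come from the webinar slides; the rate-of-change notes in this little sketch are my own illustrative assumptions, not Gartner’s numbers:

```python
# Toy sketch of the pace-layer category -> solution mapping as described above.
# The category and solution names are from the Gartner material; the
# "typical rate of change" annotations are my own assumptions for illustration.

PACE_LAYERS = {
    "system of record": {
        "suggested_solution": "ERP / Suite",
        "typical_rate_of_change": "slow (years)",
    },
    "system of differentiation": {
        "suggested_solution": "Standalone Module / Best of Breed",
        "typical_rate_of_change": "moderate (quarters)",
    },
    "system of innovation": {
        "suggested_solution": "Modified Best of Breed / Custom App",
        "typical_rate_of_change": "fast (weeks to months)",
    },
}

def suggested_solution(layer: str) -> str:
    """Return the solution type the framework would suggest for a category."""
    return PACE_LAYERS[layer.lower()]["suggested_solution"]

print(suggested_solution("System of Record"))  # ERP / Suite
```

Which, of course, is exactly the problem: a three-row lookup table dressed up as a “new model” still can’t tell you which category a given application actually falls into for your organization.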

And while each of the characteristics (on page 17) that Gartner identified in their recent webinar on “ERP Strategies: Exploit Innovations in Enterprise Software” (PDF slides) are important considerations in technology selection, there are two major problems with the approach.

  1. Technology selection is never that simple across the board.
If the organization is a large enterprise that is slow to adapt to new technology and implements new systems infrequently, then an ERP suite from an established, stable vendor that has been around for ten years (and that is likely to be around for ten more) is probably the best answer. But if the organization is a small, new, (but) growing enterprise that is quick to adapt to new technology and always looking for, and implementing, new solutions, then the best solution might be a new best-of-breed application from a smaller provider that is more cost effective and innovative (because, in the worst case, if the vendor goes belly up, the organization can always move to a new solution, and, if the new solution was 1/10th the cost of the ERP, still save a bundle even when the migration costs to a new system are factored in).
  2. It’s not about the framework — it’s about the solution
and if you follow a framework, sooner or later you’ll choose the wrong system — and pay dearly. For example, the pace layer governance framework recommends best of breed for a function where differentiation is key. This says that if you want to implement next generation sourcing strategies, you need a best of breed system. Not true. Many next generation sourcing strategies have nothing to do with technology. They are about business value, and with the exception of true spend analysis or decision optimization, can be accomplished with commodity e-Negotiation functionality, which even the ERP suites have in spades. If the organization is technologically behind, or needs a lot of support, it should probably go with a suite from a big player with the resources, and experience, to support it and then bring in a consulting firm, with access to (and expertise in) best of breed products to help with the spend analysis and decision optimization, where and when required.
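The worst-case arithmetic behind point 1 is worth spelling out. All the dollar figures in this sketch are hypothetical, assumed purely for illustration; only the 1/10th ratio comes from the argument above:

```python
# Hypothetical numbers for illustration only. The 1/10th cost ratio is from
# the worst-case argument in point 1; everything else is an assumption.

erp_cost = 1_000_000        # assumed total cost of the ERP suite over the horizon
bob_cost = erp_cost // 10   # best-of-breed at 1/10th the cost
migration_cost = 250_000    # assumed cost to migrate if the vendor goes belly up

# Worst case: buy the best-of-breed tool, the vendor folds, pay to migrate,
# and buy a comparable replacement at the same price point.
worst_case_bob = bob_cost + migration_cost + bob_cost
savings = erp_cost - worst_case_bob

print(f"ERP suite:                 ${erp_cost:,}")
print(f"Best-of-breed worst case:  ${worst_case_bob:,}")
print(f"Savings despite migration: ${savings:,}")  # $550,000
```

Even after paying for a failed vendor and a full migration, the small, growing enterprise in this (hypothetical) scenario comes out well ahead — which is why a one-size-fits-all framework answer is so dangerous.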

In other words, another framework is not the answer. The answer is, as it has always been, to identify your needs, identify the functions that the potential solution systems implement, and find the best match. Suite vs. Best of Breed vs. Custom App vs. Yet Another Confusing and Ridiculous Model be damned.

Why Do You Need Market Intelligence?

Because, as this recent article in Supply Chain Brain on how Market Intelligence Helps You Avoid Embarrassing Questions About Your Supply Chain points out, your competitor could purchase a stake in your key supplier and cut off supply on a moment’s notice (if your contract is on the supplier’s paper and allows for termination on a significant change in control), and you wouldn’t know until supply stopped.

That’s why you need to be continually monitoring the market so that you’ll know:

  • when a supplier is in financial distress and looking for an investor,
  • when supply is limited due to spikes in demand or raw material shortages, and
  • when opportunities arise to acquire new sources of supply.

And when you need to bring counter-intelligence strategies into play. So check out how Market Intelligence Helps You Avoid Embarrassing Questions About Your Supply Chain. It will be worth your time.

You Can’t Solve a Problem You Can’t Identify …

Nor can you solve a problem you won’t admit exists. Industry Week recently ran a great article on “Surfacing Problems Daily” that pointed out a harsh reality: the culture of many organizations dictates that they only face problems that they know how to address.

But if you only face problems you know how to solve, the problems you don’t know how to solve grow and fester … until, someday, they paralyze you. But it doesn’t have to be that way. You can acknowledge a problem as soon as it becomes apparent. Even if you can’t solve the problem right away, the sooner you begin to address it, the sooner you are likely to come up with a solution.

So what can you do to improve? According to the article, you can:

  1. Assess the Current Condition
    and make sure you know what to do when you see a problem.
  2. Develop a Mechanism
to ensure that the problem is properly recorded and tracked.
  3. Establish Non-Monetary Incentives to Surface Problems
to ensure that they are identified and recorded.
  4. Define How Leaders Should Respond
    since workers will not surface, track, or even acknowledge problems if the leaders don’t support the initiative.

And make sure that you understand the nature of problems … as old ones get solved, new ones surface. It’s a never-ending cycle.

200 Billion


Water go down the hoooole.
Toilet paper go down the hoooole.
Diaper go down the hoooole.
Nana go down the hoooole.
Ducky go down the hoooole.
Toot Toot go down the hoooole.
Kitty go down the hoooole.

The Potty Years

According to this recent article in Fortune, telecom investors might be “the 21st century’s biggest chumps”, and the article might be right. Since 2000, the United States’ telecom tab is down 22% in inflation-adjusted dollars. In other words, the telecom networks have destroyed nearly 200 Billion in value over the last ten years — despite the fact that cell phone use has tripled, that high-speed residential internet connections have jumped from 2 Million to 24 Million, and that wifi is almost ubiquitous.

So what happened? The article puts forward three hypotheses that, taken together, paint a compelling picture.

1: Internet protocol networks are like Pac Man. Eventually they will eat everything.

The days of expensive, custom-built communication networks for a single purpose — radio, telephone, cable tv — are over. Now, everything flows over the internet in packets. Packets here. Packets there. Packets, packets everywhere.

2: If a customer likes it, then it doesn’t matter what it does to your economics — it’s going to happen.

The cell carriers fought wifi tooth, nail, and claw for years to prevent cuts to their (sometimes ridiculous) margins, but it happened anyway. Eventually one carrier realized that offering wifi would result in a huge increase in its customer base, the rest followed, and now every cell carrier supports hybrid wi-fi devices in an effort to keep its customers.

3: Anyone who relies on the fact that they own a scarce distribution resource is going to face ten years of turmoil.

It’s a new age for telecom and no longer are networks analog, expensive, and single purpose. Now they are digital, ubiquitous, and multi-purpose.

The question is, will the telecoms adapt? How much more will be lost as they try, competing with each other for a consumer base that becomes less profitable by the day? And will it be worth it in the end? Is it a situation where the last man standing wins a de facto monopoly, pumps up prices to cover the losses, and profits big in the end (just like Google, who won the search engine war)? Or will an entirely new type of network provider emerge and wipe out the telecom industry as it exists today?


Water came back.
Water came back.
Water came back.

But will the 200 Billion come back?