Monthly Archives: March 2015

Technology Damnation 82: The Secret Seven

We all think the internet, with its distributed design, its open and thoroughly tested encryption and security technologies, and its foundation in our modern public, private, government, and academic culture, is relatively secure and reliable, despite regular security breaches (which are often the result of improperly applied security procedures and technologies at corporations that should know better), and will remain outside of any one organization's control for years to come. Especially since our global business functions, and global procurement functions in particular, rely on it.

And while that is the expected future, as no one corporation, nation, or conglomerate owns the internet, the reality is that ICANN, the Internet Corporation for Assigned Names and Numbers, which is a private corporation, has an awful lot of power over the internet, as it manages the Internet's Domain Name System (DNS) that links your domain to the right IP address. In order for a registrar to sell you a domain (to link to an IP that is typically made available to you by your ISP), the registrar has to be accredited by ICANN. In addition, IANA, the Internet Assigned Numbers Authority, a function currently operated under ICANN's umbrella, is responsible for the Internet Protocol addressing system and allocates IP blocks to the Regional Internet Registries (which allocate, in turn, to National Internet Registries, which allocate, in turn, to the Local Internet Registries that, in turn, allocate IP addresses to the local ISPs).
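The domain-to-IP mapping described above can be exercised from any language. A minimal Python sketch, which simply asks your configured resolver (the chain that ultimately anchors in the ICANN-managed root zone) for a lookup; real-world output varies by resolver:

```python
import socket

def resolve(domain: str) -> list[str]:
    """Return the sorted IPv4 addresses a domain name currently maps to."""
    # getaddrinfo consults the local resolver, which walks the DNS
    # hierarchy rooted in the ICANN-managed root zone.
    results = socket.getaddrinfo(domain, None, family=socket.AF_INET)
    return sorted({entry[4][0] for entry in results})

# "localhost" resolves without network access; typically ["127.0.0.1"].
print(resolve("localhost"))
```

A lookup for a public domain (e.g. `resolve("example.com")`) requires network access, and the addresses returned depend entirely on what the DNS mappings say, which is the whole point of the post.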

This means that if a body managed to gain control of IANA, they control your IP address, and, even worse, if a body managed to gain control of ICANN, they control the mappings, and since everyone uses domain names, and not IPs, they would essentially control who goes where on the information superhighway. This couldn't really happen, right? Wrong. While not likely, all a villainous/terrorist organization of Bond proportions needs to do is gain control of, or replace, the seven key holders that control the core ICANN DNS system. That's right. The vault that controls the entire global internet only takes seven keys to open.

And even though the key holders hold traditional safe-deposit box keys, the keys that control the internet aren't regular keys you'd find on a key ring. They are, in fact, smart cards that can only be accessed by the key holder (with the safe-deposit box key) after going through traditional and biometric security screenings that are likely tighter than those in place at Fort Knox (and the ceremony required to gain access to the machine that generates the new master key has over 100 steps). And no key on its own can make changes to the master DNS. All seven keys are required to activate the machine that generates the master key that allows the DNS to be updated. (And whoever holds that master key has access to the entire internet, just like a traditional master key gives you access to an entire building.)
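The all-seven-keys-required design behaves like a simple secret-splitting scheme: the master secret is divided into shares such that every share is needed to reconstruct it, and any six shares together reveal nothing. A minimal XOR-splitting sketch of that principle (an illustration only, not ICANN's actual mechanism):

```python
import secrets
from functools import reduce

def xor_all(parts: list[bytes]) -> bytes:
    """XOR a list of equal-length byte strings together."""
    return reduce(lambda x, y: bytes(a ^ b for a, b in zip(x, y)), parts)

def split_secret(secret: bytes, holders: int = 7) -> list[bytes]:
    """Split a secret into `holders` shares; all of them are required."""
    # The first holders-1 shares are pure random noise.
    shares = [secrets.token_bytes(len(secret)) for _ in range(holders - 1)]
    # The last share is the secret XORed with all the noise, so XORing
    # every share together recovers the secret exactly.
    shares.append(xor_all(shares + [secret]))
    return shares

master = b"root-zone-key-material"  # stand-in for the real key material
shares = split_secret(master)
assert xor_all(shares) == master        # all seven together recover it
assert xor_all(shares[:6]) != master    # any six alone are just noise
```

Because six shares are indistinguishable from random bytes, compromising six of the seven holders gains an attacker nothing; only the full set opens the vault.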

But at the end of the day, it only takes the keys and biometrics of seven people to obtain the smart cards that activate the machine that generates the new master key for the internet, which allows whoever holds it to redirect domains at will. It is true that these seven people, who are some of the greatest minds in internet security and as trustworthy as they come, are spread all over the world, but still, at the end of the day, it would only take seven samurai to slay the internet.

In other words, no matter how far we progress with technology and security, it all comes down to the trust and nobility of a select few to keep our global supply chains humming.

And if you start to think about this too deeply, you might really believe we’re all damned in the end!

Just What is “Best Value”, Part Deux!

In yesterday's post, we discussed an article in a recent edition of Purchasing Tips (by Charles Dominick of Next Level Purchasing) that asked What is Best Value Procurement?, in which he stated that "best value" should be a hard metric, measurable in financial terms and expressed in units of currency, and not a soft metric where factors other than price are used in determining the supplier and/or product to select for purchase (as that is weighted average supplier/product scoring).

We noted that SI tends to agree, but that there are often issues with trying to assign a(n exact) hard dollar revenue increase or cost decrease to an event that has not yet happened. Even the illustrative example used by Mr. Dominick, choosing between machine A and machine B to automate a production line, is not cut and dried. For example, if the organization stops manufacturing a product before the production line's end of life, or has the option to lease the machine instead of buying it, the calculations get complex. But this is just the beginning.

When it comes to making an IT purchase, the "best value" calculations become a bit of a nightmare. First of all, there is system cost. Depending on whether you want to go with a true SaaS, hosted ASP (which might be wearing a cloud disguise), or on-site hosted solution, as discussed in our classic series on the Enterprise Software Buying Guide (Part V: Cost Model), there are anywhere from four to eleven core up-front and on-going costs that need to be considered (plus ancillary costs for complex or special systems). (And even with the free calculation template provided in the classic SI post on uncovering the true cost of an on-premise sourcing/procurement software solution, the calculation is still a nightmare. How confident are you in the integrator's estimate? How secure do you feel about the amount of training time (and budget) that will be required? How reliable are the ongoing support levels and the associated cost calculations?)
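The shape of the system cost calculation itself is simple; the nightmare is in how unreliable the inputs are. A minimal sketch of the arithmetic, with entirely hypothetical cost categories and figures (and no discounting, which a real model would add):

```python
def total_system_cost(up_front: dict[str, float],
                      annual: dict[str, float],
                      years: int) -> float:
    """Naive total cost: one-time costs plus recurring costs over the
    evaluation horizon. Every input is an estimate, which is the problem."""
    return sum(up_front.values()) + years * sum(annual.values())

# Illustrative numbers only; the real list runs four to eleven categories.
cost = total_system_cost(
    up_front={"license": 250_000, "implementation": 120_000, "training": 40_000},
    annual={"maintenance": 50_000, "support": 25_000, "hosting": 30_000},
    years=5,
)
# 250000 + 120000 + 40000 + 5 * 105000 = 935000
```

Note that if the integrator's implementation estimate is off by 50%, or the ongoing support cost doubles in year three, the "hard" number above moves by six figures, which is exactly why a hard-dollar best value metric is so hard to pin down.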

Assuming you can work through the system cost equation, which can be quite a doozy (doozy, not doozer, although you will likely need doozer cooperation levels to make any new IT system work these days), you then need to work through the value equation. Just how much value can be expected from the system over the timeframe, and how accurate is that prediction? There are multiple components to this calculation.

  • Throughput Increase
    if the system increases the number of invoices that can be m-way matched, increases the number of sourcing events that can be run, or automates the production of trade documents, this needs to be calculated first, as these numbers are needed to compute the savings
  • Efficiency Savings
    how much manpower is saved (and how much can therefore be reassigned or eliminated) and how much is the HR expenditure accordingly reduced
  • Cost Savings
    how much cost is expected to be avoided either by increased throughput or the increased performance offered by the system (such as defect reduction, which reduces repair costs)
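The three components above chain together: the throughput increase drives both the efficiency savings and the cost savings. A rough sketch of the annual value equation, with all figures being hypothetical assumptions for illustration:

```python
def annual_value(extra_events: int,
                 hours_saved_per_event: float,
                 avg_hourly_cost: float,
                 avoided_cost_per_event: float) -> float:
    """Rough annual value: efficiency savings (manpower freed up) plus
    cost savings (spend avoided), both driven by the throughput increase."""
    efficiency_savings = extra_events * hours_saved_per_event * avg_hourly_cost
    cost_savings = extra_events * avoided_cost_per_event
    return efficiency_savings + cost_savings

# Hypothetical: 200 more sourcing events a year, 10 hours saved per event
# at a blended $60/hour, and $2,500 of avoided cost per event.
value = annual_value(200, 10, 60.0, 2_500.0)
# 200*10*60 + 200*2500 = 120000 + 500000 = 620000
```

The blended hourly rate and the avoided-cost-per-event figures are exactly the averages the next paragraph warns about: they are the best you can do, and they carry all the uncertainty.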

Obviously, these calculations are not straightforward. In the case of efficiency savings, since every resource (and resource type) has a different cost (based on salary and associated benefits), the best you will be able to do is estimate an average cost for the manpower by hour (or day). In the case of cost savings, it's more than just an industry average; it's an industry average for a company at a similar stage of competency, with a similar sized workforce, and a similar production or spend pattern.

Let's take spend analysis. If the company is a leader with close to 80% of spend under management, has been sourcing against industry benchmarks, and has used advanced negotiation (and optimization) techniques on high value or key categories (with the help of a third party, if necessary), the company is likely not only aware of its top n categories, but has likely strategically sourced the majority of the next n categories as well, and the untapped opportunities would represent less than 20% of its spend. This company would only expect to see the industry average 11% savings on roughly 10% of its spend, and would likely only see a few percentage points on the spend under management in the current economy. In comparison, if an average company only had 45% of its spend under management, had not used advanced sourcing techniques in the past, and had only sourced a few categories against benchmarks, it might expect to see industry average savings of 12% on 40% of its spend and 5% to 6% on the rest. The up-front savings potential (over 1 to 3 years) for this average company on a new spend analysis system would be four times that of the industry leader! It might be the case that the industry leader needs the new system to monitor and analyze its spend more efficiently going forward, to help it avoid bad decisions in the future, but now we are in cost avoidance territory, and fuzzy territory at that. In hard dollar terms, all one can argue is additional manpower reduction.
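The leader-versus-average gap can be made concrete. Using the figures above, and assuming, say, 1% incremental savings on the spend the leader already has under management (the "few percentage points" is an assumption on our part), the arithmetic works out to roughly four to one:

```python
def savings_pct(tranches: list[tuple[float, float]]) -> float:
    """Total expected savings as a fraction of overall spend, given
    (share_of_spend, savings_rate) tranches."""
    return sum(share * rate for share, rate in tranches)

# Industry leader: 11% on the ~10% of untapped spend, plus an assumed
# 1% on the 80% already under management.
leader = savings_pct([(0.10, 0.11), (0.80, 0.01)])   # 0.019 of total spend

# Average company: 12% on 40% of spend, ~5.5% on the remaining 60%.
average = savings_pct([(0.40, 0.12), (0.60, 0.055)])  # 0.081 of total spend

print(average / leader)  # roughly 4x under these assumptions
```

The ratio swings considerably with the assumed rate on the leader's managed spend, which again illustrates how soft a "hard" savings estimate really is.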

And we still haven’t dived to the bottom of the iceberg. In other words, the best definition of best value is a hard dollar metric, but it might be the hardest metric of all to calculate.

Just what is “Best Value”?

In a recent edition of Purchasing Tips over on Next Level Purchasing, Charles Dominick asked What is Best Value Procurement? In the article, he notes that many people use the term "best value procurement" to describe purchasing decisions where factors other than price are used in determining the supplier and/or product to select for purchase, and states that he believes this is really "weighted average supplier/product scoring", which it is.

In his view, value should be measurable in financial terms and expressed in units of currency. I tend to agree, but there are issues with trying to assign a(n exact) hard dollar revenue increase or cost decrease to an event that has not yet happened.

In his illustrative example of choosing between machine A and machine B to automate a production line and reduce the labour needed to keep it running (in an effort to, hopefully, allow the organization to either redeploy the personnel on higher-value tasks or, if not possible, replace those jobs with jobs that could generate more value for the organization down the road), it seems cut and dried. Just compute the value-to-cost ratio (where the value, as defined by the estimated labour savings, is divided by the cost of the new machine, which should include purchase, installation, and additional maintenance costs over the expected lifetime). In this case, one machine will generate a higher value-to-cost ratio, and that is the machine you should purchase for the organization.
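The ratio as described is a one-liner; a sketch with hypothetical machines and figures (all the numbers here are our assumptions, not Mr. Dominick's):

```python
def value_to_cost(labour_savings: float,
                  purchase: float,
                  installation: float,
                  annual_maintenance: float,
                  lifetime_years: int) -> float:
    """Best-value ratio: estimated lifetime labour savings divided by the
    all-in lifetime cost (purchase + installation + maintenance)."""
    total_cost = purchase + installation + annual_maintenance * lifetime_years
    return labour_savings / total_cost

# Hypothetical machines A and B, both saving $500k in labour over 5 years:
a = value_to_cost(500_000, 200_000, 20_000, 10_000, 5)  # 500000/270000 ~ 1.85
b = value_to_cost(500_000, 250_000, 15_000, 5_000, 5)   # 500000/290000 ~ 1.72
# Machine A has the higher ratio, so it would be the "best value" pick.
```

The simplicity is deceptive, though, because the ratio is only as good as the lifespan and savings estimates behind it, as the next paragraph shows.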

Assuming, of course, that you are sure the machine will have the indicated lifespan and will be useful to you for that lifespan. For example, what happens if you stop making the product in three years, but your value calculations are for five years, the expected lifetime of the machine? The value-to-cost calculations will still rank the machines in relative order (as only the value changes), but the return might not look so enticing. And what about the situation where you can instead lease one of the machines from a third party (instead of buying it) and, because that machine in particular is made to a higher quality standard, get an annual lease that is only 1/10th, and not 1/5th, of the purchase cost? In this situation, a machine that costs twice as much would not only have the same value-to-cost ratio but, if you had to sell the machine you bought after three years, the leased machine would have a higher value-to-cost ratio, since you'd likely not get the full undepreciated book value for the machine you bought.
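The lease-versus-buy twist above can be sketched numerically. With a 5-year lifetime, buying spreads the purchase price at 1/5th per year, so a machine priced twice as high but leased at 1/10th of its purchase price per year costs exactly the same per year (illustrative figures assumed):

```python
def annual_cost_buy(purchase: float, lifetime_years: int) -> float:
    """Straight-line annualization: purchase cost spread over the lifetime."""
    return purchase / lifetime_years

def annual_cost_lease(purchase_price: float, lease_fraction: float) -> float:
    """Annual lease quoted as a fraction of the machine's purchase price."""
    return purchase_price * lease_fraction

buy = annual_cost_buy(100_000, 5)            # 1/5th per year = 20000/year
lease = annual_cost_lease(200_000, 1 / 10)   # 1/10th per year = 20000/year
assert buy == lease  # same annual cost, so equal value gives equal ratios
```

But if the product is dropped after three years, the bought machine still carries $40,000 of undepreciated book value that you are unlikely to recover on resale, while the lease simply ends, which is how the leased machine ends up the better value.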

And this is just a "best value" calculation on a simple piece of machinery. Consider the difficulty when trying to compute "best value" on a technology platform purchase, where the platform is intended to improve your sourcing, procurement, supplier relationship management, or similar supply management process. It's not just up-front cost. It's implementation. It's maintenance. It's operational manpower savings on tactical tasks. It's efficiency improvements (which have a value in terms of more events or throughput, which translates into generated value), and it's additional cost reductions identified through the platform (which can be estimated based on benchmarks, but not predicted). How do you do that "best value" calculation? What number do you use? Do you compute a range and use the middle? Do you identify all platforms with a minimum acceptable value-to-cost ratio in terms of guaranteed hard-dollar savings, and then select, as best value, the platform with the maximum value-to-cost potential?

There are no easy answers and costs alone don’t always tell the whole story.