
No Solution is Completely Foolproof

A common mistake that people make when trying to design something completely foolproof is to underestimate the ingenuity of complete fools.
Douglas Adams

Source-to-Pay solutions are getting easier by the day, and soon some vendors will be claiming their solutions are so simple that even a fool can use them error-free. But that’s really not the case. No solution is foolproof. Never will be.

Why? First of all, it’s impossible to predict every action a person could take. So, no matter how many situations you plan and check for, if there is even one you missed (and if the application is complex enough, there will be at least one, no matter how unlikely or nonsensical that situation is), there will be at least one user who finds it and either crashes the application or generates a nonsensical scenario.

The alternative is to lock the application down to a finite, enumerable set of inputs in each state and limit the allowable actions to those that allow a smooth, predictable transition to the next state without fail. But if the vendor chooses this route, the result will be a very limited application with very limited possibilities. And given that the real world is not limited to a small set of situations with always-predictable solutions, this is not a very useful approach.
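To make the trade-off concrete, here’s a minimal sketch (in Python, with hypothetical requisition states and actions, not any vendor’s actual model) of what that locked-down approach looks like: every state enumerates the only actions it accepts, and everything else is rejected out of hand.

```python
# A minimal sketch of the "locked down" approach: each state enumerates the
# only actions it will accept, and anything else is rejected outright.
# State and action names are hypothetical, purely for illustration.

ALLOWED_ACTIONS = {
    "draft":     {"submit", "cancel"},
    "submitted": {"approve", "reject"},
    "approved":  {"issue_po"},
    "rejected":  {"revise", "cancel"},
}

TRANSITIONS = {
    ("draft", "submit"):      "submitted",
    ("draft", "cancel"):      "cancelled",
    ("submitted", "approve"): "approved",
    ("submitted", "reject"):  "rejected",
    ("approved", "issue_po"): "ordered",
    ("rejected", "revise"):   "draft",
    ("rejected", "cancel"):   "cancelled",
}

def next_state(state: str, action: str) -> str:
    """Advance the requisition, refusing any action not enumerated for the state."""
    if action not in ALLOWED_ACTIONS.get(state, set()):
        raise ValueError(f"action '{action}' not allowed in state '{state}'")
    return TRANSITIONS[(state, action)]

print(next_state("draft", "submit"))   # -> submitted
# next_state("draft", "approve")       # -> ValueError: not an allowed action
```

It’s bulletproof, but only because any real-world situation the designer didn’t enumerate simply can’t happen in the application, whether or not it happens on the shop floor.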

Secondly, never underestimate the potential stupidity of an application user. The user could be a new transfer from another department with no training and a very shallow understanding of Procurement, and what a vendor assumes is obvious to an average Procurement user will not be obvious to that transfer. Nor are all users Procurement users. For example, shop floor workers might have access to initiate requisitions, and these workers might have limited computer knowledge. And then there’s management. And consultants.

Thirdly, the more a vendor tries to make a solution foolproof, the more unnecessary code they end up throwing in. And the more unnecessary code that is put into an application, the more errors that creep in. Errors multiply with code. Always. Doesn’t matter if the code compiles. Doesn’t matter if the code passes the boundary tests. All that matters is that there is more code with more paths and more state transitions to track, to the point where eventually there are too many paths to track and test and something breaks when a user goes down the wrong path.

The moral of the story? Don’t fall for any vendor who says their application is foolproof. And don’t look for a foolproof application, because it’s not about how easy the application is, it’s about how much value the application can generate. The best applications, while easy and logical for most of the functionality, will not be foolproof. Nowhere close. So, value first. Because, at the end of the day, the only user a foolproof solution is for is a fool.

Pay the Piper on Time or Pay the Price!

In response to abysmal payment terms of 120 days or more, which were seriously crippling smaller suppliers, the UK has instituted a requirement for large businesses to report on their UK payment practices twice a year, with failure to do so a criminal offence punishable by unlimited fines. The mandatory reporting requirement, which requires companies to report the average time taken to pay invoices and the proportion of invoices paid in 0-30 days, 31-60 days, and 61+ days, is intended to encourage businesses to improve their payment practices through transparency and public scrutiny.
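For illustration, here’s a minimal sketch of the arithmetic behind such a report: days-to-pay per invoice, the overall average, and the share of invoices falling into each band. The invoice data is, of course, hypothetical.

```python
from datetime import date

# Hypothetical invoice records: (date received, date paid)
invoices = [
    (date(2017, 1, 5),  date(2017, 1, 25)),   # 20 days
    (date(2017, 1, 10), date(2017, 2, 24)),   # 45 days
    (date(2017, 2, 1),  date(2017, 5, 2)),    # 90 days
]

days_to_pay = [(paid - received).days for received, paid in invoices]

average = sum(days_to_pay) / len(days_to_pay)
bands = {
    "0-30 days":  sum(1 for d in days_to_pay if d <= 30),
    "31-60 days": sum(1 for d in days_to_pay if 31 <= d <= 60),
    "61+ days":   sum(1 for d in days_to_pay if d >= 61),
}

print(f"average days to pay: {average:.1f}")
for band, count in bands.items():
    print(f"{band}: {count / len(days_to_pay):.0%} of invoices")
```

If you can’t produce numbers like these for your own organization on demand, that alone tells you something about your payment practices.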

It’s a shame that this requirement only exists in the UK. Not only should you know, and be prepared to report on, how fast you are paying your suppliers, but you should be striving to pay all of your suppliers within 30 days of receipt of a valid invoice, because your success depends on their success. While a happy supplier, like the pied piper, will catch and lead the supply chain problem rats away, an unhappy one will allow those problem rats to multiply, and possibly even aid in their reproduction and spread.

Suppliers are critical to your success. They not only provide the raw materials, products, and services you need, but often the raw materials, products, and services your customers need — and if these raw materials, products, and/or services are not of high quality, delivered on time, and supported enthusiastically, your customers will not be happy. Unhappy customers, especially those not under contract or nearing the end of their contracts, tend to defect.

A supplier is only likely to provide high quality, timely, well-supported products and services if it is happy. And believe the doctor when he tells you that a supplier will NOT be happy if it is not paid on a relatively timely basis most of the time. Like you, suppliers need predictable cashflow, and if you give them a cashflow nightmare, they will not be too concerned about giving you an inventory forecasting or customer satisfaction nightmare.

So don’t rely on forthcoming guidance or an industry initiative to tell you when to pay the piper. Just pay the piper and reap the benefits. (And if you not only pay on time, but pay early, you’ll be a customer of choice, and those customers tend to get all the benefits.)

Bigger. Badder. Baffling.

As per our previous posts, the merger and acquisition cycle is peaking. Coupa went on a spending spree and bought Spend360 and Trade Extensions. Jaggaer merged with Pool4Tool. OpenText is acquiring Covisint Corporation. And Descartes Systems acquired the PCSTrac business. And we just know more announcements are coming.

Everyone is getting bigger and badder, at the expense of BoB (whose days appear numbered), and it’s getting a bit baffling. Some of the acquisitions make a lot of sense (at least on paper), with companies trying to flesh out suites, but some, like OpenText’s acquisition of Covisint (which is very vertically focussed on automotive), are stretching a bit. But what’s most baffling with the rapid pace of acquisitions is how the companies are going to manage integrations (of platform and strategy) and solution footprint.

When you get big, things can get costly … quick, especially if there are multiple platforms involved. This isn’t good for you from a market perspective (as the size of the customer base that can afford your baseline solutions will shrink), and it isn’t good from an operations perspective. There’s a reason that Oracle expected to save a billion in operating costs by acquiring Sun, and a lot of it came down to platform. Sun Microsystems was very inefficient in its software infrastructure, running almost 1,000 different systems, whereas Oracle, which ate its own “one instance dog-food”, ran one Oracle instance. By migrating all of Sun’s systems into one, Oracle saves hundreds of millions a year (at least 250 to 300 million by some counts, more by others). If a company has six different platforms to maintain, that’s six different hardware infrastructure costs, six different software infrastructure costs, six different dedicated support team costs, six different implementation expert team (who will implement and train third parties) costs, and so on. These costs add up. Rapidly.

And they escalate the platform costs that the companies need to charge to customers, which shrinks the prospective customer base. And if the mid-market gets squeezed out, everybody hurts, as the greatest number of companies without decent Supply Management solutions (and the bulk of the 40% who don’t have solutions) are in the mid-market. So while an acquisition makes sense to fill a hole, not working on ways to integrate, or at least harmonize, the solution (so that there is no duplicate development across products or unnecessary, and costly, integration efforts) can be costly. So, in some sense, the speed at which some companies are moving is a bit baffling, as good integration takes good analysis, planning, and development — all of which takes time. Given that some acquisitions are being completed in two months, and that the amount of information that can be extracted in due diligence is limited, there’s no way the average company can begin integration out of the gate. In many cases, the acquiring company (which is expert in a different technology and business process) won’t even know where to start.

In other words, while some companies might be on the right track, they are just beginning a very long journey and have thousands of miles to go before they reach their destinations. Each acquisition adds miles to the track — miles that have to be travelled. The question now is not whether they have the vision, but how they will get there. And that can be a baffling question for anyone to answer (especially without third party expertise and guidance). But not necessarily unresolvable …

In the interim, Spend Matters has been putting together decent guides on questions to ask your providers if they were involved in one of the covered acquisitions. Check them out. And answer the questions for yourself before committing.

Walmart: Still Running on a 56.6k Modem …

Walmart recently released a statement that it plans to use employees to do home deliveries, presumably to fulfill online orders, as recently reported in The Washington Post. the doctor couldn’t believe it at first … convinced it was an article from The Onion misposted on a real news site, but apparently it’s real.

Even overlooking all the things that could go terribly wrong with this, and all of the new legal liabilities this could cause Walmart to incur (which would give your average risk manager and Chief Counsel nightmares for months), this makes absolutely no sense from a supply chain perspective, where the name of the game is cost control (unless, of course, Walmart is looking for a way to actually lose money as a tax avoidance scheme).

There’s a reason even Amazon uses third party carriers for its Prime service, and the reason is that, as stated by the article, last mile logistics are costly. Very costly. And those costs can only be minimized by maximizing the number of packages delivered per hour by a driver. An employee who can only deliver a few packages due to space limitations in their car can’t maximize deliveries compared to a FedEx or UPS driver who has a van built to maximize the number of packages that can be carried at one time, and who is making deliveries determined by route optimization software that minimizes the delivery radius and delivery time across all assigned packages (and eliminates left turns and backtracking).

Now, maybe Walmart is thinking that it can introduce a new kind of package assignment algorithm that minimizes the distance from an employee’s home route, and then just pay that employee for the additional distance and time required (using Google Maps calculations, etc.), but you still have the problem that the closest employee(s) may not be working that day, may not be able to do deliveries that day, or may not be able to fit the packages in their vehicle. Most of the time, the software will have to re-assign, and re-assign again, until a viable sub-optimal match is found. At the end of the day, the cost would be more than just having a full-time driver deliver everything according to route optimization software, which itself would still cost more than negotiating a good volume-based outsourcing agreement with the dominant local carriers, who can increase delivery density even more.
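To see why, consider a toy sketch of the assignment-and-reassignment loop just described, with purely hypothetical employees, distances, and capacities. The closest match is rarely the viable one.

```python
# A toy version of the assignment-and-reassignment loop described above:
# try the employee whose home route passes closest to the delivery, and
# fall back to the next closest when they are off, unavailable, or full.
# All names, distances, and capacities are hypothetical.

employees = [
    # (name, extra miles off home route, working today?, remaining cargo space)
    ("alice", 1.2, True,  2),
    ("bob",   0.4, False, 3),   # closest, but not working today
    ("carol", 2.5, True,  0),   # working, but vehicle already full
    ("dave",  3.1, True,  5),
]

def assign(package_volume: int):
    """Greedy assignment: closest viable employee wins; None if nobody fits."""
    for name, extra_miles, working, space in sorted(employees, key=lambda e: e[1]):
        if working and space >= package_volume:
            return name, extra_miles
    return None

print(assign(package_volume=1))
# -> ('alice', 1.2): bob is off and carol is full, so the "best" available
# match is already three times further off-route than the optimal one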

The reality is that just because something sounds good (as in 90% of all customers live within 10 miles of a store, where most employees are also located), does not mean it is good — and that’s why you need to perform analytics and optimization before embarking on major initiatives such as this. Because even if Walmart could get near-optimal assignments, it still needs volume, and as long as it takes 3 times as long to do anything on their site as it does on Amazon (and that is definitely true in Canada, where the outsourced development organization prefers to benchmark against sites for other real-world retailers and not Amazon from an online retail perspective), and as long as they continue to ship 6 (light) items on the same order across 5 boxes, their online volume growth is not going to be fast enough to make this idea anywhere near as efficient as they hope in the next few years. This is one case where the doctor hopes their trials flop and they see the error of their ways and go back to investing in more hybrid vehicles, more efficient warehouses and inventory management methods, and other initiatives guaranteed to increase efficiency and sustainability.

Supply Management Technical Difficulty … Part VI

In this post we conclude our initial 7 part (that’s right, 7, because Part IV was so involved, we had to do 2 posts) series on supply management technical difficulty, focussing on the source to pay lifecycle. We did this because many vendors with last generation technology have been tooting their own horn about a “market leading” offering that was market leading a decade ago but, due to lack of innovation on their part, is now only average. Moreover, much of what used to be challenging in this space is now, in the words of the spend master, so easy a high school student with an Access database could do it, and that ain’t far from the truth. Unless the platform comes with an amazing user experience (and the reality is most don’t), a lot of basic functionality can be accomplished using open source technology and an Access database.

So far, we’ve covered the basics of sourcing, the basics of procurement, supplier management, spend analysis, and (invoice to) payment, and while each has its challenges, the true technical challenges are few and far between, comparatively speaking. Today we are rounding out the series with the true, hidden, technical challenges that you don’t see. And there aren’t many of those either, but they are doozies.

Technical Challenge: Large-Scale Scalability

If you’re selling an application that is only going to be used by a few dozen, or maybe a few hundred, users, scalability isn’t an issue. An average low-end server with eight cores, 64 GB of RAM, and a few TB of solid state storage should be more than enough to support this user base, even if the application is shoddily coded by junior developers who cobbled most of it together by cutting and pasting code from SourceForge.

But if we are talking about a true e-Procurement system that is going to be rolled out to everyone across a Global 3000 organization with the authority to make a requisition or spot buy, that means tens of thousands of users, serviced by hundreds of Procurement professionals doing daily spot buys and MRO inventory management, plus dozens of strategic buyers and analysts looking for opportunities and conducting complex events using optimization and deep data mining. An average high end server is not going to do the trick. Multiple server instances are going to be needed, but they are all going to have to work off of the same data store, and a significant amount of this data is going to need to be accessed and updated in real time, so it’s not just a matter of replicating the database and letting the users go to town. While some data can be replicated for analysis, MRO data always has to be updated in real time to ensure requisitions are filled from on-site inventory or warehouse inventory first. This requires a complex data management scheme, a highly normalized (fifth normal form) design, real-time clustering, and so on, on the data side, as well as intelligent request routing on the application side, because you can’t route all requests evenly (as 10 inventory look-up requests are a lot less processor intensive than the creation of 10 detailed category reports).
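As a rough illustration of the routing point (a sketch, not any particular vendor’s implementation, with entirely hypothetical cost weights), a cost-aware router tracks each server’s outstanding estimated cost rather than its raw request count:

```python
import heapq

# A minimal sketch of cost-aware request routing: instead of round-robin
# (which counts requests), track each server's outstanding *estimated cost*
# and always route to the least-loaded server. Cost weights are hypothetical.

REQUEST_COST = {
    "inventory_lookup": 1,    # cheap, real-time read
    "requisition":      5,
    "category_report":  100,  # deep analysis, processor intensive
}

class Router:
    def __init__(self, servers):
        # heap of (outstanding estimated cost, server name)
        self.load = [(0, s) for s in servers]
        heapq.heapify(self.load)

    def route(self, request_type: str) -> str:
        cost, server = heapq.heappop(self.load)
        heapq.heappush(self.load, (cost + REQUEST_COST[request_type], server))
        return server

router = Router(["app-1", "app-2", "app-3"])
for req in ["category_report"] + ["inventory_lookup"] * 5:
    print(req, "->", router.route(req))
# the expensive report lands on one server, and the cheap lookups get
# spread across the others, instead of a naive one-request-per-server split
```

Real systems also have to account for data locality, session affinity, and actual (not estimated) load, but even this toy version shows why “route evenly” is the wrong default.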

Technical Challenge: User Experience

While the creation of just about any modern user interface component is a piece of cake using modern language libraries, there’s a big difference between user interface and user experience. The slickest user interface in the world is useless if the process it forces the user through is kludgy and cumbersome and takes three times as long to accomplish a simple task as it should. A great user experience is one that requests minimal input, involves minimal steps, and, most importantly, involves minimal time and effort on behalf of the user. It takes into account context, known information, organizational processes, and (approval) rules, and makes it so that the user does as little as possible and is in and out of the application as fast as possible so that she can focus on her primary task. If she’s not a strategic buyer or a spend analyst, she shouldn’t be spending her days in the tool — she should be spending her days doing her job. This is what many applications miss. A truly good software tool is elegant. In our space, even today, many aren’t.
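As a simple illustration (with hypothetical fields and an invented approval rule, not any real product’s logic), here’s what “minimal input” looks like in code: the user supplies only what the system can’t derive, and context fills in the rest.

```python
# A sketch of "minimal input" in practice: derive everything derivable from
# who the user is and what they picked, so the requisition form asks for as
# little as possible. User, catalog, and rule data are all hypothetical.

user_context = {
    "name": "j.smith",
    "department": "maintenance",
    "site": "plant-7",
    "manager": "r.jones",
}

catalog_item = {
    "sku": "BRG-6204",
    "description": "ball bearing, 6204",
    "unit_price": 4.80,
    "preferred_supplier": "Acme Bearings",
}

def build_requisition(user, item, quantity):
    """The user supplies the item and quantity; everything else is prefilled."""
    total = quantity * item["unit_price"]
    return {
        "requested_by": user["name"],
        "charge_to":    user["department"],     # from context, not asked
        "deliver_to":   user["site"],           # from context, not asked
        "supplier":     item["preferred_supplier"],
        "line":         (item["sku"], quantity, total),
        # hypothetical approval rule applied silently: small buys skip the manager
        "approver":     None if total < 100 else user["manager"],
    }

print(build_requisition(user_context, catalog_item, quantity=10))
# two inputs from the user (item, quantity); the system handles the rest
```

Two inputs instead of eight, and the shop floor worker is back to her actual job in seconds. That’s the difference between an interface and an experience.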

So, hopefully by now you have a good understanding of what is truly difficult and what you should be looking for when evaluating a tool. There is still an immense amount of complexity that needs to be overcome in a modern application, but any application that does not tackle the complexity outlined in this series is not truly modern. Keep this in mind and you’ll make great selections going forward.