Category Archives: Technology

The Days of Black Box Marketing May Soon Be Over!

In what marketers will refer to as the good old days of the Source-to-Pay marketplace, when the space was just emerging and most analysts couldn’t see past the shiny UI to the features that were, or more importantly, were NOT, lurking underneath, it was a wild-west, anything-goes marketplace.

Marketers could make grandiose claims as to what a platform did and did not do, and if they could give a good (PowerPoint) presentation to the analysts, the analysts would buy it and spread the word. The story would grow bigger and bigger until it should have been dismissed as crazy and unrealistic, but was instead treated as the new gospel according to the powers on high.

Big names would get bigger, pockets would get fatter, but customers would lose out when they needed advanced functionality or configurability that just was not there. On the road-map, maybe, but would it get implemented before the company got acquired by a bigger company, which would halt innovative development dead in its tracks?

But those days, which still exist for some vendors with long-standing relationships with the big-name analyst firms, may soon be numbered. Why? Now that SpendMatters is doing SolutionMaps, which are deep dives into well-defined functionality, a customer can know for sure whether or not a certain provider has a real solution in the area, how deep it goes, and how it compares to other providers. As a result, the depth of insight that customers will soon expect has been taken up a couple of notches, and any analyst firm or consultancy that doesn’t raise the bar is going to be avoided and left behind.

Once (potential) customers realize the degree of information that is available, and should be available, they’ll never settle for less. And that’s a good thing. Because it means the days of black box marketing will soon be over. While North America may never be a Germany where accurate technical specs lead the way, at least accurate claims will. And every vendor will be pushed to do better.

Get Your Head Out of the Clouds!

SaaS is great, but is cloud delivery great?

Sure it’s convenient to not have to worry about where the servers are, where the backups are, and whether or not more CPUs have to be spun up, more memory needs to be added, or more bandwidth is needed and it’s time to lay more pipe.

However, sometimes this lack of worrying leads to an unexpectedly high invoice when your user base decides to adopt the solution as part of their daily job, spins up a large number of optimization and predictive analytics scenarios, and spikes CPU usage from 2 server days to 30 server days, resulting in a 15-fold bill increase overnight. (Whereas hosting on your own rack has a fixed, predictable cost.)
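
To make the arithmetic concrete, here’s a minimal sketch with made-up rates (the dollar figures are assumptions; only the ratio matters): a jump from 2 to 30 server days is a 15-fold bill under usage-based pricing, while your own rack costs the same either way.

```python
# Illustrative only: hypothetical rates, not any provider's actual pricing.
RATE_PER_SERVER_DAY = 100.0   # assumed usage-based cloud rate ($)
OWN_RACK_COST = 5000.0        # assumed fixed monthly cost of self-hosting ($)

def cloud_bill(server_days: float) -> float:
    """Usage-based billing: you pay for every server-day consumed."""
    return server_days * RATE_PER_SERVER_DAY

quiet, busy = cloud_bill(2), cloud_bill(30)
print(f"quiet month:   ${quiet:,.0f}")
print(f"busy month:    ${busy:,.0f} ({busy / quiet:.0f}x the bill)")
print(f"your own rack: ${OWN_RACK_COST:,.0f} either way")
```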

But this isn’t the real problem. (You could always have set up alerts or limits and prevented this from happening had you thought ahead.) The real problem is regulatory compliance and the massive fines that could be headed your way if you don’t know where your data is and cannot confirm you are 100% in compliance with every regulation that impacts you.
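
For instance, a minimal sketch of the kind of guard that thinking ahead buys you (the budget, threshold, and function names are all hypothetical, not any provider’s actual API):

```python
# Hypothetical guard: warn at 80% of a monthly server-day budget and
# block new scenario runs once the cap would be exceeded.
MONTHLY_BUDGET_SERVER_DAYS = 10.0   # assumed budget
ALERT_THRESHOLD = 0.8               # assumed warning level

def send_alert(message: str) -> None:
    print("[USAGE ALERT]", message)  # stand-in for email/Slack/pager

def may_start_scenario(used_so_far: float, estimated_usage: float) -> bool:
    """Return True if a new optimization/analytics run should be allowed."""
    projected = used_so_far + estimated_usage
    if projected > MONTHLY_BUDGET_SERVER_DAYS:
        send_alert(f"cap reached: {used_so_far:.1f} server-days used; run blocked")
        return False
    if projected >= ALERT_THRESHOLD * MONTHLY_BUDGET_SERVER_DAYS:
        send_alert(f"projected {projected:.1f} of {MONTHLY_BUDGET_SERVER_DAYS} server-days")
    return True

print(may_start_scenario(used_so_far=7.5, estimated_usage=1.0))   # warns, allows
print(may_start_scenario(used_so_far=9.5, estimated_usage=2.0))   # blocks
```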

For example, EU and Canadian privacy regulations limit where data on their citizens can live and what security protocols must be in place. And even if this is an S2P system, which is focussed on corporations and not people, you still have contact data, which is data on people. Now, by virtue of their employment, these people agree to make their employment (contact) information available, so you’re okay … until they are no longer employed. Then, if any of that data was personal (such as a cell phone number or local delivery address), it may have to be removed.

But more importantly, with GDPR coming into effect on May 25, you need to be able to provide any EU citizen, regardless of where they are in the world and where you are in the world, with any and all information you have on them, and do so in a reasonable timeframe. Failure to do so can result in a fine of up to €20 million or 4% of global turnover. For ONE violation. And, if you no longer have a legal right to keep that data, you have to be able to delete all of it, including every instance across all systems and all (backup) copies. If you don’t even know where the data is, how can you ensure this happens? The answer is, you can’t.
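
A minimal sketch of what answering such a request requires, assuming a (hypothetical) inventory of every system that holds personal data; the names and interface below are placeholders, not a compliance tool:

```python
# A minimal sketch, not a compliance tool: you can only answer an access or
# erasure request if you have a complete inventory of every system (and
# backup) holding personal data. All names here are hypothetical.
from typing import Protocol

class PersonalDataStore(Protocol):
    name: str
    def export_subject_data(self, subject_id: str) -> dict: ...
    def erase_subject_data(self, subject_id: str) -> int: ...  # records removed

def handle_access_request(stores: list[PersonalDataStore], subject_id: str) -> dict:
    """Gather everything held on the data subject, system by system."""
    return {store.name: store.export_subject_data(subject_id) for store in stores}

def handle_erasure_request(stores: list[PersonalDataStore], subject_id: str) -> dict:
    """Erase the subject everywhere; any store missing from the inventory is a liability."""
    return {store.name: store.erase_subject_data(subject_id) for store in stores}
```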

Plus, not every country will permit sensitive or secure data to be stored just anywhere. So, if you want a client that is a defense contractor, even if your software passes the highest security standards tests, that doesn’t mean the client you want can host in the cloud.

With all of this uncertainty and chaos, the SaaS of the future is going to be a blend of an (in-house) ASP and a provider-managed software offering. The application and databases will be housed in racks, in a dedicated hardware environment, at a location selected by the organization, but the software, which will be managed by the vendor, will run in virtual machines and update via vendor “pushes”, with the vendor able to shut down and restart the entire virtual machine if a reboot is necessary. This method will also permit the organization to have on-site QA of new release functionality if it likes, as that’s just another VM.

Just like your OS can auto-update and reboot on a schedule, your S2P application will auto-update in a similar fashion. It will register a new update and schedule it for the next defined update cycle. Prevent users from logging in 15 minutes prior. Force users to start logging off 5 minutes before. Shut down. Install the updates. Reboot if necessary. Restart. And the new version will be ready to go. If there are any issues, an alert will be sent to the provider, who will be able to log in to the instance, and even the VM, and fix it as appropriate.
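
As a rough illustration, here’s a simplified, runnable sketch of that update window; each step is just a print stub, the “minutes” are compressed so it finishes instantly, and none of the names come from any actual vendor API:

```python
import time

MINUTE = 0.01  # compressed for the demo; set to 60.0 for real minutes

def wait(minutes: float) -> None:
    time.sleep(minutes * MINUTE)

def run_update_cycle(update_name: str, requires_reboot: bool) -> None:
    print(f"{update_name}: registered and scheduled for the next defined update window")
    print("T-15 min: new logins blocked")
    wait(10)
    print("T-5 min: remaining users forced to log off")
    wait(5)
    print("T-0: application shut down; installing update")
    if requires_reboot:
        print("rebooting the virtual machine")
    print("application restarted; new version live")
    # in the real thing, a failed health check here would alert the provider,
    # who can log in to the instance (or the VM itself) and fix it

run_update_cycle("S2P 4.2.1", requires_reboot=True)
```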

While it’s not the one-instance (with segregated databases) SaaS utopia, it’s the real-world solution for a changing regulatory and compliance landscape, and it will also comfort the security freaks and control freaks. So, head-in-the-cloud vendors, get ready. It’s coming.

Why’s it all about the platform when it should be all about the power?

As we all know, the last year has been all about the M&A frenzy, as the big try to get bigger by gobbling up any player with modules they don’t have, or any player with a customer base in a region they aren’t in, and doing so in a manner that doesn’t always make sense to analysts. As the doctor indicated in his post last month on Surviving a M&A: The Customer Perspective, acquisitions should lead to synergies, and do so from a customer, solution, and/or operations perspective.

Preferably, an M&A should culminate in synergies of all kinds. Why? An M&A that doesn’t sync from an operations perspective doesn’t reduce overhead costs, and that means you don’t get any economies of scale, which is the first thing all the traditional textbooks say you should look for. If there are no customer synergies, then there are no cross-sell or up-sell opportunities, and those are typically the next thing the textbooks say you should look for.

And, especially in our space, if there are no solution synergies, then a lot of money is wasted, as the point of the acquisition should be to build a better, or at least, a more complete platform. Otherwise, one company is paying a lot of money for something that will just get tossed in the bit bucket because supporting non-synergistic platforms gets too expensive too fast and the non-synergistic pieces will get sunsetted faster than the sun in Alert, Nunavut in late February.

So why doesn’t the recent M&A Frenzy make a lot of sense to the doctor? Not only has a fair amount of it been lacking in obvious synergies, but a lot of it has been to simply expand platform offerings, without focussing on the power of the solutions being bought or how the acquisitions will help the platform.

The past year has seen the acquisitions of traditional catalog providers and leading spend analytics and optimization providers. In some cases, the power is limited … and in other cases the power is limitless. But in the majority of cases, to date, the integration has been pretty limited. It’s been more or less just plugging a module into a hole, without any analysis of either the power of the solution or how the solution could enhance the rest of the platform in new and innovative ways.

For example, let’s take optimization. Just plugging it into an S2P platform is pretty good, especially given the dearth of optimization solutions on the market today, but is it great? How do you take an offering to market that the market will understand is better than those of the other leading vendors which have optimization? After all, if it’s just the same process (construct the RFI, send it out, get the data, pump it into the model, get the result, make the award, push it into contract management), what’s better from the perspective of the average Joe? But if you have an advanced Procurement solution that can plug into the catalog and analyze not only the cost, but the total cost if the order can be piggy-backed on other orders from on-contract suppliers, who can add it to forthcoming shipments, give you contract-level discounts, and so on, that’s value. And if, when you are looking to assemble a standard kit for a new hire, it can run all the various combinations and determine which variant is best over a given time frame, that’s value too.
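
To make the total-cost point concrete, here’s a toy sketch (every supplier, price, and discount below is invented) of how the cheapest unit price can lose to an on-contract supplier once piggy-backed shipping and contract-level discounts are factored in:

```python
from dataclasses import dataclass

@dataclass
class Option:
    supplier: str
    unit_price: float
    shipping: float            # standalone freight cost
    can_piggyback: bool        # can ride on an already-scheduled shipment
    contract_discount: float   # e.g. 0.05 = 5% contract-level discount

def total_cost(opt: Option, qty: int) -> float:
    goods = qty * opt.unit_price * (1 - opt.contract_discount)
    freight = 0.0 if opt.can_piggyback else opt.shipping
    return goods + freight

options = [
    Option("SpotMarketCo", unit_price=9.50, shipping=400.0, can_piggyback=False, contract_discount=0.00),
    Option("OnContractCo", unit_price=9.90, shipping=400.0, can_piggyback=True,  contract_discount=0.05),
]

qty = 500
for o in options:
    print(f"{o.supplier}: {total_cost(o, qty):,.2f}")
print("best total-cost award:", min(options, key=lambda o: total_cost(o, qty)).supplier)
```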

And a catalog solution can enhance sourcing if it supports punch-out and integrated search: any time a buyer is considering sending out an RFI, it can be used to identify current market pricing and potential suppliers from the data within the catalog and in punch-out sites. If the buyer compares this pricing to current pricing, the buyer will know whether going to market is likely to be good (if market pricing is significantly less than current organizational pricing) or bad (if market pricing is significantly higher, in which case the best option is just to extend the contract with the incumbent, assuming pricing will stay about the same).
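
As a rough sketch of that go-to-market signal (the 10% threshold and the sample prices are arbitrary assumptions, not a recommendation):

```python
from statistics import median

THRESHOLD = 0.10  # assumed: only act on a gap of at least 10% either way

def sourcing_signal(current_price: float, market_prices: list[float]) -> str:
    """Compare contracted price to catalog/punch-out market pricing."""
    market = median(market_prices)
    gap = (current_price - market) / current_price
    if gap >= THRESHOLD:
        return f"go to market: median market price {market:.2f} is {gap:.0%} below current"
    if gap <= -THRESHOLD:
        return f"extend incumbent: market is {-gap:.0%} above current pricing"
    return "marginal difference: an RFI is unlikely to be worth the effort"

print(sourcing_signal(12.00, [10.40, 10.80, 10.10]))  # go to market
print(sourcing_signal(12.00, [13.50, 13.90, 14.20]))  # extend incumbent
```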

At the end of the day, Procurement is about generating value — and if the platform addition doesn’t generate additional value, what’s the point?

Keep Your Self Driving Car. I’ll Still Choose Good Ol’ Alfred Every Day of the Week!

As the doctor pointed out back in 2014, calling #badwolf on self-driving cars is well-founded. Just last month we had more accidents involving self-driving cars (from Tesla and GM): one where a Tesla “ploughed into the rear” of a fire engine in Culver City, and one where a GM car collided with a motorcycle in San Francisco.

And when you get injured, as in the case of the motorcycle rider, who do you sue? If the car is self-driving, then there’s no driver, just source code. Source code isn’t an entity, so all you’re left with is suing GM, as the motorcyclist whose bike was hit by the GM car is doing (as per this article in Engadget and this article in Popular Science). But is it really the company’s fault, when technically it’s the software, written by who knows how many employees, who used who knows what from open source to speed up development, code which was in turn contributed by who knows how many authors?

But you can’t sue software; it’s not an entity, at least not a legal one, and that won’t change until we legally grant it intelligence … and the right to own assets. So it’s GM, but is GM liable under the law? And, if not, how can the individual in the vehicle, who wasn’t driving, be held liable?

And what happens if the “AI” becomes genuinely intelligent and decides to “improve its own code”, or the code gets co-mingled with the company’s “sentiment analysis” technology, suddenly gains a strong “dislike” for the self-driving cars of the competition and, using its limited action-reaction processing algorithms, determines the best course of action is to “crash into the competition’s cars”? What then? We’re driving cars with a “kill” switch we have no control over!

And we’ll never know if there is one! With 99M+ lines of code in the average self-driving car OS, how would you ever find the kill switch until it triggered? And if it triggered en masse, all of a sudden we’d have Maximum Overdrive on a global scale! Are you ready for that? the doctor is not!