
M&A Has Been Mad. Platforms Will Disappear. But More Than One Will Survive. So Which Ones?

We’ve been writing a lot about M&A lately (including, but not limited to, several recent pieces) because M&A is still going strong. (And, as per our recent post on The Hidden Value of SI Association, SI is acutely aware of this because this is how it loses its customers. SI works with these companies and helps them become known and successful [through a focus not on buzz but on actual education, process improvement, and appropriate roadmaps], they get noticed by cash-rich firms, who then buy them and, in many cases, strip out the management teams and/or consultants.)

We’ve also noted that not only will some platforms have to disappear (to make the mergers successful), but also that (as we argued in our recent piece on One Vendor Won’t Rule Them All … And One Ring Won’t Bind Them) more than one platform will have to survive, due to the wide range of needs organizations have and the different processes used around the globe in organizations headquartered in different regions and run by different cultures.

But that being said, now that Sourcing and Procurement technology is becoming more mainstream, and the majority of organizations are looking for analytics, procurement automation, and supplier program management, those organizations looking for their first platform (as well as the early adopters of first-generation platforms that are now almost a decade behind) are trying to figure out which vendors and, more importantly, which product lines they should look at (now that some vendors have as many as three different Procurement product lines under one organizational roof).

This is hard to predict, especially since the Fortune 500 is in more flux than it has ever been. It used to be that if you were on the list, you were on it for years (if not decades) and changes were subtle. Now a company can make it one year and, as a result of one major disruption or media fiasco, be in bankruptcy the next (and disappear from the list). And while most of the companies in our space are not on the Fortune 500, they are now being bought by the big enterprise software giants that are, including SAP (with a market cap over 100B).

And the instability in enterprise software companies amplifies the smaller they are. When the biggest stand-alone public company in our space has a valuation of a mere 2.5B, and the largest private company in our space would likely get a valuation in the same range, you can see where we are: the average large company has revenues you have to round up to reach 100M, and the average best-of-breed (BoB) vendor rounds to the 10M range.

But the platforms provided by some companies, due to the immense value they offer, will survive, even if under a different name, as part of a different platform, under a different company, held by a different holding co, whose name may change three times over the next decade. And who will they be?

Simply put, they will be those platforms that are the hardest to replicate and that offer the deepest capabilities key to value identification, like optimization, advanced predictive and prescriptive analytics, cognitive process automation, semantic risk identification and monitoring, etc., whether the platform is a standalone best-of-breed platform in a financially stable 10M company, part of a suite from a larger 100M company, or just one module in a suite in a stable of suites in a 1B enterprise. So don’t try to guess which vendor will survive; instead, focus on which platform will survive, and chances are you will be setting your organization up for success.

RPA: Are We There Yet?

Nope. Not even close. And a recent Hackett study proves it.

Earlier this month, The Hackett Group released a point of view on Robotic Process Automation: A Reality Check and a Route Forward where they noted that while early initiatives have produced some tangible successes, many organizations have yet to scale their use of RPA to a level that is making a major impact on performance, likely because RPA has come with a greater-than-expected learning curve.

Right now, mainstream adoption of RPA is 3% in Finance, 3% in HR, 7% in Procurement, and 10% in GBS (Global Business Services). Experimentation (referred to as limited adoption) is higher: 6% in HR, 18% in Finance, 18% in Procurement, and 29% in GBS. But it’s not that high, especially considering that the steep learning curve for the average organization will result in a number of these experiments not continuing.

Due to the large amount of interest, Hackett is predicting that, within 3 years, RPA will be mainstream in 11% of HR organizations (a 4X increase), 30% of Procurement (a 4X increase), 38% of Finance (a 12X increase), and 52% of GBS (a 5X increase), along with increases in experimentation. Experimentation will definitely increase due to the hotness of the topic, but mainstream adoption will require success, and as Hackett deftly notes, successful deployments have certain key prerequisites:

  • digital inputs
  • structured data
  • clear, logical rules that can be applied

And when the conditions are right, organizations:

  • realize operational cost benefits
  • have fewer errors and more consistent rule application
  • benefit from increased productivity
  • are able to refocus talent on higher-value work
  • strengthen auditability for key tasks
  • have enhanced task execution data to analyze and improve processes

But this is not enough for success. Hackett prescribes three criteria for success, which they define as:

  • selecting the right RPA opportunities
  • planning the journey
  • building an RPA team or COE

and you’ll have to check out Robotic Process Automation: A Reality Check and a Route Forward for more details, but is this enough?

Maybe, maybe not. It depends on how good an RPA team is built, how good it is at identifying appropriate use cases for RPA, and how good it is at successful implementation. Success breeds success, but failure eliminates the option of continued RPA use, at least until a management changeover.

The Days of Black Box Marketing May Soon Be Over!

In what marketing will refer to as the good old days of the Source-to-Pay marketplace, when the space was just emerging and most analysts couldn’t see past the shiny UI to what features were, or more importantly, were NOT, lurking underneath, it was a wild-west, anything goes marketplace.

Marketers could make grandiose claims as to what the platform did and did not do, and if they could give a good (PowerPoint) presentation to the analysts, the analysts would buy it and spread the word, and the story would grow bigger and bigger until it should have been seen as crazy and unrealistic, but was instead seen as the new gospel according to the powers on high.

Big names would get bigger, pockets would get fatter, but customers would lose out when they needed advanced functionality or configurability that just was not there. On the road-map, maybe, but would it get implemented before the company got acquired by a bigger company, which would halt innovative development dead in its tracks?

But those days, which still exist for some vendors with long-standing relationships with the big-name analyst firms, may soon be numbered. Why? Now that SpendMatters is doing SolutionMaps, which are deep dives into well-defined functionality, a customer can know for sure whether or not a certain provider has a real solution in the area, how deep it goes, and how it compares to other providers. As a result, the depth of insight that will soon be expected by a customer has been taken up a couple of notches, and any analyst firm or consultancy that doesn’t raise the bar is going to be avoided and left behind.

Once (potential) customers realize the degree of information that is available, and should be available, they’ll never settle for less. And that’s a good thing. Because it means the days of black box marketing will soon be over. While North America may never be a Germany where accurate technical specs lead the way, at least accurate claims will. And every vendor will be pushed to do better.

Get Your Head Out of the Clouds!

SaaS is great, but is cloud delivery great?

Sure it’s convenient to not have to worry about where the servers are, where the backups are, and whether or not more CPUs have to be spun up, more memory needs to be added, or more bandwidth is needed and it’s time to lay more pipe.

However, sometimes this lack of worrying leads to an unexpectedly high invoice when your user base decides to adopt the solution as part of their daily job, spins up a large number of optimization and predictive analytics scenarios, and spikes CPU usage from 2 server-days to 30 server-days, resulting in a 15-fold bill increase overnight. (Whereas hosting on your own rack has a fixed, predictable cost.)
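
To put numbers on that, here’s a back-of-the-envelope sketch, with entirely made-up rates, budgets, and alert thresholds, of how such a spike turns into a surprise invoice, and the kind of simple usage alert that could have caught it mid-month:

```python
# Hypothetical numbers only: the rate, budget, and threshold below are
# illustrative, not any provider's actual pricing.

RATE_PER_SERVER_DAY = 50.00     # metered cloud rate (assumed)
BUDGETED_SERVER_DAYS = 2        # the usage you planned (and budgeted) for
ALERT_THRESHOLD = 1.5           # alert once usage passes 150% of budget

def monthly_bill(server_days: float) -> float:
    return server_days * RATE_PER_SERVER_DAY

def usage_alert(server_days: float) -> bool:
    """Return True (and warn) if usage has blown past the budgeted level."""
    if server_days > BUDGETED_SERVER_DAYS * ALERT_THRESHOLD:
        print(f"ALERT: {server_days} server-days used, {BUDGETED_SERVER_DAYS} budgeted")
        return True
    return False

expected = monthly_bill(2)       # the bill you expected
actual   = monthly_bill(30)      # the bill after adoption took off
print(actual / expected)         # 15.0 -- the 15-fold jump described above
usage_alert(30)                  # would have fired long before month-end
```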

But this isn’t the real problem. (You could always have set up alerts or limits and prevented this from happening had you thought ahead.) The real problem is regulatory compliance and the massive fines that could be headed your way if you don’t know where your data is and cannot confirm you are 100% in compliance with every regulation that impacts you.

For example, EU and Canada privacy regulations limit where data on their citizens can live and what security protocols must be in place. And even if this is an S2P system, which is focussed on corporations and not people, you still have contact data, which is data on people. Now, by virtue of their employment, these people agree to make their employment (contact) information available, so you’re okay … until they are no longer employed. Then, if any of that data was personal (such as a cell phone number or local delivery address), it may have to be removed.

But more importantly, with GDPR coming into effect May 25, you need to be able to provide any EU citizen, regardless of where they are in the world and where you are in the world, with any and all information you have on them, and do so in a reasonable timeframe. Failure to do so can result in a fine of up to €20 Million or 4% of global turnover, whichever is greater. For ONE violation. And, if you no longer have a legal right to keep that data, you have to be able to delete all of the data, including all instances across all systems and all (backup) copies. If you don’t even know where the data is, how can you ensure this happens? The answer is, you can’t.
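
One way to see why not knowing where the data is proves fatal: an erasure (or subject access) request can only be fulfilled against a registry of every store, including backups, that holds personal data. Below is a minimal, hypothetical sketch; the store names and the delete_subject interface are invented for illustration, not drawn from any particular system:

```python
# Hypothetical sketch: honouring a GDPR erasure request requires a registry of
# every place personal data lives (production systems AND backups). The store
# class and names below are invented for illustration.

class DictStore:
    """Toy in-memory stand-in for a real system of record."""
    def __init__(self, name: str, rows: dict):
        self.name = name
        self.rows = rows  # subject_id -> list of records

    def delete_subject(self, subject_id: str) -> int:
        return len(self.rows.pop(subject_id, []))

def erase_subject(subject_id: str, registry: list) -> dict:
    """Delete every record for subject_id across every registered store."""
    return {store.name: store.delete_subject(subject_id) for store in registry}

registry = [
    DictStore("s2p-contacts", {"jdoe": [{"cell": "555-0100"}]}),
    DictStore("crm-export",   {"jdoe": [{"address": "12 Main St"}]}),
    DictStore("backup-2017",  {"jdoe": [{"cell": "555-0100"}]}),
]
print(erase_subject("jdoe", registry))
# {'s2p-contacts': 1, 'crm-export': 1, 'backup-2017': 1}

# Any store missing from the registry (a forgotten backup, a shadow export)
# keeps its copy forever -- which is exactly the compliance gap described above.
```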

Plus, not every country will permit sensitive or secure data to be stored just anywhere. So, if you want a client that works as a defense contractor, even if your software passes the highest security standards tests, that doesn’t mean that the client you want can host in the cloud.

With all of the uncertainty and chaos, the SaaS of the future is going to be a blend of an (in-house) ASP and a provider-managed software offering: the application and databases will be housed in racks, in a dedicated hardware environment, in a location selected by the provider; but the software, which is going to be managed by the vendor, will run in virtual machines and update via vendor “pushes”, with the vendor able to shut down and restart the entire virtual machine if a reboot is necessary. This method will also permit the organization to have on-site QA of new release functionality if it likes, as that’s just another VM.
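
To illustrate (and only to illustrate; every name and field below is hypothetical), such a hybrid deployment might be described by something as simple as:

```python
from dataclasses import dataclass
from typing import Optional

# Hypothetical descriptor of the hybrid model described above: dedicated
# hardware in a known location, vendor-managed VMs updated by vendor pushes,
# and an optional VM for customer QA of the next release.
@dataclass
class TenantDeployment:
    tenant: str
    datacenter: str                    # a known, fixed location (data residency)
    dedicated_hardware: bool           # no co-tenancy on the rack
    app_vm_image: str                  # vendor-managed, updated via pushes
    db_vm_image: str
    qa_vm_image: Optional[str] = None  # on-site QA of new releases, if desired

acme = TenantDeployment(
    tenant="acme-corp",
    datacenter="frankfurt-colo-1",
    dedicated_hardware=True,
    app_vm_image="s2p-app-4.2.0",
    db_vm_image="s2p-db-4.2.0",
    qa_vm_image="s2p-app-4.3.0-rc1",
)
```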

Just like your OS can auto-update on a schedule or reboot, your S2P application will auto-update in a similar fashion. It will register a new update and schedule it for the next defined update cycle. Prevent users from logging in 15 minutes prior. Force users to start logging off 5 minutes before. Shut down. Install the updates. Reboot if necessary. Restart. And the new version will be ready to go. If there are any issues, an alert will be sent to the provider, who will be able to log in to the instance, and even the VM, and fix it as appropriate.
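
A minimal sketch of that update window follows; the steps below just log, standing in for whatever session, package, and VM management hooks a real platform would expose:

```python
from datetime import datetime, timedelta
import time

# Sketch of the auto-update cycle described above. The steps simply print;
# a real platform would call its own session, package, and VM hooks.

def log(step: str) -> None:
    print(f"{datetime.now():%H:%M:%S}  {step}")

def wait_until(t: datetime) -> None:
    while datetime.now() < t:
        time.sleep(1)

def run_update_window(start: datetime, reboot_required: bool) -> None:
    wait_until(start - timedelta(minutes=15)); log("block new logins")
    wait_until(start - timedelta(minutes=5));  log("force remaining users to log off")
    wait_until(start);                         log("shut down application")
    log("install updates")
    if reboot_required:
        log("reboot VM")
    log("restart application; new version live")
    # on any failure, a real system would alert the provider to log in and fix it

# e.g. run_update_window(datetime.now() + timedelta(minutes=16), reboot_required=True)
```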

While it’s not the one-instance (with segregated databases) SaaS utopia, it’s the real-world solution for a changing regulatory and compliance landscape, which will also comfort security freaks and control freaks. So, head-in-the-cloud vendors, get ready. It’s coming.