Category Archives: rants

The Big X are Pushing Operate Services … But Can They Really Offer Them? And Are They Real?

And if they are real, can anyone?

Backing up: in the beginning, there was traditional Business Process Outsourcing (BPO), which became very common in the 1980s and 1990s as the result of constant claims by the big consultancies and their ilk that the only way businesses could enhance their flexibility and agility and maximize their competitive advantage was to outsource the processes they weren’t good at to the Big X outsourcing offices. (In some cases they weren’t wrong. When the business had no competence in a function, grossly overpaying someone with reasonable competence, even if that someone was not the expert the Big X claimed, generated a good return for the business. The function was done efficiently and effectively, negating the loss the business used to suffer, and it allowed the business to focus on the functions it did well, which increased its profit even as it (often unnecessarily) forked out seven (7) and eight (8) figures to the Big X every year. And we say unnecessarily because most of the time the business could have outsourced to a smaller, niche consultancy at one third to one half of the cost and achieved the same result.)

Then, as the Big X tried to steal business from their competitors and niche firms tried to break in, they upgraded to “Managed Services,” which was supposed to be more than just performing the service for you cost-efficiently (by supposedly reducing your costs by doing it better, and thus, cheaper); it was supposed to add value. The idea was that it didn’t just take over a point-based function, but instead provided a dedicated team that basically took over an entire department for you, just offsite, and worked exclusively on your projects. They learned your business, and improved the service offering over time to not only maximize efficiency, but maximize value. If they took over your IT department, they learned the systems you used, optimized them, learned to provide quick and effective problem resolution on the help desk, and, when you needed a new solution, helped you identify the one that would work best with the systems you had. If they took over your AP, they learned your suppliers, your payment rules, and your PO formats, and implemented systems that matched high-value invoices to their POs to reduce overspend. They also helped you build catalogs from suppliers that could meet your MRO / internal needs at the lowest possible cost. And so on. Over time, they not only met SLAs, but improved on all key metrics.

But now a few of the Big X are saying that Managed Services are not enough to maximize value and that you need premium “Operate Services” (which come at a premium price, of course). So what’s the difference? Hard to tell. The best definition we can find is that it’s a “holistic approach that is focused on delivering outcomes and spurring innovation in a model that leverages automation and data insight to generate substantial business value”. the doctor thought that was what managed services were supposed to do for you? Other definitions indicate that “operate services” differentiate by providing “on-demand access to expert talent”. Isn’t that why you use a managed service, so they can identify when the team needs a new expert and add that expert? Still other definitions indicate that “operate services” are more “collaborative”. Are they saying that the managed services they provided to you in the past, where they often acted as an entire department, weren’t collaborative? WTF?

In other words, while they are presenting it as a more advanced premium service model, for which they want to charge you a premium, it really isn’t, or shouldn’t be, because if it is, they are admitting they have been ripping you off for decades!

In some consultancies, it is just a specialization of managed services for IT/IT Security, analytics-heavy functions like Strategic Procurement or Network Analysis, or highly technical functions like supplier identification in direct manufacturing. And it costs more because those people, who are much rarer than experts in traditional business functions and processes, are more expensive, as are the tools they need to secure your enterprise, analyze your global spend, analyze your supply network, or analyze potential suppliers for your electronic components. And we can see how that could be fair, as long as they aren’t using “operate services” to increase costs across the board where there is absolutely no justification for it, and are only using the term to differentiate a subclass of specialized services they offer, while admitting it’s nothing more than managed services, just applied to a new set of business functions.

But if the consultancy is trying to pitch these “Operate Services” across the board with claims that these new services are better and more specialized for your business than any other kind of service, then they are admitting they are currently ripping you off in your managed services, and you should just fire them. Because there should be no difference, with the exception that the subclass of operate services we defined in the last paragraph generally requires more advanced systems and more resources with a high TQ, which usually cost more. But that’s it.

So don’t blindly fall for this brand-new business pitch if they try to pull it on you; simply compare what they are offering to any other firm that says it can fully meet your needs with a traditional managed services model, and give the business to the most honest firm among those that can meet your needs. Now, it might be new and more in-depth and more valuable, but that’s not guaranteed.

PostScript: We do believe Big X can offer a lot of value.  See this post on When You Should Use a Big X!

The first jobs lost to OpenAI were at OpenAI? I LOVE IT!

In honour of the first five jobs that were lost to OpenAI, at OpenAI (where it was announced this week that the CEO, the president, and three senior staff were stepping down and/or being let go).

To the tune of I Love It by Icona Pop (feat. Charli XCX)!

I got this feeling on the winter day when you were gone
You crashed your car into the bridge
I watched, you let it burn
You threw our shit into a bag and pushed it down the stairs
You crashed your car into the bridge

I don’t care, I love it
I don’t care

I got this feeling on the winter day when you were gone
You crashed your car into the bridge
I watched, you let it burn
You threw our shit into a bag and pushed it down the stairs
You crashed your car into the bridge

I don’t care, I love it
I don’t care

I’m on an Earthen road, you’re in the Milky Way
You want me down on earth, but you’re up in space
You’re so damn hard to find, that AI took over
You said it’d take our jobs, but it f*ck3d you over!

I love it
I love it

9% of Companies Claim To Be Ready to Manage Risks Posed by AI? Bull Crap.

the doctor could not believe the recent headline in Forbes that said Only 9% of surveyed companies are ready to manage risks posed by AI. Because there is no way that 9% of companies are ready to manage the risks posed by AI. There’s no way even 0.9% of companies are ready to manage the risks posed by AI.

Why? Because of the rampant introduction of massive LLMs and DNNs that no one understands, for which I’m sure we’ve yet to see the last of the abysmal failures, hallucinations, and suicide coaxing. There’s simply no way we can even begin to predict all of the potential errors they are going to make, the risks they are putting us under, the repercussions if those errors are made and those risks materialize, or how the risks can be minimized, if not mitigated. No way whatsoever.

Not only is it theoretically impossible to be fully prepared, but when you consider that the average organization is not even equipped to handle regular software failures, how can the average organization expect to handle a software-based AI failure it can’t even predict?

The article, which quoted a recent study by RisKonnect (who are obviously confident they can detect and protect against most types of risk, and maybe that’s why they are so confident they can protect and defend against AI risks; but RisKonnect is for traditional enterprise and third-party risk, not cyber risk, and definitely not AI risk, and no one can protect against a risk when they don’t even know what the risk is), did quote some very useful statistics on areas of concern. Specifically, of the companies surveyed:

  • 65% are concerned about data and cyber,
  • 60% are worried about employees making decisions on erroneous information,
  • 55% are worried about employee misuse and ethical risk,
  • 34% are worried about copyright and intellectual property, and
  • 17% are worried about discrimination risk.

The risks are the right risks, and the order of priority is about the right order, but the percentage of companies concerned is much too low.

1. 100% of companies should be concerned about data and cyber. Not only are we in the age of state-sponsored hacking, which makes any company with useful confidential designs and information a target, but with almost all significant commerce being conducted online, all companies are a target for financial fraud.

2. 100% of companies that need to make decisions based on data analysis should be concerned about erroneous information, as all companies have bad data, and the bigger the company, the worse the data.

But none of these match the risks of AI. As per the quote in the article from Caitlin Begg, an over-reliance on AI risks robotic, insensitive, spammy, or off-topic messaging, and that’s just the beginning. As noted, most companies haven’t simulated their worst-case scenario, and since one can’t even predict what that is with AI, they aren’t even close to ready. AI is not just another application in the organization’s tech stack, even though the article seemed to indicate it is. One can prioritize transparency, accountability, threat and vulnerability monitoring, and risk mitigation, but when most AI applications can’t explain their actions, are not accountable humans, and have no realistic threat and risk assessments, and when there is no way to mitigate the risk except to not use the technology in the first place for any decision that should be made by a HUMAN, it’s just not enough.

The precautionary steps are not to identify where AI can be most effective and incorporate it; instead, the steps should be to:

  1. identify where partners and third parties are using AI and putting your organization at risk
  2. identify where employees might be using unapproved web-based AI applications and put a stop to it
  3. identify where your SaaS providers are not only using, but introducing, AI into their applications after purchase and delivery and ensure that any utilization is bounded, tested, and properly constrained to prevent risk

Then, instead of unbounded AI, identify appropriate automation technologies that can be properly configured, integrated, and managed as part of an enterprise stack. And reap the rewards while your competitors deal with risks.

Do you want to get analytics and AI right? Don’t hire a F6ckW@d from a Big X!

Note the Sourcing Innovation Editorial Disclaimers and note this is a very opinionated rant!  Your mileage will vary!  (And not about any firm in particular.)

Now, I’m going to upset a lot of people with this, but I don’t care, because the linked article below is literally the best article I have ever read on why you should NOT hire F6ckW@ds from Big X (or any other) Consulting Firms who claim to be analytics and AI experts when they don’t actually know

  • the difference between a mathematical formula to calculate the center of gravity of a falling object and to calculate the median spend in a category
  • proper software architecture
  • proper compute resource allocation
  • your business
  • the difference between real ML technology, RPA and a few formulas, and the current Gen-“AI” where the “AI” stands for artificial idiocy

because

  • you’ll spend 3 years and millions of dollars to implement something that should take 3 to 6 months
  • you’ll spend hundreds of thousands on big vendor software licenses you don’t need
  • you’ll spend hundreds of thousands on compute power you don’t need

After all, these guys and gals get paid by the hour and the commission on the resell license is a percentage of the total price they convince you to pay for it. So, the longer the project takes and the more licenses and compute power they sell …
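The jab above about knowing a center-of-gravity formula from a median spend calculation is easy to make concrete: the first is a weighted mean, the second is an order statistic, and anyone who reaches for an average when the data calls for a ranking will be badly misled by a single outlier invoice. A minimal sketch (the spend figures are made up for illustration):

```python
from statistics import mean, median

# Center of gravity (1-D): a weighted mean of positions by mass.
def center_of_gravity(masses, positions):
    return sum(m * x for m, x in zip(masses, positions)) / sum(masses)

# Median spend in a category: an order statistic, NOT an average.
def median_spend(invoices):
    return median(invoices)

spend = [100, 120, 150, 200, 5000]  # one outlier invoice skews the mean
print(center_of_gravity([2, 3], [0.0, 10.0]))  # 6.0
print(mean(spend))          # 1114: distorted by the outlier
print(median_spend(spend))  # 150: representative of typical spend
```

Note how one outlier drags the mean above 1,100 while the median stays at a representative 150; knowing which formula applies to which problem is exactly the kind of basic competence being questioned here.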

Read the linked article. Twice. And then tape it up to your fridge. The situation described in the article is NOT the exception. As a former CTO and 25-year consultant/analyst, I know this is the norm!


I Accidentally Saved Half A Million Dollars
 

Now, if you’re wondering how to tell who is a F6ckW@d and who’s not when it comes to analytics and AI at the Big X, I’m sorry to say that it’s not so easy (especially when it only takes a few bad apples to spoil the bunch, and while the good firms will do mandatory annual pruning of the consulting tree to weed those bad apples out, you don’t want to be the unlucky client who gets one on your project).

It used to be that if they had been there for more than a year or two, there was a possibility that they weren’t good, or at least not as good as they claimed to be, especially if they were junior, right out of school, with no real experience. This was because, first of all, tech talent wants to go either to the big glorious tech firms (Alphabet, Meta, etc.) or the wild-west startup frontier, and the big consultancies were the backup until they gained enough experience to move on.

Thus, the real talent in tech and analytics, who didn’t get promoted quickly in the Big X, usually didn’t stay long before they moved on to specialist firms where they felt they were more respected, higher up, could control the projects, and, more importantly, being higher up, were higher paid.

(Tech/Analytics people take pride in their work [and not their title], and seek the job that gives them the most pride. Also, good tech/analytics people won’t contradict managers because they want to feel important; they will only contradict managers because they want the job done right. But the reality is that junior people and new hires often have the impression that this is discouraged in a larger firm [even if it’s not], where you are supposed to learn from and follow your manager’s lead because you don’t see the big picture, and so they may not speak up about the way a project is being approached when they are unsure. They might think they’re wrong and should stay quiet, but they don’t learn if they don’t ask.)

However, now that all the big firms are acquiring mid-market experts, with some of the Big X acquiring 3 or 4 specialist plays in analytics and AI over the past couple of years, it’s much harder to tell whether you are getting the best talent or not. You have to vet every candidate. Not the Big X. YOU!

And you need to remember that some of this AI and analytics stuff is literally so complicated that you need degrees in mathematics and computer science and sometimes a decade of experience to get it right! (It took the doctor two advanced degrees and building advanced analytics and optimization systems for multiple leading companies in the 2000s before he really understood the art of the possible and, more importantly, what was relevant for an industry and what was not.)

In other words, it’s okay if you don’t really get it as a manager. Just find those one or two people who do who you can trust, pay them well, and let them do what they need to make your department look good (be it hire internally, choose a consulting firm you never heard of, hire former colleagues on short-term contracts, use their contacts to get the right person at the Big X, etc.).

They’ll get the job done right and be quite happy to let you take all the credit IF you give them regular raises and a bonus any time they do particularly well. Just put your ego aside and let the people who get it make the tech/analytics decisions, and everyone will win!

But, whatever you do, don’t throw a poorly formed advanced analytics and AI project description over the wall to a Big X (or any other vendor) and expect good results.

If you don’t know what you need, why you need it, and how you expect to get it, then focus instead on what you understand, and use the Big X firm for all of the things you know it is good at, understands implicitly, and has the history and experience to figure out simply based on the type of company you are. Used appropriately, like any service provider, a Big X can deliver amazing value. See the linked article on when you should use a Big X, in our opinion.

Fail Fast And Forward? How About Not Failing At All?

A recent article over on The Sourcing Journal indicated that one should Fail Fast and Fail Forward When Implementing AI into Workflows. WTF? Why fail at all? Especially since if you’re using AI where you are expecting a high risk of failure, there’s no reason to expect that you’ll only fail once, or that you can actually fail forward.

Now, if we were talking traditional ML, where it’s just a matter of continually expanding and refining the model and training data, tweaking the parameters, and starting small, then fail fast, fail forward, get it working, use the spice weasel, knock it up another notch, and continue until you have automation across the platform in appropriate places, it would be good advice.

But when we are talking full-fledged Gen-AI (which is the article’s focus), based on massively large and entirely unpredictable LLMs or super-sized DNNs, you can fail fast, but, with absolutely no way to control the models, you can’t fail forward. So while fail fast and fail forward is a good motto in general for technology, process digitization, and automation, as long as you take things step by step and control the risk, it’s not appropriate at all when we are talking about AI!