Category Archives: AI

9% of Companies Claim To Be Ready to Manage Risks Posed by AI? Bull Crap.

the doctor could not believe the recent headline in Forbes that said only 9% of surveyed companies are ready to manage risks posed by AI. Because there is no way that 9% of companies are ready to manage the risks posed by AI. There’s no way even 0.9% of companies are ready to manage the risks posed by AI.

Why? Because of the rampant introduction of massive LLMs and DNNs that no one understands, for which I’m sure we’ve yet to see the last of the abysmal failures, hallucinations, and suicide coaxing. There’s simply no way we can even begin to predict all of the potential errors they are going to make, the risks they are putting us under, the repercussions if those errors are made and risks materialize, and how the risks can be minimized, if not mitigated. No way whatsoever.

Not only is it theoretically impossible to be fully prepared, but when you consider that the average organization is not even equipped to handle regular software failures, how can the average organization expect to handle a software-based AI failure it can’t even predict?

The article, which quoted a recent study by RisKonnect, did quote some very useful statistics on areas of concern. (RisKonnect is obviously able to detect and protect against most types of risk with its own platform, and maybe that’s why it is so confident it can protect and defend against AI risks, but RisKonnect is built for traditional enterprise and third-party risk, not cyber risk, and definitely not AI risk; no one can protect against a risk when they don’t even know what the risk is.) Specifically, of the companies surveyed:

  • 65% are concerned about data and cyber,
  • 60% are worried about employees making decisions on erroneous information,
  • 55% are worried about employee misuse and ethical risk,
  • 34% are worried about copyright and intellectual property, and
  • 17% are worried about discrimination risk.

The risks are the right risks, and the order of priority is about the right order, but the percentage of companies concerned is much too low.

1. 100% of companies should be concerned about data and cyber. Not only are we in the age of state-sponsored hacking, which makes any company with useful confidential designs and information a target, but with almost all significant commerce being conducted online, all companies are a target for financial fraud.

2. 100% of companies that need to make decisions based on data analysis should be concerned about erroneous information, as all companies have bad data, and the bigger the company, the worse the data.

But none of these match the risks of AI. As per the quote in the article from Caitlin Begg, an over-reliance on AI can risk robotic, insensitive, spammy, or off-topic messaging, and that’s just the beginning. As noted, most companies haven’t simulated their worst-case scenario, and since one can’t even predict what that is with AI, they aren’t even close to ready. It’s not just another application in the organization’s tech stack, even though the article seemed to indicate it is. One can prioritize transparency, accountability, threat and vulnerability monitoring, and risk mitigation, but when most AI applications can’t explain their actions, aren’t accountable the way humans are, have no realistic threat and risk assessments, and there is no way to mitigate risk except not to use the technology in the first place for any decision that should be made by a HUMAN, it’s just not enough.

The precautionary steps are not to identify where AI can be most effective and incorporate it; instead, the steps should be to

  1. identify where partners and third parties are using AI and putting your organization at risk
  2. identify where employees might be using unapproved web-based AI applications and put a stop to it
  3. identify where your SaaS providers are not only using, but introducing, AI into their applications after purchase and delivery and ensure that any utilization is bounded, tested, and properly constrained to prevent risk

Then, instead of unbounded AI, identify appropriate automation technologies that can be properly configured, integrated, and managed as part of an enterprise stack. And reap the rewards while your competitors deal with risks.
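Step 2 above can at least be partially automated. The following is a minimal sketch (not a complete solution), assuming you can export proxy logs as (user, domain) pairs; the domain blocklist is a hypothetical starting point, not an authoritative list:

```python
# Minimal sketch of step 2: flag employees hitting unapproved web-based AI
# tools, assuming proxy logs are available as (user, domain) pairs.
# AI_DOMAINS is a hypothetical blocklist you would maintain yourself.
AI_DOMAINS = {"chat.openai.com", "gemini.google.com", "claude.ai"}

# Hypothetical sample of proxy log entries.
proxy_log = [
    ("alice", "chat.openai.com"),
    ("bob", "intranet.example.com"),
    ("alice", "claude.ai"),
]

# Collect, per user, the unapproved AI domains they visited (in log order).
flagged = {}
for user, domain in proxy_log:
    if domain in AI_DOMAINS:
        flagged.setdefault(user, []).append(domain)

print(flagged)  # {'alice': ['chat.openai.com', 'claude.ai']}
```

It won’t catch everything (VPNs, personal devices), but it gives you a starting list of conversations to have.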

Do you want to get analytics and AI right? Don’t hire a F6ckW@d from a Big X!

Note the Sourcing Innovation Editorial Disclaimers and note this is a very opinionated rant!  Your mileage will vary!  (And not about any firm in particular.)

Now, I’m going to upset a lot of people with this, but I don’t care because the linked article below is literally the best article I ever read on why you should NOT hire F6ckW@ds from Big X (or any other) Consulting Firms who claim to be analytics and AI experts when they don’t actually know

  • the difference between the mathematical formula for the center of gravity of a falling object and the one for the median spend in a category
  • proper software architecture
  • proper compute resource allocation
  • your business
  • the difference between real ML technology, RPA and a few formulas, and the current Gen-“AI” where the “AI” stands for artificial idiocy

because

  • you’ll spend 3 years and millions of dollars to implement something that should take 3 to 6 months
  • you’ll spend hundreds of thousands on big vendor software licenses you don’t need
  • you’ll spend hundreds of thousands on compute power you don’t need

After all, these guys and gals get paid by the hour and the commission on the resell license is a percentage of the total price they convince you to pay for it. So, the longer the project takes and the more licenses and compute power they sell …
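For contrast, the “median spend in a category” item in the list above really is trivial; here is a minimal sketch, assuming nothing more than a list of hypothetical (category, amount) spend records:

```python
from collections import defaultdict
from statistics import median

# Hypothetical spend records: (category, amount) pairs.
spend = [
    ("office", 120.0), ("office", 80.0), ("office", 300.0),
    ("travel", 1500.0), ("travel", 900.0),
]

# Group amounts by category, then take the median of each group.
by_category = defaultdict(list)
for category, amount in spend:
    by_category[category].append(amount)

median_spend = {cat: median(amounts) for cat, amounts in by_category.items()}
print(median_spend)  # {'office': 120.0, 'travel': 1200.0}
```

If a consultant can’t produce something like this on a whiteboard, they have no business selling you an analytics program.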

Read the linked article. Twice. And then tape it up to your fridge. The situation described in the article is NOT the exception. As a former CTO and 25-year consultant/analyst, I know this is the norm!


I Accidentally Saved Half A Million Dollars
 

Now, if you’re wondering how to tell who is a F6ckW@d and who’s not when it comes to analytics and AI at the Big X, I’m sorry to say that it’s not so easy (especially when it only takes a few bad apples to spoil the bunch, and while the good firms will do mandatory annual pruning of the consulting tree to weed those bad apples out, you don’t want to be the unlucky client who gets one on your project).

It used to be that if they were there for more than a year or two, there was a possibility that they were not that good, or at least not as good as they claimed to be, especially if they were junior, right out of school, with no real experience. This was because, first of all, tech talent wants to go either to the big glorious tech firms (Alphabet, Meta, etc.) or the wild-west startup frontier, and big consultancies were the backup until they got enough experience to move on.

Thus, the real talent in tech and analytics, who didn’t get promoted quickly in the Big X, usually didn’t stay long before they moved on to specialist firms where they felt they were more respected, higher up, could control the projects, and, more importantly, being higher up, were higher paid.

(Tech/Analytics people take pride in their work [and not their title], and seek the job that gives them the most pride.  Also, good tech/analytics people won’t contradict managers to feel important; they will only contradict managers because they want the job done right.  But the reality is that junior people or new hires often have the impression that speaking up is discouraged in a larger firm [even if it’s not], where you are supposed to learn from and follow your manager’s lead because you don’t see the big picture, and so they may not speak up on the way a project is being approached when they are unsure.  They might be wrong, and worry they should stay quiet, but they don’t learn if they don’t ask.)

However, now that all the big firms are acquiring mid-market experts, with some of the Big X acquiring 3 or 4 specialist plays in analytics and AI over the past couple of years, it’s much harder to differentiate if you are getting the best talent or not.  You have to vet every candidate.  Not the Big X.  YOU!

And you need to remember that some of this AI and analytics stuff is literally so complicated that you need degrees in mathematics and computer science and sometimes a decade of experience to get it right! (It took the doctor two advanced degrees and building advanced analytics and optimization systems for multiple leading companies in the 2000s before he really understood the art of the possible and, more importantly, what was relevant for an industry and what was not.)

In other words, it’s okay if you don’t really get it as a manager. Just find those one or two people who do, whom you can trust, pay them well, and let them do what they need to make your department look good (be it hire internally, choose a consulting firm you never heard of, hire former colleagues on short-term contracts, use their contacts to get the right person at the Big X, etc.).

They’ll get the job done right and be quite happy to let you take all the credit IF you give them regular raises and a bonus any time they do particularly well. Just put your ego aside and let the people who get it make the tech/analytics decisions, and everyone will win!

But, whatever you do, don’t throw a poorly formed project description over the wall in advanced analytics and AI to a Big X (or any other vendor) and expect good results.

If you don’t know what you need, why, and how you expect to get it, instead focus on what you understand and use the Big X firm for all of the things you know it is good at, understands implicitly, and has the history and experience to figure out simply based on the type of company you are.  Used appropriately, like any service provider, a Big X can deliver amazing value.  See the linked article on when you should use Big X in our opinion.

Gartner Inadvertently Makes the Case for NO AI in Supply Chains (which includes Source to Pay)

Gartner, which promotes the use of Generative AI in customer service, even though it did place Generative AI on the Peak of Inflated Expectations on the Hype Cycle for Emerging Technologies, just inadvertently made the best case for never, ever, ever using AI anywhere in the supply chain, including Source-to-Pay, and we love it!

In a press release on their newsroom in late September, where Gartner Says 80% of Supply Chain Not Accounted for in Current Digital Decision Models, the subheading clearly stated that Digital-to-Reality Gap Shows Current Technology Use Fails to Improve Outcomes for Supply Chain Decision Makers.

As a result of this “digital-to-reality” gap, Gartner’s research, based on an analysis of 600 survey responses from supply chain decision makers, not only found that current use of digital models to analyze trade-offs made no meaningful impact on the rate of good decision outcomes, but actually found that the use of digital tradeoff analysis marginally increased the percentage of bad decision outcomes. Moreover, “More than half of supply chain leaders reliant on digital technology to make a recent strategic decision told us that they felt they would have landed on better decision outcomes without the use of their models, and our analysis suggests that they are correct.”

In other words, if source-to-pay and supply-chain decision makers cannot even make decisions when relying on traditional, focussed, machine learning and modelling technology, there’s no chance an unpredictable probabilistic incarnation of Artificial Idiocy that randomly changes its output by the millisecond is going to make good decisions. And the reason is the same — just like traditional (guided) (machine learning) models require good data and a digital representation that covers the majority (if not the entirety) of the process and relevant variables, so do Generative AI models and, in just about every organization on the planet, this necessary digital representation DOES NOT EXIST!

As a result, applying AI without the data it needs to have even a snowball’s chance in h3ll to make a decision is pretty much guaranteed to lead you to worse decisions than you, or any other intelligent human with a decent understanding of the situation, will make without the use of any technology whatsoever.

You don’t need AI; you need end-to-end process modelling, data collection, data enrichment, data validation, and the ability to use those end-to-end digital tools, interpret the data and recommendations, and make good decisions off of that. And since, at the current rate of digitization, it’s unlikely the majority of organizations will go from 20% supply chain digitization to 80% supply chain digitization (which is the minimum level of digitization you should have before even considering any AI, even for inconsequential decisions) by the end of the next decade, you should not even have AI for decision making on your future roadmap before the next decade rolls around.
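The data-validation step named above doesn’t require anything exotic, either. A minimal sketch of a validation gate, assuming hypothetical spend records as dicts with supplier, amount, and currency fields (the field names and currency list are illustrative assumptions, not a standard):

```python
# Minimal sketch of a data-validation gate: every record must pass
# these checks before it is allowed into any analytics or model.
VALID_CURRENCIES = {"USD", "EUR", "GBP", "CAD"}  # hypothetical whitelist

def validate(record: dict) -> list[str]:
    """Return a list of problems; an empty list means the record passes."""
    problems = []
    if not record.get("supplier"):
        problems.append("missing supplier")
    amount = record.get("amount")
    if not isinstance(amount, (int, float)) or amount <= 0:
        problems.append("non-positive or missing amount")
    if record.get("currency") not in VALID_CURRENCIES:
        problems.append("unknown currency")
    return problems

records = [
    {"supplier": "Acme", "amount": 250.0, "currency": "USD"},
    {"supplier": "", "amount": -10, "currency": "XX"},
]
clean = [r for r in records if not validate(r)]
print(len(clean))  # 1
```

Boring? Yes. But it is exactly this kind of boring, deterministic gate, applied end to end, that has to exist before any model, AI or otherwise, can be trusted.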

the doctor doesn’t say this often, but thank you, Gartner. (Because it really is the case that stupid is as stupid does.)

The 1-Step Guide to Responsible AI in Procurement

Forbes recently published an article on Responsible AI Procurement: A Practical Guide For Selecting Trustworthy AI Vendors. It wasn’t bad, but it missed the point.

Today, there’s only one way to responsibly address AI in Procurement.

JUST SAY NO!

1) We don’t really understand proper AI Governance (especially when most vendors are using third parties which are illegally scraping content, not checking for bias, and tweaking models on the fly without consideration for the new problems the on-the-fly tweaks will cause).

Plus, it’s not just ethical codes of conduct, it’s agreeing on what the ethics are, and, most importantly, making sure the models are transparent and unbiased — but we don’t know how to do that today, especially since all these models are huge black box models.

2) You can demand all the evidence you want from the vendor as backup for the vendor claims, but if you can’t verify it, how can you trust it?

3) These models require huge datasets to train. Even if you know the data set used and the processing method used, how can you be sure every element was properly vetted? Just like one bad apple can spoil the bunch, one bad element in a clustering or optimization model can spoil the entire model. Just one!  And it only takes a small amount of bad data to spoil a model, regardless of the model used.

4) These models can fail, and sometimes fail spectacularly. If you don’t understand the model, you don’t understand where it can fail, and thus what to look for. Also, many minor incidents (which can foretell future catastrophic failures) will go unnoticed if a human isn’t checking everything.

5) These models are not secure … the AI can leak any training data at any time without warning. Your vendor can have every security certification under the sun, and all will be for naught if they use LLMs.
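On point 3, the “one bad element” claim is easy to demonstrate with a toy example: a single mis-keyed value drags a mean-based cluster centroid far away from every real data point it is supposed to represent. (The order values below are hypothetical; the same effect hits any mean-based statistic or centroid.)

```python
from statistics import mean

# Hypothetical cluster of "normal" order values, all near 100.
clean = [98, 101, 99, 102, 100]

# One bad element: a mis-keyed entry with two extra zeros.
dirty = clean + [10000]

print(mean(clean))  # 100
print(mean(dirty))  # 1750 -- the centroid is now nowhere near any real point
```

One element in six, and the centroid is off by a factor of more than seventeen. Now imagine that in a training set of billions of elements no human has ever vetted.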

So, JUST SAY NO!

Yes, McKinsey, This Is Generative AI’s Breakout Year, BUT:

We should NOT be celebrating the fact that it broke out of the prison it should be contained in only to:

So, even if your Global Survey confirms the explosive growth of AI, you should not be celebrating Generative AI’s breakout year; hold off celebrating until someone manages to put this destructive brain-dead genie we’ve unleashed back into the bottle it was released from!