Category Archives: Technology

Another “think tank” article on digitizing procurement that’s off-the-mark!

A recent article in Supply Chain Brain noted that you should be seizing the opportunity for digitizing procurement and the doctor completely agrees. Nothing should be paper based in Procurement today. There’s no excuse for it.

And yes, multiple developments in supply chain are converging to create an unprecedented digital opportunity for procurement professionals. Furthermore, since procurement teams are in a position to reshape how they work and create value across the supply chain, if you work on mastering and combining emerging and maturing technologies in strategic ways, you can revolutionize Procurement and business performance.

But digitizing, by definition, means moving processes from scrolls to systems, from the dark basement to the illuminated screens. It DOES NOT mean that:

  • you use Gen-AI or even machine learning
    there may be tasks where you apply point-based ML, but that comes after the digitization of an appropriate process
  • you use cognification to illuminate (concealed) processes
    especially when it could illuminate that you should never have digitized the process in the first place
  • you accelerate workflow through automation
    you automate what you can, and while that includes the acceleration of tactical paperwork processing and thunking, sometimes humans have to step back and think about the data received, insights produced, and options available before making a decision … you don’t accelerate the time it takes a human to make a good decision (and, instead, focus on automating and accelerating any non-strategic tactical “thunking” tasks that prevent them from focussing their brain power where it’s really needed)
  • you go straight to content personalization
    when the users might not even know how to use the baseline systems (and, in the process, create a nightmare for the support personnel)

Digitizing Procurement starts by:

  • understanding what processes you are using now
  • understanding if they are appropriate or they should be optimized
  • identifying off-the-shelf best-of-breed modules, mini-suites, suites, and/or
    intake-to-orchestrate platforms and implementing them
  • identifying key points where RPA, ML, or other advanced techs can make the process even more efficient
  • then identifying the right advanced tech to use

Not starting with it. You should never try to run a race before you can walk. The only “impactful opportunity” identified in the article you should start with is

  • adopting ecosystem thinking to enhance data

At the end of the day, nothing works well without good data. So get the data right, and everyone aligned to get the data right, and that will get you further, and help you do better, than any piece of modern tech you can try to throw at the problem.

Darkbeam: Shining a Light on your Supply Base Cyber Risk

In part 9 of our Source-to-Pay+ series, we talked about the need for cyber risk monitoring and prevention because, in today’s hyper-connected SaaS world, nearly half of an organization’s data breaches originate in the cloud. These risks don’t just come from cyber criminals. Some come from less-than-scrupulous employees and others come from suppliers, even well-meaning ones. After all, who cares if the front door is locked when the back door is wide open?

Why do you care about your supplier’s back door? What do cyber-criminals want?

  • money
  • valuable intellectual property
  • exploitable personal data

Where can they get this?

  • account hacking, which is hard, or payment redirection, which is a lot easier
  • your ultra-secure server, which is locked down tighter than Fort Knox with everything on it encrypted with 256-bit AES, or the relatively unprotected Google Drive your supplier stores it on (as the file will be open to anyone who can compromise the account)
  • your double-encrypted HR database stored in a secure AWS instance, or the plain-text Microsoft Word documents stored on the supplier’s sales rep’s laptop with its unencrypted hard drive and an utter lack of virus protection and internet security software

In other words, if your supplier has:

  • a lot of your money coming its way
  • your intellectual property
  • your executives’ personal data

and their cybersecurity is not as good as yours, you can be sure the cybercriminals are going to be going to, and through, them to get to you.

So you need to know which of your suppliers are at risk, so you can reach out to them and work with them to close the holes and eliminate the risks to them, and you. And for suppliers that you do significant business with (and regularly send million-dollar payments), who hold your patented IP (for custom manufactured electronics, etc.), or store your employees’ and/or customers’ HR data, you need to not only assess their vulnerabilities but continuously monitor for threats.

You need a supplier vulnerability assessment and monitoring solution that can identify vulnerabilities, help you communicate those to your supplier, detect improvements, and, most importantly, identify new threats as they emerge that could cost you, or your supplier, significantly.

Darkbeam is one of these solutions. The Darkbeam solution offers both of these capabilities: continuous vulnerability monitoring across your entire supply base (at a very affordable price point that starts at a mere £25,000 a year, which is low-end for any cybersecurity solution) and continuous threat monitoring, and assessment, of critical suppliers in your supply base (which you can add for an incremental cost that can be as low as £10,000 a year for your ten most critical suppliers).

The vulnerability assessment solution monitors:

  • Connections: SSL certificates and associated validations (hosts, IP, TLS, etc.)
  • Privacy: e-mail and cloud servers and configurations and breaches (esp. email addresses)
  • HTTPS: web site configuration, cookies, and port security
  • DNS: DNS record completeness, security, and recent changes
  • Blacklist: domain and email blacklist monitoring
  • Exposure: shared host identification, domain permutation monitoring, favicon, exposed subdomain monitoring, etc.
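To make the Connections line concrete: a big part of certificate monitoring is simply tracking expiry. Here is a minimal sketch, using only Python’s standard library, of the kind of check such a monitor runs; the function name and the 30-day threshold are our own illustration, not Darkbeam’s implementation.

```python
import ssl
from datetime import datetime, timezone

def cert_days_remaining(not_after, now=None):
    """Days until a certificate expires, given the 'notAfter' field exactly
    as ssl.SSLSocket.getpeercert() returns it, e.g. 'Jun  1 12:00:00 2030 GMT'."""
    expiry = datetime.fromtimestamp(ssl.cert_time_to_seconds(not_after), tz=timezone.utc)
    now = now or datetime.now(timezone.utc)
    return (expiry - now).days

# flag anything expired or expiring within 30 days
check_date = datetime(2030, 5, 15, tzinfo=timezone.utc)
days_left = cert_days_remaining("Jun  1 12:00:00 2030 GMT", now=check_date)
print(days_left, days_left < 30)  # 17 True
```

A real monitor would pull the certificate over the wire for every supplier domain on a schedule; the parsing and thresholding above is the easy part.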

Cyber-weakness in each of these areas is highly relevant because it could allow hackers and cyber-criminals to exploit your supplier, and you, in ways that include, but are not limited to, the following:

  • an expired SSL certificate could allow a cybercriminal to register a fake certificate that validates a fraudulent facsimile of the actual site
  • exposed email accounts could allow a cybercriminal to masquerade as a supplier representative and change banking details for payment
  • an insecure site configuration could provide a backdoor into your entire network
  • incomplete DNS records could be completed by a cybercriminal and redirect traffic to a fraudulent site
  • if a domain shows up on a blacklist it could prevent email/traffic to/from the domain; and if emails show up on a blacklist, it could indicate compromised emails and/or emails not being received by their intended recipients
  • if a supplier’s website is on a shared host that is used by a lot of other sites (that are insecure), a number of (one-character-off) permutations of the supplier’s domain have been registered, favicons are being replicated, etc. then that is a strong sign the supplier is being targeted by cyber criminals (that could be coming for you, or your customers, through them)
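Those one-character-off domain permutations are trivial to enumerate, which is exactly why monitoring for them matters. A sketch of a naive generator (our own illustration, not Darkbeam’s algorithm) shows how small the search space a squatter needs to cover really is:

```python
import string

def one_char_permutations(domain):
    """Every domain that differs from `domain` by exactly one substituted
    character in the name part (the TLD is left untouched)."""
    name, _, tld = domain.partition(".")
    variants = set()
    for i, original in enumerate(name):
        for c in string.ascii_lowercase + string.digits + "-":
            if c != original:
                variants.add(f"{name[:i]}{c}{name[i+1:]}.{tld}")
    return variants

lookalikes = one_char_permutations("acme.com")
print(len(lookalikes))            # 4 positions x 36 substitutions = 144
print("acne.com" in lookalikes)   # True
```

A four-letter name yields only 144 substitution lookalikes (insertions, deletions, and homoglyphs add a few hundred more), so registering and watching the whole set is cheap for both attackers and defenders.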

Based on its assessment, Darkbeam will compute a cyber-risk score (out of 999); the lower the better, and the higher the more concerned you should be (and the sooner you should reach out to your [potential] supplier to have a conversation about what they are doing to increase their cybersecurity, especially if they have, or will have, your IP or personnel data).

The threat monitoring and assessment solution is a service-based solution where the Darkbeam cyber-intelligence team continuously monitors the web and dark web for potential threats, investigates those threats when they are detected, and, if the threats are relevant, sends you a report on which you can take immediate action, which can include, but is not limited to, involving the proper authorities (whom they have experience working with in multiple countries).

They literally monitor dozens of legit security and threat-intelligence sites (where general cyber security firms release warnings of cloud or software insecurity along with known breaches) as well as dozens of dark-web sites where shady characters like to sell, or at least indicate the presence of, IT, Trade, and Finance secrets they should not have. On many occasions, they have detected breaches and data theft even before the supplier’s IT team knew about them (and definitely well before you did, if you were ever told).

If an incident or threat is detected, the threat report you receive will outline the issue (e.g. data exposure / breach), the root cause (e.g. system breach, ransomware, etc.), when it was detected, how it was confirmed, and what is currently being done / monitored. It will then outline the perceived severity (e.g. medium due to potential IP leakage, high due to personal data likely being stolen) as well as any potential follow-on risks (e.g. personal logins that can compromise other systems). It will summarize the currently known information uncovered by the analysts and the current status (which could be ongoing). And it will provide current recommendations, such as reaching out to the supplier, changing logins and/or locking down your systems, reaching out to various agencies, etc.

All in all, Darkbeam is a great Supply Chain Cybersecurity solution and should be on your consideration list if you don’t have such a solution already. Cyber attacks are coming, and it’s better to be ahead of the issue than behind it.

Thank you Vladimir Putin!

Thank you Vladimir Putin for saying what needed to be said.

(Open/Gen-) AI is dangerous. Very dangerous! And something needs to be done about it!

Humanity has to consider what is going to happen due to the newest developments in genetics or in AI. One can make an approximate prediction of what will happen. Once mankind felt an existential threat coming from nuclear weapons, all nuclear nations began to come to terms with one another since they realized that negligent use of nuclear weaponry could drive humanity to extinction.

It is impossible to stop research in genetics or AI today, just as it was impossible to stop the use of gunpowder back in the day. But as soon as we realize that the threat comes from unbridled and uncontrolled development of AI, or genetics, or any other fields, the time will come to reach an international agreement on how to regulate these things.

Transcript

I don’t know about you, but with respect to what has been advertised, these are the six variants of Open/Gen-AI the doctor sees:

Gender/Race-Biased: especially in HR; it’s trained on “good resumes”, but when those “good resumes” were selected from a pool of hired candidates who have predominantly been white men, guess what the AI looks for?

Hallucinatory: too many stories to track now of AI creating fake summaries of fake articles by fake authors for which it created fake profiles; lawyers have fallen for this multiple times!

Harmful/Hateful: train it on open data which contains hate speech and, just like a kid exposed to its first profanity, it mimics … non-stop

Murderous: multiple examples of self-help chat systems literally telling people to kill themselves (and then a few examples of people actually doing this) as well as self-driving systems ignoring the “shadows” of what were people RIGHT in front of them

Sleeper: the newest threat, sleeper behaviour that can go undetected for days, months, or years until a specific date or phrase is entered (in combination); the perfect sleeper agent!

Thieving: not only are these open AI plays generally trained on stolen data, but since all your queries and outputs are directly used by (or indirectly influence) the network, they steal your data (even when the designers didn’t set out to do so)

Roughly Half a Trillion Dollars Will Be Wasted on SaaS Spend This Year and up to One Trillion Dollars on IT Services. How Much Will You Waste?

Before we continue, yes, that is TRILLION, numerically represented as 1,000,000,000,000, repeated twice in the title and yes we mean US (as in United States of America) dollars!

Gartner projects that IT spend will surpass 5 Trillion this year. When you consider that 30% of IT spend is usually for software, and that one third (or more) of software spend is wasted (for unused licenses, which is why we have a whole category of IT and SaaS specialists that analyze your out-of-control SaaS and software spend and typically find 30% to 40% overspend in a few days), that means that roughly half a trillion dollars will be wasted on software this year.
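The arithmetic behind the headline number, so you can plug in your own organization’s figures (the shares are the rough estimates above, not precise Gartner data):

```python
it_spend = 5.0e12          # Gartner's projected total IT spend (USD)
software_share = 0.30      # rough share of IT spend that goes to software
wasted_share = 1 / 3       # rough share of software spend that is wasted

software_spend = it_spend * software_share      # 1.5 trillion on software
software_waste = software_spend * wasted_share  # 0.5 trillion wasted

print(f"${software_waste / 1e12:.2f} trillion wasted on software")  # $0.50 trillion
```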

Even worse, Gartner projects that spending on IT Services will reach 1.5 Trillion. And the waste here could be two thirds! Now, we all know that you need IT services to implement, integrate, and maintain those IT systems you buy. But how much do you need? And how much should you pay? Consider that if an intermediate software developer should be making 150K a year (or 75/hour), then an intermediate implementation specialist shouldn’t be making any more than that, and shouldn’t be billed at more than 3 times that (or 225/hour). But how much are you being billed for a relatively inexperienced implementation consultant, with maybe a few years of overall experience and maybe six months on the system that you are installing? the doctor knows that rates of $300 to $500 are not uncommon for these resources that are oversold and overcharged for.
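The rate math above, assuming a standard 2,000-hour billable year (the work-year figure is our assumption):

```python
annual_salary = 150_000       # intermediate developer salary, per the post
billable_hours = 2_000        # assumed standard billable year
hourly_cost = annual_salary / billable_hours  # 75.0 per hour
max_fair_rate = 3 * hourly_cost               # 225.0 per hour, billed

for billed_rate in (300, 500):
    overcharge = (billed_rate - max_fair_rate) / max_fair_rate
    print(f"${billed_rate}/hr is {overcharge:.0%} above a ${max_fair_rate:.0f}/hr ceiling")
```

At $300/hour you are a third above that ceiling; at $500/hour, more than double it.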

But this isn’t the worst of it. As per our upcoming article Fraud And Waste Are Not The Same Thing, many implementation “partners” will try to get all they can get and make sure that when you go in for a penny, you go in for a pound and they will push for:

  • frequent change orders during implementation, usually billed at excessively high day rates as they have to “divert resources” or “work overtime”
  • unnecessary customizations or real-time integrations that are an extensive amount of work (and cost) when out-of-the-box or daily flat-file synchs are more than sufficient
  • extensive “process evaluation” or “process transformation” processes, well beyond what you need, that eat up consulting hours
  • extensive “best practice” education when your practices are good enough for now and/or those best practices are already encoded in the system you just bought and paid a pretty penny for and just following the default process gives you the same education

That will often double to triple the cost. But that’s not the worst of it. As per comments the doctor has made on LinkedIn, he regularly hears stories of niche providers losing 200K deals because customers said their quote was too low because all the Big X companies quoted over 1,000K for 100K worth of work. Literally. This is because, as the doctor has noted in previous posts and comments on LinkedIn:

  • they don’t have the talent in advanced tech (and even The Prophet has noted their lack of talent in areas of advanced tech in multiple LinkedIn posts, though he has been much more diplomatic than the doctor in discussing their lack thereof; he did note in a 2024 advice post that consultancies are going to have a hard time attracting talent this year) — for every area, they’ll have a team leader who’s a superstar, two or three handpicked lieutenants who are above average, and then 20 to 40 benchwarmers who are junior and not worth the rate they are charging
  • they have an incredible overhead — posh offices to house the partners making more than top lawyers who have a lifestyle to maintain
  • they don’t have the knowledge of, or experience in, modern tools — some of which are ten times more powerful than last generation tools; this, of course, means the Big X benchwarmers are using last generation tools which take ten times the manual labour to extract value from
  • etc.

There’s a reason the doctor said that if you want to get analytics and AI right, DON’T HIRE A F6CKW@D FROM A BIG X! and stands by it! Unless you want to pay 1K an hour, you’re not getting that one superstar resource trying to be the front end to two dozen projects that his three lieutenants are trying to manage, all of which are staffed by junior to intermediate individuals who can barely follow the three to five year old playbook.

There’s a reason that The Prophet predicted in his 9th prediction that SaaS Management Solutions [will] Start to Eat Services Procurement Tech and that many companies will go in house if they have tech expertise. Because he realizes that these consultancies will have a hard time not only hiring, but retaining, tech talent when they have hiring freezes, salary freezes, and reduced engagements as more and more companies can’t afford the ridiculous rates they’ve been charging recently. (Companies may not have had a choice during COVID where it was implement on-line collaboration and B2B tech or perish, but now they do.)

But there are still many companies who will, when they encounter a (perceived) tech need, immediately pick up the phone and call Accenture, CapGemini, Deloitte, McKinsey, etc. and bring them in to help them understand who to bring in for an engagement, instead of widening the net to niche providers who are 3 to 5 times cheaper, and who will deliver results at least as good, if not better.

Now, again, the doctor would like to stress that, despite how much he insists they are usually not the right solution for advanced tech implementation, the Big X are not all bad, and are sometimes worth more than the high fees they charge. Most of these companies started off as management/operational/finance/strategy consultants and grew big because they were among the best, and in certain domains, each of these companies still is. But being good at a few things doesn’t mean they are good at everything, and that’s very important to remember.

And while there will be exceptions to the rule (as every one of these companies has some tech geniuses), the reality is that when you need more bodies than there are talented bodies in an entire industry, you’re not going to get them and, because consultancies are not cool when you want to be a tech superstar (and join a startup that becomes a unicorn), the ratio of superstar to above average to average to below average talent in these organizations is much worse than in multinational tech companies (like Alphabet, Apple, Meta, Microsoft, etc.) where you know the majority of their employees are not the best of the best. (Because if they were the best of the best, there’s no way they’d lay off 10,000 employees at a time every time the market jitters.)

In short, manage that IT services spend carefully, or you’ll be double paying, triple paying, or worse and providing a big chunk of the roughly ONE TRILLION DOLLARS in IT services overspend that the doctor predicts will happen (again) this year. (Unless, of course, you agree with Doctor Evil who says, why make trillions when we could make … billions. Because that’s exactly what happens when you overpay for software and services. Don’t expect the Big X to say anything as they get the majority of that overspend, and that’s how they stay so [insanely] profitable.)

COUPA: Centralized Optimization Underlies Procurement Adoption …

… or at least that’s what it SHOULD stand for. Why? Well, besides the fact that optimization is only one of two advanced sourcing & procurement technologies that have proven to deliver year-over-year cost avoidance (“savings”) of 10% or more (which becomes critical in an inflationary economy because while there are no more savings, negating the need for a 10% increase still allows your organization to maintain costs and outperform your competitors), it’s the only technology that can meet today’s sourcing needs!

COVID finally proved what the doctor and a select few other leading analysts and visionaries have been telling you for over a decade — that your supply chain was overextended and fraught with unnecessary risk and cost (and carbon), and that you needed to start near-sourcing/home-sourcing as soon as possible in order to mitigate risk. Plus, it’s also extremely difficult to comply with human rights acts (which mandate no forced or slave labour in the supply chain), such as the UK Modern Slavery Act, California Supply Chains Act, and the German Supply Chain Act if your supply chain is spread globally and has too many (unnecessary) tiers. (And, to top it off, now you have to track and manage your scope 1, 2, and 3 carbon in a supply chain you can barely manage.)

And, guess what, you can’t solve these problems just with:

  • supplier onboarding tools — you can’t just say “no China suppliers” when you’ve never used suppliers outside of China, the suppliers you have vetted can’t be counted on to deliver 100% of the inventory you need, or they are all clustered in the same province/state in one country
  • third party risk management — and just eliminate any supplier which has a risk score above a threshold, because sometimes that will eliminate all, or all but one, supplier
  • third party carbon calculators — because they are usually based on third party carbon emission data provided by research institutions that simply produce averages for a region / category of products (and might overestimate or underestimate the carbon produced by the entire supply base)
  • or even all three … because you will have to migrate out of China slowly, accept some risk, and work on reducing carbon over time

You can only solve these problems if you can balance all forms of risk vs cost vs carbon. And there’s only one tool that can do this: Strategic Sourcing Decision Optimization (SSDO), and when it comes to this, Coupa has the most powerful platform. Built on TESS 6 — Trade Extensions Strategic Sourcing — which Coupa acquired in 2017, the Coupa Sourcing Optimization (CSO) platform is one of the few platforms in the world that can do this. Plus, it can be pre-configured out-of-the-box for your sourcing professionals with all of the required capabilities and data already integrated*. And it may be alone from this perspective (as the other leading optimization solutions are either integrated with smaller platforms or platforms with fewer partners). (*The purchase of additional services from Coupa or Partners may be required.)

So why is it one of the few platforms that can do this? We’ll get to that, but first we have to cover what the platform does, and more specifically, what’s new since our last major coverage in 2016 on SI (and in 2018 and 2019 on Spend Matters, where the doctor was part of the entire SM Analyst team that created the 3-part in-depth Coupa review, but, as previously noted, the site migration dropped co-authors for many articles).

As per previous articles over the past fifteen years, you already know that:

So now all we have to focus on are the recent improvements around:

  • “smart scenarios” that can be templated and cross-linked from integrated scenario-aware help-guides
  • “Plain English” constraint creation (that allows average buyers & executives to create advanced scenarios)
  • fact-sheet auto-generation from spreadsheets, API calls, and other third-party data sources;
    including data identification, formula derivation and auto-validation pre-import
  • bid insights
  • risk-aware functionality

“Smart Events”

Optimization events can be created from event templates that can themselves be created from completed events. A template can be populated with as little, or as much, as the user wants … all the way from simply defining an RFX Survey, factsheet, and a baseline scenario to a complete copy of the event with “last bid” pricing and definitions of every single scenario created by the buyer. Also, templates can be edited at any time and can define specific baseline pricing: last price paid by procurement, last price in a pre-defined fact-sheet that can sit above the event, and so on. Supplier lists can be fixed, all qualified suppliers that supply a product, all qualified suppliers in an area, or no suppliers (with the user pulling from recommended), and so on. In addition to predefining a suite of scenarios that can be run once all the data is populated, the buyer can also define a suite of default reports to be run, and even emailed out, upon scenario completion. This is in addition to workflow automation that can step the buyer through the RFX, auto-respond to suppliers when responses are incomplete or not acceptable, flag spreadsheets or documents uploaded with hacked/cracked security, and so on. The Coupa philosophy is that optimization-backed events should be as easy as any other event in the system, and the system can be configured so they literally are.

Also, as indicated above, the help guides are smart. When you select a help article on how to do something, it takes you to the right place on the right screen while keeping you in the event. Some products have help guides that are pretty dumb and just take you to the main screen, not to the right field on the right sub-screen, if they even link into the product at all!

“Plain English” Constraint Creation

Even though the vast majority of constraints, mathematically, fall into three/four primary categories — capacity/allocation, risk mitigation, and qualitative — that isn’t obvious to the average buyer without an optimization, analytical, or mathematical background. So Coupa has spent a lot of time working with buyers, asking them what they want, listening to their answers and the terminology they use, and created over 100 “plain English” constraint templates that break down into 10 primary categories (allocation, costs, discount, incumbent, numeric limitations, post-processing, redefinition, reject, scenario reference, and collection sheets) as well as a subset of the most commonly used constraints gathered into a “common constraints” collection. For example, the Allocation category allows for definition “by selection sheet”, “volume”, “alternative cost”, “bid priority”, “fixed divisions”, “favoured/penalized bids”, “incumbent allocations maintained”, etc. Then, when a buyer selects a constraint type, such as “divide allocations”, they will be asked to define the method (%, fixed amount), the division by (supplier, group, geography), and any other conditions (low-risk suppliers if by geography). The definition forms are also smart and respond to each sequential choice appropriately.
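To make the template idea concrete, here is a toy sketch (our own illustration, not Coupa’s data model) of how a “divide allocations” template might expand the buyer’s sequential choices into a structured constraint record a solver back-end could consume:

```python
def divide_allocations(method, amount, division_by, condition=None):
    """Expand the buyer's plain-English choices for a 'divide allocations'
    constraint into a structured record for a solver back-end."""
    assert method in ("percent", "fixed"), "method must be 'percent' or 'fixed'"
    assert division_by in ("supplier", "group", "geography")
    constraint = {
        "category": "allocation",
        "type": "divide_allocations",
        "method": method,
        "amount": amount,
        "division_by": division_by,
    }
    if condition:  # e.g. "low risk suppliers" when dividing by geography
        constraint["condition"] = condition
    return constraint

# "No supplier may receive more than 40% of the award"
c = divide_allocations("percent", 40, "supplier")
print(c["type"], c["amount"])  # divide_allocations 40
```

The point of the pattern is that the buyer only ever sees the sequential questions; the mathematical constraint is generated behind the scenes.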

Fantastic Fact Sheets

Fact Sheets can be auto-generated from uploaded spreadsheets, as the platform will automatically detect the data elements (columns), types (text, math, fixed response set, calculation), mappings to internal system / RFX elements, and records — as well as detect when rows / values are invalid and allow the user to determine what to do when invalid rows/values are detected. Also, if a match is not high certainty, the fact-sheet processor will indicate that the user needs to define the mapping manually, and the user can, of course, override all of the default mappings — and even choose to load only part of the data. These spreadsheets can live in an event, or live above the event and be used by multiple events (so that company-defined currency conversions, freight quotes for the month, standard warehouse costs, etc. can be used across events).
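As a rough illustration of the column-type detection step (our own heuristic, not Coupa’s), a detector only needs a few simple rules to separate math, fixed-response-set, and free-text columns:

```python
def infer_column_type(values):
    """Guess a fact-sheet column type from its raw cell values:
    'math' if every non-empty cell parses as a number, 'fixed' if the
    distinct values form a small response set, else free 'text'."""
    cells = [v for v in values if v not in ("", None)]
    try:
        [float(str(v).replace(",", "")) for v in cells]
        return "math"
    except ValueError:
        pass
    # a small set of repeated values suggests a fixed response set
    if len(set(cells)) <= max(2, len(cells) // 4):
        return "fixed"
    return "text"

print(infer_column_type(["1,200", "850", "", "990.5"]))  # math
print(infer_column_type(["EMEA", "APAC", "EMEA", "EMEA",
                         "APAC", "EMEA", "APAC", "EMEA"]))  # fixed
```

A production importer would layer on header matching, unit detection, and confidence scores, but the core inference really is this mechanical.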

But even better, Fact Sheets can be configured to automatically pull data in from other modules in the Coupa suite and from APIs the customer has access to, pulling in up-to-date information every time they are instantiated.

Bid Insights

Coupa is a big company with a lot of customers and a lot of data. A LOT of data! Not only in terms of the prices its customers are paying in their procurement of products and services, but in terms of what suppliers are bidding. This provides huge insight into current market pricing in commonly sourced categories, including, and especially, Freight! Starting with freight, Coupa is rolling out new bid pricing insights where a user can select the source, the destination, the type (frozen/wet/dry/etc.), and the size (e.g. for ocean freight, the source and destination country and the container size/type combo, with the type defaulting to container) and get the quote range over the past month/quarter/year.
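Conceptually, a bid-insights lookup is a filter-and-aggregate over historic bids. A toy sketch with invented lane data (ours, not Coupa’s schema):

```python
from statistics import median

# hypothetical historic ocean-freight bids: (origin, destination, container, price_usd)
bids = [
    ("CN", "US", "40ft", 2400), ("CN", "US", "40ft", 2150),
    ("CN", "US", "40ft", 2800), ("CN", "US", "20ft", 1500),
    ("CN", "DE", "40ft", 3100),
]

def quote_range(bids, origin, destination, container):
    """Min / median / max of matching bids -- the kind of range a
    bid-insights lookup returns for a lane and container type."""
    prices = sorted(p for (o, d, c, p) in bids
                    if (o, d, c) == (origin, destination, container))
    return prices[0], median(prices), prices[-1]

print(quote_range(bids, "CN", "US", "40ft"))  # (2150, 2400, 2800)
```

The value, of course, is not the aggregation but the breadth of bid data being aggregated; that is what a single buying organization cannot replicate on its own.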

Risk Aware Functionality

The Coupa approach to risk is that you should be risk-aware (to the extent the platform can make you risk aware) with every step you take, so risk data is available across the platform — and all of that risk data can be integrated into an optimization project and scenarios to reject, limit, or balance any risk of interest in the award recommendations.

And when you combine the new capabilities for

  • “smart” events
  • API-enabled fact sheets
  • risk-aware functionality

that’s how Coupa is the first platform that literally can, with some configuration and API integration, allow you to balance third party risk, carbon, and cost simultaneously in your sourcing events — which is where you HAVE to manage risk, carbon, and cost if you want to have any impact at all on your indirect risk, carbon, and cost.

It’s not just 80% of cost that is locked in during design, it’s 80% of risk and carbon as well! And in indirect, you can’t do much about that. You can only do something about the next 20% of cost, risk and carbon that is locked in when you cut the contract. (And then, if you’re sourcing direct, before you finalize a design, you can run some optimization scenarios across design alternatives to gauge relative cost, carbon, and risk, and then select the best design for future sourcing.) So by allowing you to bring in all of the relevant data, you can finally get a grip on the risk and carbon associated with a potential award and balance appropriately.
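As a toy illustration of the cost vs. risk vs. carbon balancing an SSDO scenario performs (real solvers use mixed-integer programming over thousands of bids; the suppliers and figures here are invented), a brute-force version fits in a few lines:

```python
from itertools import product

# hypothetical suppliers: unit cost, risk score (0-1), kg CO2 per unit
suppliers = {
    "A": {"cost": 10.0, "risk": 0.8, "carbon": 2.0},
    "B": {"cost": 12.0, "risk": 0.2, "carbon": 1.5},
    "C": {"cost": 11.0, "risk": 0.4, "carbon": 3.0},
}

def best_award(demand=100, step=25, max_avg_risk=0.5, max_total_carbon=250):
    """Exhaustively try allocations of `demand` units (in `step` increments)
    and return the cheapest one meeting the risk and carbon constraints."""
    names = list(suppliers)
    best = None
    for alloc in product(range(0, demand + 1, step), repeat=len(names)):
        if sum(alloc) != demand:
            continue
        cost = sum(q * suppliers[n]["cost"] for n, q in zip(names, alloc))
        risk = sum(q * suppliers[n]["risk"] for n, q in zip(names, alloc)) / demand
        carbon = sum(q * suppliers[n]["carbon"] for n, q in zip(names, alloc))
        if risk <= max_avg_risk and carbon <= max_total_carbon:
            if best is None or cost < best[0]:
                best = (cost, dict(zip(names, alloc)))
    return best

print(best_award())  # (1100.0, {'A': 25, 'B': 25, 'C': 50})
```

Note that the cheapest feasible award is NOT the cheapest supplier taking everything: the risk and carbon ceilings force a split, which is exactly the trade-off no onboarding tool, risk scorer, or carbon calculator can compute on its own.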

In other words, this is the year for Optimization to take center stage in Coupa, and power the entire Source-to-Contract process. No other solution can balance these competing objectives. Thus, after 25 years, the time for sourcing optimization, which is still the best kept secret (and most powerful technology in S2P), has finally come! (And, it just might be the reason that more users in an organization adopt Coupa.)