
The B2B Software Marketplaces Will Rise. Then the Hammer Will Fall!

Thanks to Apple, every consumer thinks there’s an app for that. And for most consumer desires, there probably is. Especially since Apple’s App Store commerce climbed to $1.1 Trillion in 2022. Yes, that’s $1,100,000,000,000 US Dollars! That’s a lot of money, especially when most apps are being sold for a few bucks.

When you consider:

  • consumer app marketplaces are now a Trillion dollar business
  • enterprises are buying more SaaS than ever, as every employee in every department wants an app(lication) to support every task they do
  • enterprises pay 10X to 100X what individuals pay per user license, and, thus, the opportunity of enterprise app marketplaces is in the tens (to hundreds) of Trillions
  • enterprises want easy, centralized acquisition to limit the number of vendors they need to deal with / handle subscription invoices from

It’s easy to see why all the big software / cloud vendors are opening their own app marketplaces. A recent article on IoT Analytics shouted the rise of the B2B software marketplace while quoting their B2B Technology Marketplaces Market Report (2024-2030), which noted that:

  • they are the fastest growing procurement channel (for software)
  • dedicated platform providers are seeing success
  • some sellers make Billions

And they will continue to grow for a few years. But then, the hammer will fall.

What one has to remember is the following:

  • many of these marketplaces are taking a big cut, 30% or more, which is what a sales partner would have taken to compensate the employee(s) who actively sold the product, but they are doing NOTHING except creating a listing, making it searchable, taking an order, collecting a payment, and providing a license key … even when you consider cloud fees, payment processor fees, and platform maintenance fees, they could be very profitable at 13% (remember that recent article on how roughly half a trillion dollars will be wasted on SaaS spend this year … well, this is only going to increase that, as you’re paying almost 20% more than you need to for the licenses you do need and use; see the arithmetic sketch after this list)
  • apps, licenses, and overspend are going to proliferate rapidly as “approved” app stores make it easy for every employee with a p-card to buy what they want, when they want
  • those SaaS audits and rationalizations that identify 33%+ overspend are only going to reclaim at most 20% of that, if you’re lucky, because, even if the software developer is willing to refund unused licenses, they’re not going to refund the 30%+ they already paid the marketplace … and that’s if they’ll even talk to you, because you acquired the license through a third party
  • there’s no real negotiation opportunity when you buy from a marketplace
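
To make the overspend math concrete, here is a back-of-the-envelope sketch using the figures above (a 30% marketplace cut vs. the ~13% at which a marketplace could still be very profitable). The numbers are purely illustrative assumptions, not actual marketplace economics:

```python
# Illustrative arithmetic only: a vendor that must net the same revenue per
# license passes the marketplace's cut through to the buyer's price.
vendor_net = 100.0                        # what the vendor needs to net per license

price_at_30 = vendor_net / (1 - 0.30)     # ~142.86 with a 30% marketplace cut
price_at_13 = vendor_net / (1 - 0.13)     # ~114.94 with a 13% cut

avoidable = 1 - price_at_13 / price_at_30
print(f"~{avoidable:.1%} of the marketplace price is avoidable")  # ~19.5%
```

In other words, under these assumptions, almost 20% of every license dollar routed through a high-cut marketplace is paying for nothing but the listing.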

So, as businesses race toward digitization, they will embrace the marketplace, as it will help them get part of the way there very quickly. But when they realize just how much they are spending on app(lication)s and task Procurement with the strategic procurement of SaaS, the first thing to go will be the app marketplace purchases … and then … it will be time for the hammer to fall.

Spendata: The Power Tool for the Power Spend Analyst — Now Usable By Apprentices as Well!

We haven’t covered Spendata much on Sourcing Innovation (SI). It was only founded in 2015, and the doctor did a deep dive review on Spend Matters in 2018 when it launched (Part I and Part II, ContentHub subscription required), as well as a brief update here on SI where we said Don’t Throw Away that Old Spend Cube, Spendata Will Recover It For You!. the doctor did pen a 2020 follow up on Spend Matters on how Spendata was Rewriting Spend Analysis from the Ground Up, and that was the last major coverage. And even though the media has been a bit quiet, Spendata has been working as diligently on platform improvement over the last four years as it did over the first four, and just released Version 2.2 (with a few new enhancements in the queue that it will roll out later this year). (Unlike some players that like to tack on a whole new version number after each minor update, or mini-module inclusion, Spendata only does a major version update when it does considerable revamping and expansion, recognizing that most vendors only rewrite their solution from the ground up to be better, faster, and more powerful once a decade, and that every other release is just an iteration of, and an incremental improvement on, the last one.)

So what’s new in Spendata V 2.2? A fair amount, but before we get to that, let’s quickly catch you up (and refer you to the linked articles above for a deep dive).

Spendata was built upon a post-modern view of spend analysis in which a practitioner should be able to take immediate action on any data she can get her hands on, whenever she can get her hands on it, and derive whatever insights she can for process (or spend) improvement. You never have perfect data. Waiting until Duey, Clutterbuck, and Howell1 get all your records in order to even run your first report (when you have a dozen different systems to integrate data from, multiple data formats to map, millions of records to classify, cleanse, and enrich, and third party data feeds to integrate) will take many months, if not a year. And during that year, while you quest for the mythical perfect cube, you will continue to lose 5% to process waste, abuse, and fraud, and 3% to 15% (or more) across spend categories where you don’t have good management but could stem the flow simply by identifying them and putting a few simple rules or processes in place. You can identify some of these opportunities simply by analyzing one system, one category, and one set of suppliers. And then moving on to the next one. In the process, Spendata automatically creates and maintains the underlying schema as you slowly build up the dimensions; the mapping, cleansing, and categorization rules; and the basic reports and metrics you need to monitor spend and processes. And maybe you can only do 60% to 80% piecemeal, but during that “piecemeal year” you can identify over half of your process and cost savings opportunities and start saving now, versus waiting a year to even start the effort. When it comes to spend (related) data analysis, no adage is more true than “don’t put off until tomorrow what you can do today” with Spendata, because, especially when you start, you don’t need complete or perfect data … you’d be amazed how much insight you can get with 90% of a system or category mapped, and then, if the data is inconclusive, you can keep drilling and mapping until you get into the 95% to 98% accuracy range.

Spendata was also designed from the ground up to run locally and entirely in the browser, because no one wants to wait for an overburdened server across a slow internet connection, and to do so in real time … and by that we mean do real analysis in real time. Spendata can process millions of records a minute in the browser, which allows for real-time data loads, cube definitions, category re-mappings, dynamically derived dimensions, roll-ups, and drill-downs on any well-defined data set of interest. (Since most analysis should be at the department, category, or regional level, and over a relevant time span, which should not include every transaction for the last 10 years (beyond a few years, only the quarter-over-quarter or year-over-year totals remain relevant), most data sets for meaningful analysis, even at large companies, are under a few million transactions.) The goal was to overcome the limitations of the first two generations of spend analysis solutions, where the user was limited to drilling around in, and deriving summaries of, fixed (R)OLAP cubes, and instead allow a user to define the segmentations they want, the way they want, on existing or newly loaded (or enriched, federated) data in real time. Analysis is NOT a fixed report; it is the ability to look at data in various ways until you uncover an inefficiency or an opportunity. (Nor is it simply throwing a suite of AI tools against a data set — these tools can discover patterns and outliers, but still require a human to judge whether a process improvement can be made or a better contract secured.)

Spendata was built as a third generation spend analysis solution where

  • data can be loaded and processed at any point of the analysis
  • the schema is developed and modified on the fly
  • derived dimensions can be created instantly based on any combination of raw and previously defined derived dimensions
  • additional datasets from internal or external sources can be loaded as their own cubes, which can then be federated and (jointly) drilled for additional insight
  • new dimensions can be built and mapped across these federations that allow for meaningful linkages (such as commodities to cost drivers, savings results to contracts and purchasing projects, opportunities by size, complexity, or ABC analysis, etc.)
  • all existing objects, including dimensions, dashboards, views (think dynamic reports that update with the data), and even workspaces, can be cloned for easy experimentation
  • filters, which can define views, are their own objects, can be managed as such, and can, through Spendata‘s novel filter coin implementation, be dragged between objects (and even used for easy multi-dimensional mapping)
  • all derivations are defined by rules and formulas, and are automatically rederived when any of the underlying data changes (see the sketch after this list)
  • cubes can be defined as instances of other cubes, and automatically update when the source cube updates
  • infinite scrolling crosstabs with easy Excel workbook generation on any view and data subset for those who insist on looking at the data old school (as well as “walk downs” from a high-level “view” to a low-level drill-down that demonstrate precisely how an insight was found)
  • functional widgets which are not just static or semi-dynamic reporting views, but programmable containers that can dynamically inject data into pre-defined analysis and dimension derivations that a user can use to generate what-if scenarios and custom views with a few quick clicks of the mouse
  • offline spend analysis is also available, in the browser (cached) or on Electron.js (where the latter is preferred for enterprise data analysis clients)
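
To make the rule-based, auto-rederived model concrete, here is a minimal sketch in Python (pandas). It is a conceptual illustration only; the class and field names are our own assumptions, not Spendata’s actual API:

```python
import pandas as pd

class DerivedDimension:
    """A dimension defined by ordered, human-readable rules (first match wins)."""
    def __init__(self, name, rules, default="Unmapped"):
        self.name = name        # e.g. "Category"
        self.rules = rules      # ordered list of (predicate, value) pairs
        self.default = default

    def apply(self, df: pd.DataFrame) -> pd.Series:
        out = pd.Series(self.default, index=df.index)
        for predicate, value in self.rules:
            mask = predicate(df) & (out == self.default)   # first match wins
            out[mask] = value
        return out

rules = [
    (lambda df: df["Vendor"].str.contains("Staples", case=False), "Office Supplies"),
    (lambda df: df["GL_Code"].eq(6100), "Travel"),
]
txns = pd.DataFrame({"Vendor": ["Staples Inc", "Delta Air"], "GL_Code": [5010, 6100]})
txns["Category"] = DerivedDimension("Category", rules).apply(txns)
print(txns)
# Re-running apply() after any data refresh rederives the dimension, so the
# analyst's work (the rules) survives a reload -- the essence of inheritance.
```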

Furthermore, with reference to all of the above, analyst changes to the workspace, including new datasets, new dashboards and views, new dimensions, and so on, are preserved across refresh. This is Spendata’s “inheritance” capability, which allows individual analysts to create their own analyses and have them automatically updated with new data, without losing their work …

… and this was all in the initial release. (Which, FYI, no other vendor has yet caught up to. NONE of them have full inheritance or Spendata‘s security model. And this was the foundation for all of the advanced features Spendata has been building since its release six years ago.)

After that, as per our updates in 2018 and 2020, Spendata extended their platform with:

  • Unparalleled Security — as the Spendata server is designed to download ONLY the application to the browser, or Spendata‘s demo cubes and knowledge bases, it has no access to your enterprise data;
  • Cube subclassing & auto-rationalization — power users can securely set up derived cubes and sub-cubes off of the organizational master data cubes for the different types of organizational analysis that are required, and each of these sub-cubes can make changes to the default schema/taxonomy, mappings, and (derived) dimensions, and all auto-update when the master cube, or any parent cube in the hierarchy, is updated
  • AI-Based Mapping Rule Identification from Cube Reverse Engineering — Spendata can analyze your current cube (or even a report of vendor by commodity from your old consultant) and derive the rules that were used for mapping, which you can accept, edit, or reject — we all know black box mapping doesn’t work (no matter how much retraining you do, every “fix” suddenly causes an older transaction to be misclassified); but generating the right rules, which can be human-understood and human-maintained, guarantees 100% correct classification 100% of the time (a toy illustration of rule induction follows this list)
  • API access to all functions, including creating and building workspaces, adding datasets, building dimensions, filtering, and data export. All Spendata functions are scriptable and automatable (as opposed to BI tools with limited or nonexistent API support for key functions around building, distributing, and maintaining cubes).
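
A toy illustration of the rule-induction idea (not Spendata’s algorithm; the names and threshold are assumptions): given an already-classified cube, propose one transparent rule per vendor wherever a single commodity dominates, and hand the rules to a human to accept, edit, or reject:

```python
import pandas as pd

def induce_vendor_rules(cube: pd.DataFrame, min_support=0.95):
    """For each vendor, propose a rule if one commodity dominates its lines."""
    rules = []
    for vendor, grp in cube.groupby("Vendor"):
        share = grp["Commodity"].value_counts(normalize=True)  # sorted desc
        if share.iloc[0] >= min_support:
            rules.append({"if_vendor": vendor,
                          "then_commodity": share.index[0],
                          "support": round(share.iloc[0], 2)})
    return pd.DataFrame(rules)  # analyst can accept, edit, or reject each row

cube = pd.DataFrame({
    "Vendor":    ["Staples", "Staples", "Staples", "Delta", "Delta"],
    "Commodity": ["Office",  "Office",  "Office",  "Travel", "Travel"],
})
print(induce_vendor_rules(cube))
```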

However, as we noted in our introduction, even though this put Spendata leagues beyond the competition (as we still haven’t seen another solution with this level of security; cube subclassing with full inheritance; dynamic workspace, cube, and view creation; etc.), they didn’t stop there. In the rest of this article, we’ll discuss what’s new from the viewpoint of Spendata Competitors:

Spendata Competitors: 7 Things I Hate About You

Cue the Miley Cyrus, because if competitors weren’t scared of Spendata before, if they understand ANY of this, they’ll be scared now (as Spendata is a literal wrecking ball in analytic power). Spendata is now incredibly close to negating entire product lines of not just its competitors, but some of the biggest software enterprises on the planet, and 3.0 may trigger a seismic shift in how people define entire classes of applications. But that’s a post for a later day (but should cue you up for the post that will follow this one on precisely what Spendata 2.2 really is and can do for you). For now, we’re just going to discuss seven (7) of the most significant enhancements since our last coverage of Spendata.

Dynamic Mapping

Filters can now be used for mapping — and as these filters update, the mapping updates dynamically. You can reclassify on the fly, in real time, in a derived cube using any filter coin, including one dragged out of a drill down in a view. Analysis is now a truly continuous process, as you never have to go back and change a rule, reload data, and rebuild a cube to make a correction or see what happens under a reclassification.
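
Conceptually (in pandas, with illustrative names, not Spendata’s actual implementation), a filter-as-mapping might look like this: the saved filter is the rule, so editing the filter and re-deriving reclassifies instantly, with no rule rewrite, data reload, or cube rebuild:

```python
import pandas as pd

txns = pd.DataFrame({"Vendor": ["ACME", "Globex", "Initech"],
                     "Amount": [120.0, 400.0, 90.0]})

filters = {"Strategic": lambda df: df["Amount"] >= 100}   # the "filter coin"

def classify(df, filters, default="Tactical"):
    out = pd.Series(default, index=df.index)
    for label, predicate in filters.items():
        out[predicate(df)] = label          # filter membership IS the mapping
    return out

txns["Segment"] = classify(txns, filters)
# Edit the filter and re-derive: the mapping follows the filter automatically.
filters["Strategic"] = lambda df: df["Amount"] >= 300
txns["Segment"] = classify(txns, filters)
print(txns)
```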

View-Based Measures

Integrate any rolled-up result back into the base cube, on the base transactions, as a derived dimension. While this could be done using scripts in earlier versions, it required sophisticated coding skills. Now it’s almost as easy as a drag-and-drop of a filter coin.
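
In pandas terms (a rough analogy, not Spendata’s model), a view-based measure is like writing a group roll-up back onto every base transaction:

```python
import pandas as pd

txns = pd.DataFrame({
    "Category": ["Office", "Office", "Travel", "Travel"],
    "Amount":   [120.0, 80.0, 400.0, 100.0],
})

# The "view": total spend per category (what a crosstab would show).
txns["CategoryTotal"] = txns.groupby("Category")["Amount"].transform("sum")
# The derived measure on the base cube: each transaction's share of category.
txns["ShareOfCategory"] = txns["Amount"] / txns["CategoryTotal"]
print(txns)
```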

Hierarchical Dashboard Menus

Not only can you organize your dashboards in menus, submenus, and sub-sub menus as needed, but you can easily bookmark drill downs and add them under a hierarchical menu — this makes it super easy to create point-based walkthroughs that tell a story — and then output them all into a workbook using Spendata‘s capability to output any view, dashboard, or entire workspace as desired.

Search via Excel

While Spendata eliminates the need for Excel for data analysis, the reality is that Excel is where most organizational data is (unfortunately) stored, how most data is submitted by vendors to Procurement, and where most Procurement professionals are most comfortable. Thus, in the latest version of Spendata, you can drag and drop groups of cells from Excel into Spendata, and if you drop them into the search field, it auto-creates a RegEx “OR” that maintains the inputs exactly and finds all matches in the cube you are searching against.
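
A minimal sketch of what such a search builder plausibly does (our assumption of the mechanics, not Spendata’s actual code): escape each pasted value so it matches literally, then join the values into a single regex alternation:

```python
import re

pasted_cells = ["ACME Corp.", "Smith & Sons (UK)", "3M"]
# re.escape keeps metacharacters like "." and "(" literal; "|" ORs the values.
pattern = re.compile("|".join(re.escape(c) for c in pasted_cells), re.IGNORECASE)

vendors = ["acme corp.", "SMITH & SONS (UK)", "IBM", "3m"]
print([v for v in vendors if pattern.search(v)])
# ['acme corp.', 'SMITH & SONS (UK)', '3m']
```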

Perfect Star Schema Output

Even though Spendata can do everything any BI tool on the market can do, the reality is that many executives are used to their pretty PowerBI graphs and charts and want to see their (mostly static) reports in PowerBI. So, to appease the consultancies that must support these executives who are (at least) a generation behind on analytics, Spendata added the ability to output an entire workspace to a perfect star schema (where all keys are unique and numeric), one so good that many users see PowerBI speed up by a factor of almost 10. (As any analyst forced to use PowerBI will tell you, when you give PowerBI data that is NOT in a perfect star schema, it may not even be able to load the data, and its ability to work with non-numeric keys at a speed faster than you remember on an 8088 is nonexistent.)
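
For intuition, here is a hedged sketch (pandas, hypothetical names) of what a flat-table-to-star-schema export involves: factor every dimension into its own table with a unique numeric surrogate key, leaving a fact table of numeric keys and measures:

```python
import pandas as pd

flat = pd.DataFrame({
    "Vendor":   ["ACME", "ACME", "Globex"],
    "Category": ["Office", "Office", "Travel"],
    "Amount":   [120.0, 80.0, 400.0],
})

def to_star(flat: pd.DataFrame, dim_cols, measure_cols):
    fact, dims = flat[measure_cols].copy(), {}
    for col in dim_cols:
        codes, uniques = pd.factorize(flat[col])   # unique numeric keys
        dims[col] = pd.DataFrame({f"{col}Key": range(len(uniques)), col: uniques})
        fact[f"{col}Key"] = codes                  # numeric FK into dim table
    return fact, dims

fact, dims = to_star(flat, ["Vendor", "Category"], ["Amount"])
print(fact)
print(dims["Vendor"])
```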

Power Tags

You might be thinking “tags, so what?” And if you are equating tags with a hashtag or a dynamically defined user attribute, then we understand. However, Spendata has completely redefined what a tag is and what you can do with it. The best way to understand it is as a Microsoft Excel cell on steroids. It can be a label. It can be a replica of a value in any view (that dynamically updates if the field in the view updates). It can be a button that links to another dashboard (or a bookmark to any drill-down filtered view in that dashboard). Or all of the above. Or, in the next Spendata release, a value that forms the foundation for new derivations and measures in the workspace, just as you can reference an arbitrary cell in an Excel function. In fact, using tags together with the seventh new capability of Spendata (embedded applications, below), you can already build, usually in hours (at most), the kind of sophisticated what-if analyses that many providers have to custom build in their core solutions (taking weeks, if not months, to do so).

Embedded Applications

In the latest version of Spendata, you can embed custom applications into your workspace. These applications can contain custom scripts, functions, views, dashboards, and even entire datasets that can be used to instantly augment the workspace with new analytic capability, and if the appropriate core columns exist, even automatically federate data across the application datasets and the native workspace.

Need a custom set of preconfigured views and segments for that ABC Analysis? No sweat, just import the ABC Analysis application. Need to do a price variance analysis across products and geographies, along with category summaries? No problem. Just import the Price Variance and Category Analysis application. Need to identify opportunities for renegotiation post M&A, cost reduction through supply base consolidation, and new potential tail spend suppliers? No problem, just import the M&A Analysis app into the workspace for the company under consideration and let it do a company A vs. B comparison by supplier, category, and product; generate views showing where consolidation would more than double supplier spend, or where switching a product from a current supplier to a lower cost supplier would save more than 100K; and surface opportunities for bringing on new tail spend suppliers based upon potential cost reductions. All with one click. Not sure what the applications can do? Start with the demo workspaces and apps, define your needs, and if the apps don’t exist in the Spendata library, a partner can quickly configure a custom app for you.

And this is just the beginning of what you can do with Spendata. Because Spendata is NOT a Spend Analysis tool. That’s just something it happens to do better than any other analysis tool on the market (in the hands of an analyst willing to truly understand what it does and how to use it — although with apps, drag-and-drop, and easy formula definition through wizard-driven pop-ups, it’s really not hard to learn how to do more with Spendata than with any other analysis tool).

But more on this in our next article. For The Times They Are a-Changin’.

1 Duey, Clutterbuck, and Howell keeps Dewey, Cheatem, and Howe on retainer … it’s the only way they can make sure you pay the inflated invoices if you ever wake up and realize how much you’ve been fleeced for …

The Best Way Procurement Chiefs Can Create a Solid Foundation to Capitalize on AI

As per our recent post on how I want to be Gen AI Free, the best way to capitalize on Gen-AI is to avoid it entirely. That being said, the last thing you should avoid is the acquisition of modern technology, including traditional ML-AI that has been tried and tested and proven to work extremely well in the right situations.

So, if you ignore the reference to Gen-AI, a recent article on Acceleration Economy on 5 Ways Procurement Chiefs Can Create a Solid Foundation had some good tips on how to go about adopting ML-AI with success.

The five foundations were quite appropriate.

1. Organize

A plan for:

  1. exactly where the solution will be deployed,
  2. what use cases it will be deployed for,
  3. how valid use cases will be identified, and
  4. how the solution is expected to perform on them.

There’s no solution, even AI, that can do everything. Even limited to a domain, no AI will work for all situations that may arise. As a result, you need a methodology to identify the valid use cases and the invalid use cases, and to ensure that only the valid use cases are processed. You also need to ensure you know the expected ranges of the answers that will be provided. Then you need to implement checks to ensure not only that valid situations are the only ones processed, but that only output in an expected range is accepted in any automated process, and if anything is outside the expected norms anywhere, a human with appropriate education and training is brought into the loop.
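
As a minimal sketch of such a guardrail (all names and thresholds here are illustrative assumptions, not a prescribed implementation):

```python
# Only process inputs the model is valid for, only auto-accept outputs in the
# expected range, and escalate everything else to a trained human.
def handle_prediction(features, predict, is_valid_use_case,
                      expected_range=(0.0, 1.0)):
    if not is_valid_use_case(features):
        return ("HUMAN_REVIEW", "input outside validated use cases")
    score = predict(features)
    lo, hi = expected_range
    if not (lo <= score <= hi):
        return ("HUMAN_REVIEW", f"output {score} outside expected range")
    return ("AUTO_ACCEPT", score)

# Example: a spend-classification confidence check (hypothetical values).
result = handle_prediction(
    {"vendor": "ACME"},
    predict=lambda f: 0.97,
    is_valid_use_case=lambda f: "vendor" in f,
    expected_range=(0.6, 1.0),
)
print(result)  # ('AUTO_ACCEPT', 0.97)
```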

2. Create a Policy

No technology should be deployed in critical situations without a policy dictating valid, and invalid, use. Moreover, no technology should be used by people who aren’t trained in both the job they need to do and the proper use of the tool. Even though most AI is not as dangerous as Gen-AI, any AI, if improperly used, can be dangerous. It’s critical to remember that computers cannot think; they only thunk on the data they are given (performing millions of calculations in the time it takes an average person to perform two). As such, the quality of output is limited both by the quality of data input and by the knowledge built into the model used. Neither will be complete or perfect, and there will always be external factors not considered, which, even if normally not relevant, could be relevant — and only an educated and experienced human will know that. (Moreover, that human needs to be involved in the policy creation to ensure the technology is only used where, when, and how appropriate.)

3. Understand Your Platform(s) of Choice

Just like there are a plethora of Gen-AI applications, a lot of different vendors offer AI applications, and even if most are similar, not all are created equal. It’s important to understand the similarities and differences between them and select the one that is right for your business. (Consider the algorithms and models used, the extent of human-validated training data available, typical accuracy / results, and the vendor’s experience in your use case in particular when evaluating an AI solution.)

4. Practice

Introducing new tools requires process changes. Before introducing the tool, make sure you can execute the associated process changes, first by running training exercises on the different types of output you might get, and then, possibly by way of a third party who uses the tool on your behalf, using real inputs and associated outputs. While the AI may automate more of the process, it’s even more critical that you respond appropriately to the parts of the process that cannot be automated, or where the application throws an exception because the situation is not appropriate for either the use of AI or the use of the AI output. (And if you don’t get any exceptions, question the AI … it’s likely not working right! And if you get too many exceptions, it’s not the right AI for you.)

5. ALWAYS Ask Yourself: “Does that Make Sense?”

Just like Gen-AI hallucinates, traditional AI, even tried-and-true AI that is highly predictable, will sometimes give wrong results. This will usually happen if bad data slips in, if the use case is on the boundary of expected use cases, or if the external situation has changed considerably since the last time the use case arose. Thus, it’s always important to ask yourself if the output makes sense. For tried-and-true AI where the confidence is high, it will make sense the vast majority of the time, but there will still be the occasional exception. Human confirmation is, thus, always required!

With proper use, AI, unlike Gen-AI (which fails regularly and sometimes hallucinates so convincingly that even an expert has a hard time identifying false results), will give great results the majority of the time — so you should seek it out and implement it. Just also implement checks and balances to catch those rare situations where it doesn’t, and put a human in the loop when that happens. Because traditional use cases are more constrained, and predictable, it’s a lot easier to identify and implement these checks and balances. So do it … and see great success!

Strategic Sourcing & Procurement for Technology Cost Optimization

Given that we recently published a piece noting that Roughly Half a Trillion Dollars Will Be Wasted on SaaS Spend This Year and up to One Trillion Dollars on IT Services, it’s obvious that one has to be very careful with technology acquisition as it is very easy to overspend on the license and the implementation for something that doesn’t even solve your problem.

As a result, you need to be very strategic about it. While you certainly can’t put the majority of your technology acquisitions (which can be 6, 7, and even 8 figures) up for auction (as products are never truly apples to apples to apples), you should be doing multi-round RFPs and then awarding to the vendor who brings you the best overall value for the term you want to commit to, once all things are considered.

But these have to be well thought out … you need to make sure that you are only inviting providers that are likely to meet 100% of your must-haves, 80% of your should-haves, and 60% of your nice-to-haves (and, moreover, that you have really separated out absolute vs. highly desired vs. wanted-but-not-needed, because the more you insist on, especially when it’s not necessary, the shallower the vendor pool, and the more you are going to end up paying*).
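
As a trivial illustration of that screening logic (thresholds from the paragraph above; the vendor coverage scores are hypothetical):

```python
# Invite only vendors expected to meet 100% of must-haves, 80% of
# should-haves, and 60% of nice-to-haves.
def qualifies(v):
    return v["must"] == 1.0 and v["should"] >= 0.8 and v["nice"] >= 0.6

vendors = [
    {"name": "Vendor A", "must": 1.00, "should": 0.85, "nice": 0.70},
    {"name": "Vendor B", "must": 0.90, "should": 0.95, "nice": 0.90},
]
print([v["name"] for v in vendors if qualifies(v)])  # ['Vendor A']
```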

To do this, as the article notes, you have to know what processes you need to support, what improvements you are expecting, what measurements you need the platform to take, and what business objectives it needs to support. Then you need to align your go-to-market sourcing/procurement strategy with those objectives and make sure the RFP covers all the core requirements (without asking 100 unnecessary questions about features you’ll never actually use in practice).

You also need to know what quantifiable benefits the platform should deliver, both in terms of tactical work(force) reduction (as the tech you acquire should be good at thunking) and in terms of the value that will be obtained from the strategic enablement (analysis, intelligence gathering, guided events, etc.) the platform should deliver. If it is a P2P platform, how much invoice processing is it going to automate, and, based on that, how much is it going to reduce your average invoice processing cost? If it’s a sourcing platform, how much more spend will you be able to source (without increasing person-power), and what is a reasonable savings percentage to expect on that? Understand the value before you go to market.
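
For example, a first-pass value model for the P2P case might look like this (every number below is an assumption you would replace with your own baseline, not a benchmark):

```python
# Illustrative arithmetic for quantifying a P2P platform's tactical value.
invoices_per_year = 100_000
manual_cost       = 12.00   # assumed cost to process an invoice by hand
automated_cost    = 2.00    # assumed cost when straight-through processed
automation_rate   = 0.70    # assumed share of invoices fully automated

annual_savings = invoices_per_year * automation_rate * (manual_cost - automated_cost)
print(f"Expected processing savings: ${annual_savings:,.0f}/year")  # $700,000
# Compare this (plus the strategic value) against license + implementation cost.
```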

Then you need to understand how much support and help you need from the vendor. If you just want a platform that performs a function, then you just need to know the vendor can support the platform in supporting that function. But if you need help with process transformation or optimization, customized development, or third party tool integration for advanced/custom processes, you need a vendor that can not only provide services, but also be a strategic partner for you as well.

And so on. For more insights, we suggest you check out a recent article by Alix Partners on Strategic Sourcing and Procurement for Technology Cost Optimisation. It has a lot of great advice for those starting their strategic procurement technology journey.

*Just remember, if you’re mid-market and you’re flexible (i.e., you define what a module needs to accomplish for you vs. a highly specific process), you can get your absolute functionality and most of your desired functionality for 120K in annual SaaS license fees, excluding data feeds and services. If you’re not flexible, or not really strict in separating out absolute vs. strongly desired vs. nice-to-have, you can easily be paying four times that.

Also remember, if you’re enterprise, your absolutes and strongly desireds are much more extensive and typically require a lot more advanced tech (like optimization, predictive analytics, ML/AI, etc.), and license fees alone will cost you in the 500K to 1M range annually at a minimum, not counting the 100K to 1M you will need to spend on implementation, data cleansing and enrichment, integration, training, and real-time data feed access, so it is absolutely vital you get it right!

Source-to-Pay+ Part 9: Cyber

In Part 1 we noted that Risk Management went much beyond Supplier Risk, and the primitive Supplier “Risk” Management application that is bundled in many S2P suites. Then, in Part 2, we noted that there are risks in every supply chain entity; with the people and materials used; and with the locales they operate in. In Part 3 we moved on to an overview of Corporate Risk, in Part 4 we took on Third Party Risk (in Part 4A and Part 4B), in Part 5 we laid the foundation for Supply Chain Risk (Generic), and in Part 6 we addressed the first major supply chain risk: in-transport, followed by the second major supply chain risk: lack of multi-tier visibility, in Part 7. In our last article, Part 8, we discussed the baseline Analytics that should be part of all of the different risk systems we covered in Parts 3 through 7, as well as a control centre.

Today, in Part 9, we move on to Cyber Risks. In today’s hyperconnected SaaS world, nearly half of an organization’s data breaches originate in the cloud (see this recent article by Illumio in Cyber Magazine, for example). So cyber security is important, not just for your organization, but for your entire supply chain.

Note that we are not going to dive deep; there are plenty of security firms that will do that for you. We’re just going to highlight key points of risk that must be covered in your cyber security plan.

Internal Cyber Risk Monitoring and Prevention System
Risks that must be addressed.

E-mail: Plenty of risks come in through e-mail. The biggest one you are likely aware of is fraudulent requests for payment from fraudsters posing as suppliers / service providers / consultants, or as new employees in a remote office asking you to approve an emergency payment. However, since fraudsters blast these far and wide (as it takes less work to create them), the most common fraudulent emails are usually phishing/ransom attempts where you have to click a link in an email and enter your system login information to retain access to your email account (or another system you use). (Then they use those credentials you freely gave them to log in to your systems, lock you out of them, and demand payment to unlock your account.)

Your email system needs to do more than identify an external sender. It, or the security plug-in, needs

  1. to verify the originating domain of the email (since most fraudsters can’t mask the domain they send from),
  2. to identify the domain and location of the first intermediate server the message hits (since that can’t be masked unless they’ve hacked that server), as well as whether it matches the locale of the domain the email purports to come from, and
  3. to identify the domain of each embedded link and the company it belongs to (fraudsters are great at registering domains just ONE letter off an actual domain and cloning the contents of the real site; e.g. chaEse.com vs chase.com … one is your bank, and one will soon be scooped up by a fraudster who will skim account logins for a day during a “maintenance window”, drain all the accounts dry (or at least to the transfer limits) the next day, wire the money to a foreign account in a jurisdiction with no extradition or banking treaties with the US, and then disappear, never to be seen again).
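
A hedged sketch of that third check (illustrative only; a production system would use proper eTLD+1 parsing, threat feeds, and registrar data rather than this naive string comparison):

```python
from difflib import SequenceMatcher
from urllib.parse import urlparse

TRUSTED = {"chase.com", "yourcompany.com"}   # hypothetical allow-list

def flag_link(url: str):
    # Naive registrable-domain extraction: last two labels of the hostname.
    domain = ".".join(urlparse(url).hostname.split(".")[-2:])
    if domain in TRUSTED:
        return (url, "ok")
    for good in TRUSTED:
        # Flag one-letter lookalikes of trusted domains.
        if SequenceMatcher(None, domain, good).ratio() > 0.8:
            return (url, f"SUSPICIOUS lookalike of {good}")
    return (url, "unknown domain, verify before clicking")

for u in ["https://www.chase.com/login", "https://chaEse.com/verify"]:
    print(flag_link(u))
# ('https://www.chase.com/login', 'ok')
# ('https://chaese.com/verify', 'SUSPICIOUS lookalike of chase.com')
```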
Hacking: Hackers will constantly be trying to penetrate your firewalls, the web servers and underlying operating systems of machines in the DMZ, the applications you are running, and the underlying security systems you use for monitoring and detection (though these are likely the most secure, especially if you are having them maintained and monitored by a professional, big name IT security firm). You need to be monitoring for unusual activity, (D)DoS attacks, repeated login failures or access abandonments at particular ports or in particular application logs, and so on. You also need a few attractive honeypots that emulate the systems the hackers would want to access most; if you don’t understand this, or why, talk to your security guru.
Ransomware: Hackers want access to your systems for two reasons: to steal money and IP, or to lock you out of them (if they can’t access any IP worth stealing, or you don’t use any finance systems capable of [authorizing] payments) so that you will pay them to get back in. You need to be very careful to detect not only hacking attempts, but also the installation of new software that is unrecognized / not authorized by security. Otherwise you could be totally screwed, with no choice but to pay the ransom, even if you do complete, incremental, daily backups across all systems, because smart hackers will install the ransomware, let it sit for a few weeks, and only then activate it, when you can no longer roll back to a clean backup without losing weeks or months of data (most backup systems cannot identify the actual file changes, so any restore from after the ransomware was discreetly installed would restore the ransomware too).
Infected Websites: Your users love to surf, surf, surf the web and go where the hidden links take them. You can’t expect that they will all keep their browsers up to date, keep the underlying OS up to date, and, simply put, not be careless. You need to enforce security software on their machines, and check for it (and that it is up to date) before a machine accesses your network, because if a user visits the right infected website (from a fraudster’s point of view), it can be an instant hack and/or a backdoor for the automatic installation of ransomware on their machine and/or your network.

External Cyber Risk Monitoring and Prevention System
Risks that must be addressed.

Compromised Supplier Site: If a supplier site or system is compromised, and you engage with that system in any way, then your system could be compromised. You need a system that monitors for supplier system/site/cloud risks as well as (known) supplier breaches.
Compromised Data: All of your systems run off of data. Compromised data is the easiest way to compromise a system. If an email gets intercepted and altered in transit by a man-in-the-middle attack and the hacker changes bank account information, you’re paying a fraudster and not the supplier. If third party risk metrics are adjusted, your system can be tricked into diverting all business to a single new supplier which, while a legal entity, was set up by its founder to take your money and run. And so on.
Compromised Identities: Identity theft is on the rise, and it’s often the easiest way for a fraudster to get funds from a business. You need to track all known cases of identity theft associated with all individuals associated with all businesses associated with your business, as you will need to do extra verifications on requests from those individuals.
Web-Based Vulnerabilities: You need to be aware of where the biggest web-based vulnerabilities are in your suppliers and partners, make sure your suppliers and partners monitor and address them, and make sure you lock down your security to the max when you have to interact with systems of theirs that are classified as high risk for vulnerability.

And more. There’s a lot of risk in cyberspace, thanks to the fact that the information and financial worlds have merged, and your organization needs to be on top of it. Identify appropriate providers, or you will need very good luck to avoid falling victim to a significant cyber-based threat.