Category Archives: Technology

“Generative AI” or “ChatGPT Automation” is Not the Solution to your Source-to-Pay or Supply Chain Situation! Don’t Be Fooled. Be Insulted!

If you’ve been following along, you probably know that what pushed the doctor over the edge and forced him back to the keyboard sooner than he expected was all of the Artificial Indirection, Artificial Idiocy & Automated Incompetence that has been multiplying faster than Fibonacci’s rabbits in vendor press releases, marketing advertisements, capability claims, and even core product features on the vendor websites.

Generative AI and ChatGPT top the list of Artificial Indirection because these are algorithms that may, or may not, be useful for anything the buyer will actually be using the solution for. Why?

Generative AI is simply a fancy term for using (deep) neural networks to identify patterns and structures within data to generate new, and supposedly original, content by pseudo-randomly producing content that is mathematically, or statistically, a close “match” to the input content. To be more precise, there are two (deep) neural networks at play: one that is configured to output content believed to be similar to the input content, and a second that is configured simply to determine the degree of similarity to the input content. And, depending on the application, there may be a post-processor algorithm that takes the output and tweaks it as minimally as possible to make sure it conforms to certain rules, as well as a pre-processor that formats or fingerprints the input for feeding into the generator network.
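To make the two-network setup concrete, here is a toy sketch of the generator/discriminator pairing (a generative adversarial network) in a few lines of numpy. Everything here (the one-dimensional “data”, the single-layer networks, the learning rate) is invented for illustration; real generative models are vastly larger, but the adversarial loop has the same shape:

```python
import numpy as np

rng = np.random.default_rng(0)

def generator(z, W):
    # One-layer "generator": maps random noise to a candidate sample.
    return np.tanh(z @ W)

def discriminator(x, V):
    # One-layer "discriminator": scores how "real" a sample looks (0..1).
    return 1.0 / (1.0 + np.exp(-(x @ V)))

# Toy "real" data: samples clustered around 0.5
real = rng.normal(0.5, 0.05, size=(256, 1))

W = rng.normal(size=(1, 1))   # generator weights
V = rng.normal(size=(1, 1))   # discriminator weights
lr = 0.05

for _ in range(500):
    z = rng.normal(size=(256, 1))
    fake = generator(z, W)
    # Discriminator step: push scores toward 1 for real, toward 0 for fake
    d_real = discriminator(real, V)
    d_fake = discriminator(fake, V)
    grad_V = real.T @ (1 - d_real) - fake.T @ d_fake
    V += lr * grad_V / 256
    # Generator step: nudge fake samples toward fooling the discriminator
    fake = generator(z, W)
    d_fake = discriminator(fake, V)
    grad_W = z.T @ ((1 - d_fake) * V.T * (1 - fake ** 2))  # chain rule through tanh
    W += lr * grad_W / 256

sample = generator(rng.normal(size=(5, 1)), W)
print(sample.shape)  # (5, 1)
```

The generator never sees the real data directly; it only sees how well it fooled the discriminator, which is exactly why the output is a statistical “match” rather than anything the network understands.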

In other words, you feed it a set of musical compositions in a well-defined, preferably narrow, genre and the software will discern general melodies, harmonies, rhythms, beats, timbres, tempos, and transitions and then it will generate a composition using those melodies, harmonies, rhythms, beats, timbres, tempos, transitions and pseudo-randomization that, theoretically, could have been composed by someone who composes that type of music.

Or, you feed it a set of stories in a genre that follow the same 12-stage heroic story arc, and it will generate a similar story (given a wider database of names, places, objects, and worlds). And, if you take it into our realm, you feed it a set of contracts similar to the one you want for the category you just awarded and it will generate a usable contract for you. It Might Happen. Yaah. And monkeys might fly out of my butt!

ChatGPT is a very large multi-modal model, built on deep learning, that accepts image and text inputs and produces outputs expected to be in line with what the top 10% of experts would produce in the categories it is trained for. Deep learning is just another word for a multi-level neural network with massive interconnection between the nodes in connecting layers. (In other words, a traditional neural network may only have 3 levels of processing, with nodes only connected to 2 or 3 nearest neighbours on the next level, while a deep learning network will have connections to many more near neighbours and at least one more level [for initial feature extraction] than a traditional neural network of the past.)

How large? Large enough to support approximately 100 Trillion parameters. Large enough to be incomprehensible in size. But not in capability, no matter how capable its advocates proclaim it to be. Yes, it can theoretically support as many parameters as the human brain has synapses, but it’s still computing its answers using very simplistic algorithms and learned probabilities, neither of which may be right (in addition to a lack of understanding as to whether or not the inputs we are providing are the right ones). And yes, its language comprehension is better, as the new models realize that what comes after a keyword can be as important, or more important, than what came before (as not all grammars, slang, or tones are equal), but the probability of even a ridiculously large algorithm correctly interpreting meaning (without the tone, inflection, look, and other nonverbal cues that signal when someone is being sarcastic, witty, or argumentative, for example) is still considerably lower than a human’s.

It’s supposed to be able to provide you an answer to any query for which an answer can be provided, but can it? Well, if it interprets your question properly and the answer exists, or a close enough answer exists along with enough rules for altering that answer into the one you need, then yes. Otherwise, no. And yes, over time, it can get better and better … until it screws up entirely. And when you don’t know the answer to begin with, how will you know which 5 times in a hundred it’s wrong, and in which of those 5 times it’s so wrong that, if you act on it, you are putting yourself, or your organization, in great jeopardy?
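To put that 5-in-100 figure in perspective, assume (hypothetically) independent answers at 95% accuracy; the chance that at least one answer in a session is wrong compounds quickly:

```python
accuracy = 0.95  # hypothetical per-answer accuracy

for n in (10, 50, 100):
    p_at_least_one_wrong = 1 - accuracy ** n
    print(n, round(p_at_least_one_wrong, 3))
# 10  -> 0.401
# 50  -> 0.923
# 100 -> 0.994
```

In other words, by the hundredth unverified answer it is nearly certain that at least one was wrong, and you have no way of knowing which.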

And it’s now being touted as the natural language assistant that can not only answer all your questions on organizational operations and performance but even give you guidance on future planning. I’d have to say … a sphincter says what?

Now, I’m not saying that, properly applied, these Augmented Intelligence tools aren’t useful. They are. And I’m not saying they can’t greatly increase your efficiency. They can. Or that appropriately selected ML/PA techniques can’t improve your automation. They most certainly can.

What I am saying are these are NOT the magic beans the marketers say they are, NOT the giant beanstalk gateway to the sky castle, and definitely NOT the goose that lays the golden egg!

And, to be honest, the emphasis on this pablum, probabilistic, and purposeless third-party tech is not only foolish (because a vendor should be selling their solid, specialty-built solution for your supply chain situation) but insulting. By putting this first and foremost in their marketing, they’re not only saying they are not smart enough to design a good solution using expert understanding of the problem and an appropriate technological solution, but that they think you are stupid enough to fall for their marketing and buy their solution anyway!

Versus just using the tech where it fits, and making sure it’s ONLY used where it fits. For example, Zivio uses #ChatGPT to draft a statement of work only after gathering all the required information and similar Statements of Work to feed into it, and then it makes the user review, and edit as necessary, knowing that while #ChatGPT can generate something close when given enough information to work with, every project is different, an algorithm never has all the data, and what is produced will therefore never be perfect. (Sometimes it’s close enough that you can circulate it as a draft, or even post it for a general purpose support role, but not for any need that is highly specific, which is usually the type of need an organization goes to market for.)

Another example would be using #ChatGPT as your Natural Language Interface to provide answers on performance, projects, past behaviour, best practices, expert suggestions, etc. instead of having users go through 4+ levels of menus, design complex reports/views with multiple filters, etc. … but building in logic to detect when a user is asking a question about data, versus asking for a prediction on data, versus asking the tool to make a decision for them … and NOT providing an answer to the last one, or at least not a direct answer. For example, “how many units of our xTab did we sell last year” is a question about data that the platform should serve up quickly. “How many units do we forecast to sell in the next 12 months” is a question about prediction that the platform should be able to derive an answer for using all the data available and the most appropriate forecasting model for the category, product, and current market conditions. “How many units should I order” is asking the tool to make a decision for the human, so the tool should either detect that it is being asked to make a decision it doesn’t have the intelligence or perfect information to make and respond with “I’m not programmed to make business decisions”, or return an answer along the lines of: the current forecast for next quarter’s xTab demand is 200K units, typical delivery times are 78 days, and, based on this, the practice is to order one quarter’s units at a time. The buyer may not question the software and may blindly place the order, but the buyer still has to make the decision to do that.
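The data / prediction / decision triage described above can be sketched crudely. The keywords and canned responses below are invented placeholders (a production system would use a proper intent classifier), but the routing shape is the point:

```python
def classify_query(question: str) -> str:
    """Crude keyword routing: data lookup vs. prediction vs. decision."""
    q = question.lower()
    if any(w in q for w in ("should i", "should we", "recommend")):
        return "decision"
    if any(w in q for w in ("forecast", "predict", "expect", "next 12 months")):
        return "prediction"
    return "data"

def answer(question: str) -> str:
    kind = classify_query(question)
    if kind == "decision":
        # Never answer directly; surface the facts and leave the call to the human.
        return ("I'm not programmed to make business decisions. "
                "Here are the forecast and lead times so you can decide.")
    if kind == "prediction":
        return "Running the category's forecasting model..."
    return "Querying historical data..."

print(answer("How many units of our xTab did we sell last year?"))
print(answer("How many units do we forecast to sell in the next 12 months?"))
print(answer("How many units should I order?"))
```

The important design choice is that the “decision” branch deliberately refuses a direct answer, which is exactly the guardrail the paragraph above argues for.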

And no third party AI is going to blindly come up with the best recommendation as it has to know the category specifics, what forecasting algorithms are generally used, why, the typical delivery times, the organization’s preferred inventory levels and safety stock, and the best practices the organization should be employing.

AI is simply a tool that provides you with a possible (and often probable, but never certain) answer when you haven’t yet figured out a better one, and no AI model will ever beat the best human designed algorithm on the best data set for that algorithm.

At the end of the day, all these AI algorithms are doing is learning a) how to classify the data and then b) what the best model is to use on that data. This is why the best forecasting algorithms are still the classical ones developed 50 years ago, as all the best techniques do is get better and better at selecting the data for those algorithms and tuning the parameters of the classical model, and why a well-designed, deterministic algorithm by an intelligent human can always beat an ill-designed one by an AI. (Although, with the sheer power of today’s machines, we may soon reach the point where we reverse engineer what the AI did to create that best algorithm, versus spending years of research going down the wrong paths, when massive, dumb computation can do all that grunt work for us and get us close to the right answer faster.)
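A sketch of that division of labour, using simple exponential smoothing (a classical model) with the “AI” reduced to what it actually contributes here: a search that merely tunes the smoothing parameter. The series is invented:

```python
def ses_forecast(series, alpha):
    """Simple exponential smoothing: level = alpha*y + (1-alpha)*level."""
    level = series[0]
    for y in series[1:]:
        level = alpha * y + (1 - alpha) * level
    return level  # one-step-ahead forecast

def tune_alpha(series, grid):
    """The 'learning' part: pick alpha minimizing in-sample one-step error."""
    def sse(alpha):
        level, err = series[0], 0.0
        for y in series[1:]:
            err += (y - level) ** 2
            level = alpha * y + (1 - alpha) * level
        return err
    return min(grid, key=sse)

history = [100, 104, 101, 108, 110, 107, 112, 115]  # invented demand series
alpha = tune_alpha(history, [i / 10 for i in range(1, 10)])
print(alpha, round(ses_forecast(history, alpha), 1))
```

The forecasting model is 50-year-old textbook material; all the search contributes is parameter tuning, which is the point being made above.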

Tealbook: Laying the Groundwork for the Supplier Data Foundations

It wasn’t that long ago that we asked you if you had a data foundation because a procurement management platform, should you be lucky enough to get one (which is much more than a suite), generally only supports the data it needs for Procurement to function and doesn’t support the rest of the organization. Furthermore, when you look across the Source-to-Pay and Supply Chain spectrums, there are a lot of different applications that support a lot of different processes that have a lot of different data requirements that need to be maintained as different data types in different encoding formats.

Furthermore, as we noted in the aforementioned post, it’s rare enough to find MDM capability that will even support procurement. This is because most suites are built on transactions, most supplier networks on relational supplier data records, and contracts on documents and simple, hierarchical, meta-data indexes. But you also need models, meta-models, semi-structured, unstructured, and media support. And more. The need is broad, and even if you restrict the need to supplier data, it’s quite broad.

As you will soon garner from our ongoing Source-to-Pay+ is Extensive series, in which we just started tackling Supplier Management in Part XV, the supplier data an organization needs is extremely varied and extensive. Given that Supplier Management is a CORNED QUIP mash with ten (10) major areas of functionality, not counting broader enterprise needs around ESG, innovation, product design / manufacturing management, and other needs tied to operations management, engineering, and enterprise risk management, among other functions, it’s easy to see just how difficult even Supplier Master Data Management can be.

Considering that not a single Supplier Management solution vendor (as you will come to understand as we progress through the Source-to-Pay is Extensive series) covers all of the basic functions we’re outlining, it’s obvious that not a single vendor can effectively do Supplier Master Data Management today. However, Tealbook, which has realized this since their inception, is aiming to be the first to fix this problem. As of the first release of their open API next month, they are transitioning to a Supplier Data Platform and will no longer focus on being just a supplier discovery platform or diversity data enrichment platform. (They will still offer those services, and will be upgrading them in Q4 with general release expected by the end of 2023, but their primary focus will be on the supplier data foundation that enables this.)

This is significant, and illustrates how far they’ve come in the nine (9) years since their founding when their original focus was on building a community supplier intelligence platform that was reliable, scalable, extensible, and appropriate for new supplier discovery (via a large database of verified suppliers with community reviews). From these humble beginnings, where they didn’t even have a million suppliers in their platform after their third year of existence, they grew into the largest supplier network with over 5 Million detailed supplier profiles that is integrated with the largest S2P suites out there (Ariba, GEP, Ivalua, Jaggaer, and Workday, to name a few) and powers some of the largest organizations on the planet. As part of this empowerment, they can take in an organization’s entire supplier data ecosystem, transform it into their standard formats, match to their records, verify or correct existing data, and then enrich the organization’s supplier records before sending them back. In addition, they integrate with a multitude of BI tools, databases / lakes / warehouses (including ERPs), digital platforms, and so on.

To summarize, that’s a ten-fold increase in suppliers and an explosion in global utilization and usage. At the same time, the platform has been augmented with over 2.3M supplier certifications, global diversity data, and the ability to track an organization’s tier 2 supplier diversity data. Quite impressive.

And while this meets most of an organization’s discovery needs, Tealbook knew that it didn’t meet all of an organization’s supplier data needs, especially when you think about all of the regulatory, financial, compliance, performance, sustainability, risk, contract, product/service, relationship, quality, and enablement/innovation data an organization needs to maintain on a supplier. As a result, they have been aggressively working on two key pieces of functionality: an extended universal supplier profile, and a fully open, extensible API that an organization can use to do supplier master data management across their enterprise with the Tealbook Supplier Data Platform. An organization can use the Tealbook Supplier Data Platform to classify, cleanse, and enrich supplier records; augment those records with third party data for sustainability, compliance, and risk; find new suppliers in the network; and so on.
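To be clear, what follows is not Tealbook’s actual API (which hasn’t been detailed here); it’s a generic sketch of what “cleanse and match” means on a supplier record, with made-up normalization rules, to show why the work is non-trivial:

```python
import re

def normalize(record: dict) -> dict:
    """Canonicalize a raw supplier record before matching (hypothetical rules)."""
    name = re.sub(r"[.,]", "", record["name"]).strip().lower()
    name = re.sub(r"\b(inc|llc|ltd|gmbh|corp)\b", "", name).strip()
    return {"name": name, "country": record.get("country", "").upper()}

def match_key(record: dict) -> tuple:
    n = normalize(record)
    return (n["name"], n["country"])

# Three raw records; the first two are really the same supplier
raw = [
    {"name": "Acme Corp.", "country": "us"},
    {"name": "ACME, Inc.", "country": "US"},
    {"name": "Globex GmbH", "country": "de"},
]

merged = {}
for r in raw:
    merged.setdefault(match_key(r), []).append(r)

print(len(merged))  # 2 -- duplicates collapse to one canonical supplier
```

Multiply this by every field (addresses, tax IDs, certifications, diversity flags) across millions of records, and the scale of the supplier MDM problem becomes apparent.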

In short, Tealbook is on a mission to be the organization’s trusted supplier data source, and to constantly improve their data offering with their own ML/AI enabled technology that monitors over 400M public websites for supplier-related data (supplier web sites, business registries, certification providers, supplier data providers, etc.), maintains data provenance (when was it last updated, by what/who, etc.), and provides trust scores (in their proprietary framework that indicates Tealbook’s confidence in accuracy and correctness).

The real mission begins next month when they release their new Open API that will allow an organization to integrate, and interact with, Tealbook the way it needs to across its enterprise applications. Congruent with this release, they will also start releasing their enrichment data-packs that will, within the next year, allow the organization to plug-and-play the data they need to confirm firmographics, contact channels and key information, diversity, supplier offerings, financials, certifications, and basic risk data (which Tealbook will offer through partnerships with specialty supplier data providers, giving an organization a one-stop shop vs. having to license with multiple providers separately to build its 360-degree supplier profile).

Then, over the next year, Tealbook will enhance the usability of their data platform by first rebuilding their diversity and discovery applications and then building out new applications around sustainability, risk, benchmarking, and other areas that their customers would rather a data platform handle for them.

Source-to-Pay+ Is Extensive (P13) … But I Can’t Touch The Sacred Cows!

In our last installment (Part 12) of this series here on Sourcing Innovation (SI), we provided you a list of forty-plus (40+) vendors that could potentially meet your spend analysis needs and help you identify the cost savings, reduction, and avoidance opportunities you have in your organization as well as the best modules to achieve those cost savings, reduction, and avoidance opportunities. The right spend analysis tool properly applied will generate returns that are many orders of magnitude greater than the cost of the tool and will surprise you.

However, some of those best opportunities will be in the “sacred cows” of Marketing, Legal, and SaaS subscriptions. And you probably think you can’t do anything because you don’t have the data, Marketing and Legal won’t let you touch their spend (or give you the detail you need to even analyze it, often because they didn’t collect it), and you have no idea on what SaaS is actually being used and how much you overspend.

the doctor knows this, and knows that you might need custom solutions to manage, and analyze, this spend, so, before we move on and tackle the next module in the Source-to-Pay queue, we’re going to take a brief sidebar and provide you with short lists of vendors that specialize in each area that will collect the data you need — and sometimes even provide you with deep, customized, integrated analytics that provide you with the insights that matter (including the insights that matter on your matter spend) — to enable deep spend analysis, benchmark creation, and opportunity identification.

But first, we have to repeat our disclaimer that, as per the lists of e-Procurement vendors provided in Part 7 and the list of Spend Analysis vendors provided in Part 12, this list is most definitely in no way complete (as no analyst is aware of every company, and neither Marketing nor Legal are the particular domains of expertise of SI), is only valid as of the date of posting (as companies get acquired and go out of business, often without notice), and does not include the broader range of offerings that are available for SaaS Management (including provisioning and cloud management), Marketing (including agency management pure-plays, although DecideWare, for example, does this), or Legal (including contract authoring, management, and clause analysis — although we will cover some of these players when we get to Contract Lifecycle Management [CLM]).

Again, and we can’t say this enough, not all vendors are equal and we’d venture to say that this most definitely applies to the lists below. The companies below are of all sizes (very small to very large), offer different functionality (focussing in on different aspects of Marketing, Legal, and/or SaaS Spend Management), different levels of customization and integration, different types of companion services, focus on different company sizes and/or company types, and integrate with different Source-to-Pay and Enterprise ecosystems.

Do your research, and reach out to an expert for help if you need it in compiling a starting short list of relevant, comparable, vendors for your organization and its specific needs. For a few of these vendors, you may find a write up in the Sourcing Innovation archives, Spend Matters Pro, or Gartner cool vendor write-ups, but for many of these vendors, you’ll have to look beyond your typical sources of information as they are highly specialized and don’t fall into the typical Source-to-Pay bucket. But if you have enough Marketing, Legal, or SaaS spend, they can be highly valuable.

Note that, due to the newness of SaaS spend management, the different marketing and legal needs of every organization, and the high degree of differentiation between many of the solutions below, we are not (yet) defining baseline functionality and instead advising you to do a detailed analysis of your spend, processes, and needs and judge potential solutions based on that. If you need help with that, seek out a pro who can do the (gap) analysis and RFI creation for you.

SaaS (Software-as-a-Service) Subscription Cost Management

Company LinkedIn Employees HQ (State/Country)
Beamy 60 France
BetterCloud 305 New York, USA
Cledera 63 Colorado, USA
Flexera 1026 Illinois, USA
G2 Track 792 Illinois, USA
Hudled 8 Australia
NPI Financial 410 Georgia, USA
Productiv 139 California, USA
SaaSRooms 9 United Kingdom
SaaSTrax ?? North Carolina, USA
Sastrify 166 Germany
Setyl 14 United Kingdom
Spendflo 70 California, USA
Substly Sweden
Torii 114 New York, USA
Trelica 12 United Kingdom
TRG Screen 179 New York, USA
Tropic 240 New York, USA
Vendr 404 Massachusetts, USA
Viio 18 Colombia
Zluri 111 California, USA
Zylo 144 Indiana, USA

Legal Spend Management

Company LinkedIn Employees HQ (State/Country)
Apperio 48 United Kingdom
Brightflag 150 New York, USA
(LexisNexis) CounselLink 28 Ohio, USA
Fulcrum GT 158 Illinois, USA
Mitratech TeamConnect 1119 Texas, USA
Onit 339 Texas, USA
Ontra 421 California, USA
Persuit 100 New York, USA
Thomson Reuters Legal Tracker ?? Ontario
Tonkean LegalWorks 76 California
Wolters Kluwer (TyMetrix 360) ??? Netherlands

Marketing (Procurement) Spend Management

Company LinkedIn Employees HQ (State/Country)
DecideWare 27 Australia
HH Global ?? United Kingdom
Mtivity 15 United Kingdom
Promost 68 Poland
RightSpend 23 New York, USA
SourceIt Market 6 Australia

Onwards to Part 14.

Do You Have a Procurement FocalPoint?

Last month we asked where’s the procurement management platform primarily because we now have a plethora of procurement-centric applications but very little integration between them. However, once you tackle that issue, you have the secondary issue of all these applications, but often no clear starting point and, even worse, no way for an average organizational employee outside of Procurement to interact with Procurement beyond an inbound email to “please get this for me” and the eventual, possibly many months later, outbound email to “we got it, it’s finally here … it will be on your desk tomorrow“.

This is a big problem, even in organizations that supposedly have market-leading source-to-pay suites. All the modules are connected, and the integrated workflow will guide a buyer from project selection to sourcing to supplier selection to award to contracting to supplier onboarding to order creation to receipt creation to invoice confirmation and payment approval, and loop back to order creation until pending contract expiration, when the contract can be renewed, renegotiated, or revoked and the sourcing process started all over. This is great, but for predefined sourcing projects on encoded categories only!

It’s not great for any category not already encoded and typically strategically sourced, and it’s atrocious when new product and service needs arise within the organization, when new hires need new assets for onboarding, and when customer requirements change and the organization needs to adapt rapidly and source new products or services to meet new, or one-off, needs. There’s no intake, and no collaboration with the organizational stakeholders Procurement is there to serve.

And that’s a huge problem. That’s why you’re seeing a few companies talking about “intake”, “orchestration”, or “PPM” (which stands for either Procurement Performance Management or Procurement Process Management, depending on who is talking about it) because, without this capability, a Procurement platform will never be complete or support the organization.

Following the introductory post on the procurement management platform, we lamented and celebrated that Per Angusta was going away and being integrated into SpendHQ as the foundations of a new PPM. It’s a great start, but today the focus of SpendHQ is on managing the existing workflows and creating visibility into existing projects — and savings tracking is limited to integrated projects. However, when it comes to intake support and project tracking for arbitrary organizational needs, that’s not there yet.

However, there are other players which are strong here, and one of those players is Focal Point, which was built from the ground up as an intake-to-orchestrate solution that is capable of

  • capturing all organizational requests for Procurement and Procurement-related activities,
  • assigning those requests to customizable workflows using either built in automation rules or manual (re-)assignment,
  • allowing an end-user to see exactly where any request is in the process at any time,
  • allowing for in-platform communication between the stakeholder and Procurement,
  • integrating with any external tool through jump-out/jump-in to support the process, and
  • supporting whatever approval chains are required, among other intake and orchestration functions.

The tool was built to solve the most significant problem the founders repeatedly saw as CPOs and implementers of various leading sourcing solutions: little to no intake management or general purpose procurement process orchestration. And it does it incredibly well. The visual workflow construction is extremely usable, and the wizards that power the process, form construction, and form completion automatically extend and compress the form as needed based upon user selections and actual needs, making for a very smooth flow.

All of the workflow elements and steps support deep conditional logic, allowing the organization to create as many branches as needed while ensuring that the end user making a request, and the end buyer assigned to deal with that request, only see the relevant paths and only need to enter the relevant information to be guided by the platform.

There can be as many intake types, with associated branching workflows, as the organization needs. Each can have the appropriate level of automation and, most importantly, each can have as many milestones as needed to walk the process through at a high level, allowing the requester to easily see where the process is and then, if interested, dive into the detailed workflow within the current milestone to get a more accurate picture.
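A minimal sketch of conditional, milestone-based intake of this kind (the milestones, steps, thresholds, and field names below are invented for illustration, not Focal Point’s actual model): each step carries a condition, and a requester only ever sees the branch that applies to their request.

```python
from dataclasses import dataclass
from typing import Callable

@dataclass
class Step:
    name: str
    condition: Callable[[dict], bool] = lambda req: True

@dataclass
class Workflow:
    milestones: dict  # milestone name -> list of Steps

    def visible_path(self, request: dict) -> dict:
        """Return only the steps whose conditions match this request."""
        return {
            m: [s.name for s in steps if s.condition(request)]
            for m, steps in self.milestones.items()
        }

intake = Workflow({
    "Intake": [Step("Capture request")],
    "Sourcing": [
        Step("Three-bid RFQ", lambda r: r["spend"] >= 50_000),
        Step("Catalog buy", lambda r: r["spend"] < 50_000),
    ],
    "Approval": [
        Step("CFO sign-off", lambda r: r["spend"] >= 250_000),
        Step("Manager sign-off", lambda r: r["spend"] < 250_000),
    ],
})

print(intake.visible_path({"spend": 12_000}))
print(intake.visible_path({"spend": 300_000}))
```

The milestone dict is what the requester sees at a glance; the step lists inside each milestone are the detailed view they can drill into.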

The only thing the platform doesn’t do is actual sourcing, supplier management, contract management, analytics, procurement, or payment management. It expects the organization to have tools for this already and integrates into the appropriate modules in those tools as needed to accomplish the workflow in progress.

In terms of getting up and running, Focal Point typically has a fully fleshed out, functioning, and integrated instance that captures all of the organization’s workflows up and running within 90 days, even if the organization is a multi-(multi-)billion dollar organization, which is Focal Point’s target market size. This is because it’s typically the 1B+ organizations that have a lot of tools, and a lot of stakeholders, but no way to manage those tools effectively or to give stakeholders any visibility into where their requests are and how their spending is being managed.

The reason it typically takes 90 days is that, unlike many sourcing suite providers, who just flip a virtual switch and drop an empty SaaS suite on you and say “good luck“, Focal Point fully configures the platform as part of their statement of work. This includes:

  • working with the organization to understand all of their requirements and current workflows
  • encoding all of those intake workflows with milestones, task-breakdowns, and existing platform jump-outs
  • integrating any existing procurement system you need to complete the workflow
  • creating a UAT instance and allowing for at least one iteration and approval before it goes live
  • training your team on how to use the system and maintain the workflows

So even though Focal Point has obviously achieved efficiency in terms of workflow creation and customization, external platform integration, and implementation project management, it takes time for an average organization to collect and document their existing processes and requirements, and for Focal Point (or a third party consulting organization, if that is the customer’s preference) to fill in the gaps, so it’s not possible to get it much below 90 days. But when you consider that they have fully implemented a 10B+ organization in that timeframe, when some major suite players will take 18 months working with a consulting partner to fully implement their solutions, that’s an incredible time to value, which is generated day one when every request flows into the tool; gets tracked, assigned, and executed; and stakeholders have full visibility into the process and can intervene if necessary.

Focal Point solves the problem it was built to solve, fills the hole the vast majority of sourcing and procurement solutions leave, and does it incredibly well. If any part of this post resonates with you, the doctor encourages you to check them out.

What’s Your Data Foundation? And is it enough?

A few weeks ago, we asked Do You Have a Data Foundation as a follow up on our post that asked Where’s The Procurement Management Platform because, as has been made clear in our ongoing Source-to-Pay is Extensive Series (which is now at Part 6), even the best platform is useless without data — so what’s your data foundation? And is it enough?

You need a LOT of data for effective Procurement. This includes, but is not limited to:

Catalog Data
which represents commodity goods and packaged services that your buyers can buy in an e-commerce fashion
Contract Data
that encapsulates custom/proprietary goods and services you can buy and the obligations made by both parties as well as standard clauses you use
Supplier Data
that describes suppliers you have done business with, are doing business with, and are considering doing business with
Product Data
that represents products a potential supplier could provide you with, not in a standard catalog, or product descriptions (and bills of materials) for products you need a supplier to contract manufacture
Purchase (order) Data
that represents what you have bought from suppliers, vendors, and service providers
Invoice & Billing Data
that represents what suppliers bill you for the goods and services you order, regular service/utility/rental payments, and other external payments requested by third parties
AP Data
that represents what Finance actually paid
Inventory Data
that represents what the organization actually received, and what it actually sold
Carrier Data
what carriers are available to bring the organization its goods from suppliers and then transport the organization’s products to its end customers, as well as what modes (truck, train, plane, or cargo ship) and types (dry, liquid, frozen, hazardous) of transport they support, the lanes they ship down, and their standard LTL/FTL crate/pallet rates
Risk Data
because you want to understand the inherent risk of a supplier from its operations, finances, regions, and inbound supply chain before you place your survival in their hands
ESG & Carbon/GHG Data
because reporting, and sometimes even reductions, are required in countries where organizations face regulatory limits
Supplier Diversity Data
as you need to support goals, and sometimes hit targets to do business with governments or keep existing customers
Supplier Bid Data
from tenders, RFQs, RFBs, and other RFX activities you send out
Market / Benchmark Data
that you can use to analyze your quotes, spend, risk factors, etc.
Document Data
which represent your contracts, product sheets, sales and marketing artifacts, financial reports, etc.
Organizational Data
employees, org structure, office locations, plant locations, etc.
Application Specific Data
created by other applications in the enterprise application ecosystem that power the business and impact what Procurement needs to do

And, moreover, this data takes multiple formats — numeric, fixed value from a fixed list, free-form text, image, audio file, video file — of various lengths and sizes, and is organized in various ways: sometimes in a record structure, sometimes in a document structure, sometimes in a spreadsheet structure, and sometimes in a table structure. And it’s stored in various encodings (ANSI, UTF-8, UTF-16, etc.) and communicated in various standards (EDI, (c)XML, JSON, etc.).
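A tiny illustration of the encoding and interchange-format spread, using an invented supplier record; the same value arrives in three byte encodings and two interchange formats, and all of it has to land in one canonical representation:

```python
import json

# The same supplier name arriving in three byte encodings
raw_utf8   = "Müller GmbH".encode("utf-8")
raw_utf16  = "Müller GmbH".encode("utf-16")
raw_latin1 = "Müller GmbH".encode("latin-1")   # single-byte, "ANSI"-style

def to_canonical(payload: bytes, encoding: str) -> str:
    """Decode any supported byte encoding into one canonical str."""
    return payload.decode(encoding)

names = {
    to_canonical(raw_utf8, "utf-8"),
    to_canonical(raw_utf16, "utf-16"),
    to_canonical(raw_latin1, "latin-1"),
}
print(names)  # one canonical value, three wire formats

# The same record in two interchange formats, mapped onto one schema
as_json  = json.loads('{"supplier": "Müller GmbH", "id": "12345"}')
as_pairs = dict(kv.split("=") for kv in "supplier=Müller GmbH;id=12345".split(";"))
print(as_json == as_pairs)  # True
```

Get any of the decode steps wrong and “Müller” silently becomes mojibake in one system and matches nothing in the others, which is how federated schemas rot.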

And you need all of this data to do your job. And, moreover, you need to wrangle all of this into a coherent federated schema so that you can do the analysis you need to make the necessary business decisions that Procurement must make to accomplish its task and achieve the business objectives.

But point to one platform that can

  1. Store all this data
  2. Organize all of this data into a federated schema to support holistic analysis
  3. Allow the organizational users to create arbitrary slices (cubes in spend analysis) for analysis
  4. Allow for the creation of arbitrary analysis on those slices
  5. Use the results as baselines for forecasting and predictive analytics
  6. Extract prescriptive advice based on those results

while integrating with the other modules and applications in the larger ecosystem the organization needs, and do it with a flick-of-the-switch or out-of-the-box configuration (engine).

SAP, Oracle, and other databases and ERPs don’t normally make it past 1 with a baseline implementation. With snowflaking and other advanced offerings (that support warehouses, lakes, and lake houses), maybe you get some of level 2. You then need to buy separate BI tools to get part of level 3 and part of level 4. You then need to turn to external tools and inject the right data to get level 5. And level 6 is still few and far between (and AI ain’t gonna help you here for a while because AI is just very advanced algorithms that can, depending on the problem, do millions, billions, and sometimes trillions of calculations on large, very large, and, if available, extremely large data sets to find likely outcomes — but only if there is enough good data to populate the data set [size] it needs — and where the internet is concerned, that’s usually not the case and the old adage of “Garbage In, Garbage Out” applies here).
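Level 3’s “arbitrary slices” is, at its core, grouped aggregation over user-chosen dimensions. A minimal pure-Python sketch on invented spend rows (a real spend cube adds hierarchies, mappings, and enrichment on top of exactly this operation):

```python
from collections import defaultdict

# Invented spend records; real cubes are built from AP/PO/invoice data
spend = [
    {"category": "IT",  "supplier": "Acme",   "region": "EU", "amount": 120.0},
    {"category": "IT",  "supplier": "Globex", "region": "NA", "amount": 80.0},
    {"category": "MRO", "supplier": "Acme",   "region": "EU", "amount": 40.0},
    {"category": "IT",  "supplier": "Acme",   "region": "NA", "amount": 60.0},
]

def cube(rows, *dims):
    """Aggregate amounts along an arbitrary slice of dimensions."""
    totals = defaultdict(float)
    for row in rows:
        totals[tuple(row[d] for d in dims)] += row["amount"]
    return dict(totals)

print(cube(spend, "category"))            # {('IT',): 260.0, ('MRO',): 40.0}
print(cube(spend, "category", "region"))  # spend by category x region
```

Levels 4 through 6 then layer analyses, forecasts, and recommendations on top of slices like these, which is precisely where the single-platform story breaks down today.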

But you need this in your future “platform” (ecosystem), and you will only get this if you have a good data foundation that captures all of the data elements above as well as providing a data foundation to enable the six (6) levels of capability that an organization will require at a minimum.