Category Archives: SaaS

Tealbook: Laying the Groundwork for the Supplier Data Foundations

It wasn’t that long ago that we asked if you had a data foundation. A procurement management platform, should you be lucky enough to get one (and it’s much more than a suite), generally only supports the data Procurement needs to function, not the rest of the organization. Furthermore, when you look across the Source-to-Pay and Supply Chain spectrums, there are a lot of different applications supporting a lot of different processes, each with different data requirements that must be maintained as different data types in different encoding formats.

Moreover, as we noted in the aforementioned post, it’s rare to find MDM capability that even supports procurement. This is because most suites are built on transactions, most supplier networks on relational supplier data records, and most contract systems on documents with simple, hierarchical metadata indexes. But you also need models, meta-models, semi-structured data, unstructured data, and media support. And more. The need is broad, and even if you restrict the need to supplier data, it’s quite broad.

As you will soon gather from our ongoing Source-to-Pay+ is Extensive series, in which we just started tackling Supplier Management in Part XV, the supplier data an organization needs is extremely varied and extensive. Given that Supplier Management is a CORNED QUIP mash with ten (10) major areas of functionality, not counting broader enterprise needs around ESG, innovation, product design / manufacturing management, and other needs tied to operations management, engineering, and enterprise risk management, among other functions, it’s easy to see just how difficult even Supplier Master Data Management can be.

Considering that not a single Supplier Management solution vendor (as you will come to understand as we progress through the Source-to-Pay+ is Extensive series) covers all of the basic functions we’re outlining, it’s obvious that not a single vendor can effectively do Supplier Master Data Management today. However, Tealbook, which has realized this since their inception, is aiming to be the first to fix this problem. As of the first release of their open API next month, they are transitioning to a Supplier Data Platform and will no longer focus on being just a supplier discovery platform or diversity data enrichment platform. (They will still offer those services, and will be upgrading them in Q4 with general release expected by the end of 2023, but their primary focus will be on the supplier data foundation that enables this.)

This is significant, and illustrates how far they’ve come in the nine (9) years since their founding, when their original focus was on building a community supplier intelligence platform that was reliable, scalable, extensible, and appropriate for new supplier discovery (via a large database of verified suppliers with community reviews). From these humble beginnings, where they didn’t even have a million suppliers in their platform after their third year of existence, they grew into the largest supplier network out there, with over 5 Million detailed supplier profiles, integrated with the largest S2P suites (Ariba, GEP, Ivalua, Jaggaer, and Workday, to name a few), and powering some of the largest organizations on the planet. As part of this, they can take in an organization’s entire supplier data ecosystem, transform it into their standard formats, match it to their records, verify or correct existing data, and then enrich the organization’s supplier records before sending them back. In addition, they integrate with a multitude of BI tools, databases / lakes / warehouses (including ERPs), digital platforms, and so on.

To summarize, that’s a ten-fold increase in suppliers and an explosion in global utilization and usage. At the same time, the platform has been augmented with over 2.3M supplier certifications, global diversity data, and the ability to track an organization’s tier 2 supplier diversity data. Quite impressive.

And while this meets most of an organization’s discovery needs, Tealbook knew that it didn’t meet all of an organization’s supplier data needs, especially when you think about all of the regulatory, financial, compliance, performance, sustainability, risk, contract, product/service, relationship, quality, and enablement/innovation data an organization needs to maintain on a supplier. As a result, they have been aggressively working on two key pieces of functionality: an extended universal supplier profile and a fully open, extensible API that an organization can use to do supplier master data management across its enterprise with the Tealbook Supplier Data Platform. An organization can use the Tealbook Supplier Data Platform to classify, cleanse, and enrich supplier records; augment those records with third party data for sustainability, compliance, and risk; find new suppliers in the network; and so on.

In short, Tealbook is on a mission to be the organization’s trusted supplier data source and to constantly improve their data offering with their own ML/AI-enabled technology that monitors over 400M+ public websites for supplier-related data (supplier web sites, business registries, certification providers, supplier data providers, etc.), maintains data provenance (when the data was last updated, and by what or by whom), and provides trust scores (in their proprietary framework that indicates Tealbook’s confidence in accuracy and correctness).

The real mission begins next month when they release their new Open API that will allow an organization to integrate, and interact with, Tealbook the way it needs to across its enterprise applications. Congruent with this release, they will also start releasing their enrichment data-packs that will, within the next year, allow the organization to plug-and-play the data they need to confirm firmographics, contact channels and key information, diversity, supplier offerings, financials, certifications, and basic risk data (which Tealbook will offer through partnerships with specialty supplier data providers, giving an organization a one-stop shop vs. having to license with multiple providers separately to build its 360-degree supplier profile).
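To make the data-pack idea concrete, here is a minimal sketch of what consuming such an enrichment data-pack could look like on the buyer’s side; the field names, trust-score scale, and merge rule are illustrative assumptions on our part, not Tealbook’s published schema or API.

```python
# Hypothetical shapes for illustration only: Tealbook's actual schema,
# trust-score scale, and API are not reproduced here.

def enrich_record(record, data_pack, min_trust=0.7):
    """Merge an enrichment data-pack into a supplier record, keeping
    per-field provenance and only accepting fields whose trust score
    meets the organization's threshold."""
    enriched = dict(record)
    provenance = dict(enriched.get("_provenance", {}))
    for field_name, payload in data_pack.items():
        if payload["trust_score"] >= min_trust:
            enriched[field_name] = payload["value"]
            provenance[field_name] = {
                "source": payload["source"],        # where the datum came from
                "updated": payload["updated"],      # when it was last verified
                "trust_score": payload["trust_score"],
            }
    enriched["_provenance"] = provenance
    return enriched
```

The key point is the provenance record: every accepted field carries its source, freshness, and confidence, which is exactly what lets a downstream system decide whether to trust the enriched value over its own.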

Then, over the next year, Tealbook will enhance the usability of their data platform by first rebuilding their diversity and discovery applications and then building out new applications around sustainability, risk, benchmarking, and other areas that their customers would rather a data platform handle for them.

Do You Have a Procurement FocalPoint?

Last month we asked “where’s the procurement management platform?”, primarily because we now have a plethora of procurement-centric applications but very little integration between them. However, once you tackle that issue, you have the secondary issue of all these applications, but often no clear starting point and, even worse, no way for an average organizational employee outside of Procurement to interact with Procurement beyond an inbound email of “please get this for me” and the eventual, possibly many months later, outbound email of “we got it, it’s finally here … it will be on your desk tomorrow”.

This is a big problem, even in organizations that supposedly have market-leading source-to-pay suites. Yes, all the modules are connected, and the integrated workflow will guide a buyer from project selection to sourcing to supplier selection to award to contracting to supplier onboarding to order creation to receipt creation to invoice confirmation and payment approval, looping back to order creation until pending contract expiration, when the contract can be renewed, renegotiated, or revoked and the sourcing process started all over again. This is great, but only for predefined sourcing projects on encoded categories!

It’s not great for any category not already encoded and typically strategically sourced, and it’s atrocious as new product and service needs arise within the organization, as new hires need new assets for onboarding, as customer requirements change and the organization needs to adapt rapidly and source new products or services to meet new, or one-off, needs. There’s no intake, and no collaboration with the organizational stakeholders Procurement is there to serve.

And that’s a huge problem. That’s why you’re seeing a few companies talking about “intake”, “orchestration”, or “PPM” (which stands for either Procurement Performance Management or Procurement Process Management, depending on who is talking about it) because, without this capability, a Procurement platform will never be complete or support the organization.

Following the introductory post on the procurement management platform, we lamented and celebrated that Per Angusta was going away and being integrated into SpendHQ as the foundations of a new PPM. It’s a great start, but today the focus of SpendHQ is on managing the existing workflows and creating visibility into existing projects — and savings tracking is limited to integrated projects. However, when it comes to intake support and project tracking for arbitrary organizational needs, that’s not there yet.

However, there are other players which are strong here, and one of those players is Focal Point, which was built from the ground up as an intake-to-orchestrate solution that is capable of

  • capturing all organizational requests for Procurement and Procurement-related activities,
  • assigning those requests to customizable workflows using either built-in automation rules or manual (re-)assignment,
  • allowing an end-user to see exactly where any request is in the process at any time,
  • allowing for in-platform communication between the stakeholder and Procurement,
  • integrating with any external tool through jump-out/jump-in to support the process, and
  • supporting whatever approval chains are required, among other intake and orchestration functions.

The tool was built to solve the most significant problem the founders repeatedly saw as CPOs and implementers of various leading sourcing solutions: little to no intake management or general-purpose procurement process orchestration. And it does it incredibly well. The visual workflow construction is extremely usable, and the wizards that power the process, form construction, and form completion automatically extend and compress the form as needed based upon user selections and actual needs, making for a very smooth flow.

All of the workflow elements and steps support deep conditional logic, allowing the organization to create as many branches as it needs while ensuring that the end user making a request, and the buyer assigned to deal with that request, only see the relevant paths and only need to enter the relevant information to be guided by the platform.
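As a rough sketch of how such conditional visibility can work (the step names and rule format below are invented for illustration; Focal Point’s actual workflow model is not public):

```python
# Invented example model: each step may carry a "when" rule, a dict of
# request attributes that must all match for the step to apply.

def visible_steps(workflow, request):
    """Return only the steps whose conditions match this request, so the
    requester and buyer see just the relevant path."""
    visible = []
    for step in workflow:
        condition = step.get("when")  # None means the step always applies
        if condition is None or all(request.get(k) == v for k, v in condition.items()):
            visible.append(step["name"])
    return visible

intake_workflow = [
    {"name": "Capture request"},
    {"name": "Legal review", "when": {"category": "software", "has_contract": True}},
    {"name": "Security assessment", "when": {"category": "software"}},
    {"name": "Three-bid sourcing", "when": {"spend_band": "over_50k"}},
    {"name": "Approval and handoff"},
]
```

Each request sees only the branch that applies to it, which is what keeps an intake form short even when the underlying workflow covers every category in the organization.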

There can be as many intake types, with associated branching workflows, as the organization needs. Each can have the appropriate level of automation and, most importantly, as many milestones as needed, allowing the requester to easily see at a high level where the process is and then, if interested, dive into the detailed workflow within the current milestone for a more accurate picture.

The only thing the platform doesn’t do is actual sourcing, supplier management, contract management, analytics, procurement, or payment management. It expects the organization to have tools for this already and integrates into the appropriate modules in those tools as needed to accomplish the workflow in progress.

In terms of getting up and running, Focal Point typically has a fully fleshed out, functioning, and integrated instance that captures all of the organization’s workflows live within 90 days, even if the organization is a multi-(multi-)billion dollar organization, which is Focal Point’s target market size. This is because it’s typically the 1B+ organizations that have a lot of tools, and a lot of stakeholders, but no way to manage those tools effectively or to give stakeholders any visibility into where their requests are and how their spending is being managed.

The reason it typically takes 90 days is that, unlike many sourcing suite providers, who just flip a virtual switch, drop an empty SaaS suite on you, and say “good luck”, Focal Point fully configures the platform as part of their statement of work. This includes:

  • working with the organization to understand all of their requirements and current workflows
  • encoding all of those intake workflows with milestones, task-breakdowns, and existing platform jump-outs
  • integrating any existing procurement system you need to complete the workflow
  • creating a UAT instance and allowing for at least one iteration and approval before it goes live
  • training your team on how to use the system and maintain the workflows

So even though Focal Point has obviously achieved efficiency in workflow creation and customization, external platform integration, and implementation project management, it takes time for an average organization to collect and document its existing processes and requirements, and for Focal Point (or a third party consulting organization, if the customer prefers) to fill in the gaps, so it’s not possible to get much below 90 days. But consider that they have fully implemented a 10B+ organization in that timeframe, when some major suite players working with a consulting partner will take 18 months to fully implement their solutions. That’s incredible time to value, and it’s generated from day one: every request flows into the tool; gets tracked, assigned, and executed; and stakeholders have full visibility into the process and can intervene if necessary.

Focal Point solves the problem it was built to solve, fills the hole the vast majority of sourcing and procurement solutions leave, and does it incredibly well. If any part of this post resonates with you, the doctor encourages you to check them out.

Services Struggles? Get Zivio. It’s Apropos!

In Friday’s post we told you not to use a sub-standard sourcing solution for services sourcing because, in the end, you won’t realize the value you expect or collect the data you need to make better awards in the process. And we know that left you with questions because all the big platforms you know don’t do services, or at least do not do services well.

So, today, we provide one answer to that problem: Zivio, a relatively new, Best-of-Breed player that specializes in complex services sourcing and that, through its complete, open APIs, can integrate into any existing platform or ecosystem with open APIs and exchange all of the data it captures and generates.

Zivio was designed to manage the entire process from initial project creation through supplier onboarding, selection, and approval to milestone tracking and management to close-out, final bill-out and reporting. Each step of the process is designed to be easy to use and efficient and makes use of any existing templates and knowledge in the tool, using AI where (and only where) appropriate.

Their new project definition wizard, ScopeIQ, is designed for quick Statement of Work (SoW) creation. A requisitioner enters a few short sentences with the most relevant keywords; the solution suggests a title based upon similar past projects, which the user can accept or edit, and then, using past project descriptions (from the company and publicly available datasets), uses AI to assemble a project description and statement of work that the user can review and edit. If the organization does a number of similar projects, it works exceptionally well, and the starting statements of work and project descriptions are quite good and often need little editing (comparatively speaking).
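Purely as an illustration of the “suggest from similar past projects” idea (ScopeIQ’s actual matching logic is not public and almost certainly goes well beyond this), a simple keyword-overlap suggester might look like:

```python
from typing import Optional

# Illustrative only: real matching would use richer similarity measures,
# but the shape of the idea -- rank past projects by overlap with the
# requisitioner's keywords and propose the best title -- is the same.

def suggest_title(keywords, past_projects) -> Optional[str]:
    """Return the title of the past project whose keyword set overlaps
    most with the requisitioner's input, or None if nothing overlaps."""
    best_title, best_overlap = None, 0
    for project in past_projects:
        overlap = len(keywords & project["keywords"])
        if overlap > best_overlap:
            best_title, best_overlap = project["title"], overlap
    return best_title
```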

Once the user has accepted the SoW, they can complete the project definition by defining the appropriate metadata (category, subcategory, budget, milestones, project release date, bid closing date, award criteria, etc.) and send the project out for bid. The system can automatically identify the best suppliers based on project categorization, milestones, and past performance on similar projects, and the user can select these suppliers and invite them to bid with just one click.

When the bids are submitted, the users can see an overarching summary and select a sub-set for side-by-side comparison. At any time before award, the buyer can easily modify the project description and add or modify milestones. Milestones can also be added and modified after award with the right approvals and agreement from both parties.

The product has good supplier management, performance management, and approval management, especially around supplier onboarding, milestone approvals, and payment approvals. By default, the platform tracks on-time performance, operational best practice, and on-budget metrics by supplier, but can be configured on implementation to track more. It also computes an overall score for easy ranking purposes (which can also be customized on implementation). When it comes to reports, there are a large number of project, milestone, supplier, and financial reports out-of-the-box, and more can be easily configured on implementation. Plus, as the platform was built to integrate with your existing S2P/ERP platform / ecosystem, it can push all of the data out to an external tool where you can do additional reporting and analysis.
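For illustration, a configurable overall score over those default metrics could be as simple as a weighted average; the weights and the 0-100 scale below are our own assumptions, not Zivio’s actual formula, which is configured on implementation.

```python
# Assumed defaults for illustration only; a real deployment would set
# these weights (and possibly add metrics) during implementation.
DEFAULT_WEIGHTS = {"on_time": 0.4, "on_budget": 0.4, "best_practice": 0.2}

def overall_score(metrics, weights=DEFAULT_WEIGHTS):
    """Weighted average of per-metric scores (each 0-100), normalised by
    the weights actually present so partial metric sets still work."""
    total_weight = sum(w for m, w in weights.items() if m in metrics)
    if total_weight == 0:
        return 0.0
    return sum(metrics[m] * w for m, w in weights.items() if m in metrics) / total_weight
```

Normalising by the supplied weights means a supplier missing one metric is scored fairly on the rest instead of being silently penalised.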

But the best part of the tool is the ability to define complex services projects to any level of detail needed, with as many milestones, tasks, and approvals as required, customized for the project, with breakdown costing and interim payments as needed; to log into the system at any time and see where a project is, where all projects with a supplier stand, or where all suppliers stand on a set of related projects; and to quickly bring up relevant summary reports at the appropriate level of detail at any time. It’s project-based sourcing, and it works great, especially once you’ve defined your first few projects and the system can use (and learn from) those templates to suggest SoWs, suppliers, and steps for you. It’s what general services sourcing should be.

Now, before we sign off, we should make it clear that we are not saying that Zivio is the only solution (especially as we’re sure we will see more in the months and years ahead as more people realize how critical proper services sourcing is), or the solution for every business (as there are custom solutions for Legal, Marketing, and SaaS that we will be covering in our Source-to-Pay+ is Extensive series). But Zivio is a solid general-purpose solution for an organization with a wide array of services needs, and it should be considered if the organization does not have a services sourcing solution. It could be the right solution for your organization and, if it is, given the typical overspend in services categories, you should have been using it yesterday!

The Platform is Becoming Ever More Important …

In Monday’s post, we quoted an excerpt from Magnus’s interview on Spend Matters where he noted how important it is to start with the most important capabilities / modules and build out towards a full S2P suite (because he knows as well as the doctor does that a big bang approach typically results in a big explosive bang that usually takes your money and credibility with it). If you examine this closely, you see that you need to select not only the right starting solution, but a starting solution that can grow.

This requires a platform approach from the get-go. It doesn’t need to underlie the starting modules; it doesn’t need to underlie the ending modules; it just needs to underlie the suite you want to put together. It can be part of an application you already have or a third party application you buy later. But it has to exist.

The simple fact of the matter is that you can’t put together an integrated solution that supports an integrated source-to-pay workflow if you don’t have a platform to build it on. And you can’t patch it together with endpoint integrations using whatever APIs are available; that just enables you to push data from one point into another, or pull it from one point to another. That’s not an integrated solution, which requires an integrated workflow; it’s just data integration. And while that is a start, it’s not enough, especially when there is no one-size-fits-all category strategy or source-to-contract or procure-to-pay workflow, even for the smallest of organizations with the simplest of needs.

So before you select any solution, the first thing you have to make sure of is that it is built on, or works with, a true platform … otherwise, you may find as you undertake your S2P journey that a component you selected early does not fit the bill and you have to repeat steps … which is something you really can’t afford to do.

Get Your Head Out of the Clouds!

SaaS is great, but is cloud delivery great?

Sure, it’s convenient not to have to worry about where the servers are, where the backups are, and whether more CPUs have to be spun up, more memory needs to be added, or more bandwidth is needed and it’s time to lay more pipe.

However, sometimes this lack of worrying leads to an unexpectedly high invoice when your user base decides to adopt the solution as part of their daily job, spins up a large number of optimization and predictive analytics scenarios, and spikes CPU usage from 2 server days to 30 server days, resulting in a 15-fold bill increase overnight. (Whereas hosting on your own rack has a fixed, predictable cost.)

But this isn’t the real problem. (You could always have set up alerts or limits and prevented this from happening had you thought ahead.) The real problem is regulatory compliance and the massive fines that could be headed your way if you don’t know where your data is and cannot confirm you are 100% in compliance with every regulation that impacts you.

For example, EU and Canada privacy regulations limit where data on their citizens can live and what security protocols must be in place. And even if this is an S2P system, which is focused on corporations and not people, you still have contact data, which is data on people. Now, by virtue of their employment, these people agree to make their employment (contact) information available, so you’re okay … until they are no longer employed. Then, if any of that data was personal (such as a cell phone number or local delivery address), it may have to be removed.

But more importantly, with GDPR coming into effect May 25, you need to be able to provide any EU citizen, regardless of where they are in the world and where you are in the world, with any and all information you have on them — and do so in a reasonable timeframe. Failure to do so can result in a fine of up to €20 Million or 4% of global turnover. For ONE violation. And, if you no longer have a legal right to keep that data, you have to be able to delete all of the data — including all instances across all systems and all (backup) copies. If you don’t even know where the data is, how can you ensure this happens? The answer is, you can’t.
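The crux is knowing every place the data lives. As a minimal sketch (store names and structure invented for illustration), a registry spanning all data stores is what makes both a subject-access response and a provably complete erasure possible:

```python
# Illustrative sketch: a central registry of every store holding
# personal data, so access and erasure requests can fan out everywhere.

class DataStoreRegistry:
    def __init__(self):
        self._stores = {}  # store name -> {subject_id: personal data}

    def register(self, name):
        self._stores[name] = {}

    def write(self, store, subject_id, data):
        self._stores[store][subject_id] = data

    def access_request(self, subject_id):
        """Everything held on a subject, per store (GDPR Art. 15 access)."""
        return {name: store[subject_id]
                for name, store in self._stores.items() if subject_id in store}

    def erase(self, subject_id):
        """Delete the subject everywhere (GDPR Art. 17 erasure);
        returns the number of stores a record was removed from."""
        removed = 0
        for store in self._stores.values():
            if store.pop(subject_id, None) is not None:
                removed += 1
        return removed
```

Without a registry like this covering every system and backup copy, an erasure request can never be shown to be complete, which is exactly the compliance gap described above.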

Plus, not every country will permit sensitive or secure data to be stored just anywhere. So, if you want a client that works as a defense contractor, then even if your software passes the highest security standards tests, that doesn’t mean the client can host in the cloud.

With all of this uncertainty and chaos, the SaaS of the future is going to be a blend of an (in-house) ASP and a provider-managed software offering, where the application and databases are housed in racks, in a location selected by the provider, in a dedicated hardware environment, but the software, managed by the vendor, runs in virtual machines and updates via vendor “pushes”, with the vendor able to shut down and restart the entire virtual machine if a reboot is necessary. This method will also permit the organization to have on-site QA of new release functionality if it likes, as that’s just another VM.

Just like your OS can auto-update on schedule or reboot, your S2P application will auto-update in a similar fashion. It will register a new update and schedule it for the next defined update cycle. Prevent users from logging in 15 minutes prior. Force users to start logging off 5 minutes before. Shut down. Install the updates. Reboot if necessary. Restart. And the new version will be ready to go. If there are any issues, an alert will be sent to the provider, who will be able to log in to the instance, and even the VM, and fix it as appropriate.
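The sequence above maps naturally to a small state machine; the 15- and 5-minute cutoffs come from the description above, while the phase names and function shape are an illustrative sketch of ours:

```python
# Cutoffs taken from the update sequence described in the text.
LOCKOUT_MINS = 15   # no new logins this close to the update window
LOGOFF_MINS = 5     # forced log-off begins here

def update_phase(minutes_until_update):
    """Map time remaining before a scheduled update to the platform state."""
    if minutes_until_update > LOCKOUT_MINS:
        return "normal"          # business as usual
    if minutes_until_update > LOGOFF_MINS:
        return "login_locked"    # existing sessions continue, no new logins
    if minutes_until_update > 0:
        return "forced_logoff"   # remaining sessions being drained
    return "updating"            # shutdown, install, reboot, restart
```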

While it’s not the one-instance (with segregated databases) SaaS utopia, it’s the real-world solution for a changing regulatory and compliance landscape, and it will also comfort security freaks and control freaks. So, head-in-the-cloud vendors, get ready. It’s coming.