Spendata: The Power Tool for the Power Spend Analyst — Now Usable By Apprentices as Well!

We haven’t covered Spendata much on Sourcing Innovation (SI), as it was only founded in 2015, and the doctor did a deep dive review on Spend Matters in 2018 when it launched (Part I and Part II, ContentHub subscription required), as well as a brief update here on SI where we said Don’t Throw Away that Old Spend Cube, Spendata Will Recover It For You! the doctor did pen a 2020 follow-up on Spend Matters on how Spendata was Rewriting Spend Analysis from the Ground Up, and that was the last major coverage. And even though the media has been a bit quiet, Spendata has been working as diligently on platform improvement over the last four years as it did over the first four, and just released Version 2.2 (with a few new enhancements in the queue that it will roll out later this year). (Unlike some players that like to tack on a whole new version number after each minor update or mini-module inclusion, Spendata only does a major version update when it does considerable revamping and expansion, recognizing that, in reality, most vendors only rewrite their solution from the ground up to be better, faster, and more powerful once a decade, and every other release is just an iteration of, and incremental improvement on, the last one.)

So what’s new in Spendata V 2.2? A fair amount, but before we get to that, let’s quickly catch you up (and refer you to the linked articles above for a deep dive).

Spendata was built upon a post-modern view of spend analysis in which a practitioner should be able to take immediate action on any data she can get her hands on, whenever she can get her hands on it, and derive whatever insights she can for process (or spend) improvement. You never have perfect data, and when you have a dozen different systems to integrate data from, multiple data formats to map, millions of records to classify, cleanse, and enrich, and third party data feeds to integrate, waiting until Duey, Clutterbuck, and Howell1 get all your records in order before you even run your first report will take many months, if not a year. And during that year questing for the mythical perfect cube, you will continue to lose 5% to process waste, abuse, and fraud, plus 3% to 15% (or more) across the spend categories where you don’t have good management, even though you could stem the flow simply by identifying them and putting a few simple rules or processes in place. You can identify some of these opportunities simply by analyzing one system, one category, and one set of suppliers, and then moving on to the next one. And, in the process, Spendata automatically creates and maintains the underlying schema as you slowly build up the dimensions; the mapping, cleansing, and categorization rules; and the basic reports and metrics you need to monitor spend and processes. Maybe you can only do 60% to 80% piecemeal, but during that “piecemeal year” you can identify over half of your process and cost savings opportunities and start saving now, versus waiting a year to even start the effort. With Spendata, when it comes to spend (related) data analysis, no adage is more true than “don’t put off until tomorrow what you can do today” because, especially when you start, you don’t need complete or perfect data … you’d be amazed how much insight you can get with 90% in a system or category, and then, if the data is inconclusive, you can keep drilling and mapping until you get into the 95% to 98% accuracy range.

Spendata was also designed from the ground up to run locally and entirely in the browser, because no one wants to wait for an overburdened server across a slow internet connection, and to do so in real time … and by that we mean do real analysis in real time. Spendata can process millions of records a minute in the browser, which allows for real-time data loads, cube definitions, category re-mappings, dynamically derived dimensions, roll-ups, and drill-downs on any well-defined data set of interest. (Since most analysis should be at the department, category, or regional level, over a relevant time span, and that time span should not include every transaction for the last 10 years, because beyond a few years it’s only the quarter-over-quarter or year-over-year totals that remain relevant, most data sets for meaningful analysis, even for large companies, are under a few million transactions.) The goal was to overcome the limitations of the first two generations of spend analysis solutions, where the user was limited to drilling around in, and deriving summaries of, fixed (R)OLAP cubes, and instead allow a user to define the segmentations they want, the way they want, on existing, newly loaded, or enriched federated data in real time. Analysis is NOT a fixed report; it is the ability to look at data in various ways until you uncover an inefficiency or an opportunity. (Nor is it simply throwing a suite of AI tools against a data set — these tools can discover patterns and outliers, but still require a human to judge whether a process improvement can be made or a better contract secured.)

Spendata was built as a third generation spend analysis solution where

  • data can be loaded and processed at any point of the analysis
  • the schema is developed and modified on the fly
  • derived dimensions can be created instantly based on any combination of raw and previously defined derived dimensions
  • additional datasets from internal or external sources can be loaded as their own cubes, which can then be federated and (jointly) drilled for additional insight
  • new dimensions can be built and mapped across these federations that allow for meaningful linkages (such as commodities to cost drivers, savings results to contracts and purchasing projects, opportunities by size, complexity, or ABC analysis, etc.)
  • all existing objects — dimensions, dashboards, views (think dynamic reports that update with the data), and even workspaces — can be cloned for easy experimentation
  • filters, which can define views, are their own objects that can be managed independently and, through Spendata’s novel filter coin implementation, dragged between objects (and even used for easy multi-dimensional mapping)
  • all derivations are defined by rules and formulas, and are automatically rederived when any of the underlying data changes (see the sketch after this list)
  • cubes can be defined as instances of other cubes, and automatically update when the source cube updates
  • infinite scrolling crosstabs with easy Excel workbook generation on any view and data subset for those who insist on looking at the data old school (as well as “walk downs” from a high-level “view” to a low-level drill-down that demonstrate precisely how an insight was found)
  • functional widgets which are not just static or semi-dynamic reporting views, but programmable containers that can dynamically inject data into pre-defined analysis and dimension derivations that a user can use to generate what-if scenarios and custom views with a few quick clicks of the mouse
  • offline spend analysis is also available, in the browser (cached) or on Electron.js (where the latter is preferred for enterprise data analysis clients)
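
To make the rule-based auto-rederivation concrete, here is a minimal TypeScript sketch. Every name in it is illustrative (it is emphatically not Spendata’s actual engine or API); it just shows the shape of the idea: a derived dimension is defined by rules, and any data load automatically triggers rederivation of everything that depends on it.

```typescript
// Minimal sketch of rule-based derived dimensions that rederive on data
// change. All names are illustrative, not Spendata's actual API.

type Txn = { vendor: string; amount: number; category?: string };
type Rule = { match: (t: Txn) => boolean; category: string };

class DerivedDimension {
  constructor(private rules: Rule[]) {}

  // Apply the first matching rule; fall back to "Unmapped".
  derive(t: Txn): string {
    const rule = this.rules.find((r) => r.match(t));
    return rule ? rule.category : "Unmapped";
  }
}

class Cube {
  private txns: Txn[] = [];
  constructor(private dim: DerivedDimension) {}

  // Any data load automatically triggers rederivation.
  load(newTxns: Txn[]): void {
    this.txns.push(...newTxns);
    for (const t of this.txns) t.category = this.dim.derive(t);
  }

  rollup(): Map<string, number> {
    const totals = new Map<string, number>();
    for (const t of this.txns)
      totals.set(t.category!, (totals.get(t.category!) ?? 0) + t.amount);
    return totals;
  }
}

const cube = new Cube(
  new DerivedDimension([
    { match: (t) => /acme/i.test(t.vendor), category: "Office Supplies" },
    { match: (t) => t.amount > 50000, category: "Capital" },
  ]),
);
cube.load([{ vendor: "Acme Corp", amount: 1200 }]);
cube.load([{ vendor: "BigIron", amount: 75000 }]); // rederives automatically
console.log(cube.rollup()); // Map { "Office Supplies" => 1200, "Capital" => 75000 }
```

In a real platform the rules, like the data, sit in a dependency graph, so an edit rederives only what depends on it; the sketch recomputes everything for brevity.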

Furthermore, with reference to all of the above, analyst changes to the workspace, including new datasets, new dashboards and views, new dimensions, and so on are preserved across refresh, which is Spendata’s “inheritance” capability that allows individual analysts to create their own analyses and have them automatically updated with new data, without losing their work …

… and this was all in the initial release. (Which, FYI, no other vendor has yet caught up to. NONE of them have full inheritance or Spendata’s security model. And this was the foundation for all of the advanced features Spendata has been building since its release six years ago.)

After that, as per our updates in 2018 and 2020, Spendata extended their platform with:

  • Unparalleled Security — as the Spendata server is designed to download ONLY the application to the browser, or Spendata’s demo cubes and knowledge bases, it has no access to your enterprise data;
  • Cube subclassing & auto-rationalization — power users can securely set up derived cubes and sub-cubes off of the organizational master data cubes for the different types of organizational analysis that are required; each of these sub-cubes can make changes to the default schema/taxonomy, mappings, and (derived) dimensions, and all auto-update when the master cube, or any parent cube in the hierarchy, is updated
  • AI-Based Mapping Rule Identification from Cube Reverse Engineering — Spendata can analyze your current cube (or even a report of vendor by commodity from your old consultant) and derive the rules that were used for mapping, which you can accept, edit, or reject — we all know black-box mapping doesn’t work (no matter how much retraining you do, as every “fix” suddenly causes an older transaction to be misclassified), but generating the right rules, which can be human-understood and human-maintained, guarantees 100% correct classification 100% of the time
  • API access to all functions, including creating and building workspaces, adding datasets, building dimensions, filtering, and data export. All Spendata functions are scriptable and automatable (as opposed to BI tools with limited or nonexistent API support for key functions around building, distributing, and maintaining cubes); a hedged sketch of what such scripting can look like follows this list
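
As a rough illustration of what “everything is scriptable” buys you, here is a sketch of an end-to-end script (workspace, dataset, dimension, export). The interface and function names are hypothetical stand-ins invented for this example, not Spendata’s actual API.

```typescript
// Hypothetical scripting sketch only: these names are illustrative and are
// NOT Spendata's actual API. It shows the kind of end-to-end automation
// (workspace -> dataset -> dimension -> export) described above.

interface AnalysisApi {
  createWorkspace(name: string): Promise<string>;            // returns workspace id
  addDataset(ws: string, csvPath: string): Promise<string>;  // returns cube id
  buildDimension(cube: string, name: string, rule: string): Promise<void>;
  exportView(cube: string, view: string, outPath: string): Promise<void>;
}

// A nightly job could rebuild and distribute a cube with no human clicks.
async function refreshMonthlyCube(api: AnalysisApi): Promise<void> {
  const ws = await api.createWorkspace("AP-2024-Q2");
  const cube = await api.addDataset(ws, "./ap_transactions.csv");
  // A rule-defined dimension, rederived automatically as data changes.
  await api.buildDimension(cube, "Region", "map(costCenter, regionTable)");
  await api.exportView(cube, "SpendByRegion", "./spend_by_region.xlsx");
}
```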

However, as we noted in our introduction, even though this put Spendata leagues beyond the competition (as we still haven’t seen another solution with this level of security; cube subclassing with full inheritance; dynamic workspace, cube, and view creation; etc.), they didn’t stop there. In the rest of this article, we’ll discuss what’s new from the viewpoint of Spendata Competitors:

Spendata Competitors: 7 Things I Hate About You

Cue the Miley Cyrus, because if competitors weren’t scared of Spendata before, if they understand ANY of this, they’ll be scared now (as Spendata is a literal wrecking ball in analytic power). Spendata is now incredibly close to negating entire product lines of not just its competitors, but some of the biggest software enterprises on the planet, and 3.0 may trigger a seismic shift in how people define entire classes of applications. But that’s a post for a later day (though it should cue you up for the post that will follow this one on just precisely what Spendata 2.2 really is and can do for you). For now, we’re just going to discuss seven (7) of the most significant enhancements since our last coverage of Spendata.

Dynamic Mapping

Filters can now be used for mapping — and as these filters update, the mapping updates dynamically. You can reclassify on the fly, in real time, in a derived cube using any filter coin, including one dragged out of a drill-down in a view. Analysis is now a truly continuous process, as you never have to go back and change a rule, reload data, and rebuild a cube to make a correction or see what happens under a reclassification.
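
A minimal sketch of the concept, with illustrative names only (not Spendata’s implementation): the mapping holds references to filter objects rather than frozen rules, so editing a filter reclassifies matching transactions with no rule rewrite, data reload, or cube rebuild.

```typescript
// Sketch of filter-driven mapping: a filter is a first-class object, and a
// mapping points at it, so a filter edit changes classification immediately.
// Names are illustrative, not Spendata's actual API.

type Txn = { description: string; amount: number };

class Filter {
  constructor(public predicate: (t: Txn) => boolean) {}
}

class DynamicMapping {
  // category -> filter "coin"; evaluated lazily, so edits apply on next read
  private entries: Array<[string, Filter]> = [];

  map(category: string, filter: Filter): void {
    this.entries.push([category, filter]);
  }

  categorize(t: Txn): string {
    const hit = this.entries.find(([, f]) => f.predicate(t));
    return hit ? hit[0] : "Unmapped";
  }
}

const laptopFilter = new Filter((t) => /laptop/i.test(t.description));
const mapping = new DynamicMapping();
mapping.map("IT Hardware", laptopFilter);

const txn = { description: "Laptop dock", amount: 240 };
console.log(mapping.categorize(txn)); // "IT Hardware"

// Tighten the filter (as if edited via a dragged filter coin): the mapping
// updates dynamically, with no reload and no cube rebuild.
laptopFilter.predicate = (t) => /laptop(?! dock)/i.test(t.description);
console.log(mapping.categorize(txn)); // "Unmapped"
```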

View-Based Measures

Integrate any rolled-up result back into the base cube, on the base transactions, as a derived dimension. While this could be done using scripts in earlier versions, it required sophisticated coding skills. Now, it’s almost as easy as a drag-and-drop of a filter coin.
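
Conceptually, a view-based measure is a roll-up pushed back down onto the transactions it was derived from. A minimal sketch, with assumed names (not Spendata’s actual mechanism):

```typescript
// Sketch of a view-based measure: roll spend up by category (the "view"),
// then integrate the rolled-up result back onto every base transaction as
// a derived column. Names are illustrative.

type Txn = { category: string; amount: number; categoryTotal?: number; share?: number };

function applyViewMeasure(txns: Txn[]): void {
  // 1. The "view": a rollup of spend by category.
  const totals = new Map<string, number>();
  for (const t of txns)
    totals.set(t.category, (totals.get(t.category) ?? 0) + t.amount);

  // 2. The measure: push the rolled-up totals back onto the transactions.
  for (const t of txns) {
    t.categoryTotal = totals.get(t.category)!;
    t.share = t.amount / t.categoryTotal;
  }
}

const txns: Txn[] = [
  { category: "Travel", amount: 300 },
  { category: "Travel", amount: 700 },
  { category: "IT", amount: 500 },
];
applyViewMeasure(txns);
console.log(txns[0].share); // 0.3, i.e. 30% of all Travel spend
```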

Hierarchical Dashboard Menus

Not only can you organize your dashboards in menus, submenus, and sub-sub menus as needed, but you can easily bookmark drill-downs and add them under a hierarchical menu, which makes it super easy to create point-based walkthroughs that tell a story — and then output them all into a workbook using Spendata’s capability to output any view, dashboard, or entire workspace as desired.

Search via Excel

While Spendata eliminates the need for Excel for data analysis, the reality is that Excel is where most organizational data is (unfortunately) stored, how most data is submitted by vendors to Procurement, and where most Procurement professionals are the most comfortable. Thus, in the latest version of Spendata, you can drag and drop groups of cells from Excel into Spendata, and if you drag and drop them into the search field, it auto-creates a RegEx “OR” that maintains the inputs exactly and finds all matches in the cube you are searching against.
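
The underlying transformation is easy to sketch: escape each pasted cell value so it matches literally, then join the values into an anchored alternation. The code below is an illustrative reconstruction of the idea, not Spendata’s actual implementation:

```typescript
// Turn a block of cells pasted from Excel into an exact-match "OR" regex.
// Illustrative sketch; Spendata's internals may differ.

// Escape regex metacharacters so each cell value is matched literally.
function escapeRegExp(s: string): string {
  return s.replace(/[.*+?^${}()|[\]\\]/g, "\\$&");
}

function cellsToSearchRegex(pasted: string): RegExp {
  const values = pasted
    .split(/[\t\r\n]+/)  // Excel delimits copied cells with tabs and newlines
    .map((v) => v.trim())
    .filter((v) => v.length > 0);
  // Anchored alternation: matches any one of the inputs, exactly.
  return new RegExp(`^(?:${values.map(escapeRegExp).join("|")})$`, "i");
}

const regex = cellsToSearchRegex("ACME Corp.\tAcme (US)\nA.C.M.E. Ltd");
console.log(regex.test("acme (us)"));   // true
console.log(regex.test("ACME Corpse")); // false: the "." was escaped
```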

Perfect Star Schema Output

Even though Spendata can do everything any BI tool on the market can do, the reality is that many executives are used to their pretty PowerBI graphs and charts and want to see their (mostly static) reports in PowerBI. So, in order to appease the consultancies that have to support these executives who are (at least) a generation behind on analytics, Spendata added the ability to output an entire workspace to a perfect star schema (where all keys are unique and numeric) that is so good that many users see PowerBI speed up by a factor of almost 10. (As any analyst forced to use PowerBI will tell you, when you give PowerBI any data that is NOT in a perfect star schema, it may not even be able to load the data, and its ability to work with non-numeric keys at a speed faster than you remember on an 8088 is nonexistent.)
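
For anyone who hasn’t had to build one by hand: a star schema extraction boils down to interning every dimension value behind a unique numeric surrogate key, leaving only numeric keys and measures in the fact table, which is exactly the layout column-store BI engines join fastest. A minimal sketch (illustrative, not Spendata’s exporter):

```typescript
// Flatten a transaction table into a star schema where every key is a
// unique numeric surrogate. Illustrative sketch only.

type Row = { vendor: string; category: string; amount: number };

function toStarSchema(rows: Row[]) {
  const vendorIds = new Map<string, number>();
  const categoryIds = new Map<string, number>();
  const intern = (m: Map<string, number>, key: string): number => {
    if (!m.has(key)) m.set(key, m.size + 1); // unique, numeric surrogate key
    return m.get(key)!;
  };

  // Fact table: only numeric foreign keys and measures.
  const facts = rows.map((r) => ({
    vendorId: intern(vendorIds, r.vendor),
    categoryId: intern(categoryIds, r.category),
    amount: r.amount,
  }));

  // One dimension table per attribute, keyed by the surrogate id.
  const dim = (m: Map<string, number>) =>
    [...m].map(([name, id]) => ({ id, name }));

  return { facts, vendors: dim(vendorIds), categories: dim(categoryIds) };
}

console.log(
  toStarSchema([
    { vendor: "Acme", category: "MRO", amount: 100 },
    { vendor: "Acme", category: "IT", amount: 250 },
  ]),
);
```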

Power Tags

You might be thinking “tags, so what?” And if you are equating tags with a hashtag or a dynamically defined user attribute, then we understand. However, Spendata has completely redefined what a tag is and what you can do with it. The best way to understand it is as a Microsoft Excel cell on steroids. It can be a label. It can be a replica of a value in any view (that dynamically updates if the field in the view updates). It can be a button that links to another dashboard (or a bookmark to any drill-down filtered view in that dashboard). Or all of this. Or, in the next Spendata release, a value that forms the foundation for new derivations and measures in the workspace, just as you can reference an arbitrary cell in an Excel function. In fact, using tags together with the seventh new capability of Spendata (embedded applications, below), you can already build, usually in hours (at most), the kind of sophisticated on-the-fly what-if analyses that many providers have to custom build in their core solutions (taking weeks, if not months, to do so).
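
A rough way to picture the “Excel cell on steroids” idea in code (purely illustrative; Spendata’s tag model is richer than this): a tag can be a static label, a live mirror of a view value, or a navigation link, and a mirror tag always renders the current value, like an Excel cell reference.

```typescript
// Sketch of a "tag" as a reactive cell. Illustrative only.

type Tag =
  | { kind: "label"; text: string }
  | { kind: "mirror"; view: () => number }            // re-read on each render
  | { kind: "link"; text: string; target: string };   // jump to a dashboard/bookmark

function render(tag: Tag): string {
  switch (tag.kind) {
    case "label":  return tag.text;
    case "mirror": return String(tag.view()); // always current, like =A1 in Excel
    case "link":   return `[${tag.text} -> ${tag.target}]`;
  }
}

let travelSpend = 1000;
const tag: Tag = { kind: "mirror", view: () => travelSpend };
console.log(render(tag)); // "1000"
travelSpend = 1200;       // the underlying view value changes...
console.log(render(tag)); // "1200": the tag updates with it
```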

Embedded Applications

In the latest version of Spendata, you can embed custom applications into your workspace. These applications can contain custom scripts, functions, views, dashboards, and even entire datasets that can be used to instantly augment the workspace with new analytic capability, and if the appropriate core columns exist, even automatically federate data across the application datasets and the native workspace.

Need a custom set of preconfigured views and segments for that ABC analysis? No sweat, just import the ABC Analysis application. Need to do a price variance analysis across products and geographies, along with category summaries? No problem. Just import the Price Variance and Category Analysis application. Need to identify opportunities for renegotiation post M&A, cost reduction through supply base consolidation, and new potential tail spend suppliers? No problem, just import the M&A Analysis app into the workspace for the company under consideration and let it do a company A vs. B comparison by supplier, category, and product; generate the views where consolidation would more than double supplier spend and where switching a product from a current supplier to a lower cost supplier would save more than 100K; and surface opportunities for bringing on new tail spend suppliers based upon potential cost reductions. All with one click. Not sure just what the applications can do? Start with the demo workspaces and apps, define your needs, and if the apps don’t exist in the Spendata library, a partner can quickly configure a custom app for you.

And this is just the beginning of what you can do with Spendata. Because Spendata is NOT a Spend Analysis tool. That’s just something it happens to do better than any other analysis tool on the market (in the hands of an analyst willing to truly understand what it does and how to use it — although with apps, drag-and-drop, and easy formula definition through wizardly pop-ups, it’s really not hard to learn how to do more with Spendata than with any other analysis tool).

But more on this in our next article. For The Times They Are a-Changin’.

1 Duey, Clutterbuck, and Howell keeps Dewey, Cheatem, and Howe on retainer … it’s the only way they can make sure you pay the inflated invoices if you ever wake up and realize how much you’ve been fleeced for …

The Best Way Procurement Chiefs Can Create a Solid Foundation to Capitalize on AI

As per our recent post on how I want to be Gen AI Free, the best way to capitalize on Gen-AI is to avoid it entirely. That being said, the last thing you should avoid is the acquisition of modern technology, including traditional ML-AI that has been tried and tested and proven to work extremely well in the right situation.

However, if you ignore the reference to Gen-AI, a recent article on Acceleration Economy on 5 Ways Procurement Chiefs Can Create a Solid Foundation had some good tips on how to go about adopting ML-AI with success.

The five foundations were quite appropriate.

1. Organize

A plan for

  1. exactly where the solution will be deployed,
  2. what use cases it will be deployed for,
  3. how valid use cases will be identified, and
  4. how the solution is expected to perform on them.

There’s no solution, even AI, that can do everything. Even limited to a domain, no AI will work for all situations that may arise. As a result, you need a methodology to identify the valid use cases and the invalid use cases and ensure that only the valid use cases are processed. You also need to ensure you know the expected ranges of the answers that will be provided. Then you need to implement checks to ensure that not only are only valid situations processed, but that only output in an expected range is accepted in any automated process; if anything is outside the expected norms anywhere, a human with appropriate education and training is brought into the loop.
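
A minimal sketch of such a check, with assumed names (your validation logic will be specific to your model and process): gate every input with a use-case check and every output with a range check, and escalate everything else to a trained human.

```typescript
// Guardrail sketch: only score inputs the model was built for, accept only
// outputs inside the expected range, escalate everything else to a human.

type Verdict =
  | { kind: "auto"; value: number }
  | { kind: "human-review"; reason: string };

function guardedPrediction(
  isValidUseCase: (input: unknown) => boolean,
  predict: (input: unknown) => number,
  expected: { min: number; max: number },
  input: unknown,
): Verdict {
  if (!isValidUseCase(input))
    return { kind: "human-review", reason: "input outside supported use cases" };

  const value = predict(input);
  if (value < expected.min || value > expected.max)
    return { kind: "human-review", reason: `output ${value} outside expected range` };

  return { kind: "auto", value }; // safe to hand to the automated process
}
```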

2. Create a Policy

No technology should be deployed in critical situations without a policy dictating valid, and invalid, use. Moreover, no technology should be used by people who aren’t trained in both the job they need to do and the proper use of the tool. Even though most AI is not as dangerous as Gen-AI, any AI, if improperly used, can be dangerous. It’s critical to remember that computers cannot think, and only thunk on the data they are given (performing millions of calculations in the time it takes an average person to perform two). As such, the quality of the output is limited by both the quality of the data input and the knowledge built into the model used. Neither will be complete or perfect, and there will always be external factors not considered, which, even if normally not relevant, could be relevant — and only an educated and experienced human will know that. (Moreover, that human needs to be involved in the policy creation to ensure the technology is only used where, when, and how appropriate.)

3. Understand Your Platform(s) of Choice

Just like there are a plethora of Gen-AI applications, a lot of different vendors offer AI applications, and even if most are similar, not all are created equal. It’s important to understand the similarities and differences between them and select the one that is right for your business. (Consider the algorithms and models used, the extent of human-validated training available, typical accuracy/results, and the vendor’s experience in your use case in particular when evaluating an AI solution.)

4. Practice

Introducing new tools requires process changes. Before introducing the tool, make sure you can execute the associated process changes, first by executing training exercises on the different types of output you might get and then, possibly by way of a third party who uses a tool on your behalf, using real inputs and associated outputs. While the AI may automate more of the process, it’s even more critical that you respond appropriately to the parts of the process that cannot be automated or where the application throws an exception because the situation is not appropriate for either the use of AI or the use of the AI output. (And if you don’t get any exceptions, question the AI … it’s likely not working right! And if you get too many exceptions, it’s not the right AI for you.)

5. ALWAYS Ask Yourself: “Does that Make Sense?”

Just like Gen-AI hallucinates, traditional AI, even tried-and-true AI that is highly predictable, will sometimes give wrong results. This will usually happen if bad data slips in, if the use case is on the boundary of expected use cases, or if the external situation has changed considerably since the last time the use case arose. Thus, it’s always important to ask yourself if the output makes sense. For tried-and-true AI where the confidence is high, it will make sense the vast majority of the time, but there will still be the occasional exception. Human confirmation is, thus, always required!

With proper use, AI, unlike Gen-AI (which fails regularly and sometimes hallucinates so convincingly that even an expert has a hard time identifying false results), will give great results the majority of the time — so you should seek it out and implement it. Just also implement checks and balances to catch those rare situations where it doesn’t, and put a human in the loop when that happens. Because traditional use cases are more constrained, and predictable, it’s a lot easier to identify and implement these checks and balances. So do it … and see great success!

Tonkean: Making Enterprise Procurement work with ProcurementWorks, Part 2

In Part 1, we introduced you to Tonkean, an enterprise applications provider founded in 2015 to transform the enterprise back office. Tonkean leverages smart technology to bring people, process, and technology together in a manner that revolutionizes how businesses operate, allowing people to focus on high value work that gets results, and not redundant data processing, unnecessary application usage (which requires unnecessary training and unnecessary time), or unnecessary emails. The primary goal is to increase adoption and push employee requests, and actions, through official channels, instead of having enterprise employees find backdoors and dark hallways to get around cumbersome systems and processes they don’t want to use.

After providing a brief history and an overview, most of Part I focussed on Tonkean’s AI Front Door, a smart AI interface that was built by a team that understands the strengths, weaknesses, and, most importantly, the limits of AI, especially LLMs and (Open) Gen AI, and that includes pre- and post-processing to verify that the requests and responses are reasonable and, where confidence is lacking, not provide any response (and send the inquiry, and response, or lack thereof, to a human who can, if necessary, tune the underlying system after review).

Today we will overview the rest of the Tonkean Intake Orchestration Platform for Procurement and how it can help your organization.

Procurement Intake and Guided Buying

The core of ProcurementWorks is their intake ability described above and guided buying that gets a requester to the right process and form and allows them to monitor, take part in, and/or execute the process end-to-end as required. They can do this through the Tonkean platform, via e-mail, or through a third party platform, like Slack or Microsoft Teams, via its integration capability. If the buy is small and can be put on a PCard, the system will direct the user to do so. If it’s large and requires a buyer to run a Procurement event, the system will guide the requester through providing all the information the buyer needs and provide updates to the requester after each step of the process is concluded (which the buyer can proactively monitor through the ProcurementWorks request tracking application that monitors the entire workflow, which can be as simple as the request, Procurement approval, and PO/contract generation, or as involved as a request, budgetary approval, procurement acceptance, RFX, award selection, InfoSec approval, Risk & Compliance approval, Legal approval, contract generation, contract negotiation, signing, and completion).

All of this can be done in the Tonkean platform if desired, which will integrate with, push data into, and pull data out of any enterprise applications that are used for Finance, Procurement, Risk, Legal, and Contracts. For example, the system can pull the associated budget for the category from the budget system and send the appropriate manager the request for approval based on the expected cost and the category budget. If approval to proceed is received, the buyer can set up an RFP, which can then be pushed into the enterprise’s sourcing platform (which could be Coupa or Ariba) for execution, and then the results pulled back into the Tonkean request tracking and management module. Procurement can then select one, send it off to InfoSec, Risk, and Legal, in order, for approval, which can come in through the platform, and, once received, use the platform to push the award details into the CLM that can generate the contract, which the platform can then push to the supplier for signature through their e-Signature platform, and, when it’s signed, push it back into the CLM.
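
To make the orchestration concrete, here is a rough sketch of that sequence. The connector interface and step names are assumptions invented for illustration, not Tonkean’s actual API; the point is that the orchestrator coordinates the people and pushes/pulls the data while each system of record keeps doing its job.

```typescript
// Illustrative orchestration sketch; not Tonkean's actual API.

interface Connectors {
  getBudget(category: string): Promise<number>;
  requestApproval(approver: string, payload: object): Promise<boolean>;
  pushRfpToSourcing(rfp: object): Promise<object[]>;   // e.g. Coupa or Ariba
  pushAwardToClm(award: object): Promise<string>;      // returns a contract id
  sendForSignature(contractId: string): Promise<void>; // e-Signature platform
}

async function runPurchaseRequest(
  c: Connectors,
  req: { category: string; cost: number },
): Promise<string> {
  // Budgetary approval, routed by expected cost vs. the category budget.
  const budget = await c.getBudget(req.category);
  const approver = req.cost > budget ? "CFO" : "Category Manager";
  if (!(await c.requestApproval(approver, req))) return "rejected";

  // Push the RFP into the sourcing platform, pull the results back.
  const bids = await c.pushRfpToSourcing({ category: req.category });
  const award = bids[0]; // selection logic elided

  // Sequential reviews; each response comes back through the platform.
  for (const team of ["InfoSec", "Risk", "Legal"])
    if (!(await c.requestApproval(team, award))) return `blocked by ${team}`;

  const contractId = await c.pushAwardToClm(award);
  await c.sendForSignature(contractId); // pushed back to the CLM when signed
  return "complete";
}
```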

The platform comes with a number of built-in Procurement process templates that can be customized as needed to support every organizational category and buying process based upon organizational needs. It can be integrated with all the applications and all the document stores, pull in the necessary attachments, help with exchange and version management, and track approvals. And it can be accessed through the form based interface or the chatbot, natively or through API connections.

Procurement Center (Workspace Apps)

The enterprise can create multiple views (and even multiple, separate portals if they like) to support both the Procurement Team and the employees that need to interact with Procurement (so that Legal, Risk & Compliance, IT/InfoSec can all have their own views, and even their own portals if they want to handle their workflows through Tonkean versus their current applications). (Thus, in addition to the Procurement Request Tracker and Manager, Procurement could build a custom Risk and Compliance Portal, Contract Negotiation Tracking and Management Portal, Vendor Inquiry Portal, and so on.)

How little or how much is enabled by default in the Procurement Center is up to the customer, but typically the Procurement Center will contain:

  • AI Front Door: Their name for their AI request intake experience, which can process free text requests or guided requests based on drop downs
  • My Requests: A listing of all of the user’s requests, where they can click into the Request Tracker (which listens in near real time for system updates from all integrated systems)
  • My Approvals: A listing of all of the approvals and reviews in the user’s queue (that other users are waiting on)
  • Reporting Dashboards: Tonkean is not a BI/Analytics platform (and integrates with yours for deep Analytics/BI), but comes with a number of out of the box templates for workflow/process/cycle time analysis, request tracking, review and approval tracking, supplier onboarding/review request tracking, sourcing request tracking, invoice monitoring, etc. and can build custom dashboards by role (CPO, CFO, etc.) and pull in data from the connected BI systems to populate those dashboards if desired
  • My Supplier Requests: A listing of the suppliers that the user has requested be onboarded, where they can click into the New Supplier Tracker that tracks the pre-onboarding and onboarding steps that must be completed for the supplier to be onboarded in the Supplier Master (with workflows that adapt to the supplier type; a software vendor offering a product that processes financial or personal data needs considerably more reviews than a new office supplies or janitorial supplies vendor)
  • Solutions Studio: where the super/admin user can update the workflow for the request tracker and all other modules in the system

Reports and Dashboards

As indicated above, there are a number of out-of-the-box reporting templates for Procurement that are easily instantiated/modified in the Tonkean platform. These include, but are not limited to:

Purchase Agreements
The Purchase Agreements Dashboard will summarize, by quarter or month, the number of requests, completed requests, total spend, average completion time for each step (FP&A Review, Management Review, Security Assessment Review, Legal & Privacy Review, IT Review, and Accounting Review, etc.), active requests by status, vendors, spend by vendors, and other key agreement metrics.
New Suppliers
The New Suppliers Dashboard will summarize the number of new supplier requests that came in, the suppliers in each state (NDA Sent, NDA Completed, InfoSec Review, Approved for Onboarding, Profile in Process, Approved, Onboarded in SIM, etc.), and allow a user to click in and see who the suppliers are in each state, who the requester/owner is, and other key data or flags as desired.
Existing Suppliers
The existing suppliers dashboard tracks all suppliers with contracts, insurance, certifications, etc. expiring in the next 90 days where something needs to be done.
CFO Dashboard
The CFO Dashboard will generally contain an overview of spend by quarter/year, compliance, overall productivity, productivity by stakeholder, cycle time by process, workload by buyer/analyst, etc. and other key metrics pulled in from the other reports.

Solutions Studio Module Builder

The core of the Tonkean Enterprise offering is the Solutions Studio that is used to create the no-code workflows from action blocks, triggers, and conditional checks. Action blocks tend to fall into coordination actions (which require people) or workflow actions (which connect coordination actions and data blocks). Conditional logic makes it easy to define requests for information, status, and action items; respond, send updates, and provide notifications; and create approval cycles and assign owners. Workflows make it easy to update data fields, create new (instances of) data objects, trigger module actions, perform (textual) analyses and extract text, create models, train models, and introduce programmatic delays or waits (for information from parallel workstreams, for example).

Data actions provide the means to create, read, update, and delete the relevant data in the connected source systems, as applicable, according to the principle of least access required (which will be configured by the Tonkean team on implementation, so that any data that should only be changeable in a base system cannot be changed through Tonkean, regardless of the user’s authority level). There are blocks for each system that make it easy to drill in and work with the data in that system, as each data source is preconfigured with the default actions it supports.

For one of the 100+ systems already natively integrated with Tonkean, adding the system as a data source is simply a matter of providing a few connection parameters, and the data source will be good to go for your team, with all of the standard actions available. In the Enterprise Component Manager, it’s easy to drill in and find out, for each data source, the inputs it will accept, the outputs and actions the interface supports, the data retention and audit policies, the access rights, the admins and owners, and general information on connectivity restrictions. It’s also easy to drill into the Tonkean access log to see a complete history, and to drill into the data through the Tonkean app (so you don’t have to go to the native app to see what’s available in each object/record, how complete those objects/records usually are, etc., or do low-level SQL queries in the underlying database).

In addition, super users and admins can also define new custom actions if their implementation supports additional data or actions, or if they want to define custom actions with more limited capability, or complex actions that ensure a sequence of actions (such as updates) happens all-or-nothing. All they have to do is define the action type (GET, POST, PUT, PATCH, etc.), the URL, the data encoding format, the relevant field(s), and the relevant data, and the platform creates the workflow logic for them.
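
A declarative custom action of that sort might look something like the following sketch; the field names, the placeholder endpoint, and the generic executor are assumptions for illustration, not Tonkean’s actual schema:

```typescript
// Sketch of a declarative custom action: the admin supplies the verb, URL,
// encoding, and field mapping, and the platform generates the call logic.
// Illustrative names only; the endpoint is a placeholder.

type HttpVerb = "GET" | "POST" | "PUT" | "PATCH";

interface CustomAction {
  name: string;
  verb: HttpVerb;
  url: string;                     // endpoint in the connected system
  encoding: "json" | "form";
  fields: Record<string, string>;  // workflow field -> API field
}

const approveSupplier: CustomAction = {
  name: "Approve Supplier",
  verb: "PATCH",
  url: "https://erp.example.com/api/suppliers/{supplierId}",
  encoding: "json",
  fields: { status: "onboarding_status", approver: "approved_by" },
};

// A generic executor the platform could generate from the definition above.
async function execute(action: CustomAction, params: Record<string, string>) {
  const url = action.url.replace(/\{(\w+)\}/g, (_, k: string) => params[k] ?? "");
  const body = Object.fromEntries(
    Object.entries(action.fields).map(([from, to]) => [to, params[from]]),
  );
  return fetch(url, {
    method: action.verb,
    headers: { "Content-Type": "application/json" },
    body: action.verb === "GET" ? undefined : JSON.stringify(body),
  });
}
```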

Building the workflow is easy in their graphical drag-and-drop environment, where action blocks and data sources can be dropped and connected by arrows that encapsulate the associated conditional logic for sequential and parallel workflows.

Procurement Knowledge Base / Component Library

The ProcurementWorks solution can also be configured to support the procurement knowledge base, either by housing documents natively, linking to relevant repositories, or both, allowing Tonkean intake to serve as the help center as well as the purchasing request center.

In addition, the Tonkean Component Library contains a large number of standard, out of the box, workflows with embedded standard/best practice, for Procurement, Legal, Compliance and other standard enterprise functions that the customer organization can enable and customize as desired in the Solutions Studio. With Tonkean, an organization doesn’t have to start from scratch, and Tonkean will help the organization pre-configure all of the modules/workflows of interest on go-live.

Data Source (& Communication Platform) Management

The Tonkean platform makes it easy to manage organizational data sources. It’s easy to query which sources exist, what data they have, where they are used, what data can be retrieved, which data can be updated, and what the Tonkean access policies are. Similarly, one can manage which communication platforms are integrated and what they can be used for.

In addition, for each connected data source, Tonkean can provide you a “native” view into the core application and data if you so desire. Want to query your Coupa invoices natively in Coupa? No problem! Tonkean can bring up the appropriate screen from Coupa in a frame where you can see exactly what invoices are there and their status. Want to see the full ticket created in JIRA for the IT Review Team? No problem! Tonkean can pull up the Read-Only JIRA screen for your perusal. It truly is people coordination and enterprise platform orchestration.

In conclusion, if you are a large midsize or global enterprise and have an adoption problem, are struggling to get the value out of your enterprise systems, or are looking for ways to make the whole greater than the sum of its parts, and need a better intake platform to boot, consider including Tonkean in your evaluation. They just might be what you need to take your current enterprise software investments to the next level.

Tonkean: Making Enterprise Procurement work with ProcurementWorks, Part 1

Tonkean was founded in 2015 to transform the enterprise back office. Tonkean leverages smart technology to bring people, process, and technology together in a manner that revolutionizes how businesses operate, allowing people to focus on high value work that gets results, and not redundant data processing, unnecessary application usage (which requires unnecessary training and unnecessary time), or unnecessary emails. One of the big problems Tonkean saw with traditional enterprise systems is that anyone who didn’t need to use the system daily was resistant to learning yet another system they saw as difficult or cumbersome (which applied to any system that didn’t use their terminology), adoption was a major problem, and employees would constantly look for ways to circumvent the system. Tonkean’s goal was to solve the adoption problem by providing users a superior intake experience, one that could be as simple as a standard form-based or natural language interface like they’d find on the web, that didn’t require any training, and that helped these employees make their requests through official channels instead of sneaking through back doors and dark hallways.

After a few custom projects, they found an initial niche in the Legal department and created Tonkean LegalWorks to help Legal teams with legal mail routing, legal matter intake, matter lifecycle management, legal discipline and category classification, conflict waiver processing, law firm onboarding, contract routing and review, and even legal risk monitoring. It brought together the systems used by Legal (email, word processors, specialized Legal Billing Management solutions, etc.), any risk and compliance applications they use to ensure their lawyers and firms dot all the i’s and cross all the t’s they need to take on every case and practice in every state they are taking legal matters in, and any other enterprise applications the team used to work and communicate internally (Slack, Teams, etc.).

And while we’re not here to discuss LegalWorks, it is through the development of LegalWorks that they learned how to bridge the gap between people, process, and technology in a way that empowered their clients to spend more time on strategic (legal) work instead of redundant data entry and system usage, get more value out of the tools they already purchased, and be more productive and satisfied with their technology. They learned how to enable a department to use the tools they have in ways that went beyond the original use cases, learned they could do more, and set out to identify where the biggest needs were and where they could do more. And once they found Procurement, and realized that Procurement had a lot of the same challenges as Legal, but considerably amplified (with more systems, more complexity, and higher stakes), they knew they had found an area where they could provide their enterprise clients with the most value (and especially those that were using the major S2P suites but getting low utilization rates due to lack of intake support and a lack of integration with other internal systems).

When investigating Procurement in their enterprise customers, they found that while the major suites were reasonably suited for, and well used by, the Procurement team in strategic projects, they weren’t used much in tactical purchasing, especially in tail spend, as most of the organizational users found the system too complicated and bypassed it whenever possible (as the P2P tool lives on the long tail of enterprise applications of choice for the average enterprise employee).

So, as with some of the new breed of vendors who started specifically with the goal of Procurement intake and/or orchestration, one of their first goals was to help their Enterprise customers get more value out of their big S2P suites (and Ariba and Coupa in particular; for example, they have Intake Orchestration for Ariba and the Coupa Intake Experience to help the organization route all indirect spend, no matter how far down the tail, through Ariba or Coupa). While that’s where they are still focussed (given their current Enterprise customer base), they’ve expanded their ProcurementWorks to be a full Procurement lifecycle orchestration solution, from intake to resolution, regardless of what solutions the customers have or don’t have, what enterprise applications the teams use to communicate, what external catalogs or data sources they need to integrate with, and what policies and procedures need to be followed. In this way, ProcurementWorks is a system-agnostic solution that wraps around the customer’s existing process and applications to orchestrate and better coordinate that process.

However, one major difference is that, to Tonkean, full orchestration means creating a solution that solves all of the Procurement related problems an organization’s employees have, not just Procurement requisitions or catalog buying. That means answering all of their Procurement related questions in addition to taking their product and service requests, guiding them to the right systems if needed, or being the one interface of choice if Tonkean can be that. That means a much smarter intake process that can take any Procurement related natural language request, interface with all of the organizational data sources, and provide an appropriate answer.

For Tonkean, that starts with a smart AI interface that they call the AI Front Door. The AI Front Door, unlike many other LLM-based products, is not just ChatGPT in a shiny wrapper, but a hybrid solution based on in-house engineering, the client organization’s preferred LLM, and knowledge systems owned by the client. It’s a very sophisticated “chatbot” compared to most offerings on the market; a technical definition would be very extensive (and lose non-PhDs), but we can illustrate much of the uniqueness of the capability with a high level overview and an example or four.

For example, when a user inputs a request, the general approach the system takes is the following (roughly sketched in code after the list):

  • use their AI to process the question for the type, intent, and goal, and inform the user if they have no information (or are unable to process it) while simultaneously redirecting any unanswerable query to a human expert for review
  • use internal, trusted, knowledge bases to get initial information and potential answers
  • feed the question, processed clarification, and internally retrieved knowledge into the organization’s LLM to provide Natural Language feedback to the user, which could be the answer, or a refinement question if ambiguity existed in the question or potential answers from organizational data sources, which causes (an extension of) this 3-step loop to repeat
  • verify the response is sensible before presenting to the user (and, if not confident, route to a human for feedback for future internal Tonkean model training while informing the user no relevant information can be found)
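
A rough sketch of that loop in code (the function names are assumptions for illustration; the actual Tonkean implementation is proprietary and considerably more sophisticated):

```typescript
// High-level sketch of the intake loop described above. Illustrative only.

interface FrontDoorDeps {
  classify(q: string): Promise<{ intent: string; answerable: boolean }>;
  retrieve(q: string): Promise<string[]>;                    // trusted internal KBs
  llmAnswer(q: string, context: string[]): Promise<string>;  // org's preferred LLM
  verify(answer: string, context: string[]): Promise<boolean>;
  escalate(q: string, answer?: string): Promise<void>;       // human review queue
}

async function frontDoor(deps: FrontDoorDeps, question: string): Promise<string> {
  const { answerable } = await deps.classify(question);
  if (!answerable) {
    await deps.escalate(question); // human expert reviews unanswerable queries
    return "Sorry, I don't have that information.";
  }
  const context = await deps.retrieve(question);
  const answer = await deps.llmAnswer(question, context);
  if (!(await deps.verify(answer, context))) {
    await deps.escalate(question, answer); // feeds future model tuning
    return "No relevant information could be found.";
  }
  return answer;
}
```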

Thus, if the user asks if there is an agreement with Vendor V:

  • their AI Front Door will process the query and determine that the user is asking if there is a signed contractual agreement with Vendor V that is currently active, and potentially what that agreement is
  • create the appropriate queries for each organizational system that stores contracts and agreements
  • take the responses and construct a carefully engineered prompt for the LLM that will return an answer indicating if there are agreements, and, if so, what they are and where they can be found (possibly including a direct link if the document can be accessed through the Tonkean platform)

If the user asks if she can purchase a license for SaaS app S:

  • their AI Front Door processes the request, determines that the user wants to make a purchase and that it falls in the software category, and asks a few clarifying questions about the type and purpose of the product and, if it discovers the organization already has a license for a tool of that type, why the other tool won’t do
  • the system takes the responses and prompts the user with a link to launch a purchase request, where the system then pre-populates key fields of the organization’s software license purchase request form based on its learnings from the AI Front Door interaction and data attributes from other relevant systems (such as budget information in the ERP)
  • the system bundles the appropriate information and prompts the LLM to create grammatically correct responses that not only explain the request to the Procurement buyer, but also to a supplier if an RFP is required
  • the draft form is then presented to the user to verify, and one click puts it into the Procurement Request queue (where it can be accessed from the ProcurementWorks My Requests page at any time)

If the user asks for the procurement policy for SWAG for the marketing event she is attending:

  • their AI Front Door processes the request and determines it’s a policy question
  • it creates the appropriate pattern match, DQL, or index query for each of the organization’s policy document data stores and collects the appropriate responses and documents
  • creates an appropriate prompt for the LLM that appropriately forms the question while asking the LLM to use only the inputs fed to it to create the response
  • ensures the response that comes back has a decent similarity to a subset of the text from the documents and then presents the natural language summary to the user

If the user asks the system for the results of the hockey game he missed working late:

  • the system processes the request, realizes it doesn’t have that information (unless, of course, the enterprise is a sports news outfit), informs the user it doesn’t have that information, and ends that interaction there

In other words, it’s built to be the central information source and jumping-off point for all types of inquiries and tasks a Procurement professional or employee with Procurement needs is likely to have, with the intent of cutting out 90% of unnecessary emails, texts, questions, and requests an augmented intelligence system can answer or guide a user through.

Moving on, the core of the Tonkean Intake Orchestration Platform that their Procurement solutions were built on is a workflow automation platform with extensive built-in workflow customization, data integration, and form creation capability. In the platform, the customer can build the forms (using a no-code form editor) they need to power any Procurement process (which can be created and modelled using a no-code process editor) they have, and customize them for requesters, buyers, risk & compliance, IT, or any other department as needed. They used this capability as the foundation not only for their Coupa Intake Experience and Intake Orchestration for SAP Ariba (as organizations never replace major investments, but innovative organizations look to improve and expand upon them), but also for their guided buying experience, supplier onboarding, and tail spend automation (among others).

One key differentiator is that any workflow can be updated at any time, something which is generally not possible in your traditional Procurement Suite such as Coupa, Ariba, and Jaggaer. For example, many of their customers now require an additional AI Review of any platform that uses AI to determine the nature of the AI and any direct and indirect risks in its proposed application to the business from a technical, legal, and brand perspective. For example, if the vendor is using Open Gen AI (such as ChatGPT), there are technical risks in that these platforms have been repeatedly demonstrated to have biased, harmful (and even murderous), hallucinatory, thieving, and sleeper behaviour. There are direct legal risks in that you could be sued (and on the hook) if the AI makes a recommendation that ends up causing personal or business harm, and indirect legal risks if the technology was trained on stolen data or data that contained copyrighted, illegal, or national secret material. There are brand risks if the Open Gen AI product you are using all of a sudden suffers extreme public backlash for its actions (or your software results in a decision that tanks shareholder value or increases environmental harm). However, they have found that most of the suites they work with do not yet have many of these new “standard” compliance checks in their relatively rigid product workflows (and telling their customers to just include it in the InfoSec review), which increases the likelihood a key check will be missed. [Considering the attention that AI is getting and the fact that legal frameworks will need to come soon, not the best idea for a large organization NOT to be assessing AI risks now.] However, with Tonkean, it takes minutes to add a compliance check and ensure it gets done by the right people before a decision is made on any Software purchase or use.

In our next article, we will dive deep into the major components of the Tonkean ProcurementWorks offering.

Last Friday Was International Women’s Day. You Made a Big Fuss. Well, What Did You Do This Week?

This is taken from a LinkedIn post the doctor posted on Monday, March 11. It’s being reposted here for those who don’t follow LinkedIn and because, as expected, he hasn’t heard a single peep from any organization that was spewing platitudes last Friday as if praise one day a year was doing enough.

If you truly celebrate women, then please tell me:

What are you doing TODAY to

  1. increase the number of women in Management, STEM, Executive Suites, and Investment Firms,
  2. close the pay gap that is still 15% to 30% across these areas,
  3. encourage women to join your company to pursue their career, and
  4. enable the work life balance they need to be AS or MORE successful than their male counterparts?

As most of you are probably well aware from the deluge of “we support and honour our female leaders who … ” posts on LinkedIn last Friday, International Women’s Day was last Friday (2024-Mar-08). I stayed silent, as usual, because I found the majority of them very upsetting.

While some of the posts were very sincere, and some came from individuals I know had the best of intentions:

  1. Lip service does nothing to address the four major issues above.
  2. The lip service I saw in some of these posts was about as meaningful as a token thank you card at the annual Christmas party.
  3. Few addressed the real issues women still face in “traditional” workspaces run, and dominated, by men.
  4. Those few that honoured teams with equal representation or greater, or at least statistically average representation (in companies in fields where women are currently only 25% of the workforce, like STEM) have done nothing to educate their peers on how important this is and how successful they are because of it.

If you are a leader in a company (with actual employees) that truly cares, then I challenge you to celebrate their achievements and capability every day, and once a month make a post on the efforts your company is taking to increase the number of women, close the pay and rank gaps, and support their work life balance, whether through hiring, training, or support for community programs that do such, or at least make a post on the stellar accomplishments they have achieved that would put an average salesman to shame.

And keep doing this until they have the equality, and the respect, they deserve.

The simple facts are

  1. women are half the population,
  2. are just as capable as men (as there is NO difference between average IQ scores), and
  3. should be half the workforce.

If women are not half the workforce at your company (or at least not represented statistically in line with the average representation in the field your company is in), it’s not their lack of achievement, dear men, it’s yours!