Monthly Archives: September 2017

Fifty Five Years Ago Today …

Evsei Grigorievich Liberman published "Plan, Profit, and Bonuses" in Pravda, an article which proposed new methods of economic planning based on democratic centralism.

Democratic centralism is a method of leadership in which political decisions reached by the party (through democratically elected bodies) are binding upon all members of the party. Liberman's main proposal was that profit should be made the index of performance for Soviet planning, as well as the basis for bonuses to the personnel and directors of Soviet enterprises. The article stimulated a large debate and, two years later, after some enterprises began to experiment with functioning "on the basis of profit", the Supreme Economic Council of the USSR converted some of the resulting conclusions into law. (Source: International Socialist Review, Vol. 26, No. 3, Summer 1965, pp. 75-82, as transcribed by Einde O'Callaghan and found in the Ernest Mandel Internet Archive)

How is this relevant? It seems that no matter the political climate or the governing structure, in the world of business profit is always top of mind for at least one party, especially when that party believes it's the key to personal profit.

This means that there's always going to be a stakeholder interested only in the bottom line and what it means to him. If you don't keep this in the back of your mind, and come up with a decision that increases profit at least slightly, you'll have a hard time getting it accepted, even if it is the most sustainable decision, the most corporately responsible decision, or the best long-term decision from a value and cost perspective.

If profit can rear its ugly head in an environment governed by a communist mindset, it can rear its head anywhere. Even in Procurement, which is supposed to focus on value creation and cost reduction. Keep this in mind when trying to ascertain, and balance, the desires of multiple stakeholders.

Introducing LevaData. Possibly the first Cognitive Sourcing Solution for Direct Procurement.

Who is LevaData? LevaData is a new player in the optimization-backed direct material prescriptive analytics space and, to be honest, probably the only player in that space. While Jaggaer has ASO and Pool4Tool, its direct material sourcing is not optimization backed, and while it has VMI, it does not have advanced prescriptive analytics for selecting the vendors who will ultimately manage that inventory.

LevaData was formed back in 2014 to close the gaps that the founders saw in each of the other sourcing and supply management platforms they had been a part of over the previous two decades. They saw the need for a platform that provided visibility, analytics, insight, direction, optimization, and assistance — and that is what they set out to build.

So what is the LevaData platform? It is a sourcing platform for direct materials that integrates RFX, analytics, optimization, (should-)cost modelling, and prescriptive advice into a cohesive whole that helps a buyer buy better, and which, to date, has reduced costs (considerably) for every single client.

For example, the first-year realized savings for a $5B server and network company that deployed the LevaData platform were $24M; for a $2.4B consumer electronics company, $18M; and for a $0.6B network customer, $8M. To date, they've delivered over $100M of savings across $50B of spend to their customer base, and they are just getting started. This is due to the combination of efficiency, responsiveness, and savings their platform generates. Specifically, about 60% of the value is direct material cost reduction and incremental savings, 30% is responsiveness and being able to take advantage of market conditions in real time, and 10% is improved operational efficiency.

The platform was built by supply chain pros for supply chain buyers. It comes with a suite of analytics reports, but unlike the majority of analytics platforms, the reports are fine-tuned to bill of materials, component, and commodity intelligence. The reports can provide deep insight into not only costs by product, but costs by component and/or raw material, and can roll up and down bills of materials and raw materials to create insights that go beyond simple product or supplier reports. Moreover, on top of these reports, the platform can create cost forecasts and amortization schedules, track rebates owed, and calculate KPIs.
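
To ground the idea, here is a minimal sketch of the kind of bill-of-material cost roll-up such reports are built on. The parts, quantities, and costs are invented for illustration, and this is not LevaData's actual data model:

```python
# Minimal sketch of a bill-of-materials cost roll-up (illustrative only;
# the part names, quantities, and costs are assumptions, not LevaData's schema).
from collections import defaultdict

# (parent, child, quantity) edges of the BOM tree
bom = [
    ("server", "motherboard", 1),
    ("server", "dram_module", 8),
    ("motherboard", "pcb", 1),
    ("motherboard", "capacitor", 120),
]

# unit costs for leaf components / raw materials
unit_cost = {"dram_module": 42.00, "pcb": 18.50, "capacitor": 0.03}

children = defaultdict(list)
for parent, child, qty in bom:
    children[parent].append((child, qty))

def rolled_up_cost(item):
    """Cost of an item = its own unit cost (if a leaf) or the
    quantity-weighted sum of its children's rolled-up costs."""
    if item not in children:
        return unit_cost.get(item, 0.0)
    return sum(qty * rolled_up_cost(child) for child, qty in children[item])

print(f"server rolled-up cost: {rolled_up_cost('server'):.2f}")
```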

In order to provide the buyer with market intelligence, the application imports data from multiple market feeds, creates benchmarks, compares those benchmarks to internal market data, automatically creates competitive reports, and calculates the foundation costs for should-cost models.
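
As a rough illustration of the benchmark comparison step, a sketch along these lines (with invented components, prices, and column names, not the platform's actual feed format) shows how internal paid prices might be lined up against market benchmarks:

```python
# Sketch of comparing internal prices to market benchmarks (all data
# and column names are illustrative assumptions).
import pandas as pd

internal = pd.DataFrame({
    "component": ["dram_module", "pcb", "capacitor"],
    "paid_price": [45.00, 18.00, 0.035],
})
market = pd.DataFrame({
    "component": ["dram_module", "pcb", "capacitor"],
    "benchmark_price": [42.00, 18.50, 0.030],
})

report = internal.merge(market, on="component")
report["variance_pct"] = (
    (report["paid_price"] - report["benchmark_price"])
    / report["benchmark_price"] * 100
)
# Positive variance flags components paid above the market benchmark.
print(report.sort_values("variance_pct", ascending=False))
```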

And it makes all the relevant data available within the RFX. When a user selects an RFX, it can identify suppliers, identify current market costs, use forecasts and anonymized community intelligence to calculate target costs, and then use optimization to determine what the award split would be, subject to business constraints, and identify the suppliers to negotiate with, the volumes to offer, and the target costs to strive for.
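
To make the optimization step concrete, here is a toy award-split model sketched with scipy's linear programming solver. The bids, capacities, and the 60% single-supplier cap are all invented business constraints for illustration, not anything from the LevaData platform:

```python
# Toy award-split optimization: minimize the total cost of awarding a
# 1,000-unit demand across three suppliers, subject to per-supplier
# capacity and a business rule capping any one supplier at 60% of the
# award. All numbers are invented.
from scipy.optimize import linprog

bids = [9.80, 10.10, 10.45]      # unit price per supplier
capacity = [700, 500, 800]       # max units each supplier can take
demand = 1000

res = linprog(
    c=bids,                                   # minimize total spend
    A_eq=[[1, 1, 1]], b_eq=[demand],          # full demand must be awarded
    bounds=[(0, min(cap, 0.6 * demand)) for cap in capacity],
    method="highs",
)
print(res.x, res.fun)  # award split per supplier, and total cost
```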

It's a first-of-its-kind application, and while some components are still basic (as there is no lane or logistics support in the optimization model), missing (as there is no ad-hoc report builder), or incomplete (such as collaboration support between stakeholders or a strong supplier portal for collaboration), it appears to meet the minimal requirements we laid out yesterday and could just be the first real cognitive sourcing application on the market in the direct material space.

Cognitive is the New Buzzword. But what does it mean?

It seems that everyone is talking about cognitive procurement these days. A Google search for cognitive procurement returns about 650,000 results that include news sites, analyst firms, and vendors ranging in size from Old St. Labs to SAP Ariba to IBM.

Definitions are varied as well. Quora defines cognitive procurement as the application of self-learning systems that use data mining, pattern recognition, and natural language processing (NLP) to mimic the human brain around the processes of acquiring goods, services, or works from an external source. IBM's Vice President of Global Procurement defines cognitive procurement as the use of systems and approaches that are able to learn behaviour, manage structured and unstructured data, and unlock new insights to enable optimized outcomes. Vodafone defines cognitive procurement as augmented intelligence capabilities that allow a category manager to make faster and smarter data-driven decisions that deliver competitive advantage.

But what does this all mean? First of all, the only commonality is using systems to do a task better. Which systems? Which tasks? Who gets the benefits? And what precisely are the benefits?

To figure this out, we have to go back and define what makes for better Procurement. The first step is good Sourcing. What are the keys to good Sourcing?

There are a number of keys to good Sourcing. Some of the most important include:

Visibility. Who are your potential suppliers? What do they provide? Where are they? What do you know about quality, reliability, delivery, etc.? What are the risk factors of dealing with them? What data can you get on finances and sustainability? You need good information.

Analytics. Once you get the information, you need to make sense of it. Roll up component and material costs across bill of materials. Amalgamate risk ratings into meaningful scorecards. Aggregate demand across categories. Determine what you need, when, in what quantities, and how much it should cost before you start a negotiation.

Modelling. The ability to define detailed should-cost models based on components or materials, production costs that include energy and labour and overhead, and other relevant cost factors. To define how those costs change with market data or production volumes. And so on. (A minimal sketch follows this list.)

Optimization. Once you have the data, you need to figure out the baseline costs and what the optimal awards are, assuming nothing changes. Then how those awards change as costs and bids change. Also, what are the optimal logistics strategies and costs? How does logistics impact the award decision? How should the logistics supply chain be designed?

Negotiation Support. At some point, the analysis needs to turn to negotiation, because the goal of sourcing is to acquire the products and services the organization needs to support its operations and satisfy its customers. All of this capability needs to be brought to bear in a cohesive, assistive, fashion that can help a buyer make the right decision.
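
As promised under Modelling above, here is a minimal should-cost sketch. Every rate, factor, and volume in it is an invented assumption, not a real cost model:

```python
# Minimal should-cost model sketch (all rates are illustrative
# assumptions): material plus conversion costs, with material indexed
# to a market price so the model moves with market data.
def should_cost(copper_kg, copper_index_per_kg, labour_hours,
                labour_rate=28.0, overhead_pct=0.15, volume=10_000):
    material = copper_kg * copper_index_per_kg
    conversion = labour_hours * labour_rate
    # Fixed setup cost amortized per unit, so cost falls with volume
    setup_per_unit = 5_000.0 / volume
    subtotal = material + conversion + setup_per_unit
    return subtotal * (1 + overhead_pct)

# Re-price the same part as the copper index moves
for index in (6.50, 7.25, 8.00):
    print(index, round(should_cost(0.4, index, 0.05), 4))
```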

That's what cognitive procurement is — presenting a user with the information they need, when they need it, to make the right decision. Not automated buying. Not artificial intelligence, which doesn't exist. Not trying to mimic the human brain, as we don't even fully understand how the brain works.

So, does any application meet these requirements?

Tealbook … Not Just a Journal Anymore!

When you hear teal, you probably think of the colour which gets its name from the coloured area around the eyes of the common teal, and when you hear tealbook, you’re probably thinking of a notebook in the calming hue of teal, perfect for a journal or personal contact book … maybe even one you can keep your supplier contacts in!

But we all know the problems with a contact book. Contact information changes as people are shuffled around the company. Contacts leave the company, and you not only have to update their information but add a new contact. There is only a limited amount of room for notes. It's really hard to share the information and, if your peers are also using handwritten 'teal journals', to get them to share theirs, especially when you need it quickly.

That's why supplier information management (SIM) modules and platforms were developed. All of the supplier and contact information in one place, accessible to, and updatable by, anyone in the organization. Plus, anyone can search the supplier database for suppliers new to them … but not new to the organization. This was one major limitation. Another was the lack of community intelligence from peers. Were they selected or known for certain capabilities, or not? Do they have other customers for a product or service who will serve as references? Are they (now) capable of satisfying a minority designation or certification requirement (in a certain geography)? You can ask these questions, and update the system to track the answers, but only a community keeps this information up to date.

But most importantly, with traditional Supplier Information Management (SIM), you know what you know and you don’t know what you don’t know. You have no way of determining how many potential suppliers you don’t know about for any given category or requirement. Or how good the suppliers are for your needs relative to the suppliers you don’t know about.

That's where a modern Supplier Information Management with Supplier Discovery platform comes into play. A modern supplier discovery platform, which is more than just a supplier network (a supplier network is nothing more than a database of suppliers that have been transacted with through a particular platform), allows a community of organizations to keep track of, and provide information and recommendations on, potential suppliers (whether transacted with through a platform or not); allows potential suppliers to self-identify and provide relevant information up front (such as diversity status and certifications); and allows all parties to share information of potential relevance.

tealbook's vision is to create a shared, trusted supplier base of 100M suppliers that provides a central repository of reliable supplier intelligence, one that can be used as a stand-alone platform or integrated with your current ERP, sourcing, procurement, contract management, and other spend management systems of relevance through an easy-to-use API and an interface that is configurable to your organization's processes and privacy preferences. tealbook already includes 1M vetted, and de-duplicated, suppliers with rich insights, and expects to grow at an exponential rate, reaching 4M suppliers within 12 months.

And while this three-year-old start-up doesn't have the 100M supplier database yet, they have the solid foundations for a reliable, scalable, extensible, and integratable community supplier intelligence platform that can be configured to your organization's needs. That foundation is getting the attention of some of the biggest organizations and consultancies in North America.

In the tealbook platform, a user can easily search for potential suppliers, review verified supplier profiles, review community-generated expertise tags (similar to individual specialty tags on LinkedIn), review provided supplier content, create a supplier list for vetting, interact with the supplier to get more information, interact with her teammates for initial vetting and review, and then select a subset of those suppliers for export for consideration in her sourcing/procurement project. And she can do it through the web platform, or the mobile app if she is documenting new potential suppliers at trade shows. Plus the database of connections and employees is always up to date, so she knows who to contact, and who she knows, or knows of, at the potential supplier.

Supplier Discovery (incumbent or new) can be quite time consuming without such a platform. Most organizations would resort to searching online databases, getting recommendations from professional societies, going to events to get information from peers, and so on. Discovery can take weeks on its own, when a proper community-built and community-maintained platform can knock that down to hours. And the information is a lot more reliable than that obtained from a single source. This reduces the time, effort, and risk to discover, pre-vet, and qualify new suppliers substantially — which makes for an improved sourcing and procurement process.

And the search in the tealbook platform is quite powerful — it's not just keyword, industry, and tag based — it's also specific to your data and connections — it's semantic, and it uses machine learning to continually improve the relevance of supplier recommendations. And that's key to identifying the right suppliers for you. And it's a great choice even if your platform has a basic SIM module. For example, tealbook complements newer sourcing platforms such as ScoutRFP (and eliminates the need for a supplier network entirely); Coupa customers can add on tealbook to fill in the holes in the Coupa S2P platform; and Ariba customers are, as you may have guessed from above, finding it provides that missing piece: mobile, user-friendly, and socially derived supplier intelligence. With tealbook, they are finally able to rapidly and easily look up updated supplier data, and identify and qualify known or new suppliers, without going through an extensive process before initiating a sourcing event in Ariba.
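
To illustrate the idea of relevance-ranked discovery (and only the idea; tealbook's semantic and machine-learning engine is proprietary and certainly more sophisticated), here is a simplified stand-in that ranks suppliers by TF-IDF cosine similarity between a free-text query and capability descriptions:

```python
# Simplified stand-in for semantic supplier search: rank suppliers by
# TF-IDF cosine similarity between a query and capability descriptions.
# The suppliers and descriptions are invented for illustration.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

suppliers = {
    "Acme Metals": "precision CNC machining, aluminum and titanium parts",
    "BrightBoard": "printed circuit board (PCB) fabrication and assembly",
    "GreenPak": "sustainable corrugated packaging, FSC certified",
}

vectorizer = TfidfVectorizer()
matrix = vectorizer.fit_transform(suppliers.values())

query = vectorizer.transform(["pcb assembly supplier"])
scores = cosine_similarity(query, matrix).ravel()

# Print suppliers from most to least relevant for the query
for name, score in sorted(zip(suppliers, scores), key=lambda s: -s[1]):
    print(f"{name}: {score:.2f}")
```

A production system would learn from which recommendations users actually act on, which is where the machine-learning feedback loop the post describes comes in.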

In other words, if you are looking to know more about suppliers who have already transacted with your company, or regularly need to discover new suppliers (including increasing access to innovative and diverse suppliers), check out tealbook. It might be the platform for you.

BIQ: Alive and Well in the Opera House! Part II

Yesterday we noted that BIQ, from the sleepy little town of Southborough, which was acquired by Opera Solutions in 2012, is not only alive and well in the Opera House, but has been continually improved since its acquisition, and the new version, 5(.05), even has a capability no other spend analytics product on the market has.

So what is this new capability? We'll get to that. First, we want to note that since we last covered BIQ, a number of improvements have been made, and we'll cover those.

Secondly, we want to note that the core engine is as powerful as ever. Since it runs entirely in memory, on data entirely in memory, it can process 1M transactions per second. Need to add a dimension? Change a measure? Recalculate a report? It’s instantaneous on data sets of 1M transactions or less. And essentially real-time on data sets of 10M transactions. Try getting that performance from your database or OLAP engine. Just try it.

One of the first big changes they made was the complete separation of the engine from the viewer. This allowed them to create a minimal engine footprint (for in-memory execution) with a fully exposed API, which in turn allowed them to build a full web-based SaaS version as well as an improved desktop application, and to expose the full power of the BIQ engine to either instance.

They used QlikView for the web interface, and through this interface have created a collection of CIQ (category intelligence) and PIQ (performance intelligence) dashboards for just about every indirect category and standard performance category (supplier, operations, finance, etc.), in addition to a standard spend dashboard with reports and insights that rival any competitor dashboard. In addition, they have exposed all of the dimensions in the underlying data, and all of the measures that have been programmed, and a user can create not only ad-hoc reports, but ad-hoc cross-tabs and pivot tables on the fly.
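
For a feel of what such an ad-hoc cross-tab involves, here is the equivalent sketched with pandas on invented spend data (BIQ's own engine is in-memory and proprietary, so this is illustrative only):

```python
# Ad-hoc cross-tab of spend by category and quarter, built on the fly.
# The transactions are invented; a real spend cube would hold millions.
import pandas as pd

spend = pd.DataFrame({
    "category": ["IT", "IT", "Travel", "Travel", "IT"],
    "supplier": ["Dell", "Lenovo", "Delta", "Delta", "Dell"],
    "quarter":  ["Q1", "Q1", "Q1", "Q2", "Q2"],
    "amount":   [125_000, 90_000, 14_500, 16_200, 101_000],
})

pivot = spend.pivot_table(index="category", columns="quarter",
                          values="amount", aggfunc="sum", fill_value=0)
print(pivot)
```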

And they re-did the desktop interface to look like a modern analytics front-end built this decade. As those who saw it know, the old BIQ looked like a Windows 98 application, even though Microsoft never built anything with that amount of power. The new interface is streamlined, slick, and quick. It has all of the functionality of the old interface, plus modern widgets that are easy to rearrange, expand, minimize, and deploy.

One of the best improvements is the new data loader. It's still file based, but it supports a plethora of file formats and can be used to transform data from one format to another, or to merge files into a single file or cube, picking some or all of the data. It's quick, easy, and user friendly, can process massive amounts of data quickly, and lets users know almost immediately if there are errors or issues that need to be addressed.

Another great feature is the new anomaly detection engine that can be run in parallel with BIQ, built on the best of BIQ and Signal Hub technology. Right now, they only have an instance fine-tuned to T&E spend in the procurement space, but you can bet more instances will be coming soon. And this is a great start: T&E spend is plentiful, consists of a lot of small transactions, and makes it hard to find those needles that represent off-policy spend, off-contract spend, and, more importantly, fraudulent spend. Using the new anomaly detection feature, you can quickly identify when an employee is flying business instead of coach, using an off-contract airline, or, and this is key, charging pet kennels as lodging or strip club bills as executive dinners.
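
A naive illustration of the idea (and nothing like the BIQ/Signal Hub implementation, which is proprietary and far more sophisticated) is to z-score each expense against its category's distribution and surface the extremes:

```python
# Toy T&E outlier flagging: z-score each expense within its category
# and flag anything more than 2 standard deviations out. The data is
# invented; one suspicious lodging charge is planted.
import pandas as pd

expenses = pd.DataFrame({
    "employee": ["ann", "bob", "cam", "dee", "eve", "fay"],
    "category": ["lodging"] * 6,
    "amount":   [180, 195, 210, 175, 1450, 190],
})

stats = expenses.groupby("category")["amount"].agg(["mean", "std"])
expenses = expenses.join(stats, on="category")
expenses["zscore"] = (expenses["amount"] - expenses["mean"]) / expenses["std"]

# Anything more than 2 standard deviations out gets a second look
print(expenses.loc[expenses["zscore"].abs() > 2,
                   ["employee", "category", "amount"]])
```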

But this isn't the best new feature. The best new feature is the new Open Extract capability that provides true open access to Python-based analytics in BIQ. The new version of the BIQ engine, which runs 100% in memory, includes the Python runtime and a fully integrated IDE. Any analyst or data scientist who can script Python can access and manipulate the data in the BIQ engine in real time, using constructs built specifically for this purpose. And these custom-built scripts run just as fast as the built-in scripts, as they run natively in the engine. For example, you can run a Benford's Law analysis on 1M transactions in less than a second. And building it on Python, and the Anaconda distribution in particular, means that any of the open-source analytics packages distributed by Continuum Analytics can be used. There's nothing else like it on the market. It takes spend analysis to a whole new level.
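
Since the post mentions running a Benford's Law analysis in Python, here is what such a check might look like in broad strokes, on synthetic data rather than BIQ's in-engine data:

```python
# Benford's Law check: compare the observed leading-digit distribution
# of transaction amounts against Benford's expected frequencies.
# Synthetic data stands in for a real spend cube.
import numpy as np

rng = np.random.default_rng(42)
amounts = rng.lognormal(mean=5.0, sigma=2.0, size=1_000_000)

# Vectorized leading digit: scale each amount into [1, 10) and truncate
lead = (amounts / 10.0 ** np.floor(np.log10(amounts))).astype(int)

observed = np.bincount(lead, minlength=11)[1:10] / lead.size
expected = np.log10(1.0 + 1.0 / np.arange(1, 10))  # Benford frequencies

for digit in range(1, 10):
    print(f"{digit}: observed {observed[digit - 1]:.3f}  "
          f"benford {expected[digit - 1]:.3f}")
```

Large deviations between the observed and expected columns (e.g. a spike at a particular digit) are a classic hint of manufactured or manipulated transactions worth a closer look.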