Category Archives: Vendor Review

Algorhythm: Twenty Years Later and the Optimization Rhythm Has Not Missed a Beat

It’s been almost a decade since we covered Algorhythm (Part I and Part II). When the doctor last caught up with them mid-decade, they were deep into building AppliFire, their new accelerated cloud-native rapid application development platform with native mobile-first development support. And while it was very interesting, it was not Supply Chain focussed at the time, and not the core of what SI covers.

But fast forward about five years, and Algorhythm has rebuilt its entire Supply Chain Planning, Optimization, and Execution Management platform on top of this new development platform and now has one of the most modern cloud-native suites on the market — one that not only matches the capabilities of big-name peers like Kinaxis, E2open, and Infor, but also runs on any mobile platform with seamless integration across modules and platforms.

And their optimization capabilities are still among the best on the market, possibly rivaled only by Coupa Sourcing Optimization (powered by the Trade Extensions acquisition). Whether you are dealing with a demand plan, manufacturing plan, production plan, supply plan, logistics plan, route plan, or any other plan supported by the system, it can find the optimal solution no matter how many demand locations, plans, sites, suppliers, products, lanes, etc. are involved, and it can do so rapidly if the user doesn’t overload the scenario with unnecessary constraints. (Even without constraints, these models can get huge, as the doctor knows all too well, yet they solve rather rapidly in the Algorhythm platform.)

The Algorhythm suite of twelve (12) integrated Supply Chain Planning, Optimization, and Execution Management modules is not only one of the most complete end-to-end suites on the market, but also one of the most seamlessly integrated. It’s very easy to take the output of the “Demand Planner” (which allows the entire organization to collaborate on forecasts) and pump it into the “Manufacturing Network” (which integrates with the “Distribution Network” and “Inventory Planner”) to create a manufacturing (site) plan, pump that into the “Production Planner” to create a manufacturing schedule by site, push that into the “Logistics Planner” to determine the best logistics plan, then push that output into the “Route Planner” to optimize lanes, and so on. (The suite also includes a “Supply Planner” to optimize individual shipments for JIT manufacturing; an S&OP planner to help sales and operations balance demand vs. supply; a “Manufacturing Execution System” to break PDI (Production Parameters) down, fetch actual production data, and validate results; a “Distributor Ordering” Management module to automatically create distributor orders across thousands of distributors; and a “Beat Planner” to optimize last-mile delivery for the outbound supply chain of distributors or CPG companies in geographies, like Asia, where the last mile is difficult due to the inability to send large trucks, the need to restock daily, and so on.) With the exception of strategic sourcing and initial supplier selection, they basically have inbound demand to outbound supply covered in terms of supply chain optimization and management, once you know the suppliers you are going to buy from and the products that are acceptable to you.

The UI is homogeneous across the suite, and modern web-based components such as drill-down menus, buttons, and pop-ups make the suite easy to use — especially when it comes to tables and reports. The application supports built-in dynamic Excel-like grids and tables that can be altered on the fly, with built-in pagination to keep navigation and view control manageable, especially on tablets (for users on the go). It also supports standard (Excel-like) charts and graphs with drill-down, as well as modern calendar and interactive Google Maps components. Navigation is easy, with bread-crumb trails so a user doesn’t get lost, and response time is great. It’s powerful and usable, which is exactly what you need to manage your supply chain on the go.

There’s a reason they have some of the biggest names in the F500 as clients, and that reason is their unique combination of

  1. power,
  2. ease of use, and
  3. understanding of Asian supply chain needs (especially around last-mile delivery).

The last point is especially relevant, as many of the big-name American (and even German) supply chain companies don’t really understand the unique complexities of (last-mile) supply chains in India and Asia. However, Algorhythm’s unique capability, combined with their understanding, has made their platform a force to be reckoned with in a market that is one of the hardest in the world. And as a result, they have built a platform that is more than sufficient for every other market as well. the doctor is looking forward to seeing more of Algorhythm outside of the Asian market as, at least in his view, the supply chain market in general needs a good kick in the pants, as innovation therein has considerably lagged the Source-to-Pay market that we primarily cover here on SI.

So if you need a good Supply Chain Orchestration solution, the doctor strongly encourages you to check out Algorhythm … you won’t be disappointed.

Don’t Throw Away That Old Spend Cube, Spendata Will Recover It For You!

And if you act fast, to prove they can do it, they’ll recover it for free. All you have to do is provide them 12 months of data from your old cube. More on this at the end of the post, but first …

As per our article yesterday, many organizations, often through no fault of their own, end up with a spend cube (filled with their IP) that they spent a lot of money to acquire, but which they can’t maintain — either because it was built by experts using a third party system, built by experts who did manual re-mappings with no explanations (or repeatable rules), built by a vendor that used AI “pattern matching”, or built by a vendor that ceased supporting the cube (and simply provided it to the company without any of the rules that were used to accomplish the categorization).

Such a cube is unusable, and unless maintainable rules can be recovered, it’s money down the drain. But, as per yesterday’s post, it doesn’t have to be.

  1. It’s possible to build the vast majority of spend cubes on the largest data sets in a matter of days using the classic secret sauce described in our last post.
  2. All mappings leave evidence, and that evidence can be used to reconstruct a new and maintainable rules set.

Spendata has figured out that it’s possible to reverse engineer old spend cubes by deriving new rules by inference, based on the existing mappings. This is possible because the majority of such (lost) cubes are indirect spending cubes (where most organizations find the most bang for their buck). These can often be mapped to 95% or better accuracy using just Vendor and General Ledger code, with outliers mapped (if necessary) by Item Description.

And it doesn’t matter how your original cube was mapped — keyword matching algorithms, the deep neural net du jour, or Elves from Rivendell — because supplier patterns, GL-code patterns, and combined supplier-and-GL-code patterns can be deduced from the original mappings, and then poked at with intelligent (AI) algorithms to find and address the exceptions.
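To make the inference step concrete, here is a minimal Python sketch of how rules might be derived from an already-mapped cube. The function name, thresholds, and the (vendor, GL-code, category) tuple format are illustrative assumptions, not Spendata's actual implementation.

```python
from collections import Counter, defaultdict

def infer_rules(mapped_rows, min_support=5, min_purity=0.95):
    """Derive (vendor, gl_code) -> category rules from an already-mapped cube.

    mapped_rows: iterable of (vendor, gl_code, category) tuples.
    A rule is emitted when a (vendor, gl_code) pair maps to one category at
    least `min_purity` of the time, over at least `min_support` observations;
    everything else is an exception to be mapped by item description.
    """
    votes = defaultdict(Counter)
    for vendor, gl_code, category in mapped_rows:
        votes[(vendor, gl_code)][category] += 1

    rules, exceptions = {}, []
    for key, counts in votes.items():
        total = sum(counts.values())
        category, top = counts.most_common(1)[0]
        if total >= min_support and top / total >= min_purity:
            rules[key] = category      # clean, maintainable, explainable rule
        else:
            exceptions.append(key)     # outlier: fall back to item description
    return rules, exceptions
```

The key point is that the output is a plain, inspectable rules set, regardless of how opaque the original mapping process was.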

In fact, Spendata is so confident of its reverse-engineering that — for at least the first 10 volunteers who contact them (at the number here) — they’ll take your old spend cube and use Spendata (at no charge) to reverse-engineer its rules, returning a cube to you so you can see the results (as well as the reverse-engineering algorithms that were applied) and the sequenced plain-English rules that can be used (and modified) to maintain it going forward.

Note that there’s a big advantage to rules-based mapping that is not found in black-box AI solutions: you can easily see any new items at refresh time that are unmapped, and define rules to handle them. This has two benefits.

  1. You can see if you are spending where you are supposed to be spending against your contracts and policies.
  2. You can see how fast new suppliers, products, and human errors are entering your system. [And you can speak with the offending personnel in the latter case to prevent these errors in the future.]

And mapping this new data is not a significant effort. If you think about it, how many new suppliers with meaningful spending does your company add in one month? Is it five? Ten? Twenty? It’s not many, and you should know who they are. The same goes for products. Chances are you’ll be able to keep up with the necessary rule additions and changes in an hour a month. That’s not much effort for having a spend cube you can fully understand and manage and that helps you identify what’s new or changed month over month.
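As a rough illustration of that refresh-time check, if the rules set is a simple lookup keyed on (vendor, GL-code), the unmapped residue is just a set difference. The rule representation here is a hypothetical simplification:

```python
def unmapped_at_refresh(rules, new_transactions):
    """Return the (vendor, gl_code) pairs in the new month's data that no
    existing rule covers -- new suppliers, new GL codes, or keying errors
    that need a human to write (or fix) a rule."""
    return sorted({t for t in new_transactions if t not in rules})
```

The output is exactly the short review list described above: a handful of new suppliers and products per month, each needing at most one new rule.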

If you’re interested in doing this, the doctor is interested in the results, so let SI know what happens and we’ll publish a follow-up article.

And if you take Spendata up on the offer:

  1. take a view of the old cube with 13 consecutive months of data,
  2. give Spendata the first 12 consecutive months, and get the new cube back, and
  3. then add the 13th month of data to the new cube to see what the reverse-engineered rules miss.

You will likely find that the new rules catch almost all of the month 13 spending, showing that the maintenance effort is minimal, and that you can update the spend cube yourself without dependence on a third party.
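The month-13 check amounts to a simple spend-coverage measurement. A sketch, assuming a simplified {(vendor, GL-code): category} rule representation (not Spendata's actual format):

```python
def holdout_coverage(rules, month13):
    """Fraction of held-out month-13 spend that the reverse-engineered
    rules map.

    rules:   {(vendor, gl_code): category}
    month13: [(vendor, gl_code, amount), ...] -- the 13th month's rows,
             which were NOT given to the rule-derivation step.
    """
    total = sum(amount for _, _, amount in month13)
    mapped = sum(amount for v, g, amount in month13 if (v, g) in rules)
    return mapped / total if total else 1.0
```

A coverage figure near 1.0 is what demonstrates that ongoing maintenance is a small, in-house job rather than a third-party dependency.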

Per Angusta: End-to-End Cross-Platform Purchasing & Procurement Project Management

When we last discussed Per Angusta last year in our post on Purchasing CRM, they were a relatively new SaaS company focussing on the workflow that ties the entire Supply Management process together.

They were building a SaaS platform to manage sourcing pipelines, track savings for organizational validation, and make Procurement’s impact visible to the organization. And, more importantly, they were building a tool designed to manage the sourcing workflow by integrating (through APIs) with Sourcing, Procurement, and Supplier platforms … out-of-the-box. At the time, they were integrated, or building integrations, with Rosslyn Analytics, HICX, and Market Dojo. Today, they are also integrated with Coupa, Dhatim, D&B, and Ecovadis, and other integrations are on the way.

Back then, they offered mainly workflow, budget management, and great project management. Since then, they’ve added (better) ERP integration; improved alerts with rule definitions that will, in the next release, also support approval management and “toll gates” for better project management capability; added better contract management and tracking support (with forthcoming DocuSign integration); added supplier (information) management capability (that can import data from existing systems); added opportunity identification and management (with some innovative capability for those that also use Dhatim); and added an overall progress management capability … with the ability to take reporting snapshots from any point in time (in the past). In this post we are going to focus on three key advancements: opportunity management, supplier management, and the progress management capability.

Opportunity Management was designed as a “scratch-pad”-style application that allows a sourcing and procurement team to track potential opportunities as they are identified. To start identifying an opportunity, all that is needed is a name, an opportunity type, and a category. A short description, stakeholder, scope, implementation difficulty, and expected start date can also be defined. Once an opportunity is accepted, a potential budget impact can be defined, and once the opportunity is implemented, the expected savings can be defined and then the actual savings tracked. All of this rolls up to a dashboard that summarizes opportunities by status, type of impact, ease of implementation, and project duration. But the great thing is that if a customer also has Dhatim, they can use Dhatim’s AI to identify the likely best opportunities to attack and then feed them right into the Per Angusta platform.
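The lifecycle just described (identify with minimal data, accept with a budget impact, implement with expected savings, then track actuals) can be sketched as a small state model. All field and method names here are hypothetical, inferred from the description rather than taken from Per Angusta's actual data model:

```python
from dataclasses import dataclass
from enum import Enum
from typing import Optional

class Status(Enum):
    IDENTIFIED = "identified"
    ACCEPTED = "accepted"
    IMPLEMENTED = "implemented"

@dataclass
class Opportunity:
    # required at identification time
    name: str
    opportunity_type: str
    category: str
    # optional descriptive fields
    description: str = ""
    stakeholder: str = ""
    implementation_difficulty: str = ""
    status: Status = Status.IDENTIFIED
    budget_impact: Optional[float] = None     # set on acceptance
    expected_savings: Optional[float] = None  # set on implementation
    actual_savings: Optional[float] = None    # tracked afterward

    def accept(self, budget_impact: float) -> None:
        self.status = Status.ACCEPTED
        self.budget_impact = budget_impact

    def implement(self, expected_savings: float) -> None:
        self.status = Status.IMPLEMENTED
        self.expected_savings = expected_savings
```

Grouping records like these by status, impact type, and difficulty is what the summary dashboard then displays.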

Supplier Management, which can take data from the ERP, organizational Sourcing / Supply Management systems, and third-party systems (D&B, Ecovadis, etc.), can be used to provide a basic supplier snapshot, independent of any given Sourcing system, that merges all the relevant data and provides consistent information to all Supply Management personnel. If they integrate a supplier discovery platform, it will be quick and easy to identify the best current and new suppliers to invite to your next Sourcing or Procurement project.

The Progress Management capability is essentially a pair of operational and financial dashboards that summarize target, forecast, and actual results for the year on top of the opportunity management and tracking capability. It’s trivially simple, but when data from all the platforms is integrated, extremely powerful and useful to the Procurement and Finance organizations.

Per Angusta has come a long way in a short time, and SI looks forward to seeing what they do next year, especially as they are now working on “finding ways to use AI to make sourcing and procurement professionals much more productive and effective”.

Tealbook … Not Just a Journal Anymore!

When you hear teal, you probably think of the colour which gets its name from the coloured area around the eyes of the common teal, and when you hear tealbook, you’re probably thinking of a notebook in the calming hue of teal, perfect for a journal or personal contact book … maybe even one you can keep your supplier contacts in!

But we all know the problems with a contact book. Contact information changes as people are shuffled around the company. Contacts leave the company, and you not only have to update their information but add a new contact. There is only a limited amount of room for notes. It’s really hard to share the information and, if your peers are also keeping handwritten ‘teal journals’, to get them to share theirs, especially when you need it quickly.

That’s why supplier information management (SIM) modules and platforms were developed. All of the supplier and contact information in one place, accessible to, and updatable by, anyone in the organization. Plus, anyone can search the supplier database for suppliers new to them … though never for suppliers new to the organization, which was one major limitation. Another was the lack of community intelligence from peers. Were they selected or known for certain capabilities, or not? Do they have other customers for a product or service who will serve as references? Are they (now) capable of satisfying a minority designation or certification requirement (in a certain geography)? You can ask these questions and update the system to track the answers, but only a community keeps this information up to date.

But most importantly, with traditional Supplier Information Management (SIM), you know what you know and you don’t know what you don’t know. You have no way of determining how many potential suppliers you don’t know about for any given category or requirement. Or how good the suppliers are for your needs relative to the suppliers you don’t know about.

That’s where a modern Supplier Information Management with Supplier Discovery platform comes into play. A modern supplier discovery platform, which is more than just a supplier network (a supplier network is nothing more than a database of suppliers that have been transacted with through a particular platform), allows a community of organizations to keep track of, and provide information and recommendations on, potential suppliers (whether transacted with through a platform or not); allows potential suppliers to self-identify and provide relevant information up front (such as diversity status and certifications); and allows all parties to share information of potential relevance.

tealbook‘s vision is to create a shared, trusted supplier base of 100M suppliers that provides a central repository of reliable supplier intelligence, one that can be used as a stand-alone platform or integrated with your current ERP, sourcing, procurement, contract management, and other spend management systems of relevance through an easy-to-use API and an interface that is configurable to your organization’s processes and privacy preferences. tealbook already includes 1M vetted, de-duplicated suppliers with rich insights, and expects the database to grow daily, reaching 4 million suppliers within 12 months.

And while this three-year-old start-up doesn’t have the 100M supplier database yet, they have the solid foundations for a reliable, scalable, extensible, and integratable community supplier intelligence platform that can be configured to your organization’s needs, and that is getting the attention of some of the biggest organizations and consultancies in North America.

In the tealbook platform, a user can easily do a search for potential suppliers, review verified supplier profiles, review community-generated expertise tags (similar to individual specialty tags on LinkedIn), review provided supplier content, create a supplier list for vetting, interact with the supplier to get more information, interact with her teammates for initial vetting and review, and then select a subset of those suppliers for export for consideration in her sourcing/procurement project. And she can do it through the web platform, or through the mobile app if she is documenting new potential suppliers at trade shows. Plus, the database of connections and employees is always up to date, so she knows who to contact, and who she knows, or knows of, at the potential supplier.

Supplier Discovery (incumbent or new) can be quite time consuming without such a platform. Most organizations resort to searching online databases, getting recommendations from professional societies, going to events to get information from peers, and so on. Discovery can take weeks on its own, when a proper community-built and community-maintained platform can knock that down to hours. And the information is a lot more reliable than that obtained from a single source. This substantially reduces the time, effort, and risk to discover, pre-vet, and qualify new suppliers, which makes for an improved sourcing and procurement process.

And the search in the tealbook platform is quite powerful: it’s not just keyword, industry, and tag search; it’s also specific to your data and connections; it’s semantic; and it uses machine learning to continually improve the relevance of supplier recommendations. That’s key to identifying the right suppliers for you. And it’s a great choice even if your platform has a basic SIM module. For example, tealbook complements newer sourcing platforms such as ScoutRFP (and eliminates the need for a supplier network entirely); Coupa customers can add tealbook to fill in the holes in the Coupa S2P platform; and Ariba customers are, as you may have guessed from above, finding it provides that missing piece: mobile, user-friendly, socially derived supplier intelligence. With tealbook, they are finally able to rapidly and easily look up updated supplier data and identify and qualify known or new suppliers without going through an extensive process before initiating a sourcing event in Ariba.

In other words, if you are looking to know more about suppliers who have already transacted with your company, or regularly need to discover new suppliers (including increasing access to innovative and diverse suppliers), check out tealbook. It might be the platform for you.

BIQ: Alive and Well in the Opera House! Part II

Yesterday we noted that BIQ, from the sleepy little town of Southborough, which was acquired by Opera Solutions in 2012, is not only alive and well in the Opera House, but has been continually improved since its acquisition, and the new version, 5(.05), even has a capability no other spend analytics product on the market has.

So what is this new capability? We’ll get to that. First, we want to note that since we last covered BIQ, a number of improvements have been made, and we’ll cover those.

Secondly, we want to note that the core engine is as powerful as ever. Since the engine, and the data it operates on, runs entirely in memory, it can process 1M transactions per second. Need to add a dimension? Change a measure? Recalculate a report? It’s instantaneous on data sets of 1M transactions or less, and essentially real-time on data sets of 10M transactions. Try getting that performance from your database or OLAP engine. Just try it.

One of the first big changes they made was the complete separation of the engine from the viewer. This allowed them to do two things: one, create a minimal engine footprint (for in-memory execution) with a fully exposed API; and two, use that API to build a full web-based SaaS version as well as an improved desktop application, each exposing the full power of the BIQ engine.

They used QlikView for the web interface and through this interface have created a collection of CIQ (category intelligence) and PIQ (performance intelligence) dashboards for just about every indirect category and standard performance category (supplier, operations, finance, etc.) in addition to a standard spend dashboard with reports and insights that rivals any competitor dashboard. In addition, they have exposed all of the dimensions in the underlying data and measures that have been programmed and a user can not only create ad-hoc reports, but ad-hoc cross-tabs and pivot tables on the fly.

And they re-did the desktop interface to look like a modern analytics front-end built this decade. As those who saw it know, the old BIQ looked like a Windows 98 application, even though Microsoft never built anything with that amount of power. The new interface is streamlined, slick, and quick. It has all of the functionality of the old interface, plus modern widgets that are easy to rearrange, expand, minimize, and deploy.

One of the best improvements is the new data loader. It’s still file based, but it supports a plethora of file formats and can be used to transform data from one format to another or merge files into a single file or cube, picking some or all of the data. It’s quick, easy, and user friendly, can process massive amounts of data quickly, and lets users know almost immediately if there are errors or issues that need to be addressed.
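As a toy illustration of what such a loader does at its core (merge multiple sources, keep only selected columns, flag problem rows immediately), here is a sketch using Python's standard csv module; this is not BIQ's actual loader, just the general pattern:

```python
import csv

def merge_csvs(sources, columns):
    """Merge several CSV sources (file-like objects) into one cube-ready
    table, keeping only the requested columns and reporting the location
    (source index, row index) of rows with missing fields, so data issues
    surface immediately instead of after the cube is built."""
    rows, errors = [], []
    for i, src in enumerate(sources):
        for j, rec in enumerate(csv.DictReader(src)):
            if any(rec.get(c) in (None, "") for c in columns):
                errors.append((i, j))          # flag problem row for review
            else:
                rows.append([rec[c] for c in columns])
    return rows, errors
```

The point is the immediate error report: bad rows are pinpointed at load time, not discovered later inside the cube.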

Another great feature is the new anomaly detection engine that can be run in parallel with BIQ, built on the best of BIQ and Signal Hub technology. Right now, they only have an instance fine-tuned to T&E spend in the procurement space, but you can bet more instances will be coming soon. And this is a great start: T&E spend is plentiful, consists of a lot of small transactions, and makes it hard to find those needles that represent off-policy spend, off-contract spend, and, more importantly, fraudulent spend. Using the new anomaly detection feature, you can quickly identify when an employee is flying business instead of coach, using an off-contract airline, or, and this is key, charging pet kennels as lodging or strip club bills as executive dinners.

But this isn’t the best new feature. The best new feature is the new Open Extract capability that provides true open access to Python-based analytics in BIQ. The new version of the BIQ engine, which runs 100% in memory, includes the Python runtime and a fully integrated IDE. Any analyst or data scientist who can script Python can access and manipulate the data in the BIQ engine in real time, using constructs built specifically for this purpose. And these custom-built scripts run just as fast as the built-in scripts, as they run natively in the engine. For example, you can run a Benford’s Law analysis on 1M transactions in less than a second. And building it on Python, and the Anaconda distribution in particular, means that any of the open-source analytics packages from Continuum Analytics’ ecosystem can be used. There’s nothing else like it on the market. It takes spend analysis to a whole new level.
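Since Benford’s Law is the example given, here is a minimal sketch of what such an analysis looks like in plain Python (BIQ’s in-engine constructs would differ; this is just the underlying math): first-digit frequencies of transaction amounts compared against the expected log10(1 + 1/d) distribution.

```python
import math
from collections import Counter

def benford_deviation(amounts):
    """Compare leading-digit frequencies of transaction amounts against
    Benford's Law. Returns (observed, expected) frequency dicts over the
    digits 1-9; large deviations flag transaction sets worth a closer look
    (e.g. fabricated expense amounts)."""
    digits = [int(str(abs(a)).lstrip("0.")[0]) for a in amounts if a]
    n = len(digits) or 1
    counts = Counter(digits)
    observed = {d: counts.get(d, 0) / n for d in range(1, 10)}
    expected = {d: math.log10(1 + 1 / d) for d in range(1, 10)}
    return observed, expected
```

On naturally occurring spend data, the observed distribution should hug the expected one (about 30% of amounts leading with 1); a flat or spiky observed distribution is the anomaly signal.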