Advanced Supplier Management TOMORROW — No Gen-AI Needed!

Back in late 2018 and early 2019, before the GENizah Artificial Idiocy craze began, the doctor penned a sequence of AI Series (totalling 22 articles) on Spend Matters covering AI in X Today, Tomorrow, and The Day After Tomorrow for Procurement, Sourcing, Sourcing Optimization, Supplier Discovery, and Supplier Management. All of which was implemented, about to be implemented, or capable of being implemented without Gen-AI, and most definitely not doable with it.

To make it abundantly clear that you don’t need Gen-AI for any advanced back-office (fin)tech, and that, in fact, you should never even consider it for advanced tech in these categories (because it cannot reason, cannot guarantee consistency, and confidence in the quality of its outputs can’t even be measured), we’re going to talk about all the advanced features enabled by Assisted and Augmented Intelligence that are (or soon will be) in development now and that you will see in leading best of breed platforms over the next few years.

Unlike prior series, we’re identifying the sound ML/AI technologies that are, or can be, used to implement the advanced capabilities that are currently emerging, and will soon be found, in Source to Pay technologies that are truly AI-enhanced. (Which, FYI, may not match one-to-one with what the doctor chronicled five years ago because, like time, tech marches on.)

Today we continue with AI-Enhanced Supplier Management that is in development “today” (and that was expected to be in development by now when the first series was penned five years ago) and will soon be a staple in best of breed platforms. (This article roughly corresponds with AI in Supplier Management The Day After Tomorrow, published on Spend Matters in May 2019.)

TOMORROW

Supplier Future State Predictions

Supplier management platforms of today can integrate market intelligence with community intelligence, internal data, and external data sources and give you great insight into a supplier’s current state from a holistic perspective.

Along each dimension, future states can be predicted based on trends. But single trends don’t tell the whole story. Now that we have decades of data on a huge number of companies available on the internet across financial, sustainability, workforce, production, and other dimensions, which can be analyzed over time and cross-correlated, we can do more, and know more.

Based on this correlated data, machine learning can be used to build functions, by industry and company size, that predict a company’s future state with high confidence, provided there are enough sufficiently accurate data points for the company in question. Now that these platforms can monitor enough internal, community, and market data and pull in a plethora of data feeds, they can compute metrics with high confidence along a host of dimensions, and this in turn allows them to compute the inputs needed to predict future state, as long as the vendor’s platform has enough historical data on enough companies to define trends and build predictor functions using machine learning.
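To make this concrete, here is a minimal sketch (in Python, using scikit-learn) of what such predictor functions could look like: one model per industry and company-size segment, trained on historical snapshots to predict a composite health score twelve months out. The feature names, the synthetic data, and the target definition are all hypothetical; this illustrates the pattern, not any particular vendor’s implementation.

    # Minimal sketch: train one "future health score" predictor per (industry, size band)
    # segment on historical snapshots. All column names and data are hypothetical;
    # a real implementation would train on years of curated supplier data.
    import numpy as np
    import pandas as pd
    from sklearn.ensemble import GradientBoostingRegressor

    rng = np.random.default_rng(42)
    n = 2000
    history = pd.DataFrame({
        "industry": rng.choice(["electronics", "chemicals", "logistics"], n),
        "size_band": rng.choice(["small", "mid", "large"], n),
        "liquidity_ratio": rng.normal(1.5, 0.4, n),
        "on_time_delivery": rng.uniform(0.7, 1.0, n),
        "esg_score": rng.uniform(0, 100, n),
        "headcount_trend": rng.normal(0.02, 0.05, n),
    })
    # Target: the supplier's composite health score observed 12 months later
    history["health_12m"] = (40 * history["on_time_delivery"] + 0.3 * history["esg_score"]
                             + 10 * history["liquidity_ratio"] + rng.normal(0, 3, n))

    features = ["liquidity_ratio", "on_time_delivery", "esg_score", "headcount_trend"]
    models = {}
    for (industry, size), seg in history.groupby(["industry", "size_band"]):
        models[(industry, size)] = GradientBoostingRegressor().fit(seg[features], seg["health_12m"])

    # Predict the likely future state of a supplier, given enough current data points
    candidate = pd.DataFrame([{"liquidity_ratio": 1.1, "on_time_delivery": 0.86,
                               "esg_score": 62.0, "headcount_trend": -0.03}])
    print(models[("electronics", "mid")].predict(candidate[features]))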

Not only can you enter a relationship based on a current risk profile, but also on a likely future risk profile based on what the company could look like at the end of the desired contract term. If you want a five-year relationship, taking advantage of that great deal due to a temporary blip in supplier or market performance may not be a good idea if suppliers historically in this situation went into a downward spiral after accepting a big contract they ultimately weren’t prepared to deliver on.

Category Based Supplier Rebalancing

We could actually do this today, as a few vendors are now offering this capability, but it’s not yet part of supplier management platforms, and the newly emergent offerings are often limited to a few categories. Tomorrow’s platforms, however, will continually analyze your categories holistically (along the most relevant dimensions, which could include cost, supply assurance, environmental friendliness, etc.) to determine if the supply mix you are currently using is the best one, let you know if there could be a better one, and suggest changes to orders (as long as that doesn’t jeopardize contracts where the jeopardy could come with a financial or legal penalty).

It’s just a matter of re-running an optimization model on, say, a monthly basis with updated data on price, supply assurance, and environmental friendliness (using the appropriate data for each, such as market quotes, current supplier risk, carbon per unit, etc.), and comparing the optimal result to the current allocation plan. If it’s within tolerance, stay on track; if it’s slightly out of tolerance, notify a human to conduct a thorough analysis and determine whether something needs to change; if it’s way out of tolerance, recommend a change along with the data that supports it.
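A minimal sketch of that monthly re-run, assuming a simple linear allocation model (scipy’s linprog) with illustrative supplier data, weights, and tolerance bands:

    # Re-optimize the category allocation with fresh data, compare it to the current
    # plan, and route the result through tolerance bands. All figures are illustrative.
    import numpy as np
    from scipy.optimize import linprog

    suppliers  = ["A", "B", "C"]
    unit_cost  = np.array([9.80, 10.10, 9.55])    # latest market quotes
    risk_score = np.array([0.10, 0.05, 0.30])     # current supplier risk, 0..1
    carbon_kg  = np.array([1.2, 2.4, 2.9])        # kg CO2e per unit
    capacity   = np.array([6000, 5000, 4000])
    demand     = 10000
    weights    = {"cost": 1.0, "risk": 20.0, "carbon": 0.5}   # business priorities

    # Blend the dimensions into one per-unit objective coefficient and re-solve
    c = weights["cost"] * unit_cost + weights["risk"] * risk_score + weights["carbon"] * carbon_kg
    res = linprog(c, A_eq=[np.ones(len(suppliers))], b_eq=[demand],   # meet total demand
                  bounds=list(zip(np.zeros(len(suppliers)), capacity)))
    optimal = res.x

    current = np.array([5000, 3000, 2000])                 # allocation in force today
    drift = np.abs(optimal - current).sum() / demand       # share of volume that would move

    if drift <= 0.05:
        action = "stay on track"
    elif drift <= 0.15:
        action = "notify a human to review a full analysis"
    else:
        action = "recommend a change, with the supporting data"
    print(optimal.round(0), f"drift={drift:.0%}", action)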

Supply Base Rebalancing

Once you have a platform that is continually reanalyzing categories and supplier-based assignments, you can start looking across the supply base and identify suppliers that are hardly used (and an overall drain on your company when you consider the costs of maintaining the relationship, and even maintaining the supplier profile) and suppliers that are potentially overused (and pose a risk to your business simply based on the level of supply, as even the biggest company can stumble, fall, and crash to the ground on a single unexpected event, such as the unexpected installation of a spreadsheet-driven Master of Business Annihilation as CEO who has no clue what the business does or how to run it effectively and, thus, causes a major stumble, as summarized in Jason Premo’s article).

And, more importantly, identify new suppliers who have been performing great with slowly increasing product / service loads and who should be awarded more of the business over older suppliers that are becoming less innovative and more risky to the operation at large. Now, this will just be from a supply perspective, and not a supply chain perspective (as these programs focus on suppliers and not logistics, warehousing, or overall global supply issues), but this will be very valuable information for Sourcing and New Product Development teams who always want to find the best suppliers for a new product or service requirement.

Real-Time Order Rebalancing

Since tomorrow’s platforms will be able to recommend category rebalancing across suppliers, they will also be able to quickly recommend real-time order rebalancing strategies if a primary supplier is predicted to be late on a delivery (or a human indicates that a shipment’s ETA has slipped by 60 days). This is because they will be integrated with current contracts and e-procurement systems, and will have a bevy of data on projected availability and real historical performance. Thus, it will be relatively simple to recommend the best alternatives by simply re-running the machine learning and optimization models with the problematic supplier taken out of the picture.
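Conceptually, this is the same allocation model with the late supplier’s future capacity removed. A minimal, self-contained sketch (reusing the blended per-unit objective from the rebalancing example above; all figures illustrative):

    # When a supplier is flagged late, cap its usable capacity at what is already
    # confirmed in transit and re-solve the same allocation model.
    import numpy as np
    from scipy.optimize import linprog

    suppliers = ["A", "B", "C"]
    objective = np.array([12.40, 12.30, 17.00])    # blended cost/risk/carbon per unit
    capacity  = np.array([6000, 5000, 4000])
    demand    = 10000

    def rebalance_without(late_supplier, confirmed_in_transit=0):
        capped = capacity.copy()
        capped[suppliers.index(late_supplier)] = confirmed_in_transit
        res = linprog(objective, A_eq=[np.ones(len(suppliers))], b_eq=[demand],
                      bounds=list(zip(np.zeros(len(suppliers)), capped)))
        return res.x if res.success else None       # None => no feasible cover, escalate

    print(rebalance_without("C", confirmed_in_transit=500))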

Carbon-Based Rebalancing

Similarly, with the rise of carbon calculators and third-party public sources on average carbon production per plant, and even per unit of product, it will be relatively easy for these supplier management platforms to build up carbon profiles per supplier, determine how much of that carbon the company is responsible for, how those profiles compare to other profiles, and what the primary reasons for the differences are.

The company can then focus on suppliers using, or moving to, more environmentally friendly production methods, optimize logistics networks, and proactively rebalance awards among supplier plants to make sure the plants producing a product are the ones closest to where the product will be shipped and consumed. It’s simply a carbon focussed model vs. a price focussed one.
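In the toy model above, that switch is nothing more than a re-weighting of the same objective, which is why a platform can maintain a price view and a carbon view of the very same category side by side. A minimal, self-contained sketch with illustrative weights:

    # Re-weight the same allocation objective to get a carbon-focussed award
    # alongside the price-focussed one. All figures are illustrative.
    import numpy as np
    from scipy.optimize import linprog

    unit_cost  = np.array([9.80, 10.10, 9.55])
    risk_score = np.array([0.10, 0.05, 0.30])
    carbon_kg  = np.array([1.2, 2.4, 2.9])
    capacity   = np.array([6000, 5000, 4000])
    demand     = 10000

    def allocate(weights):
        c = weights["cost"] * unit_cost + weights["risk"] * risk_score + weights["carbon"] * carbon_kg
        res = linprog(c, A_eq=[np.ones(3)], b_eq=[demand],
                      bounds=list(zip(np.zeros(3), capacity)))
        return res.x.round(0)

    print("price-focussed: ", allocate({"cost": 1.0, "risk": 20.0, "carbon": 0.5}))
    print("carbon-focussed:", allocate({"cost": 0.2, "risk": 20.0, "carbon": 5.0}))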

SUMMARY

Now, we realize some of these descriptions are dense, but that’s because our primary goal is to demonstrate that one can use the more advanced ML technologies that already exist, harmonized with market and corporate data, to create even smarter Supplier Management applications than most people (and last-generation suites) realize, without any need for (or use of) Gen-AI. More importantly, the organization will be able to rely on these applications to reduce time, tactical data processing, spend, and risk while increasing overall organizational and supplier performance 100% of the time, as the platform will never take an action or make a recommendation that doesn’t conform to the parameters and restrictions placed upon it. It just requires smart vendors who hire very smart people who use their human intelligence (HI!) to full potential to create brilliant Supplier Management applications that buyers can rely on with confidence, no matter the category or organization size, always knowing that the application will know when a human has to be involved, and why!

ketteQ: An Adaptive Supply Chain Planning Solution Founded in the Modern Age

As per yesterday’s post, any supply chain planning solution developed before 2010 isn’t necessarily built on a modern multi-tenant cloud-ready SaaS stack (as such a stack didn’t exist then, so it would have had to be partially to fully re-platformed to become modern multi-tenant cloud-ready SaaS). Any solution built after 2010 was much more likely to be built on a modern multi-tenant cloud-ready SaaS stack. Not guaranteed, but more likely.

ketteQ’s Adaptive Supply Chain Planning Solution is one of the solutions that was built in the modern age on a fully modern multi-tenant cloud-native SaaS stack, and one that has some advantages you won’t find in most of the competition. I was able to get an early view of the latest product, which was released last week. Founded in 2018, ketteQ was built from the ground up to embody all of the lessons learned from the founders’ 100+ successful supply chain planning solution implementations across industries and systems, and the wisdom gained from building two prior supply chain companies, with the goal of addressing all of the issues they encountered with previous-generation solutions. The modern architecture was purpose-built to fully utilize the transformational power of optimization and machine learning. It was a tall feat, and while still a work in progress (as they admit they currently only have three mature core modules on par with their peers in depth and breadth, although all modules inherit the advantages of their modern stack and solver architecture), it is one they have pulled off, as they can also address a number of other areas with their other, newer modules and integration to third-party systems (particularly for order management, production scheduling, and transportation management) and address End-to-End (E2E) supply chain planning, with native Integrated Business Planning (IBP) across demand, inventory, and supply — which are their core modules, along with a module for Service Parts Planning and S&OP Planning.

In addition to this solid IBP core, they also have capabilities across cost & price management, asset management, fulfillment & allocation, work order management, and service parts delivery. And all of this can be accessed and controlled through a central control tower.

And most importantly, the entire solution is cloud native and designed to leverage horizontal scalability and connectivity. It is enabled by a single data model that can be stored in an easily accessible open SQL database, in a contemporary architecture that supports all of their solutions. The platform is extendable to support scalability, multiple models, and multiple scenarios per model, and it includes a new, highly scalable solver that can perform thousands of heuristic tests and apply a genetic algorithm with machine learning, testing all demand ranges against all supply options to find a solution that minimizes cost / maximizes margin against potential demand changes and fill rates.

Of course, the ketteQ platform comes with a whole repertoire of applied Optimization/ML/Genetic/Heuristic models for demand planning, inventory planning, and supply planning, as well as S&OP. In addition, because of its extensible architecture, instead of manually running single scenarios one at a time, it can run up to thousands of scenarios for multiple models simultaneously, and present the results that best meet the goal or the best trade-off between multiple goals.
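To illustrate the flavour of scenario-driven solving (this is only a toy, not ketteQ’s actual solver), here is a tiny genetic algorithm that evaluates candidate supply mixes against hundreds of sampled demand scenarios and keeps the mix with the best expected cost-plus-shortfall. Every figure is invented:

    # Toy genetic algorithm: evolve candidate supply mixes and score each one across
    # many sampled demand scenarios. Illustrative only.
    import numpy as np

    rng = np.random.default_rng(7)
    unit_cost = np.array([9.8, 10.1, 9.55])
    capacity  = np.array([6000, 5000, 4000])
    demand_scenarios = rng.normal(10000, 1500, 500)          # sampled demand range
    SHORTFALL_PENALTY = 25.0                                 # cost per unshipped unit

    def fitness(alloc):
        cost = (alloc * unit_cost).sum()
        shortfall = np.clip(demand_scenarios - alloc.sum(), 0, None).mean()
        return -(cost + SHORTFALL_PENALTY * shortfall)       # higher is better

    pop = rng.uniform(0, 1, (40, 3)) * capacity              # 40 random candidate mixes
    for _ in range(200):
        scores = np.array([fitness(a) for a in pop])
        parents = pop[np.argsort(scores)][-20:]              # keep the fittest half
        cross = (parents[rng.integers(0, 20, 40)] + parents[rng.integers(0, 20, 40)]) / 2
        mutation = rng.normal(0, 0.03, cross.shape) * capacity
        pop = np.clip(cross + mutation, 0, capacity)         # stay within capacity

    best = pop[np.argmax([fitness(a) for a in pop])]
    print(best.round(0))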

ketteQ does all of this in a platform that is, compared to older-generation solutions:

  • fast(er) to deploy — the engine was built for configuration, their scalable data model and data architecture make it easy to transform and integrate data, and they can customize the UX quickly as well
  • easy to use — every screen is configured precisely to efficiently support the task at hand, and the UX can be deployed standalone or as a Salesforce front end
  • cost-effective — since the platform was built from the ground up to be a true multi-tenant solution using a centralized, extensible data architecture, each instance can spin off multiple models, which can spin off multiple scenarios, each of which only requires the additional processing for that scenario instance and only the data required by that scenario; and as more computing power is required, it supports automatic horizontal scaling in the cloud.
  • better performing — it can run more scenarios across more models using modern multi-pass algorithms that combine traditional machine learning with genetic algorithms and multi-pass heuristics, going broad and deep at the same time to find solutions that can withstand perturbations while maximizing the defined goals using whatever weighting the customer desires (cost, delivery time, carbon footprint, etc.)
  • more insightful — the package includes a full suite of analytics built on Python that are easily configured, extended, and integrated with AI engines (including Gen-AI if you so desire), which allows data scientists to add their own favorite forecasting, optimization, analytics, and AI algorithms; in addition, it can easily be configured to run and display best-fit forecasts at any level of hierarchy and automatically pull in and correlate external indicators as well
  • more automated — the platform can be configured to automatically run through thousands of scenarios up and down the demand, supply, and inventory forecasts on demand as data changes, so the platform always has the best recommendation on the most recent data; these scenarios can include multiple sourcing options, logistics options, and even bills of material; and they can be consolidated into meta-scenarios for end-to-end integrated S&OP across demand, supply, and inventory
  • seamless Salesforce integration — takes you from customer demand all the way down to supply chain availability; seamless collaboration workflow with Salesforce forecast, pipeline, and order objects in the Salesforce front end
  • AWS native — for full leverage of horizontal scalability and serverless computing, multi-tenant optimization and analytics, and single-tenant customer data. Moreover, the solution is also available on the AWS Marketplace.

In this coverage, we are going to primarily focus on demand and supply (planning) as that is the most relevant from a sourcing perspective. Both of these heavily depend on the platform’s forecasting ability. So we’ll start there.

Forecasting

In the ketteQ platform, forecasts, which power demand and supply planning,

  • can be by day, week, month, or any other time period of interest
  • can be global, regional, or local, at any level of the (geo) hierarchy you want
  • can be by category, product line, or individual product
  • can be by business unit, customer, or channel
  • can be computed using sales data/forecasts, finance data, marketing data/forecasts, baselines, and consensus
  • can use a plethora of models (including, but not limited to, Arima[Multivariate], Average, Croston, DES, ExtraTrees, and Lasso[variants]), as well as user-defined models in Python
  • can be configured to select the best-fit algorithm automatically based on the available historical data (just POS data, POS data augmented with economic indicators, or external data where there is insufficient POS data); a minimal sketch of this kind of best-fit selection follows this list
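Here is a minimal sketch of what that kind of automatic best-fit selection looks like: back-test a few candidate forecasters on a holdout window and keep the one with the lowest error. The model library here is deliberately tiny and hand-rolled; a real platform would include ARIMA, Croston, tree ensembles, and user-defined Python models:

    # Back-test a small library of forecasters on a holdout window and pick the
    # best fit; refit the winner on the full history. Illustrative only.
    import numpy as np

    def naive(history, horizon):             # repeat the last observation
        return np.repeat(history[-1], horizon)

    def moving_average(history, horizon, window=4):
        return np.repeat(history[-window:].mean(), horizon)

    def ses(history, horizon, alpha=0.3):    # simple exponential smoothing
        level = history[0]
        for y in history[1:]:
            level = alpha * y + (1 - alpha) * level
        return np.repeat(level, horizon)

    def best_fit(history, holdout=6):
        train, test = history[:-holdout], history[-holdout:]
        candidates = {"naive": naive, "moving_average": moving_average, "ses": ses}
        errors = {name: np.abs(f(train, holdout) - test).mean() for name, f in candidates.items()}
        winner = min(errors, key=errors.get)
        return winner, candidates[winner](history, horizon=12)

    rng = np.random.default_rng(1)
    weekly_demand = 100 + 5 * np.sin(np.arange(104) / 8) + rng.normal(0, 3, 104)
    print(best_fit(weekly_demand))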

These models, like all models in the platform, can be set up using a very flexible and responsive hierarchy approach, with each model automatically pulling in the model above it and then altering it as necessary (simply by modifying constraints, goals, data [sources], etc.). In the creation of models, restore points can be defined at any level before new data or new scenarios are run so the analyst can backtrack at any time.
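A conceptual sketch of that hierarchy-with-restore-points pattern (an illustration of the concept, not ketteQ’s actual object model): a child model pulls in everything from its parent, overrides only what differs, and can snapshot its settings before new data or scenarios are applied:

    # Child models inherit and override parent settings; snapshots allow backtracking.
    import copy
    from dataclasses import dataclass, field

    @dataclass
    class ForecastModel:
        name: str
        settings: dict = field(default_factory=dict)
        parent: "ForecastModel | None" = None
        _restore_points: list = field(default_factory=list)

        def effective_settings(self):
            base = self.parent.effective_settings() if self.parent else {}
            return {**base, **self.settings}          # child overrides parent

        def snapshot(self):
            self._restore_points.append(copy.deepcopy(self.settings))

        def restore(self):
            self.settings = self._restore_points.pop()

    global_model = ForecastModel("global", {"grain": "week", "algorithm": "best_fit"})
    emea = ForecastModel("emea", {"grain": "day"}, parent=global_model)

    emea.snapshot()                                   # restore point before the experiment
    emea.settings["algorithm"] = "croston"            # try a different algorithm
    print(emea.effective_settings())
    emea.restore()                                    # backtrack to the saved state
    print(emea.effective_settings())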

Demand Planning

The demand planning module in ketteQ can compute demand plans that take into account:

  • market intelligence input to refine the forecast (which can include thousands of indicators across 196 countries from Trading Economics, as well as your own data feeds, and which can optionally include correlation factors for correlation analysis)
  • demand sensing across business units, channels, customers, and any other data sources that are available to be integrated into the platform
  • priorities across channels, customers, divisions, and departments
  • multiple “what if” scenarios (simultaneously), as defined by the user
  • consensus demand forecasts across multiple forecasts and accepted what-ifs

The module can then display demand (plans) in units or value across actuals, sales forecasts, finance forecasts, marketing forecasts, baseline(s), and consensus.

In addition to this demand planning capability and all of the standard capabilities you would expect from a demand planning solution, the platform also allows you to:

  • Prioritize demand for planning and fulfillment
  • Track demand plan metrics
  • Consolidate market demand plans
  • Handle NPI & transition planning
  • Define user-specific workflows

Supply Planning

The reciprocal of the demand planning module, the supply planning module in ketteQ leverages what they call the PolymatiQ solver. (See their latest whitepaper at this link.)

Their capabilities for product and material planning include the ability to:

  • compute plans by the day, week, month, or any other time frame of interest
  • do so globally, regionally, locally, or at any level of the hierarchy you want
  • and do so for all suppliers, regional suppliers, local suppliers, or any other subset of suppliers of interest, as well as view by customer-focused dimensions such as channel, business unit, and customer
  • use the current demand forecast and modifications, taking into account current and projected supply availability, safety stock, inventory levels, forecasted consumption rates, expected defect rates, rotatable pools, and current supplier commitments, among other variables
  • run scenarios that optimize for cost and service
  • coordinate raw and pack material requirements for each facility
  • support collaboration with suppliers and manufacturing
  • manage sourcing options and alternates (source/routes) for make, buy, repair and transfers

Moreover, supply plans, like demand plans, can be plotted over time based on any factors or factor pair of interest, such as supply by time frame, sourcing cost vs fill rate, etc.

In addition, the supply planning module for distribution requirements can:

  • develop daily deployment plans
  • develop time-phased fulfillment and allocation plans
  • manage exceptions and risks
  • conduct what-if scenario analysis
  • execute short-term plans
  • track obsolescence and perform aging analysis/tracking

Inventory Planning

We did not see or review the inventory planning module in depth, even though it is one of their three core modules, so all we can tell you is that it has most of the standard functionality one would expect, and, given the founders’ heritage in the service parts planning world, you know it can handle complex multi-echelon / multi-item planning. Capabilities include:

  • manage raw, pack, and finished goods inventory
  • set and manage dynamic safety stock, EOQ, and ROP levels and policies
  • ensure inventory balance and execution, with support for ASL (authorized stocking list), time-phased, and trigger planning by segment
  • support parametric optimization for cost and service balancing
  • minimize supply chain losses through better inventory management
  • optimize service levels relative to goals

Salesforce: IBP

As we noted, the ketteQ platform supports native Salesforce integration, and you can do full IBP through the custom front-end built in Salesforce CRM, which allows you to seamlessly jump back and forth between your CRM and SCM, following the funnel from customer order to factory supply and back again.

The Salesforce front-end, which is very extensive, supports the typical seven-step IBP process:

  1. Demand Plan
  2. Demand Review
  3. Supply Plan
  4. Pre IBP Review
  5. Executive IBP Review
  6. Operational Plan
  7. Finalization

… and allows it to be done easily in Salesforce design style, with walk-through tab-based processes and sub-tabs to go from summary to detail to related information. Moreover, the UI can be configured to only include relevant widgets, etc.

In addition, users can easily select an IBP Cycle; drill into orders and track order status; define custom alerts; subscribe to plans, updates, and related reports; follow sales processes including the identification and tracking of opportunities; jump into their purchase orders (on the supply side); track assets; manage programs; and access control tower functionality.

As a result of the integration with Salesforce objects, including Pipeline and Orders, the solution helps bridge the gap between sales and supply chain organizations, enabling executive-driven process change. As an advanced supply chain solution on the Salesforce AppExchange, it offers the broad base of Salesforce customers on the Manufacturing Cloud a slew of unique integration possibilities.
And, of course, if you don’t have Salesforce, you still have all this functionality (and more) in the ketteQ front-end.

Finally, the platform can do much more as it also has modules, as we noted, for service parts planning, service parts delivery, sales and operations planning, cost and price management, fulfillment & allocation, asset management, clinical demand management, and a control tower. It is a fundamentally modern approach to planning that is worth exploring for companies that are challenged to adapt in today’s disruptive supply chain environment. For a deeper dive into these modules and capabilities, check out their website or reach out to them for a demo. This is a recommendation for ANY mid-sized or larger manufacturing (related) organization looking for a truly modern supply chain planning solution.

Have You Brought Your Supply Chain Planning Out of the Middle Ages?

Back in the 1930s, the dark ages of computing began, starting with the Telex messaging network in 1933. Beginning as an R&D project in 1926, it became an operational teleprinter service, operated by the German Reichspost (under the Third Reich — remember we said “dark ages”). With a speed of 50 baud, or about 66 words per minute, it was initially used for the distribution of military messages, but eventually became a world-wide network of both official and commercial text messaging that survived into the 2000s in some countries. A few years later, in 1937, Bell Labs’ George Stibitz built the “Model K” adder, the first proof of concept for the application of Boolean logic to computer design. Two years later, the Bell Labs CNC (Complex Number Calculator) was completed. In 1941, the Z3, using 2,300 relays, was constructed; it could perform floating point binary arithmetic with a 22-bit word length and execute aerodynamic calculations. Then, in 1942, the ABC (Atanasoff-Berry Computer) was completed, and it was seen by John Mauchly, who went on to co-invent the ENIAC, the first general-purpose computer, completed in 1945.

Three years later, in 1948, Frederic Williams, Tom Kilburn, and Geoff Toothill developed the Small-Scale Experimental Machine (SSEM), the first digital, electronic, stored-program computer to run a computer program, consisting of a mere 17 instructions! A year later, we saw the modem, which allowed computers to communicate over ordinary phone lines. Originally developed for transmitting radar signals, the modem was adapted for computer use four years later, in 1953. 1949 also saw the EDSAC, the first practical stored-program computer to provide a regular computing service.

A year later, in 1950, we saw the introduction of magnetic drum storage, which could store 1 million bits, a previously unimagined amount of data (and twice what Gates once said anyone would ever need), though nothing by today’s standards. Then, in 1951, the US Census Bureau got the Univac 1 and the end of the dark ages was in sight. Then, in 1952, only two years after the magnetic drum, IBM introduced high-speed magnetic tape, which could store 2 million digits per tape! In 1953, Grimsdale and Webb built a 48-bit prototype transistorized computer that used 92 transistors and 550 diodes. Later that same year, MIT created magnetic core memory. Almost everything was in place for the invention of a computer that didn’t take a whole room. In 1956, MIT researchers began experimenting with direct keyboard input to computers (which up to then could only be programmed using punch cards or paper tape). A prototype of a minicomputer, the LGP-30, was created at Caltech that same year. A year later, in 1957, FORTRAN, one of the first third-generation computing languages, was developed. Early magnetic disk drives were invented in 1959. And 1960 saw the introduction of the DEC PDP-1, one of the first general-purpose minicomputers. A decade later saw the first IBM computer to use semiconductor memory. And one year later, in 1971, we saw one of the first memory chips, the Intel 1103, and the first microprocessor, the Intel 4004.

Two years later, NPL and Cyclades started experimenting with internetworking via the European Informatics Network (EIN), and Xerox PARC began linking Ethernets with other networks using its PUP protocol. And the Micral, based on the Intel 8008 microprocessor and one of the earliest non-kit personal computers, was released. The next year, in 1974, the Xerox PARC Alto was released and the end of the dark ages was in sight. In 1976, we saw the Apple I, and in 1981 we saw the first IBM PC, and the middle ages began as computing was now within reach of the masses.

By 1981, before the middle ages began, we already had GAIN Systems (1971), SAP (1972), Oracle (1977), and Dassault Systemes (1981), four (4) of the top fourteen (14) supply chain planning companies according to Gartner in their 2024 Supply Chain Planning Magic Quadrant (Challengers, Leaders, and Dassault Systemes). In the 1980s we saw the formation of Kinaxis (1984), Blue Yonder (1985), and OMP (1985). Then, in the 1990s, we saw Arkieva (1993), Logility (1996), and John Galt Solutions (1996). That means ten (10) of the top fourteen (14) supply chain planning solution companies were founded before the middle ages ended in 1999 (and the age of enlightenment began).

Tim Berners-Lee invented the World Wide Web in 1989, the first browser appeared in 1990, the first cable internet service appeared in 1995, Google appeared in 1998, and Salesforce, considered to be one of the first SaaS solutions built from scratch, launched in 1999. At about the same time, we reached an early majority of internet users in North America, ending the middle ages and starting the age of enlightenment, as global connectivity was now available to the average person (at least in a first-world country).

Only e2Open (2000), RELEX Solutions (2005), Anaplan (2006), and o9 Solutions (2009) were founded in the age of enlightenment (but not the modern age). In the age of enlightenment, we left behind on-premise and early single client-server applications and began to build SaaS applications using a modern SaaS MVC architecture where requests came in, got directed to the machine with the software, which computed answers and sent them back. This allowed for rather fault-tolerant software since, if hardware failed, the instance could be moved, and if an instance failed, it could just be redeployed with backup data. It was true enlightenment. However, not all companies adopted multi-tenant SaaS from day one; only a few providers did in the early days. (So even if your SCP company began in the age of enlightenment, it may not be built on a modern multi-tenant cloud-native true SaaS architecture.) This was largely because there were no real frameworks to build and deploy such solutions on (and Salesforce literally had to build their own.)

However, in 2008, Google launched its Cloud, and in 2010, one year after the last of the top 14 supply chain planning providers was founded, Microsoft launched Azure. At that point the age of enlightenment came to an end and the modern age began, as there were now multiple cloud-based infrastructures available to support cloud-native true multi-tenant SaaS applications (no DC operational knowledge required), making it easy for any true SaaS provider to develop these solutions from the ground up.

In other words, not one Supply Chain Planning Solution recognized as a top supply chain planning solution by Gartner was founded in the modern age. (Moreover, if you look at the niche players, only one of the six was founded in the age of enlightenment; the rest are also from the middle ages.)

So why is this important?

  • If the SCP platform core was architected back in the day of client-server, and the provider did not rearchitect it for true multi-tenancy, even if the vendor wrapped this core in a VM (Virtual Machine), put it in a Docker container, and put it in the cloud, it’s still a client-server application at the core. This means it has all the limits of client-server applications. One client per server. No scalability (beyond how many cores and how much memory the server can support).
  • If the platform core was architected such that each module, which runs in its own VM, requires a complete copy of the data to function, that’s a lot of data replication required to run the platform, especially if it has 12 separate modules. This can greatly exacerbate the storage requirements, and thus the cost.
  • But that’s not the big problem. The big problem is that models constructed on a traditional client-server architecture were designed to run only one scenario at a time, and only do so if a complete copy of the data is available. So if you want to run multiple models, multiple scenarios for a model, or both, you need multiple copies of the module, each with its own data set, for each model scenario you want to run. This not only exacerbates data requirements, but compute requirements as well. (This is why many providers limit how many models you can have and how many scenarios you can run, as their cloud compute costs skyrocket due to the inefficiency in design and data storage requirements.)

    And while there is no such thing as a truly optimal supply chain plan, since you never know all the variables in advance, there are near-optimal fault-tolerant plans that, with enough scenarios, can be identified (by building up a picture of what happens at different demand levels, supply levels, transportation times, etc., as sketched below), and you can select the one that balances cost savings, quality, expected delivery time, and risk at levels you are comfortable with.
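A minimal sketch of that idea: score a handful of candidate plans across many sampled demand scenarios and pick the one with the best balance of purchase cost and downside risk. Every number is invented for illustration:

    # Monte Carlo style evaluation of candidate plans across sampled demand scenarios.
    import numpy as np

    rng = np.random.default_rng(3)
    demand_scenarios = rng.normal(10000, 2000, 2000)

    # Candidate plans: (units ordered, safety stock)
    plans = {"lean": (10000, 500), "balanced": (10500, 1500), "buffered": (11500, 3000)}

    def evaluate(order_qty, safety_stock):
        shortfall = np.clip(demand_scenarios - (order_qty + safety_stock), 0, None)
        purchase_cost = 9.5 * (order_qty + safety_stock)
        shortfall_risk = 40 * shortfall.mean()        # expected cost of lost or late sales
        return purchase_cost + shortfall_risk         # lower is better

    scores = {name: evaluate(*p) for name, p in plans.items()}
    print(min(scores, key=scores.get), scores)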

That’s the crux of it. If you can’t run enough scenarios across enough models to build up a picture of what happens across different possibilities, you can’t come up with a plan that can withstand typical perturbations, and definitely can’t come up with a plan that can be rapidly put into place to deal with a major demand fluctuation, supply fluctuation, or an unexpected supply chain event.

So if you want to create a supply chain plan that can enable supply chain success, make sure you’ve brought your supply chain planning out of the middle ages (through the age of enlightenment) and into the modern age. And we mean you. If you chose a vendor a decade ago and are resisting migration to a newer solution, including one offered by the vendor, because you spent years, and millions, tightly integrating it with your ERP solution, then you’re likely running on a single-tenant SaaS architecture at best, and a nicely packaged client-server architecture otherwise. You need to upgrade … and you should do it now! (We won’t advise you here, as we don’t know all of the vendors in the SCP quadrant well enough, but we know some, including those that have recently acquired newer, age-of-enlightenment and even modern-age solutions, and we know that some still maintain old tech on old stacks because of their install base. Don’t be the install-base company stalling that progress, to your own detriment!)

We Want to Be a Smart Company — What Else Can We Do! Part 2

In part 1, you admitted that you read the dumb company: avoid the fork in the road and dead company walking: avoiding the graveyard articles (links in part 1), took them to heart, and admitted you’re making some mistakes and that you’re not doing some key functions as well as you could. Most importantly, you know you need to do more to avoid becoming a casualty of the next mass corporate extinction that’s coming. And you asked us to tell you what else you could do to avoid becoming a dead company walking (or, even worse, a zombie company*). Yesterday we gave you our first five suggestions. Today we give you our next five.

06) Remember Websites are MORE than Static Web Pages

Your website should be a dynamic and interactive site that quickly guides visitors to the educational and informative content they want, with point-based and constructable demos, targeted education and thought leadership, and easy-to-find contact us options for information requests and specific live demos from thought leaders and solution professionals, not sales people. (Qualify the lead, then pass it on to sales.)

It should not, like the majority of websites today, be an overload of hogwash messaging and buzzwords, fancy animated graphics that don’t actually show the solution in use, or a constant barrage of questions (along the lines of “do you have trouble with …”) with the uniform “contact us for answers” directive. It definitely should not contain nonstandard terminology for modules, functions, and processes. (And definitely don’t mislead and say you’re an e-Procurement tool if you’re an e-Sourcing tool, and if you don’t know the difference, that just means you didn’t do your homework!) Nor should it offer confusing or non-existent information on target industries and market size (as we all know there is no one-size-fits-all solution, and pretending that your company has one is just obnoxious), or an utter lack of information on pricing tiers and benefits. (Maybe you can’t give an exact price because you offer SSDO or advanced analytics that requires a lot of pay-per-use cloud processing, but you can still give a base license fee or range. If you’re a 1M+ annual solution, you don’t want companies that can’t, or won’t, pay more than 100K reaching out. The market should understand you get what you pay for and that a 100K solution won’t, or at least shouldn’t, have all the features of a 1M solution, but also that, if they are a smaller company, they shouldn’t need all the advanced features of the 1M solution either.)

07) Tap Your Talent for Top Tens

Sometimes the talent you overlook (because you think they are just a developer, a pre-sales solution advisor, etc.) has the best ideas (and sometimes they don’t, which is why you use your leadership to filter out the best ideas).

If you have a problem, or just want to look for opportunities for improvement, ask your people first. Now, they won’t identify or come up with everything (as they have a limited view from a single function and may not have the decades of experience that is sometimes required to come up with something that is both “obvious” and revolutionary), but why should you pay a consultant to help you with improvements you can identify and make in house? You want the consultant focussed on the big-win improvements you don’t see (and not easily sidetracked by the dozens of things you can do better).

So, ask all of your employees to come up with, anonymously if it helps, the

  • ten best ways to save money,
  • ten best investments across the business,
  • ten best ways to improve productivity,
  • ten SaaS apps you can do without,
  • ten functions that would totally change customer productivity in your core offering,
  • ten functions that could be removed from the roadmap because they are actually low value,
  • etc.

And while you will get a lot of pyrite, you will get some gold nuggets. And if you’re knowledgeable enough, you’ll be able to separate the gold nuggets out. (And if not, you’ve jump-started your expert advisor with some unique insights into your business and your team, and that will improve their productivity.)

08) Always Pause for Innovation

Regardless of how you interpret what we tell you in #10, if an opportunity for innovation presents itself, always pause to evaluate it and see if it is a true opportunity, fits in with the plan, and would make the product, and the plan, better. If it would make the plan better, and it wouldn’t slow progress down more than a small amount, work it into the plan. If it would make the plan better, but would slow progress down a moderately significant amount, put it on the roadmap to be considered in the next plan update (as new ideas might emerge that make it less of an impact by then). Moreover, when you stumble upon it, the right innovation will improve the product, the plan, and even the timeline.

09) Sign in Blood

Once you have a plan, sign your name to it in blood. The only thing worse than not having a proper plan is abandoning a good plan part way through (because you get too anxious or lose faith) … if, after investing a lot of time and effort, you abandon a plan part way through, you might as well just shut the doors now instead of retreating into the castle to starve as you wait out the siege. Greatness takes time, effort, and sometimes sacrifice.

10) Drive Decisions Like You Just Heisted the Antwerp Diamonds

Once you have a direction, don’t stop. Don’t pull over. Keep going until you successfully escape the EU, sorry, until you escape mediocrity and unprofitability. (And definitely don’t panic along the way. If you got out clean, and have 24 hours to make your escape, use every last hour, because once you cross the border, you’re off scot-free.)

Once you have it all figured out and committed to, you have to be Hagar behind the wheel and drive, drive, drive. Slowing down will lead to stopping. Stopping to abandonment, and then, instead of improvement and success, it will be failure and the beginning of the end. As per 09, you have to see the plan through, and this will only happen if you never stop — you have to keep going as long as there is a drop of gas in the tank.

* yes, zombie companies exist in our space too; and, as the entertainment industry would have you believe, since we’re not medical doctors working in morgues with a constant fresh supply of brains, it is a fate worse than corporate death!

Dear Enterprise Software Vendor: Should You Fire Your PR and Marketing?

Note the Sourcing Innovation Editorial Disclaimers and note this is a very opinionated rant!  Your mileage will vary!  (And not about any firm in particular, as a few non-isolated incidents opened up a whole new line of questioning.)

In response to a post by eCornell (which is/was here), THE REVELATOR wrote this comment (which is/was here), which is repeated here in its entirety in case it gets deleted, since anytime we have tried to have a serious conversation around sales, marketing, public relations, and/or Gen-AI with Big X firms and/or (mid-sized) consultancies and analyst firms, they have quickly deleted our comments, and sometimes their entire posts, rather than enter into a real conversation on the subject (and we have now developed an implicit distrust of any corporate account and keep copies of everything):

NOTE: The following post was inspired by a comment by Paul Rogers

Despite feeling like someone walking the hallowed halls of Cornell University wearing a “Yeah, Harvard University” t-shirt, sometimes you have to say things that need to be said – which is the purpose of sharing this article.

Ask ChatGPT the following two questions:

🤔 What is the role of the Public Relations professional?
🤔 What is the role of the Marketing professional?

Do you see any mention of end client or customer success as a priority? Whose best interests are PR and marketing professionals focused on? What does the answer to these questions tell you?

Corporate communication has always been about putting a positive spin on business and the brand. It reminds me of the 1986 Richard Gere movie Power – if not a great movie, it is certainly interesting and engaging. Denzel Washington’s role as public relations expert Arnold Billings is worth the price of admission alone.

Unfortunately, beyond the company they represent, are PR and marketing people doing more harm than good?

Thoughts?

To which the doctor responded (which is/was here)

Well, SI, which has repeatedly told companies in our space to fire their PR firms going back to 2008: Blogger Relations, firmly believes that PR firms are doing more harm than good because

  1. you are NOT selling enterprise software to consumers and
  2. it’s not “image”, it’s “solution”!

As for marketing, corporate marketing can be good if it exists to educate and explain, but when was the last time that happened on a regular basis in our space? Over a decade ago … now it’s all AI-this, orchestrate-that, and whatever the bullcr@p of the day is. It’s all buzz, no honey. All show, no substance. All confusion, no clarity. (It’s bad enough that Trump has brought back the Land of Confusion with his populist politics that have taken the first world by storm; we don’t need it in our workplace!)

So, right now, I’d say at least 6/7, if not 9/10, marketers are doing more harm than good and should be fired with their PR brethren.

There are over 666 companies in our space, and way too many peddling any type of solution you can think of. While we need at least 3-5 in each industry group – market size – geo region – module focus combination you can think of for competition, we don’t need 30+. Most are not going to survive, especially when most of them don’t have solid solutions built from years of experience that solve real customer problems (as opposed to just offering some shiny new tech that looks good but doesn’t solve the majority of pain points in real organizations).

This means that companies need to focus less on marketing and selling and more on:

  • market research, especially listening to what the real pain points are of the customers they want to sell to (and they need to focus in on a customer group here, you can’t be everything to everyone in our space and any company that thinks it can is the first company you should walk away from)
  • solution (not product) development — not shiny new tech, tried-and-true tech that works
  • market education, explaining what they do, how they do it, and why it solves real pain points after building a solution that solves the pain points they identified in their research

Which means, especially if money is tight, they should forget the marketers and instead focus on hiring researchers and educators. People are getting tired of the 80%+ tech project failure rates. They’d welcome some real insight and real focus on real solutions. If only the market would wake up and realize this!