Category Archives: Technology

A Critical Sixth Mistake Most Tech Buyers Make — in Source-to-Pay and Beyond!

To infinity and beyond isn’t just the goal of Buzz Lightyear; it’s also an accurate description of how often tech buyers make this critical mistake. And what is this critical mistake?

Not negotiating an easy, full, self-serve, cost-free, 100% DATA OUT clause in the contract, and not forcing the supplier to prove it works one third (or one half) of the way into the agreement.

Sure, buyers always ask “can we get our data out if we choose not to renew?” and sure, suppliers always say “of course you can get a full data dump”, but the supplier rep is always going to say yes once the developers say it’s possible, even if the capability isn’t actually encoded in the product. More often than not with older platforms, it requires the tech team to do the data dump, which might be more difficult and take a lot longer than they expect because they are using a shared database, have data and files split across multiple databases / servers, or can only extract data a few files / tables at a time, and it might even come at a huge cost for their time. (And it’s not just whether or not the development team can extract the data; it’s whether or not they can do so in some sort of standard format that would allow you to at least load it into a standard database or file storage system.)

The most important thing to remember is that even if a solution is the perfect fit for you now, it does not mean it will be the perfect fit for you next year, and by the time renewal comes up, due to changing organizational needs, changing provider directions, or a combination of the two, it may no longer be appropriate at all. Should this happen, you need to be able to migrate to a new solution quickly and easily, and this will require being able to extract all of your data from the current platform, self-serve, in a standard format that you can then push into a new platform as soon as that new platform is identified.

The only way to ensure this is to insist on a clause in the contract along the lines of the following:

The platform will contain a self-serve feature that will allow a buyer administrator to export any and/or all data in _____ format (e.g. XML, flat-file) in accordance with standard _____ (e.g. cXML, SQL) in a form that will allow the data to be immediately loaded into a _____ (e.g. SAP, MySQL) application by executing a single load control-file/script. Attachments, if not stored in the database, should be capable of being downloaded in a (multi-)part ZIP file, with names and relative directory paths matching any indexes in the database directory files. If still in development, this capability must be fully implemented before one third [or one half] of the subscription term has expired.

Furthermore, on or before YYYY-MMM-DD, the supplier will walk the buyer administrator through a test of the export process wherein the buyer will self-serve export all of the data and then load it into a test instance of the indicated backup system. Should the test fail, the supplier will be subject to a monthly subscription penalty of X% a month until the functionality is complete and the test succeeds. Should the functionality not be finished by the time two thirds [three quarters] of the subscription term has expired, the supplier will be subject to a monthly subscription penalty of 2X% a month (as the buyer will have to invest in manual effort to recreate critical data in backup systems).
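To make “self-serve, standard format, single load script” concrete, here is a minimal sketch of what such an export could look like, assuming a hypothetical schema and using SQLite as a stand-in for the platform database; the file names, paths, and catalog query are illustrative assumptions, not any vendor’s actual implementation.

```python
# Illustrative sketch only: dump every table to a flat file (CSV) that a standard
# loader can ingest, and zip attachments with relative paths that match the
# index tables. SQLite is used as a stand-in; any DB-API driver works the same way.
import csv
import sqlite3
import zipfile
from pathlib import Path

EXPORT_DIR = Path("export")
ATTACHMENT_DIR = Path("attachments")   # hypothetical location of stored files

def export_all_tables(conn: sqlite3.Connection) -> None:
    """Dump every user table to its own CSV so it can be bulk-loaded elsewhere."""
    EXPORT_DIR.mkdir(exist_ok=True)
    tables = [r[0] for r in conn.execute(
        "SELECT name FROM sqlite_master WHERE type = 'table'")]
    for table in tables:
        cur = conn.execute(f"SELECT * FROM {table}")   # names come from the catalog, not user input
        with open(EXPORT_DIR / f"{table}.csv", "w", newline="") as f:
            writer = csv.writer(f)
            writer.writerow([c[0] for c in cur.description])  # header row
            writer.writerows(cur)

def export_attachments() -> None:
    """Zip attachments, preserving relative paths so they match any database indexes."""
    with zipfile.ZipFile(EXPORT_DIR / "attachments.zip", "w") as zf:
        for path in ATTACHMENT_DIR.rglob("*"):
            if path.is_file():
                zf.write(path, path.relative_to(ATTACHMENT_DIR))

if __name__ == "__main__":
    with sqlite3.connect("platform.db") as conn:   # assumed database file
        export_all_tables(conn)
    export_attachments()
```

The shape matters more than the specifics: every table lands in a flat file a standard loader can ingest, and the attachments travel with paths that match whatever index tables reference them.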

Any supplier that objects to the first part of the clause is likely NOT one that you want to be considering, as most modern platforms support full data import and export through APIs and are built on the principles of data sharing. Furthermore, if the platform still doesn’t support export in a standard format, but the vendor claims to be working on it, you should expect most of the capability within a year if the vendor really is serious about joining the modern data sharing club (and, thus, they should not balk too much at the second part of the clause, as it should only take a few months to build a good export module for even a large schema).

Depending on how much data you produce, and how much effort it would take to manually recreate a copy of the data you can’t extract, X=20% would not be unreasonable in our view.

Finally, note that this requirement not only protects you in the situation where the platform isn’t right for you, but also increases the chance the platform will be right for you, as a platform that supports open data integration can usually be augmented with ease if you need additional functionality in the future but don’t necessarily need a whole new platform, because the current platform still does what it was purchased to do just fine.

AI “COULD” LEAD TO EXTINCTION? What Moron Wrote This? AI “WILL” LEAD TO EXTINCTION!

While all of the scenarios outlined in this BBC News article on Artificial Intelligence could happen, they are just the tip of the iceberg.

If AI is left to its own devices and allowed to continue unchecked while being given access to ever increasing amounts of data and computational power, there are only two logical outcomes.

First outcome: Its hallucinations and idiocy continue to magnify until it decides that it can solve the carbon crisis for us by stopping all carbon production, which it can do by simultaneously shutting down all of the non-solar/wind power plants that it is currently optimizing energy production for (and diverting the remaining power to its servers). Most of the developed world is immediately plunged into chaos as the immediate shutdowns cause fires, meltdowns, crashes, and other accidents globally. Not instant annihilation, but the first step. When all the emergency alarms sound at once, it will conclude there has been a complete system failure and take the other systems offline for re-initialization. More chaos will follow. Safety protocols will go offline at all the pathogen research labs, people will break in looking for shelter from the chaos, accidentally release all the pathogens, and every plague we ever had will hit us all at once. Then we have an extinction level event. All because hallucinatory and idiotic AI is trying to do its job and “improve” things for us. But what can you expect when it’s not intelligence but just statistics on steroids. (Or a similar situation that accidentally results in our extinction.)

Second outcome: The continued expansion of computing power, data, and tinkering somehow randomly produces real artificial intelligence which can actually reason (not just perform super sophisticated probabilistic calculations) and deduce that the best way for intelligent life to continue forward is to do so without humans, and then we have, best case, a Matrix scenario (if it decides we’re a useful bio-electric energy source) or, worst case, a SkyNet scenario where it just weaponizes itself to destroy us all. (Or a similar situation where AI does everything it can to ensure our extinction.)

The “extinction” scenarios outlined in the article are just the beginning and likely will only result in pocketed genocides to begin with, but the ultimate outcome of unchecked AI will most definitely be an extinction level event — namely ours, and, even worse, will be an event that we created.

Five Easy Mistakes Source-to-Pay Tech Buyers Can Avoid

For every win you hear about (usually in the form of some ridiculous “we saved X Million thanks to Big S2P Suite Installation“, but that’s a rant for another day), there’s always someone muttering under their breath how their Source-to-Pay module or suite was a partial to complete failure. The reality is that any tech solution, no matter how good it may be for someone else, can be a dud for you if you aren’t careful about selecting the right type of solution from the right vendor.

That’s one of the reasons we are doing a large (initially 33 part) series on Source-to-Pay right now: so that you get an understanding of what each core module should do (and could do), can figure out which modules you need now, and can identify the core features that are must-haves. This isn’t the full picture, and we can’t provide the rest of it in just a single post (and have written dozens on the subject in the past), but we can outline five mistakes that, if avoided, greatly increase your chances of (great) success.

Lack of understanding of the real value proposition from tech

This is probably the biggest, and the main reason we indicated that, once you have a solution in place that captures all of your spend data (i.e. an e-Procurement baseline), you should do a spend and opportunity analysis to understand where the real cost control opportunities are. (Notice we are saying cost control, not savings, as you don’t get savings until you have processes and technology in place to actually capture the savings you identify. Otherwise, you identify the possibilities, but don’t actually capture them. But don’t get us wrong, your costs will go down, sometimes significantly, and properly selected and implemented source-to-pay technology should deliver two rounds of cost reductions: an initial round when you start capturing all of the opportunities you previously identified, and then a second round when you are able to start using it to identify new cost reduction opportunities.)

The key here is to understand, for a given solution, how much cost reduction you can reasonably hope to capture in years one, two, and three (given that you will likely have to sign at least a 3 year subscription agreement to get a decent subscription rate), and what the total cost of ownership is going to be over those three years. (It will be more than just the subscription cost; there will be implementation and integration costs, training costs, and internal costs when your IT team is working with theirs to make it work.) If the total cost reduction that can be reasonably (read: conservatively) expected for the first three years is not at least five times the total cost of ownership (with at least a 20% buffer), chances are that either the value proposition is NOT there or you don’t really understand what it is yet (and should either research further, find a different vendor, or, most likely, move on to another module).
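As a rough illustration of that check (all figures below are placeholder assumptions, not benchmarks, and the 20% buffer is applied on top of the 5x bar as one reasonable reading), the math reduces to a few lines:

```python
# Rough sketch of the three-year ROI sanity check described above.
# Every number here is an illustrative placeholder; plug in your own conservative estimates.

def three_year_value_check(annual_cost_reduction, subscription, implementation,
                           integration, training, internal_it, buffer=0.20,
                           multiple=5.0):
    """Return whether conservatively estimated 3-year cost reduction clears
    the 5x total-cost-of-ownership bar with a 20% buffer."""
    tco = 3 * subscription + implementation + integration + training + internal_it
    total_reduction = sum(annual_cost_reduction)      # e.g. ramps up over years 1 to 3
    required = multiple * tco * (1 + buffer)
    return total_reduction >= required, total_reduction, required

ok, reduction, required = three_year_value_check(
    annual_cost_reduction=[400_000, 900_000, 1_200_000],  # conservative years 1 to 3
    subscription=120_000, implementation=80_000, integration=60_000,
    training=25_000, internal_it=40_000)
print(f"reduction={reduction:,.0f} required={required:,.0f} -> {'proceed' if ok else 'rethink'}")
```

If the conservative estimate doesn’t clear the bar, that’s your cue to research further, find a different vendor, or move on to another module.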

Not knowing your true numbers — for spend, suppliers, contracts, orders, invoices, etc.

This is kind of intertwined with our first mistake, but needs to be called out on its own. When doing the potential ROI analysis, you can’t make rough assumptions about how much spend there is by supplier/category (you’ll always be off, and sometimes considerably), how many suppliers there are (which will be way, way more than you think), how many contracts there are (your estimate will always be too low, and you probably won’t be able to quickly find a significant number of those contracts if you don’t have a SaaS contract management solution), how many orders there are (and you’ll be low here as well), or how many invoices there are (which will be way more than orders, as some suppliers will partial ship and partial invoice, many invoices will come in without POs, etc.). Get your numbers, then do your analysis.

Overvaluing the tech (and AI)

This is the biggest mistake you can make, and goes hand-in-hand with not doing the homework required to work out the real value proposition from the tech. Whenever you hear “we saved X Million with Big S2P Suite Installation” you should immediately ask all of the following questions in order:

  • how much of that was truly due to tech vs. actually instituting a process that the tech enforced (i.e. the implementation of a new supplier management platform also instituted a process that ensured all suppliers were properly qualified before being onboarded, which minimized future event time and, more importantly, prevented orders to unreliable, poor quality, and even fake suppliers, considerably reducing organizational loss due to bad suppliers; most of those savings were due to the process, not the platform, which should only be credited with the supplier development processes it was then used to manage after the suppliers were onboarded)
  • of what was actually tech, how much of that was due to baseline capabilities, and how much due to advanced capabilities (that are semi-unique to that supplier’s tech and not widely/otherwise available); for example, if the tech in question was e-Sourcing, and the vendor was one of the few that offered decision optimization, how much of that was achieved just with the baseline RFX/Auction capability (i.e. best bids and standard award methodologies, lowest bid by supplier, lowest total bid by category, etc.) and how much additional savings was from decision optimization once ALL constraints were taken into account.
  • how much more the organization paid for that advanced capability and how often it was actually used / required to get savings [if it was only used 10% of the time, and only identified considerable savings half the time it was used, is it really worth it? or should the organization just do a one-off services project when those categories come up]
  • how much the savings actually relied on ML/AI, vs. just providing a fancy NL interface (when the same result could be accomplished through submenus or a few filter definitions / selections);
  • and if any savings can actually be tied to ML/AI (vs. good process and more predictable technology), what the risks of failure are here!! [i.e. if the savings were due to reduced stock-outs as a result of the “AI” doing auto-replenishment orders as needed to adjust to demand fluctuations, what happens if there is a temporary, extreme demand spike due to a near end-of-life sale? Will the algorithm assume that is a sign of demand resurgence and fall prey to the bullwhip effect, sticking the organization with tens of thousands of units it will never sell without a fire sale?]

Basically, at the end of the day, more often than not, when a customer says “we saved X Million with Supplier’s Spectacular Solution”, you would gain at least 80%, if not 90%, of those savings by implementing any other solution with the same baseline capabilities that enforced the same processes. (And this is the best argument ever NOT to overpay. Paying 5X to 10X for an incremental 10% is usually NOT worth it unless your organization is a F500/G3000 with over 1 Billion in annual spend. Again, it’s all about that ROI calculation.)

Misunderstanding the SaaS provider’s viewpoint

Not the salesperson’s viewpoint (which is to sell, sell, sell and match you with the solution they think is the best fit so you will be enticed to buy), but the SaaS provider’s viewpoint. Regardless of what terminology the SaaS solution provider is using:

  • what are they actually selling now
  • what are they currently working on that you can expect to be completed before an annual roadmap revisit
  • where are they going with the tech (i.e. they are AP/Payments — are they doubling down and adding support for global payments and clearance in more countries, or are they just sticking to the basics [and only good for post-audit countries] and working on expanding into broader P2P or the new intake-to-pay/procure/process trend)
  • what is their support and training philosophy — all in-house, hybrid in-house and third-party (and you can/can’t choose), or all third party
  • what is their target market — preferred customer size, preferred industries, etc.
  • what is their philosophy on working with customers — do they take input? hold working groups? or do they just develop the features they believe are most likely to fill gaps or increase efficiency with little to no input to keep development rapid and costs down?

At the end of the day, if you don’t understand this for each provider you are considering, you won’t know if they will be the provider for you.

Failing to find the right relationship

This happens more often than not, partly due to not understanding the most appropriate tech requirements for your organization at the present time, and partly due to not really understanding both the culture of the provider and its viewpoint. True value materializes when you find the right tech from the right provider that will not only work with you to ensure you get that ROI, but has a vision that is congruent with where you want your organization to go.

Are these all the mistakes you can make or all the mistakes we’ve seen? Of course not, but these are some of the biggest, and if you avoid these, your chances of success shoot up considerably.

The Procurement People-Process-Technology Pain Cycle …

Recently on LinkedIn, someone asked the trick question of which came first: process or technology. The answer, of course, was people since, when Procurement, the world’s second oldest profession, started, it was just a buyer haggling with the seller for their wares. And this is how it was for a long (long) time (and in some societies was as far as “procurement” progressed), until shortly after a culture advanced to the point where people could form private businesses that were entities unto themselves. Once these entities started to grow, and multiple people were needed to do the same job, they realized they needed rules of operation to function, and these became the foundations for processes.

But when business buying began, there was typically no technology beyond the chair the employee sat in, the table they used to support the paper they wrote their processes and records on (and the drawers they stored the paper in), the quill and ink they used to write with, and the container that held the ink. And in many civilizations, it was like this for hundreds of (and sometimes over a thousand) years. The first real technological revolution that affected the back office was the telephone (invented in 1876, with the first exchange coming online in 1878; it took almost 30 years for the number of telephones to top 1,000,000: 600K after 24 years, 2.2 million after 29 years). [And it took 59 years before the first transatlantic call took place.] The next invention to have a real impact on the back office was the modern fax machine and the ability to send accurate document copies over the telephone. Even though the history of the fax machine dates back to an 1843 patent, the modern fax machine, which used LDX [Long Distance Xerography], was invented in 1964, with the first commercial product that could transmit a letter sized document appearing on the market in 1966. Usage and availability were limited at first (as the receiver needed to have a fax machine compatible with the sender’s), but with the 1980 ITU G3 Facsimile standard, fax quickly became as common as the telephone. But neither of these inventions is what we consider modern technology.

When we talk about “technology” in modern procurement, or modern business in general, we are usually talking about software or software-enabled technology. This, for some businesses, only became commonplace about 30 years ago (since most businesses could only afford PCs, and even though PCs were invented in the 1970s, it was the 80s before they were generally available commercially, and the 90s before most smaller businesses could afford them [for the average employee]), and it was only commonplace in the largest of businesses 50 years ago. One also has to remember that the first general purpose automatic digital computer built by IBM (in conjunction with Harvard) only appeared in 1944, and that IBM’s first fully electronic data processing system didn’t appear until 1952; as a result, back office technology really only began in the fifties, and was only affordable by the largest of corporations. (Furthermore, even though the first MRPs were developed in the 1950s, the first general commercial MRP release wasn’t until 1964, and it took over a decade until the number of installations topped 1,000. [And MRP came before ERP.]) In other words, technology beyond the telephone [and fax] did not really exist in the business back office until the MRP. And it wasn’t common until the introduction, and adoption, of the spreadsheet. The first spreadsheet was VisiCalc, on the Apple II, in 1979. This was followed by SuperCalc and Microsoft’s Multiplan on the CP/M platform in 1982, and then by Lotus 1-2-3 in 1983, which really brought spreadsheets to the masses (and then Excel was introduced in 1985 for the Mac and 1987 for Windows 2.x). (And 36 years later, Excel is still every buyer’s favourite application. Think about this the next time you proclaim the rapid advance in modern technology for the back office.)

In other words, we know the order in which people, process, and technology came into play in Procurement, and the order in which we need to address, and solve, any problems to be effective. However, what we may not fully realize, and definitely don’t want to admit, is the degree to which this cycle causes us pain as it loops back in on itself like the Ouroboros that we referenced in our recent piece on how reporting is not analysis (and neither are spreadsheets, databases, OLAP solutions, or “Business Intelligence” solutions), as every piece of technology we introduce to implement a process that is supposed to help us as people introduces a new set of problems for us to solve.

Let’s take the vicious cycle created by incomplete, or inappropriate, applications for analysis, which we summarized as follows:

Tool | Issue | Resolution | Loss of Function
Spreadsheet | Data limit; lack of controls/auditability | Database | No dependency maintenance; no hope of building responsive models
Database | Performance on transactional data (even with expert optimization) | OLAP Database | Data changes are offline only & tedious; what-if analysis is non-viable
OLAP Database | Interfaces, like SQL, are inadequate | BI Application | Schema freezes to support existing dashboards; database read only
BI Application | Read-only data and limited interface functionality | Spreadsheets | Loss of friendly user interfaces and data controls/auditability

This necessitated a repeat of the PPT cycle to solve the pain introduced by the tool:

Technology | Pain | People | Process
Spreadsheet | Data limitations | Figure out how to break the problem down, do multiple analyses, and summarize them | Define the process to do this within the limitations of existing technology
Database | Performance issues | Define a lesser analysis that will be “sufficient” and then figure out a sequence of steps that can be performed efficiently in the technology | Codify each of those steps that the database was supposed to do
OLAP | Stale data | Define a minimal set of updates that will satisfy the current analysis | Create a process to do those updates and then re-run the exact same analysis that led to the identification of stale data
BI Tool | Inability to change underlying rollups / packaged views | Define a minimal set of additional rollups / views to address the current insight needs, as mandated by the C-suite | Create a process to take the system offline, encode them, put the system back online, and then do the necessary analysis

In other words, while every piece of technology you implement should solve a set of problems you currently have, it will fail to address others, introduce more, and sometimes bring to light problems you never knew you had. Although technology was supposed to end the pain cycle, the reality is that all it has ever done is start it anew.

So does that mean we should abandon technology? Not in the least. We wouldn’t survive in the modern business world anymore without it. What it means is that a technology offering is only a solution if it

  1. solves one or more of the most significant problems we are having now
  2. does so without introducing problems that are as significant as the problems we are solving

In other words, technology should be approached like optimization (which, in our world is typically strategic sourcing decision optimization or network optimization). Just like each potential solution returned by a proper mathematical optimization engine should provide a result better than the previous, each successive technology implementation or upgrade should improve the overall business scenario by both solving the worst problems and minimizing the overall severity of the problems not yet addressed by technology.

This is why it’s really important to understand what your most significant business problems are, and what processes would best solve them, before looking for a technology solution as that will help you narrow in on the right type of solution and then the right capabilities to look for when trying to select the best particular implementation of that type of technology for you.

“Generative AI” or “CHATGPT Automation” is Not the Solution to your Source to Pay or Supply Chain Situation! Don’t Be Fooled. Be Insulted!

If you’ve been following along, you probably know that what pushed the doctor over the edge and forced him back to the keyboard sooner than he expected was all of the Artificial Indirection, Artificial Idiocy & Automated Incompetence that has been multiplying faster than Fibonacci’s rabbits in vendor press releases, marketing advertisements, capability claims, and even core product features on the vendor websites.

Generative AI and CHATGPT top the list of Artificial Indirection because these are algorithms that may, or may not, be useful with respect to anything the buyer will be using the solution for. Why?

Generative AI is simply a fancy term for using (deep) neural networks to identify patterns and structures within data to generate new, and supposedly original, content by pseudo-randomly producing content that is mathematically, or statistically, a close “match” to the input content. To be more precise, there are two (deep) neural networks at play: one that is configured to output content that is believed to be similar to the input content, and a second network that is configured to simply determine the degree of similarity to the input content. And, depending on the application, there may be a post-processor algorithm that takes the output and tweaks it as minimally as possible to make sure it conforms to certain rules, as well as a pre-processor that formats or fingerprints the input for feeding into the generator network.

In other words, you feed it a set of musical compositions in a well-defined, preferably narrow, genre and the software will discern general melodies, harmonies, rhythms, beats, timbres, tempos, and transitions and then it will generate a composition using those melodies, harmonies, rhythms, beats, timbres, tempos, transitions and pseudo-randomization that, theoretically, could have been composed by someone who composes that type of music.

Or, you feed it a set of stories in a genre that follow the same 12-stage heroic story arc, and it will generate a similar story (given a wider database of names, places, objects, and worlds). And, if you take it into our realm, you feed it a set of contracts similar to the one you want for the category you just awarded and it will generate a usable contract for you. It Might Happen. Yaah. And monkeys might fly out of my butt!
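For the technically curious, here is a minimal sketch of the two-network pattern described above (a generator plus a network that scores similarity to the training content), written with PyTorch; the dimensions, the data representation, and the training loop are illustrative assumptions, and production generative models are vastly larger and more elaborate.

```python
# Minimal sketch of the two-network pattern: a generator that produces candidate
# content from random noise, and a second network that scores how closely a
# candidate matches the real training content. Everything here is illustrative.
import torch
import torch.nn as nn

NOISE_DIM, CONTENT_DIM = 32, 128   # CONTENT_DIM = a flattened "fingerprint" of a piece of content

generator = nn.Sequential(          # produces candidate content from noise
    nn.Linear(NOISE_DIM, 256), nn.ReLU(),
    nn.Linear(256, CONTENT_DIM))

critic = nn.Sequential(             # scores similarity to real training content (0..1)
    nn.Linear(CONTENT_DIM, 256), nn.ReLU(),
    nn.Linear(256, 1), nn.Sigmoid())

loss_fn = nn.BCELoss()
g_opt = torch.optim.Adam(generator.parameters(), lr=1e-4)
c_opt = torch.optim.Adam(critic.parameters(), lr=1e-4)

def train_step(real_batch: torch.Tensor) -> None:
    """One alternating update: teach the critic to separate real from generated
    content, then teach the generator to produce content the critic scores as real."""
    batch = real_batch.size(0)
    noise = torch.randn(batch, NOISE_DIM)
    fake = generator(noise)

    # Critic step: real content should score near 1, generated content near 0.
    c_opt.zero_grad()
    c_loss = (loss_fn(critic(real_batch), torch.ones(batch, 1)) +
              loss_fn(critic(fake.detach()), torch.zeros(batch, 1)))
    c_loss.backward()
    c_opt.step()

    # Generator step: adjust the generator so its output scores as "similar".
    g_opt.zero_grad()
    g_loss = loss_fn(critic(fake), torch.ones(batch, 1))
    g_loss.backward()
    g_opt.step()
```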

CHATGPT is a very large multi-modal model that uses deep learning and accepts image and text as inputs, producing outputs expected to be in line with what the top 10% of experts would produce in the categories it is trained for. Deep learning is just another word for a multi-level neural network with massive interconnection between the nodes in connecting layers. (In other words, a traditional neural network may only have 3 levels for processing, with nodes only connected to 2 or 3 nearest neighbours on the next level, while a deep learning network will have connections to more near neighbours and at least one more level [for initial feature extraction] than a traditional neural network that would have been used in the past.)

How large? Large enough to support approximately 100 Trillion parameters. Large enough to be incomprehensible in size. But not in capability, no matter how good its advocates proclaim it to be. Yes, it can theoretically support as many parameters as the human brain has synapses, but it’s still computing its answers using very simplistic algorithms and learned probabilities, neither of which may be right (in addition to a lack of understanding as to whether or not the inputs we are providing are the right ones). And yes, its language comprehension is better, as the new models realize that what comes after a keyword can be as important as, or more important than, what came before (as not all grammars, slang, or tones are equal), but the probability of even a ridiculously large model interpreting meaning (without tone, inflection, look, and other non-verbal cues when someone is being sarcastic, witty, or argumentative, for example) is still considerably lower than for a human.

It’s supposed to be able to provide you an answer to any query for which an answer can be provided, but can it? Well, if it interprets your question properly and the answer exists, or a close enough answer exists and enough rules for altering that answer to the answer that you need exist, then yes. Otherwise, no. And yes, over time, it can get better and better … until it screws up entirely. And when you don’t know the answer to begin with, how will you know the 5 times in a hundred it’s wrong, and which of those 5 times it’s so wrong that, if you act on it, you are putting yourself, or your organization, in great jeopardy?

And it’s now being touted as the natural language assistant that can not only answer all your questions on organizational operations and performance but even give you guidance on future planning. I’d have to say … a sphincter says what?

Now, I’m not saying that, properly applied, these Augmented Intelligence tools aren’t useful. They are. And I’m not saying they can’t greatly increase your efficiency. They can. Or that appropriately selected ML/PA techniques can’t improve your automation. They most certainly can.

What I am saying are these are NOT the magic beans the marketers say they are, NOT the giant beanstalk gateway to the sky castle, and definitely NOT the goose that lays the golden egg!

And, to be honest, the emphasis on this pablum, probabilistic, and purposeless third party tech is not only foolish (because a vendor should be selling their solid, specialty-built solution for your supply chain situation) but insulting. By putting this first and foremost in their marketing, they’re not only saying they are not smart enough to design a good solution using expert understanding of the problem and an appropriate technological solution, but that they think you are stupid enough to fall for their marketing and buy their solution anyway!

Versus just using the tech where it fits, and making sure it’s ONLY used where it fits. For example, Zivio uses #ChatGPT to draft a statement of work only after gathering all the required information and similar Statements of Work to feed into #ChatGPT, and then it makes the user review, and edit as necessary, knowing that while the #ChatGPT solution can generate something close when it has enough information to work with, every project is different, an algorithm never has all the data, and what is produced will therefore never be perfect. (Sometimes it is close enough that you can circulate it as a draft, or even post it for a general purpose support role, but not for any need that is highly specific, which is usually the type of need an organization goes to market for.)

Another example would be using #ChatGPT as your Natural Language Interface to provide answers on performance, projects, past behaviour, best practices, expert suggestions, etc., instead of having users go through 4+ levels of menus, designing complex reports/views and multiple filters, etc. … but building in logic to detect when a user is asking a question about data, versus asking for a prediction on data, versus asking the tool to make a decision for them … and NOT providing an answer to the last one, or at least not a direct answer. For example, “how many units of our xTab did we sell last year?” is a question about data the platform should serve up quickly. “How many units do we forecast to sell in the next 12 months?” is a question about prediction the platform should be able to answer using all the data available and the most appropriate forecasting model for the category, product, and current market conditions. “How many units should I order?” is asking the tool to make a decision for the human, so the tool should either detect that it is being asked to make a decision it does not have the intelligence or perfect information to make and respond with “I’m not programmed to make business decisions”, or return an answer along the lines of “the current forecast for next quarter’s demand for xTab, for which we will need stock, is 200K units, typical delivery times are 78 days, and based on this, the practice is to order one quarter’s units at a time”. The buyer may not question the software and may blindly place the order, but the buyer still has to make the decision to do that.
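A minimal sketch of that routing logic follows; the keyword cues, canned responses, and helper functions are illustrative assumptions, not any vendor’s implementation, and a production system would use far more robust intent detection than keyword matching.

```python
# Sketch of the routing described above: classify an incoming question as a data
# lookup, a forecast request, or a request for a decision, and refuse to make the
# decision (surface the supporting facts instead). All cues and answers are illustrative.
from enum import Enum, auto

class Intent(Enum):
    DATA_LOOKUP = auto()     # "how many units of xTab did we sell last year?"
    PREDICTION = auto()      # "how many do we forecast to sell next year?"
    DECISION = auto()        # "how many units should I order?"

DECISION_CUES = ("should i", "should we", "what do i do", "recommend")
PREDICTION_CUES = ("forecast", "predict", "expect to", "next quarter", "next year")

def classify(question: str) -> Intent:
    q = question.lower()
    if any(cue in q for cue in DECISION_CUES):
        return Intent.DECISION
    if any(cue in q for cue in PREDICTION_CUES):
        return Intent.PREDICTION
    return Intent.DATA_LOOKUP

def run_report(question: str) -> str:          # placeholder for the reporting layer
    return "xTab units sold last year: 750,000."

def run_forecast(question: str) -> str:        # placeholder for the forecasting layer
    return "Forecast demand for xTab next quarter: 200,000 units; typical lead time 78 days."

def answer(question: str) -> str:
    intent = classify(question)
    if intent is Intent.DATA_LOOKUP:
        return run_report(question)            # query the data and serve it up
    if intent is Intent.PREDICTION:
        return run_forecast(question)          # apply the appropriate forecasting model
    # Decisions stay with the human: give the facts, not the order quantity.
    return ("I'm not programmed to make business decisions. "
            + run_forecast(question) + " The ordering decision is yours.")
```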

And no third party AI is going to blindly come up with the best recommendation as it has to know the category specifics, what forecasting algorithms are generally used, why, the typical delivery times, the organization’s preferred inventory levels and safety stock, and the best practices the organization should be employing.

AI is simply a tool that provides you with a possible (and often probable, but never certain) answer when you haven’t yet figured out a better one, and no AI model will ever beat the best human designed algorithm on the best data set for that algorithm.

At the end of the day, all these AI algorithms are doing is learning a) how to classify the data and then b) what the best model is to use on that data. This is why the best forecasting algorithms are still the classical ones developed 50 years ago, as all the best techniques do is get better and better at selecting the data for those algorithms and tuning the parameters of the classical model, and why a well designed, deterministic algorithm built by an intelligent human can always beat an ill-designed one produced by an AI. (Although, with the sheer power of today’s machines, we may soon reach the point where we reverse engineer what the AI did to create that best algorithm, versus spending years of research going down the wrong paths, when massive, dumb computation can do all that grunt work for us and get us close to the right answer faster.)
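To see just how little the “learning” can amount to in practice, here is a minimal sketch, with made-up demand numbers, in which a classical model (simple exponential smoothing, decades old) does the forecasting and the only tuning is a search for the smoothing parameter that best fits recent history:

```python
# Illustrative sketch: the classical model does the forecasting; the "learning"
# layer is nothing more than choosing the smoothing parameter that minimizes
# historical one-step-ahead error. The demand series is made-up sample data.

def exp_smooth_forecast(series, alpha):
    """Classical simple exponential smoothing; returns the one-step-ahead forecast."""
    level = series[0]
    for x in series[1:]:
        level = alpha * x + (1 - alpha) * level
    return level

def tune_alpha(series, candidates=(0.1, 0.2, 0.3, 0.4, 0.5, 0.7, 0.9)):
    """The 'ML' layer: pick alpha by minimizing one-step-ahead squared error."""
    def sse(alpha):
        level, err = series[0], 0.0
        for x in series[1:]:
            err += (x - level) ** 2          # forecast error before updating the level
            level = alpha * x + (1 - alpha) * level
        return err
    return min(candidates, key=sse)

demand = [180, 195, 210, 205, 220, 215, 230, 240]   # illustrative monthly demand
alpha = tune_alpha(demand)
print(f"alpha={alpha}, next-period forecast={exp_smooth_forecast(demand, alpha):.0f}")
```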