Since the blog software trashed my formatting, here are my responses to Emptoris’ Setting the Record Straight.
Response, Part I
(1) They should read the post again.
(a) I said MindFlow was in many ways far superior, that MindFlow had a better sourcing model, and that it contained many elements that should have been incorporated. In other words, it had, to the best of my knowledge at the time JB asked me to write my post, better features. It DID NOT have a better engine. I even acknowledged this during the great debate last summer. As you well know, there’s a difference between what the model supports and what the underlying engine can solve. The Emptoris engine, being based on a current version of ILOG CPLEX, with custom in-house extensions, is superior.
(b) I did not state there was a limit on the number of constraint instances the Emptoris solution could handle – I simply quoted an article indicative of a representative large model the engine has solved.
(c) There is a difference between stating I did not see much progress in the product with respect to the information available and stating that a development team has been resting on its laurels. The reality in a software company is that
- a team could be working its butt off, innovating like crazy, but have its work delayed or cut from a release, or three, because product management feels other enhancements are more important, schedules are tight, and there is not enough QA to cover everything and
- considering the skills required to build an optimization solution, it’s often the case that these individuals are among the company’s best developers, and they could have been reassigned to the complex analytic algorithms of the spend analysis solution.
My observation was on the product, not the team!
(2) Their marketing person could be a little more honest in the fourth paragraph … although, in my opinion, they did retain the best product management talent a company could ever hope to acquire, by the time the acquisition completed, MindFlow management had let go of almost all of its engineers, including some of the best software developers I’ve ever worked with. That being said, Emptoris did acquire all the IP, code, design documents, research, notes, etc. … and their team is definitely competent enough to make use of any innovations Emptoris did not have at the time of acquisition.
(3) Emptoris is correct in that:
(a) MindFlow did not identify conflicting constraints … glad to hear their product does that now – this is something that should be advertised, as the only other solution I know of that does this (or at least does it well) is CombineNet
(b) MindFlow did not do sensitivity analysis automatically (but this really isn’t too hard to automate, at least with respect to their definition)
(c) MindFlow did not support counts (it could have, the base functionality was there, but the architect chose not to build on it)
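On (3)(b), to show why I say automating sensitivity analysis isn’t too hard, here is a minimal sketch: re-solve the award with each input perturbed and report the cost deltas. The greedy allocator and the numbers are hypothetical stand-ins of my own, not MindFlow’s or Emptoris’ actual logic:

```python
def allocate(demand, suppliers):
    """Greedy lowest-price-first allocation; returns total cost.

    suppliers: dict of name -> (unit_price, capacity).
    A stand-in for a real optimizer -- just enough to show the mechanics.
    """
    remaining, cost = demand, 0.0
    for name, (price, cap) in sorted(suppliers.items(),
                                     key=lambda kv: kv[1][0]):
        take = min(cap, remaining)
        cost += take * price
        remaining -= take
        if remaining <= 0:
            break
    if remaining > 0:
        raise ValueError("insufficient total capacity")
    return cost

def capacity_sensitivity(demand, suppliers, delta=0.10):
    """Automated sensitivity: re-solve with each supplier's capacity
    perturbed by +/- delta and report the change in total cost."""
    base = allocate(demand, suppliers)
    report = {}
    for name, (price, cap) in suppliers.items():
        scenario = dict(suppliers)
        scenario[name] = (price, cap * (1 + delta))
        up = allocate(demand, scenario) - base
        scenario[name] = (price, cap * (1 - delta))
        down = allocate(demand, scenario) - base
        report[name] = (up, down)
    return base, report
```

With a demand of 100 units, supplier A at $1.00/unit with capacity 60, and B at $1.50/unit with capacity 80, the base cost is 120; growing A’s capacity 10% saves about 3, shrinking it costs about 3, and B’s slack capacity shows no effect. The same loop works for any input parameter, which is all “automatic sensitivity analysis” need mean under their definition.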
(4) Emptoris is mostly correct in that:
(a) MindFlow did not identify secondary suppliers;
it wasn’t automatic, but you could create a “what-if” scenario off of the original, exclude primary suppliers, and get secondary suppliers
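That manual what-if is itself easy to automate: re-run the award with the primary winners excluded, and whatever wins the second pass is your secondary supplier list. A toy sketch, with a greedy lowest-price award rule of my own standing in for either vendor’s actual engine:

```python
def award(demand, bids):
    """Greedy lowest-price award; returns {supplier: quantity}.
    bids: dict of name -> (unit_price, capacity). A stand-in solver."""
    alloc, remaining = {}, demand
    for name, (price, cap) in sorted(bids.items(), key=lambda kv: kv[1][0]):
        if remaining <= 0:
            break
        take = min(cap, remaining)
        alloc[name] = take
        remaining -= take
    return alloc

def secondary_suppliers(demand, bids):
    """The manual what-if, automated: exclude the primary winners
    and re-run the award on the remaining bids."""
    primary = award(demand, bids)
    backup_bids = {n: b for n, b in bids.items() if n not in primary}
    return award(demand, backup_bids)
```

For example, with demand 50 and bids A ($1.00, cap 30), B ($1.20, cap 40), C ($1.50, cap 60), the primary award is A:30 and B:20, and the secondary pass awards C:50.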
(5) Emptoris is not correct with respect to:
(a) MindFlow did not support Step Volume Discounts;
the last version of the model did, but I seem to recall it was not obvious through the UI that it did
(b) MindFlow lacked supplier-specified capacity constraints:
it had them, but, as with the bundles, the buyer had to create them
(c) MindFlow did not have an ability to specify a limit on attributes other than price and quantity;
it did by way of the qualitative constraints, but it was not as straightforward as it could have been
(6) I cannot comment on:
(a) “handling of delivery constraints”, as I do not know precisely what Emptoris means by that
(b) bid adjustments on any numeric value; this is easy enough to do, but it requires front-end as well as back-end support, and I do not know what was ever done on the front-end in that regard in the MindFlow solution; I’m guessing not much
(7) If Emptoris now has
(a) alternate support,
(b) ship-to support,
(c) automatic conflict constraint identification, and
(d) flexible attribute support, then
(i) their solution is more powerful than any analysis of easily, and not-so-easily, locatable public information would lead one to believe and
(ii) they should be advertising this information, since it clearly sets the solution apart from the vast majority of other optimization(-based) solutions out there.
(8) As per my previous comment on Spend Matters, I was not evaluating the overall Advanced Sourcing Service. I was focused only on what JB asked me to focus on, the optimization component. I have also analyzed the entire offering, with respect to everything I know, and that was my last post.
(9) As for their final paragraph, I fully understand why they should not want to brief me, and more important than the points they list is the missing, unwritten point (3): as an optimization expert, if you simply provide me with an idea I did not have before (they did not in this post, though I admit there are a few things they mentioned that I only started thinking about seriously within the last year or two), then there is a good chance I could figure out how to do it on my own.
With respect to their points (1) and (2), that is unfortunate, since it also implies they can no longer brief JB either, as he also consults with their competition from time to time, and his blog is the only blog in the space that is read more regularly than some of the big publications! (I’m getting more hits every day, and I believe more than many of the blogs in the space, but SpendMatters has over a year on me and still holds the top spot.)
(10) I’d like to see more posts from Emptoris in the future! I know the Emptoris philosophy is more along the lines of “share as little as humanly possible”, but I’m of the opinion that if you have something good, you shouldn’t be afraid to show it! It certainly helps analysts and buyers figure out where you stand from a competitive perspective and reduces the FUD factor, and, I assume, would also help sales, since people would see where the application is better, and better suited to their needs.
Response, Part II
Their Edelman paper – “Reinventing the supplier negotiation process at Motorola (Internet enabled supplier negotiations software platform)” – does not help a user as much as their post does, as the post outlines a few capabilities that the MindFlow model did not have. It only proves that as of 2004, although they had a better engine than MindFlow, they did not necessarily have a better model.
| Emptoris (2004 paper) | MindFlow |
|---|---|
| Key dimensions: Suppliers, Items | Key dimensions: Suppliers, Products, Ship Tos |
| Item Substitutions | Native Model Dimension |
| Tiered Bids | Tiered Bids |
| Discounts / Rebates | Discounts / Rebates |
| Penalties (Missed Terms) | Usage Costs |
| Min/Max Suppliers | Supplier Risk Mitigation |
| Preferred Vendor / Minority Award / Offset Proximity Award / Min-Max Award | General Purpose Group-Based Supplier Allocation by Volume |
| Budgetary Limit Award | Group-Based Supplier Limit by Cost |
| Non-price Factors / Switching Costs | General Purpose Fixed Costs |
| Supplier Qualification | General Purpose Exclusions |
| Split Award | Generic Meta-Allocation Constraints |
| Terms Support | General Purpose Qualitative Constraints |
For those readers who want to read the details for themselves (and if you are interested in an optimization solution, I would encourage you to do so), below are some links you can use, as the link Emptoris provides simply takes you to the INFORMS presentation abstract. The first link is free (after you sign up for an account, if you don’t have one), but since it is a plain-text link, it is missing the figures and a few of the more sophisticated equations. The second, Goliath, link is free if you are a member ($19.95/month), or $4.95 otherwise. (It looks like it may be text-based also, so I’d be wary of buying the article from this source, but I would check this link first if you are a Goliath member.) The last link is Emerald, and I believe it will provide a true copy of the original article, but it is the most expensive, at GBP 14.50.
Although they likely cannot comment, with respect to the following paragraph from that paper, I’d be very interested in knowing how they overcame the over-aggressiveness of the implicated CPLEX pruning algorithms to improve accuracy and increase the chance that you truly are finding an optimal solution without sacrificing performance. Was it simply extensive trial and error and incremental parameter tweaking, or did they uncover a hidden secret of CPLEX?
“Some of the rules had straightforward linear formulations, while others, such as complex discount structures, were nonlinear and also required very large coefficients that in most cases introduced numerical instability. We had to reformulate them to make them tractable and still accurate. The resulting MIP formulation was very complex and in many cases, especially for large auctions, was not readily solvable by commercial optimizers. We then introduced heuristics that reduced some of the problem coefficients, guided by the branch-and-cut strategy using knowledge of the specific problem structure, took advantage of the variable dependency, and iterated the solution to improve accuracy without increasing complexity. In addition, by using the appropriate settings of numerous CPLEX integer solve parameters, such as diving and probing, we further improved performance. Altogether these actions produced robust and scalable solutions, allowing the software to solve problems with hundreds of items and thousands of bids in a few minutes.”
My experience with CPLEX is that its aggressive branch-and-bound pruning algorithms can be fooled even by small models, and it only gets worse in version 10. I have evaluated/constructed a number of small models that solve in less than a tenth of a second using default algorithms in CPLEX (on a reasonably high-end server), but whose accuracy is off by close to 1%. In comparison, other solvers solve these same models in under a second (although three to seven times slower) and reach true optimality. Although users want fast solutions, depending on model size and scenario value, I believe most will wait a few extra seconds or minutes for even half a percentage point, since this translates to $50K on a $10M scenario. In other words, I’m curious as to whether or not they have made any significant, non-resource-intensive advancements with regards to controlling the solution-time vs. optimality trade-off.
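To make that trade-off concrete, here is a toy branch-and-bound for a 0/1 knapsack with a relative optimality-gap stopping rule, analogous in spirit to a MIP solver’s gap tolerance (this is my own illustrative sketch, not CPLEX’s actual pruning logic): a gap of 0 proves true optimality, while a positive gap prunes harder and only guarantees a solution within that fraction of the optimum.

```python
def knapsack_bb(values, weights, capacity, rel_gap=0.0):
    """Depth-first branch-and-bound for the 0/1 knapsack problem.

    rel_gap mimics a MIP solver's relative gap tolerance: a node is
    pruned once its LP-relaxation bound cannot beat the incumbent by
    more than rel_gap, so the result is within rel_gap of optimal.
    Returns (best_value, nodes_explored).
    """
    # Sort by value density so the greedy fractional fill is a valid
    # LP-relaxation upper bound.
    items = sorted(zip(values, weights),
                   key=lambda vw: vw[0] / vw[1], reverse=True)

    def bound(k, cap, val):
        # Upper bound: greedy fractional fill of the remaining items.
        for v, w in items[k:]:
            if w <= cap:
                cap -= w
                val += v
            else:
                return val + v * cap / w
        return val

    best, nodes = 0.0, 0
    stack = [(0, capacity, 0.0)]   # (next item index, spare capacity, value)
    while stack:
        k, cap, val = stack.pop()
        nodes += 1
        best = max(best, val)      # the items taken so far are feasible
        if k == len(items):
            continue
        if bound(k, cap, val) <= best * (1 + rel_gap):
            continue               # prune: cannot improve by enough
        v, w = items[k]
        if w <= cap:
            stack.append((k + 1, cap - w, val + v))  # branch: take item k
        stack.append((k + 1, cap, val))              # branch: skip item k
    return best, nodes
```

On the instance values (60, 100, 120), weights (10, 20, 30), capacity 50, the exact run proves the optimum of 220; a 10% gap run explores fewer nodes and, on this instance, still happens to return 220, though it only guarantees a value of at least 200. That is the knob I am asking about: how far did they have to open it to get their reported solve times, and at what cost in optimality?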
I agree with Jason Busch: regarding consulting to or advising different companies in the sector, I would strongly urge all bloggers, analysts, and journalists to disclose any and all past and current commercial affiliations, just as Michael and I do on our blogs. For those who are interested, these links are here, as well as on the sidebars of our blogs.