Category Archives: Vendor Review

Riding the Rails with Coupa

As you may recall, Sourcing Innovation was one of the first blogs to bring you a detailed preview of Coupa, the revolutionary new enterprise open-source e-Procurement application from Silicon Valley. One of the most interesting aspects of this technology is that it is being built on Ruby on Rails (RoR), as discussed by co-founder Dave Stephens on his blog Procurement Central [WayBackMachine].

This is a bold move considering that RoR is still a relatively new technology, essentially unproven in the enterprise application market beyond the corporate website. But it is one that could pay off big time for Coupa when you consider the rapid development time enabled by RoR compared to other enterprise platforms such as Java and .NET (where WORA* does not apply). Personally, I’m still a big Java fan, but I can see RoR becoming the platform of choice in a couple of years for a number of reasons:

  • faster development time
    following the mantra of “convention over configuration”, RoR trades flexibility for convenience: the framework makes sensible default assumptions that significantly decrease the amount of configuration required, allowing developers to do more, quicker, and better within it
  • MVC architecture
    unlike most enterprise frameworks that have preceded it, RoR was built on the MVC architecture from the ground up and has built-in object-relational mapping capabilities
  • full stack framework
    whereas some platforms require extensions from multiple vendors, Rails provides all of the components commonly needed by most web-based systems
  • designed for reusability
    RoR adheres to the DRY (Don’t Repeat Yourself) philosophy and its framework was designed to allow every piece of knowledge in the system to be expressed in just one place
  • preconfigured application structure
    RoR automates the creation of project structure and automatically creates all files and components needed by default (no need for a fancy IDE to automate these tasks for you)
  • simplicity
    Rails wasn’t designed to do everything; its focus on the common features used by a majority of programmers a majority of the time removes much of the complexity inherent in many application frameworks; note that this does not limit its capability, as it includes a robust extension mechanism that allows development teams to add (only) the capabilities they need
  • strong community uptake
    a large number of developers, especially in the open source community, are latching onto RoR as their development environment of choice as it overcomes the shortcomings of web scripting languages such as PHP and the impracticality of J2EE for (rapid) web-based development
  • XML compliant
    so if you need to integrate with a non-RoR app, no problem!
  • rapidly maturing environment
    just like Java, RoR is rapidly maturing from a neat framework for cool web page development to a full-fledged enterprise application development platform – I’d say it’s pretty close to Java 1.2 in terms of lifecycle, which is where Java truly became a solid language for application development
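
To see what “convention over configuration” buys you, here is a self-contained Ruby sketch. To be clear, this is not Rails code – the `ToyRecord` base class and its methods are invented for illustration – but it captures the idea: derive the table name from the class name and generate accessors from a one-line attribute declaration, so there is nothing to configure.

```ruby
# A toy illustration of "convention over configuration" (NOT real Rails code).
# Like ActiveRecord, the base class derives its table name from the class name
# and generates accessors from a declared attribute list -- zero config files.
class ToyRecord
  def self.table_name
    name.downcase + "s"          # convention: class Order -> table "orders"
  end

  def self.attribute(*names)
    names.each { |n| attr_accessor n }
  end
end

class Order < ToyRecord
  attribute :supplier, :amount   # one line replaces pages of XML configuration
end

order = Order.new
order.supplier = "Acme"
order.amount = 1200
```

In real Rails, `ActiveRecord::Base` does the equivalent (with proper pluralization and database-backed attributes); the point is that sensible defaults replace explicit mapping configuration.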

In other words, instead of jumping onto someone else’s bandwagon, Coupa has decided to jump into the driver’s seat and lead the charge in the development of eProcurement applications.

And if you want to join the RoR movement early, but don’t know where to begin, consider checking out www.daveastels.com, especially if you are in the Northeast, for courses, resources, and best practices consulting.

*WORA: Write Once Run Anywhere

CombineNet Communiqué III: Differences

Disclaimer: This blog, including this post, is not sponsored by CombineNet (acquired by Jaggaer). The author is not employed by, contractually engaged with, or affiliated with CombineNet. Any and all opinions expressed herein are solely those of the author. Furthermore, the opinions expressed herein should be contrasted with the opinions of other educated professionals before the reader forms his or her own opinion. Finally, the author is neither endorsing nor opposing the use of CombineNet’s products or services – merely trying to spread awareness of the importance of optimization and the relative uniqueness of an offering like that of CombineNet. This disclaimer holds true for each post in this multi-part series and will be repeated.

Yesterday we discussed CombineNet’s post “The Make vs. Buy Dilemma” and noted that, although it was a very good post, it did not quite meet its goal, because the problem it presents could be more than adequately solved by a leading platform optimization engine. We also noted that there are problems that platform optimization engines cannot solve.

We noted that Paul was on the right path with the make versus buy example. This was primarily because there were three options at different levels of complexity:

  • source individual components (make)
  • source assembled components (buy)
  • source sub-assemblies

and each of these options could define a sophisticated model on its own.

Now we’ll shift from make vs. buy to logistics and, specifically, transportation network design. Consider a buyer who needs to transport product from a supplier’s facility. The buyer often has at least three options: let the supplier transport the product from its facility to the buyer’s facility, let a third party transport the product from the supplier’s facility to the buyer’s facility, or have the supplier transport the product to a centralized distribution center and then have a preferred third party carry it the rest of the way. Sounds easy, doesn’t it? It sounds like something we could probably do by hand, since there will only be a small number of legs associated with each lane, each with a small number of costs, and besides shipping costs we only need to consider tariffs.

Now consider a buyer who needs to transport product from fifty supplier facilities. Instead of having at least three choices, you now have at least one hundred and fifty choices. Now you might be thinking that you could simply solve for each supplier location separately, which you could easily do with most platform optimization engines, but this is not likely to be the optimal solution since (a) there will likely be discounts from suppliers and freight carriers if sufficient volume is allocated and (b) if intermediate DCs are used, then products from the same category can be bundled and better rates obtained.

Of course, you might note that although most sourcing models assume either that the supplier is providing freight or that a single carrier is being used, at least one provider will soon offer a model in which you can simultaneously associate multiple freight options with each product, in which an intermediate distribution center is just another option (with adjusted costs), and which could, with the proper formulation, costs, and adjustments, likely handle such a scenario. Well, yes and no. It could handle a basic representation, but we haven’t considered bundling concerns (not all products from a centralized DC can be shipped on the same truck), multiple freight brackets and discounts (which could greatly increase the computational complexity), or the fact that the best design for an international transportation network might involve multiple levels of centralized distribution centers. Even though a platform optimization engine with a really good embedded model could probably handle bundling reasonably well by way of grouping and exclusion constraints, and freight brackets and discounts by way of appropriately implemented discounts and penalties, the reality is that your standard embedded sourcing model, or even logistics model, is not going to be able to fully handle a dynamic multi-level transportation network.
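
To make the pitfall concrete, here is a minimal Ruby sketch with entirely made-up numbers (the lane costs and the flat bundling rebate are illustrative assumptions, not anyone’s real rates). Optimizing each supplier lane in isolation picks the third-party carriers; enumerating the combinations shows that routing everything through the DC and capturing the bundling rebate is cheaper.

```ruby
# Illustrative (made-up) costs: three routing options per supplier lane.
LANES = {
  "supplier A" => { supplier_freight: 100, third_party: 95, via_dc: 105 },
  "supplier B" => { supplier_freight: 90,  third_party: 88, via_dc: 96  }
}
DC_BUNDLE_REBATE = 40  # assumed flat rebate when every lane routes via the DC

# Greedy: optimize each lane in isolation (what a per-supplier solve does).
greedy_cost = LANES.values.sum { |opts| opts.values.min }   # 95 + 88 = 183

# Exhaustive: enumerate every combination of routing choices and apply the
# bundling rebate whenever all lanes go through the centralized DC.
option_lists = LANES.values.map(&:keys)
best_cost = option_lists[0].product(*option_lists[1..-1]).map do |choices|
  total = LANES.values.zip(choices).sum { |opts, choice| opts[choice] }
  choices.all? { |c| c == :via_dc } ? total - DC_BUNDLE_REBATE : total
end.min                                                     # 105 + 96 - 40 = 161
```

With two lanes the exhaustive loop is trivial, but at fifty supplier facilities it explodes to 3^50 combinations, which is exactly why real network design needs a proper optimization model rather than enumeration.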

So does this mean that best of breed wins out? Not necessarily. You’re not going to redesign your transportation network for every sourcing event. In fact, you’re probably not going to make significant changes to your network more than every couple of years. In addition, most of the time your problems are not going to be anywhere near this level of complexity.

So does this mean that leading platform optimization engines are your best choice? Not necessarily. As I indicated in my first post, on a high value category, even a couple of extra points can mean millions of dollars.

The answer is that if you are a large company, you probably need to follow the advice of the SpendFool and use both – they are not mutually exclusive. Use your (super-charged) Honda-powered platform optimization engine from your leading Software-as-a-Service provider for your average sourcing event, since this is probably your highest ROI, but bring in the Lear-powered jet for your high-value complex events with very large numbers of decisions that need to be made simultaneously – particularly those that involve network design or really complicated make versus buy decisions (where you are not just considering a seat but the sourcing of the BOM for an entire car; once you know what you are going to make versus what you are going to buy, you can probably revert to the Honda-powered platform for the individual sourcing events). As for how often you will need the Lear jet, that depends on your specific needs. If you’re not sure, start with the Honda and see how far it takes you – or, better yet, bring in a third party with expertise in optimization and market offerings to help you decide. In my view, there’s plenty of room in the market, and plenty of need, for both BoB and POE because, when it comes to optimization, there really is no one-size-fits-all; the best you are going to find is a one-size-fits-most for any category of problems that you have.

Is the story over? Not by a mile. This series is titled CombineNet Communiqué and the next part of this series* will discuss their (public) solution offerings now that we’ve illustrated a scenario where a platform optimization engine might not be enough.

* After a brief hiatus.

CombineNet Communiqué II: Comparisons

Disclaimer: This blog, including this post, is not sponsored by CombineNet (acquired by Jaggaer). The author is not employed by, contractually engaged with, or affiliated with CombineNet. Any and all opinions expressed herein are solely those of the author. Furthermore, the opinions expressed herein should be contrasted with the opinions of other educated professionals before the reader forms his or her own opinion. Finally, the author is neither endorsing nor opposing the use of CombineNet’s products or services – merely trying to spread awareness of the importance of optimization and the relative uniqueness of an offering like that of CombineNet. This disclaimer holds true for each post in this multi-part series and will be repeated.

Yesterday we recounted The Story To Date. CombineNet’s representatives had stated that their product is designed for “everything in the bucket sourcing”; that their optimization is easily accessible; that their speed allows for more scenarios to be run in the same time frame, increasing the probability of a successful event; that they are now a hybrid of BoB (Best of Breed) and POE (Platform Optimization Engine), offering their clients the best of both worlds with their preconfigured templates; and that even Jay Reddy, founder of MindFlow, openly acknowledged CombineNet’s optimization capabilities were without peer. I noted that the reality was, respectively: more or less; (much?) better than it was (but easy is a relative term); definitely; more or less (but that’s not necessarily a bad thing); and pseudo revisionist history.

Today I’m going to indirectly address these issues by tackling CombineNet’s post The Make vs. Buy Dilemma, which was in response to my comments on their Analytics Support Negotiations posting.

In this post, Paul Martyn endeavors to offer a specific make-versus-buy example that illustrates the savings potential an optimization-enabled sourcing process can unlock, and to demonstrate how Expressive Bidding unleashes savings and offers new insight into supply plans.

In this example, Paul uses the example of a seat assembly to illustrate that in addition to:

  • sourcing the individual components and assembling them (make)
  • sourcing the assembled components (buy)

one can take a hybrid approach where one considers

  • sourcing bundles of parts in combination with individual parts.

In his example, Paul illustrates a situation where sourcing individual parts (make) costs $129, sourcing the final product (buy) costs $120, but sourcing subassemblies, which may consist of the odd individual part, only costs $92.

Although this was a very good post, and one of the clearest posts out there on the power of optimization, it did not quite meet its goal, because this is a problem that could be more than adequately solved by a leading platform optimization engine such as Iasta’s – although it would take three scenarios instead of one (and possibly a few extra milliseconds of solve time) – and a problem that could have been solved with a single scenario using an unreleased version of the optimizer from MindFlow, the former leader in the platform optimization engine category. In other words, it might take the right approach, a little creativity, or a little extra work, but some make vs. buy analysis can be done with platform optimization engines.
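
Paul’s three options can be captured in a few lines of Ruby (using his published figures; the option labels and cost comments are mine). At this toy scale, the “optimization” reduces to picking the minimum, which is precisely why a platform optimization engine can handle it:

```ruby
# Paul's seat-assembly figures: make $129, buy $120, hybrid $92.
OPTIONS = {
  make:   129,  # source the individual components and assemble them
  buy:    120,  # source the finished seat assembly
  hybrid:  92   # source subassemblies plus the odd individual part
}

best_option, best_price = OPTIONS.min_by { |_, price| price }
savings_vs_buy = OPTIONS[:buy] - best_price   # what the hybrid approach unlocks
```

The hard part in practice is not picking the minimum but generating and pricing the hybrid bids in the first place – which is what Expressive Bidding, or the three-scenario workaround in a platform engine, is really about.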

Does this mean that you don’t need best of breed? Not necessarily. There are problems that the platform optimization engines cannot solve, but these tend to fall into two categories – deep and complex, or highly specific – and, as it was defined, this was not one of them. Still, Paul was closing in on the right path, and tomorrow we will discuss a problem that you would be (very) hard pressed to solve using a platform optimization engine.

CombineNet Communiqué I: The Story to Date

Disclaimer: This blog, including this post, is not sponsored by CombineNet (which was acquired by Jaggaer). The author is not employed by, contractually engaged with, or affiliated with CombineNet. Any and all opinions expressed herein are solely those of the author. Furthermore, the opinions expressed herein should be contrasted with the opinions of other educated professionals before the reader forms his or her own opinion. Finally, the author is neither endorsing nor opposing the use of CombineNet’s products or services – merely trying to spread awareness of the importance of optimization and the relative uniqueness of an offering like that of CombineNet. This disclaimer holds true for each post in this multi-part series and will be repeated.

For those of you following the sourcing blogosphere, you’ll know that I’ve been giving CombineNet a bit of a (good spirited) hard time lately on Spend Matters, e-Sourcing Forum (ESF) [WayBackMachine], and here on SI, but I’m just trying to poke and prod them into educating the sourcing community as a whole, since I believe that decision optimization is still not well understood overall, and optimization is a much more involved topic than most people realize. After all, they have what should be the only real optimization blog out there, CombineNotes.

If you haven’t, I would highly recommend you read the spirited debates over on Spend Matters that resulted from the following posts:

Old News Keeps Flowing*
What Do Rubik’s Cube and Expressive Bidding Have in Common*
An Optimization Knock-Down!*

as well as the following CombineNet posts:

  • Project Zander and Comprehensive Network Design
  • CombineNet – The Allocation Company
  • Analytics Support Negotiations
  • Expressive Commerce and the Long Tail
  • Perspectives in Puzzling
  • The Make vs. Buy Dilemma
  • ‘Right Tool for the Job’ Sourcing

If you’re new to SI and missed my Optimization series over on ESF, you can still review part I, part II, part III, and part IV, and if you missed it, my first post on Decision Optimization is still available in the archives.

For those of you who’ve read the posts, and just want a quick recap, here it is.

I fear that optimization is not well understood and that much needs to be done to educate different users on the strengths and weaknesses of differing methodologies and solutions. There are upsides and downsides – the Rubik’s cube is simultaneously the best and worst analogy I’ve seen yet: the apparent complexity is that of a Rubik’s cube, but the actual complexity is much greater.

With respect to optimization, there is a sharp distinction between the problem, model, and solution (algorithm) and confusing them can be dangerous. If the model, and the modeling capabilities of the tool, are not appropriate to the problem, or not useable by the end user, it does not matter how good the solver (or solution algorithm) is. (MindFlow proved this point.)

There are embedded POE (Platform Optimization Engine) solutions, based on COTS (Commercial Off-The-Shelf) optimizers, BoB (Best of Breed) solutions with custom (proprietary) solution algorithms, and solutions that merge the two. The best solution all comes down to the problem at hand and its modeling capabilities – a better algorithm does not necessarily imply a better solution, although it will probably reach one faster. The model is key. Furthermore, there is a cost associated with pure speed – when it comes to optimization, you can only have any two of deep, fast, and accurate.

Depending on the problem, a better model may not save you more money – it depends on how fine-grained the data you have available is and whether or not the model’s constraint representation abilities can support more advanced costing models. If your data is coarse-grained, chances are the additional savings provided by a BoB solution will be negligible (less than 1%), if any. If your embedded solution limits the cost factors (i.e. forces you to combine unit, usage, and/or transportation costs, for example), then a fine-grained model may save you a couple of percentage points if you have detailed transportation and/or usage costs at multiple freight brackets. If your embedded solution does not support sophisticated discounts or bundle-based costing, then a Best of Breed solution may save you significantly more, and the quoted range of 5% to 10% is very realistic. (However, if your embedded solution does support discounts and bundle-based costing, then a best of breed solution may not save you more than a few percentage points, if that much.)

CombineNet has some of the deepest models out there, and they are one of the few companies with proprietary solution algorithms (which require a lot of brain power and development time). I thoroughly believe that their logistics models are best-in-class, but I’m not entirely sure that they are “without peer” when it comes to sourcing, primarily because of what I said in the last paragraph. It depends on the problem and, since this is business, the ROI.

Compared to many of its “peers”, CombineNet has a price tag that is directly correlated to its capability – quite high! (Based on quotes I have heard, one event could cost you as much as a year of unlimited events from an on-demand provider for a small team of sourcing professionals.) For some problems, I strongly believe that, in the words of the SpendFool, a Honda engine will do the job just as well as a Lear engine, and if we are talking about spend in the 10M range, then I doubt the price tag is worth it. But if we are talking 100M+ spend on a very complicated category where you need to do an embedded make-vs-buy analysis (which I’ll elaborate on in a forthcoming post), I might actually advise you to spend money hand over fist on CombineNet, because even an extra percentage point will generate a significant ROI for you. (For example, even if it only saved you two percent on a 100M category, that’s two extra million to apply against your bottom line!)

My comments, as you might have guessed, did not go unanswered. According to CombineNet, their product is designed for “everything in the bucket sourcing”; their optimization is easily accessible; their speed allows for more scenarios to be run in the same time frame, increasing the probability of a successful event; they are now a hybrid of BoB and POE, offering their clients the best of both worlds with their preconfigured templates; and even Jay Reddy, founder of MindFlow, openly acknowledged CombineNet’s optimization capabilities were without peer.

The reality? More or less; (much?) better than it was (but easy is a relative term); definitely; more or less (but that’s not necessarily a bad thing); and pseudo revisionist history. But these are topics for the forthcoming posts in this series, so I’ll leave you with a quote from the SpendFool:

This stuff (i.e. POE & BoB Optimization) isn’t mutually exclusive! Nothing wrong with using the consultants with big brains and tools to solve the really strategic problems AND also using mass deployed tools from ERP and/or SaaS vendors. Pick the right tool for the job. Anything else would be foolish.

Thanks SpendFool!

Looking forward to your comments on my (next) posts!

* All posts prior to 2012 were removed in the Spend Matters site refresh in June, 2023.

Eight Figures for an ERP? Think again. Think Compiere.

ERP – Enterprise Resource Planning – the be-all and end-all of business software – all of your transactional data in one place – everything you need to run your business – only seven figures! That was the promise.

The reality is much different. Seven figures for the software license. Multiples of that for the installation. That much again for the annual maintenance contract. In the end, it was an eight-figure system – one that, if you were lucky, recorded the majority of your transactions, but in such a way that you couldn’t derive any intelligence from the system without buying expensive modules for each business domain that sat on top of the ERP to allow you to create your financials, run human resources, create your manufacturing plans, negotiate your contracts, etc.

And that’s if you were lucky – if you weren’t, you couldn’t afford a real ERP and had to settle for a smaller, imitation system that probably only contained some set of transactions for the business unit that maintained it, was only accessible by a few people, and didn’t even meet their needs – leading to home-grown ad-hoc systems created by mavericks in an effort to do their jobs.

Either way – you probably have a solution that does not meet your needs. A system that requires more modules or third party add-ons than you can afford to be truly effective or a system that does not have the core capabilities or third party support. But what can you do? If you haven’t fully depreciated your mainstream ERP, you can’t afford to rip-and-replace, and if you don’t have one, you just can’t afford one.

You can look to open-source! And no, I’m not stark raving mad. Once the exclusive domain of big international multi-billion dollar software vendors, even ERP is now available open source. Compiere is a fully functional open-source ERP with built-in CRM functionality that is being used today by hundreds of companies all over the world. In addition, Compiere has amassed almost 100 partners in countries all over the world – so local support is available. And because it’s open, anyone can build extension modules on top of it, and custom modules for various domains are already being offered by its partners. Furthermore, Compiere and some of its partners are already hosting instances on-demand. And no, I do not believe I’m insane.

If you’re not an IT company, why not host your ERP on-demand? If it’s not your core business, why maintain an expensive 24-hour IT operation with redundant power supply, internet connectivity, real-time failover, automated off-site backups, IT security experts, always available on-call tech support, etc? After all, with today’s encryption and security protocols, communication security is probably the least of your security concerns.

For those of you who are already open source converts, you might be a little disappointed that Compiere was built on Oracle, but fear not! Compiere just received a significant amount of funding, relocated to Santa Clara, is in the process of completing a Sybase port, and it’s a safe bet that a MySQL(X) port may occur in the future!  (MySQL(X) may not have the required functionality to support Compiere yet, but MySQL(X) improves every year!)

It’s a procurement professional’s dream come true. No huge up front spending for an ERP system that may or may not deliver. With Compiere on demand, you’re paying for the system and support that you use, when you use it, and you’re not locked in!

Furthermore, you just know that a complete open-source-based on-demand procurement system with Compiere at its base is around the corner. After all, Rearden Commerce is less than 30 miles away in its San Mateo offices, and there are a number of e-Procurement companies nearby, like Ketera (which was acquired by Deem in 2010). If these three companies adapted their APIs to allow you to merge their solutions, it would not be long before you could manage 100% of your contracted materials and services procurement spend. Rearden excels at services, Ketera is good for indirect procurement, and ERP-based planning and forecasting systems are the foundation of your direct materials spending.

I know it’s just pure speculation, but if these three companies made it easy to integrate their solutions, you’d be able to run your entire procurement effort on-demand – and, before long, your entire sourcing and procurement operation as well. After all, with respect to a complete solution, we have only left out sourcing (remember, procurement is the acquisition phase, sourcing the predecessor negotiation phase) and visibility (since, at a high level, the cycle is Source-Acquire-Monitor). But wait: companies like Iasta (acquired by Selectica, merged with b-Pack, renamed Determine, acquired by Corcentric) have been offering (complete end-to-end) sourcing suites on-demand for years, and now we have companies like Apexon offering on-demand visibility solutions!

As a side note, Compiere is holding its annual Partner Conference next month – October 20-21 at the Embassy Suites, 2885 Lakeside Drive in Santa Clara. Check out Compiere’s Events Calendar for more details.