Open Source Quality
Author’s Note: This post was resurrected from an archived version of my website, and has been updated to point to current sources.
During a recent conversation with a trusted friend, I was discussing the use of open source products in a particular project. “Sure, I’ve done my homework and selected projects that appear to have the best chance of being stable and supported in the future, and they have greatly reduced code duplication and increased development velocity,” I said. (My heuristic for evaluating an open source project includes a number of factors: thoroughness of documentation, quality of the website, activity on public mailing lists, project history, corporate stakeholders and sponsors, and responsiveness of the developers to bugs, to name a few.)
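To make the heuristic concrete, here is a minimal sketch of how those factors might be combined into a single score. The factor names, weights, and 0–10 rating scale are all illustrative assumptions on my part, not an established methodology.

```python
# Hypothetical weights for the evaluation factors listed above.
# Both the factor names and the weights are illustrative only.
WEIGHTS = {
    "documentation": 0.20,
    "website_quality": 0.10,
    "mailing_list_activity": 0.20,
    "project_history": 0.15,
    "corporate_sponsorship": 0.15,
    "bug_responsiveness": 0.20,
}

def project_score(ratings: dict) -> float:
    """Combine per-factor ratings (0-10) into one weighted score (0-10).

    Missing factors default to 0, i.e. unknowns count against the project.
    """
    return sum(WEIGHTS[factor] * ratings.get(factor, 0.0) for factor in WEIGHTS)

# A project rated 10 on every factor earns the maximum score.
print(project_score({factor: 10.0 for factor in WEIGHTS}))
```

Even a toy model like this makes one thing obvious: the output is only as good as the guesses that go into it, which is exactly the problem described below.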
His response to my statement caught me off guard: “You’re using ~30k LOC in relatively untested (formally) open source libraries, and your application is only 10k LOC? That would make me really nervous.” I had done my homework. I had been using open source software like this for years with great success. I was confident I had selected quality projects that would be available now and in the future. I had faith in the developers, and the versions we used were stable, reasonably fast, and did exactly what we expected them to do. However, I found myself with no sound response that could easily allay his concerns.
Regardless of how sure I think I am, I don’t really know how reliable the code I use is. My estimations are just wild guesses about which open source projects are production quality. Even though I’ve had more experience than most, my estimates aren’t quantitatively better than the next guy’s, and they’re largely based on a “see what works” model. Yes, it might work to find open source software in wide use with few bugs, but it leaves a lot to be desired in terms of risk management in a serious application.
Unlike commercial offerings, I can’t get a support contract that imposes specific penalties on the vendor if they fail to provide a specific level of service. If the authors of these projects decide to shut the website down and go away, I’m left with nothing but the code and what little documentation I have. If I’m lucky, maybe I’ll get a shoulder to cry on, or someone else will pick up the flag and carry on, but I won’t be holding my breath.
If you’ve ever tried to read the source of most open source projects, you’ll know why there aren’t many really large (20+ active core developers) open source projects. Users of open source software put a tremendous amount of faith in the authors of their programs. We trust them to write stable code, reasonably free of bugs, and to remain committed to doing so in the future (since few companies can shoulder the cost of continuing development when the leaders go away).
This leaves me in an interesting position. Unlike my project, most businesses are unable to tolerate the risk of using software from the “wild west” of the software development world. Yes, a number of companies already make money by providing support or training for open source products, but these offerings are rarely centered on supporting the open source components that are rapidly making their way into a growing number of commercial products.
What the open source community needs is an independent body that is responsible for helping identify quality open source software components and mentoring developers who are interested in building this quality into their software and open source process. Ideally, the organization should provide exhaustive tests and risk analysis of open source projects to certify the project’s suitability for inclusion in a commercial/critical product.
Part of the verification process should include static analysis of the code, documentation, and requirements for the project. Additional rating criteria should include research into release policies, succession plans for key developers, speed of security patches, licensing limitations, known or outstanding bugs, and future plans and roadmaps. The list could continue for a page or two.
The testing organization should also perform exhaustive benchmarking and load testing to establish reasonable numbers for performance expectations and to feel out any limits to the scalability of the software.
Once testing is complete, the organization should draft an official report that summarizes its findings and assigns an overall rating to the software. This organization might even be able to steal ideas from other market/corporate research organizations and compile lists of leading open source projects in different categories. The ratings should be easy to read, and should include brief executive summaries that are accessible to the non-technical types managing projects. Standard rating scales should be created, and each project should be rated quantitatively on a number of scales that rate the risk of use on a variety of different metrics.
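As a sketch of what such a report might boil down to, here is one way per-metric risk scores could be mapped to the kind of coarse, executive-friendly bands described above. The scale names, thresholds, and band labels are all hypothetical; a real certification body would define its own.

```python
# Hypothetical risk scales a report might cover; illustrative only.
RISK_SCALES = ["maintenance", "security", "licensing", "performance"]

def band(score: float) -> str:
    """Map a 0-100 risk score (higher = safer) to a coarse band."""
    if score >= 80:
        return "Low risk"
    if score >= 50:
        return "Moderate risk"
    return "High risk"

# Example scores for an imaginary project, one per scale.
scores = {"maintenance": 85, "security": 62, "licensing": 90, "performance": 45}
report = {scale: band(scores[scale]) for scale in RISK_SCALES}
print(report)
```

The point of the bands is exactly the accessibility argument above: a manager who never reads the full report can still see at a glance where the risk lies.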
Using these reports, corporations would have the data they need to make informed business decisions about using open source products in their commercial applications. Software professionals could use this data to sell open source components to higher management. Individual open source projects are also freed from the burden of producing this sort of documentation themselves, and can use the guidelines as suggestions for improving their own development processes.
Like any worthwhile endeavor, establishing this testing body would not be easy. It takes a lot of time and, more importantly, a lot of money. Ratings could only be established for each major “feature” release of a project, and would have to be continually renewed as new versions are released. Deciding who gets rated is also difficult, and would likely be driven by corporate requests. Quantitative ratings should be available for free, but the full report might have to be available for a fee. And, most importantly of all, it may not fly with the cult of anti-establishment types in the open source world who fight “the suits” and “the man.”
This problem is the old “unite or die” all over again. If we miss this opportunity now, open source may be forever confined to those who live fast and loose on the frontier of development, and out of the reach of many. If we act now, we may be able to begin a cultural revolution in software development and engineering and deliver higher quality open source software solutions for everyone.