A hot debate over the inherent security benefits of open source software (or lack thereof) has been raging (again) since the “Heartbleed” bug came to light last spring. So, from a security expert’s point of view (yes, mine!), is open source really more secure than closed source (or proprietary) software?
Let’s start by reviewing the arguments presented by pundits on each side.
Arguments for Open Source being More Secure
While several arguments are made for open source software being inherently more secure, one in particular is made much more often than the others – the “more eyes equals fewer bugs” argument.
More Eyes = Fewer Bugs
Open source projects are typically composed of groups of otherwise unrelated individuals. Proponents of this argument believe that these people will therefore more closely scrutinize the source code of others. With more people more closely scrutinizing all of the code being added to a project, more bugs will be found.
Closed source projects, on the other hand, are typically composed of groups of individuals who all work for the same employer. These people get paid to generate code that works, not to find bugs in other programmers’ code. Therefore, the argument goes, closed source projects will inherently include more bugs, which means more security problems.
Concern for Reputation
Concern for one’s reputation is another argument frequently posed as proof that open source software is inherently more secure than closed source. Open source applications, by definition, include the source code. The source code includes the name of the programmer who contributed the code. The “more concern for reputation” argument theorizes that a programmer’s reputation is more at risk in open source projects. Not only do their peers more closely scrutinize their code (see the more eyes, fewer bugs argument), but any number of people building and/or modifying the code will also know who contributed it.
Programmers contributing to proprietary coding projects, according to this argument, remain anonymous to the vast majority of the users of the code and therefore incur no risk to their reputation. This will tend to make these programmers less vigilant with respect to security-related bugs.
Domain Experts
Participation by domain experts in open source projects is also suggested as a reason why open source projects will be inherently more secure than closed source projects. Projects involving cryptography are often used as examples of this. The premise of this argument is that domain experts more readily understand the security nuances of a project and are therefore much less likely to contribute code that includes security bugs. Furthermore, they are much more likely to spot security bugs contributed by other programmers.
Adherents to this argument believe that the deep domain expertise that is available to open source projects is not available to closed source projects, making proprietary software more likely to fall prey to security issues related to that domain.
Find & Fix Faster / Number of Bugs Reported
Another often-heard argument is that bugs in open source software are found and fixed faster. Contributors to open source projects tend to contribute to those projects because they will personally use the completed application. The reasoning behind this argument is that there are more people who understand how the code should behave actually using the code in the “real world,” which leads to more bugs being found more quickly. In addition, since the contributors of the code also rely on the code, bugs get fixed more quickly.
Folks espousing this argument assume that businesses selling proprietary software are less responsive for a couple of potential reasons. First, they don’t want to incur the expense of developing and distributing a fix; second, they don’t want customers to know that a security bug exists within the software.
A related argument is the “fewer security bugs are reported for open source code” observation. This argument generally points to one analysis or another that counts the number of security flaws reported to one or more sites or organizations. These numbers indicate that proprietary software has more reported security bugs, and the proponents thus conclude that proprietary software must also then have more security flaws than open source.
Arguments Against Open Source being More Secure
On the opposite side of this issue are arguments for why proprietary code projects are inherently more secure than open source software. There seem to be three primary arguments put forth to support this view.
Decentralized Control
The first of these is that open source projects lack centralized control, and therefore the integrated code is more likely to contain security bugs. This argument suggests that centralized control of a project is necessary to make sure that appropriate development, review, and testing processes and procedures are defined and executed. Most proponents of this argument go on to say that lacking this control will result in less trustworthy code.
Lack of Ultimate Ownership
The next argument, which closely follows the previous one, is that no individual is ultimately responsible for an open source project. It asserts that without someone assuming ultimate responsibility, there is no way to ensure or enforce that proper coding, review, and testing processes are followed, resulting in code that is more likely to contain security bugs.
No Incentives for Security
The final argument for why open source is not inherently more secure than closed source is that there are no incentives for project contributors to avoid security bugs in their code. The rationale for this argument is that contributors are focused on generating code that has the required behavior. Producing code that is free of security bugs is neither rewarded nor penalized, but producing code that doesn’t behave as required results in embarrassment and loss of reputation.
An Observation
Before laying out my opinion on the subject, I’d like to make an observation. Whenever someone questions the rationale of a specific argument, zealots on either side automatically assume that the questioner supports the opposite side. In reality, when someone provides evidence that the rationale of an argument does not support the conclusion, that only means that one doesn’t believe the conclusion is supported by proven fact, not that the conclusion is wrong.
My Take
What’s my opinion on the subject? I’m going to be really controversial and state unequivocally:
I don’t think anyone really knows.
The truth is, I have problems with the rationale for arguments on both sides, and it leads me to this advice:
If you are evaluating an open source solution against a proprietary solution, don’t use “inherently more secure” as one of your criteria. Develop a set of requirements for the solution you need. Consider each option according to those requirements. Rather than using “security” as a requirement, specifically define your security requirements. Here are some examples of what I mean by more specifically defined security requirements:
- fits our current security management model
- is compatible with our current third-party authentication/authorization products
- is provided by a trusted ISV
- is provided by an open source project that is widely used
Probably the biggest problem, and it applies to arguments on both sides, is the lack of empirical evidence. All of these theories describe why open source is inherently more secure, or why it isn’t; none of them provide actual evidence.
Here are some tips for digging down under the various claims to get to the root of the matter.
Question the Validity of the Numbers
Sure, one of the arguments for open source being more secure is that fewer security bugs are reported against open source code. But how can one draw this conclusion when comparing absolute numbers of reports for dissimilar software which have widely ranging numbers of users? One would have to classify all types of software (e.g. operating system, type of application, purpose of program, etc.) and compare against the same type of software. Then one would have to figure out how to adjust the number of reported problems for the differing levels of usage. I’m not aware of any formal studies that have tried to take the analysis of reported bugs to this extent.
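To make the point concrete, here is a minimal sketch of the kind of normalization such a study would need before raw bug counts could be compared. All figures are hypothetical, invented purely for illustration; the function name and numbers are my own, not from any real study.

```python
# Illustrative sketch: why absolute reported-bug counts mislead.
# Every number below is hypothetical, for illustration only.

def report_rate(reported_bugs, installs, kloc):
    """Normalize a raw reported-bug count by install base and code size."""
    per_million_installs = reported_bugs / (installs / 1_000_000)
    per_kloc = reported_bugs / kloc
    return per_million_installs, per_kloc

# Two hypothetical projects of the same type (same class of software):
#                        bugs   installs    KLOC
project_a = report_rate(120, 50_000_000, 400)  # widely deployed
project_b = report_rate(30,   1_000_000, 350)  # niche deployment

# Project A reports 4x as many bugs as B in absolute terms,
# yet per million installs B's rate (30.0) dwarfs A's (2.4).
print(project_a)  # (2.4, 0.3)
print(project_b)  # (30.0, ~0.086)
```

Even this is a simplification: a real comparison would also have to control for how aggressively each project's users report bugs, which is exactly why I'm skeptical of the raw-count studies.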
Neither am I convinced by the “more eyes, fewer bugs” argument. There is no evidence to suggest that merely having more people looking at code makes it more secure. First, you have to ensure that the people looking at the code are properly trained to look for security loopholes. Then you have to ensure that you have a formal process by which the code is analyzed for loopholes. Finally, you have to have a way to ensure that the review process is actually executed. The very nature of most open source projects makes it difficult to ensure that those who are supposed to be reviewing all of the code have, in fact, done so.
Ask Who Provides Security Expertise
The domain expert argument relies on the assumption that domain experts are more readily available to open source projects than to proprietary projects. It seems to me that domain expertise would be at least as available to proprietary projects as to open source projects simply due to the fact that proprietary projects pay for expertise. While this advantage may be available to open source projects, I don’t see how one can assume it isn’t just as available to closed source projects.
Compare Apples to Apples
I also don’t believe there is proof that security flaws are found and fixed faster in open source than in proprietary code. This goes back to one of my initial objections: the argument relies on an invalid assumption, namely that all coding projects, open or closed, have the same inherent qualities regardless of who participates. It may be possible to select specific projects of a like nature, compare open source implementations against proprietary implementations, and make a determination for that specific type of project, but it doesn’t make sense to compare, for example, OpenSSL against Microsoft Windows. The former is many times smaller in terms of the number of lines of code and the number of functions provided. Of course there will be many more bugs reported against Microsoft Windows. And being a solution for a specific function, OpenSSL requires much less testing time to ensure that a bug is fixed and that the fix doesn’t introduce additional problems, which means it will take longer to produce a fix for Windows.
The same is true for the “fewer security bugs are reported in open source” arguments. You have to take into account the nature, size, and complexity of a project and compare only projects that are very similar. You can’t blindly analyze sheer numbers for all open source versus all closed source projects and believe that the results give any insight into the relative numbers of bugs for each type of implementation model.
Arguments on the other side have their own logical weaknesses. They tend to assume that all open source projects are managed the same way. My admittedly small exposure to open source projects leads me to believe that different open source projects are handled differently. Most seem to have one or a small number of project leaders who make most of the administrative and management-related decisions, but the processes used for making those decisions didn’t seem to be well defined. Proprietary projects I’ve worked on typically have had centralized control. However, the processes for development, review, and testing have been quite different across those proprietary projects. Some have had, in my opinion, outstanding processes that were well defined and assiduously followed, while others seemed to suffer the same lack of definition attributed to open source.
Be Open-Minded About Incentives
I don’t agree with the argument that says since contributors to open source projects have no monetary incentive (i.e., they aren’t paid) to produce code with no security flaws, there are likely to be more security flaws in those projects versus closed source. I think there are incentives that are as valuable to open source contributors as a salary is to closed source programmers. These incentives include the contributors’ need to use the solution, the desire to enhance or maintain one’s reputation, and plain old pride in doing a job well.
Find Out Who is Managing the Project, and Why
The “there is no one ultimately held accountable” argument isn’t very convincing to me either. First of all, as I mentioned earlier, the few open source projects I have been exposed to did have one or more people responsible for the project as a whole. These people may not have been financially responsible, but the success or failure of the project still largely rested on their shoulders. Secondly, financial responsibility isn’t the only motivator for producing a project with a minimum number of security flaws. In fact, one can argue that in some cases there may be greater financial incentive to produce proprietary code that has security bugs. For example, it may be a higher priority to finish a project by a certain date than to take the time to ensure that as many security bugs as possible have been removed before releasing the code. For these reasons this argument isn’t very compelling to me.
The Bottom Line
The bottom line for me is this: Many open source projects provide huge benefits. Many proprietary projects provide huge benefits. One shouldn’t attempt to choose one over the other based on the supposition that the implementation model of one or the other will necessarily lead to fewer security flaws. There is no compelling evidence supporting this argument for either open or closed source projects. Therefore, pick the solution that best meets your specific requirements.
“First, you have to ensure that the people looking at the code are properly trained to look for security loopholes.”
A point I too have made. If no one is looking for errors/issues, then why would you assume people would just magically stumble upon them? Most glaringly obvious security flaws are caught because they are obvious, but the little ones slip through the cracks precisely because they are slippery.
I agree with you 100%. This argument applies equally to open source and proprietary.