160

My computing teacher told us that closed source software is more secure than open source software, because with open source "anyone can modify it and put stuff in." This is why they do not want to use open source alternatives, such as FreePascal, for teaching programming (we currently use Embarcadero Delphi, which is slow and buggy).

I think this is completely wrong. For example, Linux seems to be considerably more resilient to exploits than Windows, although that could just be down to popularity/market share.

What studies have been performed that show whether open source or closed source software is better in terms of security?

Oddthinking
Thomas O
  • 107
    When will people learn that security through obscurity on its own is a deeply flawed concept... – Ardesco May 20 '11 at 10:45
  • 1
    @Ardesco: Probably the first time they get pwned. –  May 20 '11 at 12:49
  • 59
    I think the very fact that they are teaching in Pascal shows just how knowledgeable and current the teacher is, to say nothing of the luddite attitude towards open source. – Dov May 20 '11 at 14:05
  • 2
    Do any businesses still use pascal? @Thomas I think you'd be better off learning a more current language in your own time (I'm biased towards Python). As for closed vs open source, if I write a piece of closed source software you don't know who I let put what in it, you just have to hope I'm sensible about it. – Stephen Paulger May 20 '11 at 14:23
  • 29
    Open source is often more secure because anyone can change it. It means anyone can discover and fix bugs. So your teacher is completely wrong. – Lennart Regebro May 20 '11 at 14:36
  • 31
    ask your teacher what he/she has to say about Apache vs IIS servers, where Apache has a greater market share yet more successful exploits are carried out against IIS (Microsoft's); if your teacher doesn't know about that, he/she has very little knowledge and is only cramming from the textbook. – Lincity May 20 '11 at 14:51
  • @Stephen I agree 100% as I am very skilled in Python. :) It is a wonderful language to program in. One major business which still uses Delphi would be Altium (EDA software), but that's probably not due to choice - they built up a large code base in it and are pretty much stuck with it! We are forced to use Delphi because the exam board (AQA) says so. Even though Python is an option in A2 (next year), we can't do it this year. Ugh. – Thomas O May 20 '11 at 14:55
  • Less of interest than whether something is open or closed source is the incentives the producer has to identify and remedy security issues. The variables are: Open & free, commercial & closed. What are the incentives and motivations of developers for Debian (open, free), Red Hat (open, commercial), Shareware shops (closed, free) and Microsoft (closed, commercial)? – Brian M. Hunt May 20 '11 at 15:12
  • 1
    @Thomas O, at least you got to do Delphi in your AS year; we were stuck on Turbo Pascal (DOS based). In A2 we were allowed Delphi and only Delphi. Though that was 8-9 years ago... Making me feel old ;) – thing2k May 20 '11 at 15:51
  • @Brian M. Hunt but when the code is open source then there are many volunteers who fix the code and anyone can commit a fix – Lincity May 20 '11 at 16:22
  • 53
    "Anybody can put stuff in it". Ahhh, so that's why every other day I log into my OS, the welcome message gets changed to "dicks lol" or "n00bz". – Lagerbaer May 20 '11 at 16:52
  • 2
    @Dov, to be fair, Pascal was invented as a language to teach programming. The problem came later when all those students thought it was a real language and tried to use it for real tasks. Of course, too few CS classes emphasize that detail. – RBerteig May 21 '11 at 01:37
  • 10
    @Alaukik Maybe his teacher also likes the IE over Firefox because of its better security. – Mateen Ulhaq May 21 '11 at 02:26
  • 2
    @muntoo, Actually, you're right. Well, I installed Chrome on the machines because it doesn't require admin access. When we were browsing a specific web page (made using Flash - Kerboodle), we were told it was incompatible and a security risk, even though it worked twice as well as IE, which is slow and buggy. Now they've blocked Chrome. :( – Thomas O May 21 '11 at 07:01
  • 5
    I'd say it is more likely that "stuff" is in closed-source software. For example, the Interbase back door fiasco: http://www.theregister.co.uk/2001/01/12/borland_interbase_backdoor_exposed/ – starblue May 21 '11 at 12:21
  • 5
    "Anyone can modify it" doesn't mean anybody does so. Modifying it, and bringing your changes upstream is a more difficult issue. Or to spread "stuff". – user unknown May 21 '11 at 14:36
  • 3
    The attackers usually do not care too much about the source code - they have the compiled code and can use a variety of techniques for exploiting its weaknesses. This approach can be much faster than having to read and understand the source code. – Roman Zenka May 20 '11 at 14:56
  • 3
    @Dov, @StephenPaulger: People are still using Delphi quite extensively in industry. Skype, for example, was just acquired by Microsoft for $8.5 billion. Or if you live in the US, there's a good chance the TV station you watch is run by Delphi-based control software from WideOrbit, the industry leader by a fair margin. (Also my current employer, and we couldn't accomplish half the stuff we do without Delphi. Not easily at least.) A lot of companies prefer to keep quiet about Delphi because it provides such a productivity boost over other languages that they consider it a competitive advantage. – Mason Wheeler May 23 '11 at 22:36
  • 5
    [Schneier: Open-Source Software Feels Insecure](http://www.schneier.com/blog/archives/2011/06/open-source_sof.html) – Stephen Paulger Jun 03 '11 at 08:58
  • 4
    Instead of idly speculating, this question should be asked of the experts over at [security.se]. Actually, in fact it *has* been asked (albeit after this one here), with rational, analytical answers: http://security.stackexchange.com/questions/4441/open-source-vs-closed-source-systems – AviD Jun 19 '11 at 08:29
  • @M.Night, @muntoo, @Thomas, some of these comments, besides being needlessly argumentative, are ill-informed, or at the very least out of date. E.g. while it is true that *historically* IIS had a very bad security track record, currently it is considered far ahead of Apache, security-wise. Same thing with IE, bad history, currently solid. Then again, today it really depends more on environmental factors, such as the skill and mindset of the administrators, hardening of the OS, etc. – AviD Jun 19 '11 at 08:36
  • 4
    First of all: a secure system is safe against even the guy who designed the system. It is simply well designed and does not rely on obscurity, as others have noted. Second: the argument "anybody can put anything in it" is often made against Wikipedia articles, and most people in here should know why that argument does not hold up in reality for exactly the same reasons as with open source. – Tormod Jun 19 '11 at 19:01
  • The only issue I see with open source is 'support': you are dependent on forums, on yourself, or on buying a support subscription. Security is less of a concern for me; generally open source software is more secure, robust & bug-free than closed source. – Zo Has Dec 06 '13 at 09:24
  • I find this hard to believe. I would have thought that all programmers working in education would be for open source. – thomas-peter Feb 17 '14 at 13:40
  • A small note here: Linux isn't more resilient by definition, but it makes up such a small part of the OS ecosystem (in comparison to Windows) that it's just not cost-effective to write a non-targeted exploit for those systems. This type of claim is like Mac people saying Macs are impervious to malware; it's simply not true, they just are not relevant enough as a whole for someone to spend effort on them. – Leon Jul 24 '17 at 07:17
  • This cries out for a follow-up question - "What qualifications are needed in the relevant course material to become a computing teacher?" – PoloHoleSet Jul 27 '17 at 15:07

9 Answers

138

"Secure design, source code auditing, quality developers, design process, and other factors, all play into the security of a project, and none of these are directly related to a project being open or closed source."

Source : Open Source Versus Closed Source Security

Prince John Wesley
  • 40
    It is true that these things are not directly related to the license of a project. However, the tendency for open-source (OSS) projects is that as they get more popular the source code is reviewed a great deal more. The review ceiling for closed projects is naturally determined by the number of developers that can be hired. There is no such ceiling for an open source project - so an OSS project **may** be reviewed for bugs more than any closed project could be. Whether this happens depends on its popularity. – Adrian May 20 '11 at 13:20
  • 11
    Also not all users of an open source project are necessarily going to be developers, and even of those who are, only a small fraction of them may actually read its source code. So it's not just about popularity. – Robin Green May 20 '11 at 15:41
  • 4
    @Adrian - In OSS that's called Linus' Law: http://en.wikipedia.org/wiki/Linus%27_Law – Kit Sunde May 20 '11 at 15:52
  • 15
    I disagree with this answer. In cryptography, a "black box" security system can only be considered insecure. A cryptography system is only considered "safe" if the algorithm it uses is well-understood and proven to be unbreakable (for all intents and purposes, with modern or near-future tech). With OSS, it's possible to determine whether the system uses a secure algorithm, and whether a particular implementation of an algorithm is flawed. Being OSS or closed source doesn't change the actual current code, but it ensures that you know what you're getting, if you're able to do the analysis. – RMorrisey May 20 '11 at 21:52
  • 9
    @RMorrisey: If the NSA induces (through whatever means, hopefully ethically, such as hiring the person) the world's 100 foremost cryptography experts to analyze their system, it's as secure as is feasible, despite being hidden. Opening it up for public scrutiny will have negligible additional impact. Your comment assumes that the most brilliant cryptanalysts in the world contribute (solely) to open source projects, which is suspect. – Ben Voigt May 20 '11 at 22:49
  • 2
    @Ben Voigt: I was thinking mostly from the perspective of another company or entity using the tool. If the NSA were to sell their secure system to another country, who's to say they wouldn't monitor the traffic that's sent on it? There's no way of knowing except to take the coding entity's word for it, which is the point that I was trying to make. – RMorrisey May 20 '11 at 23:38
  • 1
    @RMorrisey, the cryptography example you give is a bad example, as crypto (and other subtle, niche topics) must be open for a different reason: Kerckhoffs's Law. Then again, even if you presented a completely open-source implementation of your special encryption algorithm, this would be *even worse*. As I said, crypto is different, and shouldn't be designed by programmers, only by expert cryptographers. – AviD Jun 19 '11 at 08:38
  • 1
    See also a better, reasoned analytical discussion at http://security.stackexchange.com/questions/4441/open-source-vs-closed-source-systems – AviD Jun 19 '11 at 08:42
  • 1
    @Adrian: "the tendency is for open-source (OSS) projects is that as they get more popular the source code is reviewed a great deal more" That's a common assumption, but I'd like to see some proof of it. The vast majority of people who use open source software don't understand the code or even look at it. – endolith Sep 27 '11 at 20:50
  • @BenVoigt just because they're the world's foremost experts does not make them the 'best' in their field, or more specifically does not mean they won't miss something they're trained to ignore. Throughout history we've seen novices trump professionals because they didn't know they couldn't do something. Open Source gives those people a chance to fluke it. – salmonmoose Feb 29 '12 at 22:54
  • -1 One of the top answers on this site, and it's just a single OPINION from a blog, taken out of context *(see the last paragraph of that link)*, and in disagreement with the opinion of most professionals *(see @RMorrisey's comment)*. This answer needs to be deleted or something... – BlueRaja - Danny Pflughoeft May 13 '13 at 23:21
  • 1
    @BlueRaja-DannyPflughoeft mods are investigating how to handle such a situation. Thank you for bringing this to our attention. – Larian LeQuella May 15 '13 at 02:01
  • @endolith The proof you need is referenced in [another answer on this page](http://skeptics.stackexchange.com/a/3452/5072). Quote: "OpenBSD source code is regularly and purposefully examined with the explicit intention of finding and fixing security holes (Payne, 1999), (Payne, 2000)." – JW. Oct 18 '13 at 09:28
  • And.... Heartbleed completely killed the credibility of Linus' Law – Ben Voigt Apr 24 '14 at 22:52
  • 1
    @LarianLeQuella: Has the mod team made a decision? Reading more closely this answer does not even answer the question: It implies that open/closed source does not matter but the core facts state only that open/closed source has nothing to do with other factors that do affect security. – Nobody moving away from SE Jul 16 '14 at 16:24
  • @Nobody thank you for bringing this to our attention. I haven't looked at it much since this is way outside my area of interest/expertise. I will bring it to the rest of the team. – Larian LeQuella Jul 17 '14 at 02:07
  • Even more: closed and open source programs face *the same* pressure from attacks, but open source ones have **many more** eyes looking at the code than any closed source tool – Alexey Vesnin Jan 29 '16 at 20:43
  • *«However, the tendency for open-source (OSS) projects is that as they get more popular the source code is reviewed a great deal more.»* This has been found to be a fallacy multiple times. For example, PHP is open source and very popular, yet has a terrible security record. Debian and its downstreams such as Ubuntu are greatly popular, the de facto server standard, yet that didn't stop them from sabotaging OpenSSL itself (https://www.debian.org/security/2008/dsa-1571) or sabotaging libraries using SSL by compiling against buggy GnuTLS instead of OpenSSL. – vartec Jul 24 '17 at 23:05
  • @RMorrisey Something doesn't become more or less secure from being a black box. You can't actually check and see if it's secure when it's a black box, of course, and if you don't _know_, assuming it is in fact not secure is a reasonable default, but obscurity doesn't prevent a system from being secure; it's just misguided to think of obscurity as a security feature on its own. – Cubic Jul 31 '17 at 13:17
  • @BenVoigt Your claim that the top 100 security experts would make something as secure as possible is dubious. Can you cite it? – Fax Apr 03 '20 at 15:22
  • @Fax: Actually it's true by construction. "Experts" are the ones who do make something as secure as possible. This shares a lot in common with a "No True Scotsman" fallacy, except for the fact that I'm the one controlling the counter-example scenario. In my scenario "foremost" means the experts are sorted according to the strength of the system they are capable of building. Thus the result may not be "as secure as theoretically possible" but it will be "more secure than any other group of comparable size could build". – Ben Voigt Apr 03 '20 at 15:41
  • @BenVoigt That assumes that one person can make something as secure as possible on their own (who needs code review, right?), and that taking one piece of software that is as secure as possible and joining it with another piece of software that is as secure as possible doesn't produce something that is less than as secure as possible. It also assumes that 100 experts collectively have sufficient knowledge to build every component of a given piece of software. I think you'll find that human interaction comes into play at some point, and at that point it's no longer constructive truth. – Fax Apr 03 '20 at 15:52
  • @Fax: Actually in my example, the 100 experts were responsible for one portion of quality control (find practical cryptographic weaknesses) only. There could have been any number of developers. But if you find a different group of 100 that does a "better" job than my group, I say that the word "foremost" applies to your group, thus those are the ones I had in my example all along. Note that I'm not making claims about effectiveness of the cryptanalysts the NSA actually uses, I've contrived a counterexample to the preceding claim – Ben Voigt Apr 03 '20 at 15:56
  • @BenVoigt Why would I have to select a group of 100 people? A project that is restricted to 100 people can't really be considered open source. The question is whether the group of 100 top experts can produce code at least as secure as the 7.8B not in that group. – Fax Apr 03 '20 at 16:42
  • @Fax: Now you need millions of people just to sort through the feedback coming from the 7.8B people. No, realistically what is going to happen is that open source projects ignore most of the 7.8B potential reviewers. And you're still confusing what my group of 100 analysts is doing. They are not producing the code, they are reviewing the design (partially based on code but augmented with other documentation) for flaws. – Ben Voigt Apr 03 '20 at 17:13
  • Let us [continue this discussion in chat](https://chat.stackexchange.com/rooms/106294/discussion-between-fax-and-ben-voigt). – Fax Apr 03 '20 at 17:13
70

Software being open source doesn't mean anyone can change it (often anyone can fork it, but that produces new, derived software): only designated people (committers) have write access to the repository. For example, if I want to submit a change to Tortoise SVN, I have to mail my change to a dedicated mailing list; developers will then see it, review it, and commit it to the codebase.[1][2]


Still, anyone can read the sources. That's not a big deal either. Look at contemporary cryptography: algorithms are public, researched, and tested by numerous people. How can they be used for protecting data? They use small pieces of secret data (encryption keys) to parameterize the algorithm. Everyone knows the algorithm, but only the people who need to know the secret keys do, and the algorithms are still used successfully for data protection.
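
As a minimal sketch of that idea (a public algorithm parameterized by a secret key), here is a keyed-hash example using Perl's core Digest::SHA module; HMAC-SHA-256 stands in for a cipher here, and the key value is just a placeholder:

#!/usr/bin/perl
# Kerckhoffs's idea in miniature: the algorithm (HMAC-SHA-256) is public and
# heavily analyzed; only the key is secret. Reading this code does not help an
# attacker forge a valid tag without the key. The key below is a placeholder.
use strict;
use warnings;
use Digest::SHA qw(hmac_sha256_hex);

my $secret_key = 'replace-with-a-long-random-key';     # the only secret part
my $message    = 'transfer 100 credits to account 42';

my $tag = hmac_sha256_hex($message, $secret_key);
print "message: $message\n";
print "tag:     $tag\n";

# The receiver, who shares the key, recomputes the tag and compares.
print "verified\n" if hmac_sha256_hex($message, $secret_key) eq $tag;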


That said, software being open source and software being secure (or reliable) are completely independent properties; comparing them is comparing apples and oranges. Yes, open source software can be buggy. So can closed source software. What matters is how the development process is organized, not whether you disclose the sources.


References:

[1] Submit patches (submit enough and you can become a committer!)

[2] (Slightly modified) Technically, a committer is someone who has write access to the SVN repository. A committer can submit his or her own patches or patches from others.

Sklivvz
sharptooth
  • 5
    Skeptics requires references for all answers. Answers without references are only speculation, not fact. See the [FAQs](http://skeptics.stackexchange.com/faq). Thanks! – Kevin Peno May 20 '11 at 18:49
  • 1
    So sharptooth should write an article, then get a friend to cite that article. – Phil May 21 '11 at 03:40
  • @Kevin Do you mean the "forking"? Or the cryptography "algorithms are public and researched and tested by numerous people"? – Mateen Ulhaq May 21 '11 at 08:36
  • @Kevin Peno: are you saying he needs references just because sharptooth isn't citing any? The only opinions in his post are "That's not a big deal either" and the last paragraph. The rest are well-known facts. – Lie Ryan May 21 '11 at 08:59
  • 1
    @Lie, well know eh? Then there must be studies to cite then? – Kevin Peno May 21 '11 at 09:13
  • 11
    @Kevin Peno: You can verify these as easily as observing that there are 24 hours in a day: "Software being open source doesn't mean anyone can change it only dedicated people have access to the repository", "if I want to submit a change to Tortoise SVN I have to mail my change to a dedicated mail list and then developers will see it, review it, and commit it to the codebase", "Still, anyone can read the sources", "contemporary cryptography ... Algorithms are public and researched and tested by numerous people", "They use small portions of .... encryption keys ... to parameterize the algorithm" – Lie Ryan May 21 '11 at 09:20
  • @Kevin I'm not sure what you mean by "studies"... In any case, that's basic stuff every programmer/[non-]security researcher should know. I suppose we could link to some random Open Source project about the forking, but that's just ... well, you know. – Mateen Ulhaq May 21 '11 at 09:38
  • 2
    @Kevin Peno: do you even know the difference between facts and opinions? Go to Tortoise SVN repository and see if you can gain write access without going through their code review system. This proves fact 1) and 2). "anyone can read the sources" is the definition of open source. "AES" are public and researched and tested by numerous people. "AES" uses encryption key to parameterize the algorithm. – Lie Ryan May 21 '11 at 09:52
  • 3
    @Kevin Peno. You don't need to cite sources for well-known facts. @Lie Ryan "anyone can read the sources" is **not** the definition of Open Source. The definition is given here: http://programmers.stackexchange.com/questions/21907 (the subtly different definition of Free Software is also given there). – TRiG May 22 '11 at 00:59
  • "It's how the development process is organized, not whether you disclose the sources." - But it's also how the release process (and testing, and reputation, etc.) is organized. – ChrisW Jun 20 '11 at 01:18
49

I'm not going to answer this question myself. The United States Department of Defense has done it much better than I could.

Q: Doesn't hiding source code automatically make software more secure?

No. Indeed, vulnerability databases such as CVE make it clear that merely hiding source code does not counter attacks:

  • Dynamic attacks (e.g., generating input patterns to probe for vulnerabilities and then sending that data to the program to execute) don’t need source or binary. Observing the output from inputs is often sufficient for attack.

  • Static attacks (e.g., analyzing the code instead of its execution) can use pattern-matches against binaries - source code is not needed for them either.

  • Even if source code is necessary (e.g., for source code analyzers), adequate source code can often be regenerated by disassemblers and decompilers sufficiently to search for vulnerabilities. Such source code may not be adequate to cost-effectively maintain the software, but attackers need not maintain software.

  • Even when the original source is necessary for in-depth analysis, making source code available to the public significantly aids defenders and not just attackers. Continuous and broad peer-review, enabled by publicly available source code, improves software reliability and security through the identification and elimination of defects that might otherwise go unrecognized by the core development team. Conversely, where source code is hidden from the public, attackers can attack the software anyway as described above. In addition, an attacker can often acquire the original source code from suppliers anyway (either because the supplier voluntarily provides it, or via attacks against the supplier); in such cases, if only the attacker has the source code, the attacker ends up with another advantage.

Hiding source code does inhibit the ability of third parties to respond to vulnerabilities (because changing software is more difficult without the source code), but this is obviously not a security advantage. In general, “Security by Obscurity” is widely denigrated.

This does not mean that the DoD will reject using proprietary COTS products. There are valid business reasons, unrelated to security, that may lead a commercial company selling proprietary software to choose to hide source code (e.g., to reduce the risk of copyright infringement or the revelation of trade secrets). What it does mean, however, is that the DoD will not reject consideration of a COTS product merely because it is OSS. Some OSS is very secure, while others are not; some proprietary software is very secure, while others are not. Each product must be examined on its own merits.
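
To make the "dynamic attacks" bullet above concrete, here is a minimal black-box fuzzing sketch: it throws random bytes at a program and watches how it exits, without ever touching source code or a disassembler. The ./target path is only a placeholder for whatever binary is being probed.

#!/usr/bin/perl
# Minimal black-box fuzzing sketch: probe a program with random input and
# watch how it dies. No source code (or even a disassembler) is involved.
# "./target" is a placeholder for the binary under test.
use strict;
use warnings;

my $target = './target';
$SIG{PIPE} = 'IGNORE';                # the target may exit before reading everything

for my $trial (1 .. 1000) {
    my $len   = 1 + int rand 512;
    my $input = join '', map { chr int rand 256 } 1 .. $len;

    open my $pipe, '|-', $target or die "cannot run $target: $!";
    print {$pipe} $input;
    close $pipe;                      # $? now holds the child's exit status

    if (my $signal = $? & 127) {      # the child was killed by a signal (e.g. SIGSEGV)
        warn "trial $trial: target crashed with signal $signal\n";
    }
}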

Edit to add: There's an answer to the malicious code insertion question, too:

Q: Is there a risk of malicious code becoming embedded into OSS?

The use of any commercially-available software, be it proprietary or OSS, creates the risk of executing malicious code embedded in the software. Even if a commercial program did not originally have vulnerabilities, both proprietary and OSS program binaries can be modified (e.g., with a "hex editor" or virus) so that it includes malicious code. It may be illegal to modify proprietary software, but that will normally not slow an attacker. Thankfully, there are ways to reduce the risk of executing malicious code when using commercial software (both proprietary and OSS). It is impossible to completely eliminate all risks; instead, focus on reducing risks to acceptable levels.

The use of software with a proprietary license provides absolutely no guarantee that the software is free of malicious code. Indeed, many people have released proprietary code that is malicious. What's more, proprietary software release practices make it more difficult to be confident that the software does not include malicious code. Such software does not normally undergo widespread public review, indeed, the source code is typically not provided to the public and there are often license clauses that attempt to inhibit review further (e.g., forbidding reverse engineering and/or forbidding the public disclosure of analysis results). Thus, to reduce the risk of executing malicious code, potential users should consider the reputation of the supplier and the experience of other users, prefer software with a large number of users, and ensure that they get the "real" software and not an imitator. Where it is important, examining the security posture of the supplier (e.g., their processes that reduce risk) and scanning/testing/evaluating the software may also be wise.

Similarly, OSS (as well as proprietary software) may indeed have malicious code embedded in it. However, such malicious code cannot be directly inserted by "just anyone" into a well-established OSS project. As noted above, OSS projects have a "trusted repository" that only certain developers (the "trusted developers") can directly modify. In addition, since the source code is publicly released, anyone can review it, including for the possibility of malicious code. The public release also makes it easy to have copies of versions in many places, and to compare those versions, making it easy for many people to review changes. Many perceive this openness as an advantage for OSS, since OSS better meets Saltzer & Schroeder's "Open design principle" ("the protection mechanism must not depend on attacker ignorance"). This is not merely theoretical; in 2003 the Linux kernel development process resisted an attack. Similarly, SourceForge/Apache (in 2001) and Debian (in 2003) countered external attacks.

As with proprietary software, to reduce the risk of executing malicious code, potential users should consider the reputation of the supplier (the OSS project) and the experience of other users, prefer software with a large number of users, and ensure that they get the "real" software and not an imitator (e.g., from the main project site or a trusted distributor). Where it is important, examining the security posture of the supplier (the OSS project) and scanning/testing/evaluating the software may also be wise. The example of Borland's InterBase/Firebird is instructive. For at least 7 years, Borland's Interbase (a proprietary database program) had embedded in it a "back door"; the username "politically", password "correct", would immediately give the requestor complete control over the database, a fact unknown to its users. Whether or not this was intentional, it certainly had the same form as a malicious back door. When the program was released as OSS, within 5 months this vulnerability was found and fixed. This shows that proprietary software can include functionality that could be described as malicious, yet remain unfixed - and that at least in some cases OSS is reviewed and fixed.

Note that merely being developed for the government is no guarantee that there is no malicious embedded code. Such developers need not be cleared, for example. Requiring that all developers be cleared first can reduce certain risks (at substantial costs), where necessary, but even then there is no guarantee.

Note that most commercial software is not intended to be used where the impact of any error of any kind is extremely high (e.g., a large number of lives are likely to be immediately lost if even the slightest software error occurs). Software that meets very high reliability/security requirements, aka "high assurance" software, must be specially designed to meet such requirements. Most commercial software (including OSS) is not designed for such purposes.

Bacon Bits
  • of course there are things that, when exposed, create a security risk. Think of encryption algorithms. If an enemy knows what encryption algorithm you're using (and which specific implementation, with its flaws), intercepting and decrypting your data becomes potentially that much easier. Of course that's not software security as such but data and data transport security, but it's part of the entire landscape you need to address. – jwenting Jun 20 '11 at 07:52
  • 9
    @jwenting Except there are no modern, accepted crypto standards that do not take [Kerckhoffs' Principle](http://en.wikipedia.org/wiki/Kerckhoffs%27_principle) into account. They all assume the system is known. One could argue for NTLM password hashes, but *that's closed source*. There are algorithms (such as AES) which have [perfect forward secrecy](http://en.wikipedia.org/wiki/Perfect_forward_secrecy), which means that even if you know the system (algorithm) and *even if you know the private key*, once the initial session key is generated and passed you're still back at brute force decryption. – Bacon Bits Jun 21 '11 at 17:44
38

Back in 2002, Payne conducted a study comparing three similar Unix-like operating systems, one of which was closed-source (Solaris) and two of which were open-source (Debian and OpenBSD) across a number of security metrics. He concludes:

The results show that, of the three systems, OpenBSD had the most number of security features (18) with Debian second (15) and Solaris third (11). Of these features, OpenBSD's features rated highest scoring 7.03 out of 10 while Debian's scored 6.42 and Solaris’ scored 5.92. A similar pattern was observed for the vulnerabilities with OpenBSD having the fewest (5).
...
Based on these results it would appear that open source systems tend to be more secure, however, ... in scoring 10.2, OpenBSD was the only system of the three to receive a positive score and, a comparison with the magnitudes of the other two scores suggests this is a relatively high score also. Therefore, the significant differences between Debian and OpenBSD's score support the argument that making a program ‘open source’ does not, by itself, automatically improve the security of the program (Levy, 2000), (Viega, 2000). What, therefore, accounts for the dramatically better security exhibited by the OpenBSD system over the other two? The author believes that the answer to this question lies in the fact that, while the source code for the Debian system is available for anyone who cares to examine it, the OpenBSD source code is regularly and purposefully examined with the explicit intention of finding and fixing security holes (Payne, 1999), (Payne, 2000). Thus it is this auditing work, rather than simply the general availability of source code, that is responsible for OpenBSD's low number of security problems.

Edit: To summarize, Payne explains his results by claiming that it is the culture of security itself that promotes actual security. While that is likely true, I think it is also important to note that, with all else being equal, the general public can't independently audit that which is not open.

That study is a bit dated and of limited breadth, though.

I tried looking for a more comprehensive study, but I couldn't really find anything substantive (there are many "opinion pieces" giving arguments as to why open source is better, but not much data). Therefore, I took a quick look at the National Vulnerability Database, which collects, rates, and posts software vulnerabilities. It has a database dating back to the 1980s. I quickly hacked together this Perl script to parse the database:

#!/usr/bin/perl -w
use Cwd 'abs_path';
use File::Basename;
use XML::Parser;
my @csseverity;my @osseverity;my @bothseverity;
my $numNeither = 0;
sub mean {
    return 0 if (@_ <= 0);
    my $result = 0;
    $result += $_ foreach @_;
    return $result / @_;
}
sub stddev {
    my $mean = mean(@_);
    my @elem_squared = map { $_ ** 2 } @_;
    return sqrt( mean(@elem_squared) - ($mean ** 2) );
}
sub handle_start {
    if($_[1] eq "entry") {
        $item = {};
        undef($next) if(defined($next));
        for(my $i=2; $i<@_; $i++) {
            if(!defined($key)) {
                $key = $_[$i];
            } else {
                $item->{$key} = $_[$i];
                undef($key);
            }
        }
    } elsif(defined($item)) {
        $next = $_[1];
    }
}
sub handle_end {
    if($_[1] eq "entry") {
        if(!exists($item->{'reject'}) || $item->{'reject'} != 1) {
            my $score = $item->{'CVSS_score'};
            my $d = $item->{"descript"};
            my $isOS = 0;
            my $isCS = 0;
            $isOS = 1 if($d =~ m/(^|\W)(linux|nfs|openssl|(net|open|free)?bsd|netscape|red hat|lynx|apache|mozilla|perl|x windowing|xlock|php|w(u|f)-?ftpd|sendmail|ghostscript|gnu|slackware|postfix|vim|bind|kde|mysql|squirrelmail|ssh-agent|formmail|sshd|suse|hsftp|xfree86|Mutt|mpg321|cups|tightvnc|pam|bugzilla|mediawiki|tor|piwiki|ruby|chromium|open source)(\W|$)/i);
            $isCS = 1 if($d =~ m/(^|\W)(windows|tooltalk|solaris|sun|microsoft|apple|macintosh|sybergen|mac\s*os|mcafee|irix|iis|sgi|internet explorer|ntmail|sco|cisco(secure)?|aix|samba|sunos|novell|dell|netware|outlook|hp(-?ux)?|iplanet|flash|aol instant|aim|digital|compaq|tru64|wingate|activex|ichat|remote access service|qnx|mantis|veritas|chrome|3com|vax|vms|alcatel|xeneo|msql|unixware|symantec|oracle|realone|real\s*networks|realserver|realmedia|ibm|websphere|coldfusion|dg\/ux|synaesthesia|helix|check point|proofpoint|martinicreations|webfort|vmware)(\W|$)/i);
            if($isOS && $isCS) {
                push(@bothseverity, $score);
            } elsif($isOS) {
                push(@osseverity, $score);
            } elsif($isCS) {
                push(@csseverity, $score);
            } else {
                $numNeither++;
                #print $d . "\n";
            }
        }
        undef($item);
    }
}
sub handle_char {
    $item->{$next} = $_[1] if(defined($item) && defined($next));
    undef($next) if(defined($next));
}
my($scriptfile, $scriptdir) = fileparse(abs_path($0));
sub process_year {
    my $filename = 'nvdcve-' . $_[0] . '.xml';
    system("cd $scriptdir ; wget http://nvd.nist.gov/download/" . $filename) unless(-e $scriptdir . $filename);
    $p = new XML::Parser(Handlers => {Start => \&handle_start,
                                      End   => \&handle_end,
                                      Char  => \&handle_char});
    $p->parsefile($filename);
}
my($sec,$min,$hour,$mday,$mon,$currentyear,$wday,$yday,$isdst) = localtime(time);
$currentyear += 1900;
for(my $year=2002; $year<=$currentyear; $year++) {
    &process_year($year);
}
print "Total vulnerabilities: " . (@osseverity + @csseverity + @bothseverity + $numNeither) . "\n";
print "\t  # Open Source (OS): " . @osseverity . "\n";
print "\t# Closed Source (OS): " . @csseverity . "\n";
print "\t              # Both: " . @bothseverity . "\n";
print "\t      # Unclassified: " . $numNeither . "\n";
print "OS Severity: " . &mean(@osseverity) . "\t" . &stddev(@osseverity) . "\n";
print "CS Severity: " . &mean(@csseverity) . "\t" . &stddev(@csseverity) . "\n";
print "Both Severity: " . &mean(@bothseverity) . "\t" . &stddev(@bothseverity) . "\n";

Feel free to modify the code, if you'd like. Here are the results:

The full database has 46102 vulnerabilities. My script was able to classify 15748 of them as specifically related to open source software, 11430 were related to closed source software, 782 were applicable to both closed source and open source software, and 18142 were unclassified (I didn't have time to optimize my classifier very much; feel free to improve it). Among the vulnerabilities that were classified, the open source ones had an average severity of 6.24 with a standard deviation of 1.74 (a higher severity is worse). The closed source vulnerabilities had an average severity of 6.65 (stddev = 2.21). The vulnerabilities that were classified as both had an average severity of 6.47 (stddev = 2.13). This may not be a completely fair comparison, though, since open source software has become much more popular in recent years. If I restrict the results to the years 2003 to the present, we get:

  • Total vulnerabilities: 39445
  • # Open Source (OS): 14595
  • # Closed Source (CS): 9293
  • # Both: 675
  • # Unclassified: 14882
  • Avg. OS Severity: 6.25 (stddev 1.70)
  • Avg. CS Severity: 6.79 (stddev 2.24)
  • Both Severity: 6.52 (stddev 2.15)

I haven't had time to do any rigorous statistical analysis on these results; however, it does look like, on average, the vulnerabilities affecting open source software have a slightly lower severity rating than vulnerabilities affecting closed source software.
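
As a rough sanity check (not a substitute for a proper analysis), Welch's t-statistic can be computed directly from the 2003-onward means, standard deviations, and counts listed above. CVSS scores are bounded and the keyword classifier is noisy, so treat the result as suggestive at best:

#!/usr/bin/perl
# Welch's t-statistic from the 2003-onward summary figures above.
use strict;
use warnings;

my ($n_os, $mean_os, $sd_os) = (14595, 6.25, 1.70);   # open source
my ($n_cs, $mean_cs, $sd_cs) = ( 9293, 6.79, 2.24);   # closed source

my $std_err = sqrt( $sd_os**2 / $n_os + $sd_cs**2 / $n_cs );
my $t       = ($mean_os - $mean_cs) / $std_err;

printf "difference in mean severity: %.2f\n", $mean_os - $mean_cs;
printf "Welch's t-statistic:         %.1f\n", $t;
# |t| comes out around 20 on tens of thousands of samples, so the (small) gap
# in average severity is very unlikely to be chance, which still says nothing
# about whether any individual program is safe.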

When I get some more time, I'll try and generate a graph of the running average of severity over time.

ESultanik
  • 24
    Oh no! I can't run that script now. You've made it open-source! :-) – Oddthinking May 20 '11 at 16:39
  • 9
    One thing to note about the *number* of open source vulnerabilities is that security-minded users will notify developers, and more bugs/security holes get recorded. –  May 20 '11 at 17:25
  • 3
    Your summary of Payne's work is exactly the opposite of his conclusion which you cited. To paraphrase Payne using your verbiage: "That open-source software is freely available allows anyone who is interested to conduct a security audit of the code cannot account for OpenBSD's relatively high security rating, because Debian also is freely available for auditing. The most plausible explanation is that the OpenBSD culture is security-focused and actually performs such audits. Available for auditing != audited!" – Ben Voigt May 20 '11 at 19:18
  • @Ben: you're correct; I was trying to emphasize the fact that such a culture is really only possible in an open source environment. – ESultanik May 20 '11 at 19:39
  • 5
    @ESultanik: That doesn't follow. I offer the NSA as an example of a very closed organization that also has a security-centric culture. – Ben Voigt May 20 '11 at 19:55
  • @Ben: True, but if all else is equal in terms of culture, the general public can't independently audit that which is not open. – ESultanik May 20 '11 at 23:59
  • Just tracking documented security incidents doesn't tell the whole story. We really need to know the rate at which undiscovered (or worse, undisclosed) vulnerabilities are in each kind of project, as well as the time it takes each organization to fix vulnerabilities once disclosed. – RBerteig May 21 '11 at 01:41
  • My suspicion is that time to mitigation or fix is going to be all over the map for both kinds. You have abandoned software on both sides, and popular tools should have market pressure in favor of fast mitigation regardless of whether they are open or closed source. – RBerteig May 21 '11 at 01:44
  • To be precise, it's under [CC](http://creativecommons.org/licenses/by-sa/3.0/). – Mateen Ulhaq May 21 '11 at 02:43
  • @Ben: I edited my summary to more accurately convey Payne's conclusions. – ESultanik May 21 '11 at 12:19
  • Note that in 2002, Sun was beginning their long slide into hell. So you might conclude, rightly, that it's bad to buy an OS from a company that does not have the profit margin to maintain it. Of course, the reason they were in decline was the availability of the free operating systems, but my point is that Solaris was already being orphaned back then, so it is perhaps not the greatest point of comparison. – Dov May 23 '11 at 14:22
33

My computing teacher told us that closed source software is more secure than open source software, because with open source "anyone can modify it and put stuff in."

Your teacher is flat wrong. The correct statement is:

anyone can fork it, and put stuff in their fork.

Open source means that anyone can read the source code corresponding to the distributed binary. Usually it also means that anyone can read from the master repository where development occurs, in order to test new unreleased changes. FreePascal follows this general pattern: "As an alternative to the daily zip files of the SVN sources, the SVN repository has been made accessible for everyone, with read-only access."

It does not require that the general public can write to the master repository; in fact, write access limited to trusted project members is the general rule. In some cases, the repository accepts patches from anyone, but they are quarantined in separate branches until a trusted member merges the change into the master (trunk) codebase. It appears that FreePascal follows this latter model: you need only a free account to upload patches, but they won't be integrated into the mainline without review.

Ask your teacher to back up their words with actions: you have FreePascal installed on your computer, so if he thinks it's "insecure", ask him to "modify it and put in" an insulting message that appears the next time you run it. It won't happen; there's a huge chasm between the modified copy in his home directory and the version you download and compile on your computer.


Your final sentence, asking for studies performing statistical comparison of open-source vs closed-source, shows that you've adopted one of your teacher's bad practices: the fallacy of applying the law of averages to an individual.

I submit that the utility to you of drawing software from a category which is more secure on average is essentially nil. You should be interested instead in programs which are individually and specifically highly secure, no matter what characteristics they share with insecure software.

Sklivvz
Ben Voigt
  • 4
    Skeptics requires references for all answers. Answers without references are only speculation, not fact. See the [FAQs](http://skeptics.stackexchange.com/faq). Thanks! – Kevin Peno May 20 '11 at 18:48
  • 6
    @Kevin: Your reference doesn't substantiate your claim. The FAQ you linked doesn't even contain the string "refer". Also, before demanding references from someone (e.g. sharptooth above), you might want to first verify that he isn't a recognized expert in the field, whose word is as good as any work you could reference. Furthermore, an argument is only as valid as its premises, showing that the assumed premise is disputed is a valid (under formal logic) means of invalidating an argument. – Ben Voigt May 20 '11 at 19:14
  • 1
    @Ben, the point of this site is to bring fact to a question of speculation. Thus, by responding without references cited, even your own thesis if you are an expert in your field, you are only increasing speculation. If speculation and non-facts weren't a problem, we wouldn't be skeptical. I'd suggest you look around on this site a bit more. `Skeptics is about applying skepticism — it's for researching the evidence behind the claims you hear or read`. So I ask you sir, where is your evidence? – Kevin Peno May 20 '11 at 19:19
  • @Kevin: Your failure to substantiate the claim in your comments simply exposes you as a hypocrite. Nevertheless, I will add a citation. – Ben Voigt May 20 '11 at 19:24
  • 1
    @Ben, the blockquote is from the FAQ page I previously mentioned. I could also link to 1000s of answers on this site where a moderator has said exactly what I have said. Would you like me to litter your comments further? – Kevin Peno May 20 '11 at 19:25
  • 8
    @Kevin: No, I would prefer that you demonstrate good answering style by writing some answers. Let the real contributors to the site police it. Oh, and skepticism is also about knowing basic principles of logic such as identification of assumptions. – Ben Voigt May 20 '11 at 19:31
  • 2
    @Ben, [here you go](http://meta.skeptics.stackexchange.com/questions/5/must-all-claims-be-referenced). Just because I don't have a good way to answer the questions posed on this site doesn't mean that I am not a contributor. Additionally, as a user, I want to see reliable answers so that my own skepticism can be satisfied. – Kevin Peno May 20 '11 at 19:36
  • 1
    [Reference to the FAQ](http://meta.skeptics.stackexchange.com/questions/5/must-answers-be-referenced) (for clarity). – Sklivvz May 20 '11 at 22:25
  • 1
    @Ben Voigt: I think your answer would have been perfectly good without the references. Arguably @Kevin Peno is being overzealous; you don't really seem to be saying anything contentious, so it's not obvious why any of it needs to be backed up by authorities. Apart from OP's daffy teacher, who really thinks that anyone can easily insert malicious code into OSS, for example? I'm excluding people who know nothing of code, OSS, etc., since they're irrelevant here. But rules is rules, as they say, so I'm glad you complied. May you post many more useful answers! – FumbleFingers May 21 '11 at 01:12
  • @Fumble: I'm assuming you didn't click through to my "source"? – Ben Voigt May 21 '11 at 03:10
  • @Ben Voigt: hah hah - Actually I *did*, but I just ignored it without realising it was a joke. But I went thru the SVN & FreePascal links too - realising as I did that I'd been to both pages before. They were apposite, and I'm only sorry I didn't get the benefit of the first one until now. A sensible, robustely defended, and *amusing* Answer! What more could one ask? – FumbleFingers May 21 '11 at 03:24
  • @Fumble: One only needs to experience grant-writing politics to gain a healthy skepticism about the academic studies so often cited here. I'm a PhD student. Therefore I distrust academics and look for opportunities for humor. Glad you approve. – Ben Voigt May 21 '11 at 03:33
  • 2
    -1 Your second part doesn't answer the question. The question is whether an OSS model would produce more secure software than a closed model; naturally, if we pose such a question we would have to control for differences in population. Pointing out that different programs have different needs doesn't answer the question, it merely points out a supposed fallacy about something which should be controlled for in any research that has been done (of which you've cited none). – Kit Sunde May 21 '11 at 04:32
  • @Kit: I didn't say anything about different programs having different needs. I said some programs are more secure than others, and that aggregate measures are worthless when it comes to deciding whether to trust a particular piece of software. – Ben Voigt May 21 '11 at 12:55
  • This answer is quite bad. Ok so you showed that free pascal has some good practices, and what does that demonstrate exactly? Please bring facts to the table to support these two claims you are making (or implying): 1) that "anyone can fork it, and put stuff in their fork." (merely pointing to another answer as reference will get this deleted as NAA, please don't waste our time) and 2) That F/OSS *in general, or in large part* has better or equal security practices and controls than closed source. Otherwise your speculation is completely useless here. – Sklivvz Aug 17 '11 at 16:12
  • @Sklivvz: You're completely wrong. I neither claimed nor implied that F/OSS as a whole has better or equal security practices than closed source software has. Sourced facts aren't needed to invalidate an argument that contains a logical fallacy, only pure logic is. The argument being made is that "many F/OSS software applications are insecure, therefore FreePascal is insecure". This is a clear fallacy, as explained in my answer, and thus the fact that I counter the original assumption is just gravy. – Ben Voigt Aug 17 '11 at 17:31
  • Sourced fact are always needed and logic without facts is just speculation, not acceptable here. See: http://meta.skeptics.stackexchange.com/questions/1019/how-to-handle-answers-based-on-logic – Sklivvz Aug 17 '11 at 20:16
  • @Sklivvz: You don't know the least thing about logic then. For an argument to be sound, it requires (1) true premises and (2) a valid argument. Source: http://en.wikipedia.org/wiki/Soundness#Of_arguments By DeMorgan's Theorem, the argument is unsound if either (1) any premise is false or (2) the argument is fallacious. It therefore follows that IF the argument is fallacious, it is unsound, and I do not need to disprove the premise. I shouldn't have needed to state sources for simple rules of logical reasoning to a moderator. – Ben Voigt Aug 17 '11 at 21:20
  • @Sklivvz: Your linked discussion on meta doesn't apply here, because for this particular question, pure logic IS enough to prove the teacher's reasoning unsound. – Ben Voigt Aug 17 '11 at 21:27
  • Beyond that, I did provide sources that show the teacher's premises were also false, see the other links in my answer (and they've been there for quite some time). – Ben Voigt Aug 17 '11 at 21:31
  • @ben: the question is not about free pascal in particular, but open source in general. Avoid using that tone as well. I am not here to argue or to be insulted. Thanks. – Sklivvz Aug 17 '11 at 23:05
  • @BenVoigt let us [continue this discussion in chat](http://chat.stackexchange.com/rooms/1100/discussion-between-sklivvz-and-ben-voigt) – Sklivvz Aug 17 '11 at 23:05
  • @Sklivvz: I'm going to be on a plane most of today, so chat won't be possible. I agree that the question asks about open source in general, because the student has bought into the professor's fallacious argument that if open source is less secure on average, that alone is enough to justify blocking a particular open source application. I'm doing Thomas more of a service by showing the flaw in the argument, than by addressing the faulty premise, because critical thinking skills are far far more valuable in life than a single localized fact. – Ben Voigt Aug 18 '11 at 12:11
15

I think John provides the best answer when he says that many other factors can influence security. However, it is worthwhile to see how openness can affect security.

The earliest work in this direction was in 1883 by Auguste Kerckhoffs and is called Kerckhoffs's Principle. He argued that for any system to be secure:

A Cryptosystem should be secure even if everything about the system, except the key, is public knowledge.

An important interpretation from Art of Information Security:

Kerckhoffs’ Principle does not require that we publish or disclose how things work. It does require that the security of the system must not be negatively impacted by such a disclosure.

Most closed-source systems do not actually violate Kerckhoffs' principle, so open-source cannot be said to be inferior or superior to closed-source by this measure.

Two models are often contrasted with regard to software security: security through obscurity vs. security through disclosure/openness. The arguments for and against them are rehashed on Wikipedia.

Statistically, Linux suffers from a much lower rate of infection than Windows. This is usually attributed to the open-source model, but alternative explanations (such as lower market share) have also been proposed. Firefox also claims to have a lower number of open security exploits than Internet Explorer.

However, it should be noted that "more eyes less bugs" only works for popular open-source software, and may not hold for less popular or custom software.

apoorv020
  • 1
    "More eyes less bugs [sic]" (aka "Linus' Law") cannot apply to subtle security bugs - that would need to be ammended to "Given enough *trained and motivated* eyeballs, *most* bugs are shallow". – AviD Jun 19 '11 at 08:41
9

A cursory examination of the controversies in the weekly kernel sections on Linux Weekly News[1] shows just how hard it often is for extremely experienced developers with great reputations to get their code into reputable projects. If you're downloading from a project or distribution that has standards and enforces them on public mailing lists, you can make a more informed decision about the reliability and trustworthiness of the code than if you're buying proprietary software from companies with unknown processes whose development practices you cannot scrutinize. If you're downloading from Will's World of Warez, you're in trouble, regardless of the development model.

[1]: http://lwn.net/ Linux Weekly News. Weekly editions other than the latest are free to non-subscribers.

Chris
8

As I've noted in comments, a complete, reasoned analysis is presented at Open-source vs closed-source systems.

However, for the sake of argument, I will present a single example as evidence: the first real rootkit - and apparently the most widespread - was in a very popular open source package.

From Rootkit History (Wikipedia):

Ken Thompson of Bell Labs, one of the creators of Unix, subverted the C compiler in a Unix distribution and discussed the exploit in the lecture he gave upon receiving the Turing award in 1983. The modified compiler would detect attempts to compile the Unix "login" command and generate altered code that would accept not only the user's correct password, but an additional password known to the attacker. Additionally, the compiler would detect attempts to compile a new version of the compiler, and would insert the same exploits into the new compiler. A review of the source code for the "login" command or the updated compiler would not reveal any malicious code.

Reference: http://www.ece.cmu.edu/~ganger/712.fall02/papers/p761-thompson.pdf

In summary, Ken's own words from that paper:

The moral is obvious. You can't trust code that you did not totally create yourself. No amount of source-level verification or scrutiny will protect you from using untrusted code.

Open source would not help you here.
In fact, insisting on the inherent security of open source is irrelevant.
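
To see why even a full source review does not help against this trick, here is a deliberately toy sketch of the idea (in Perl rather than C, with the self-replicating step only described in a comment); all of the names are made up:

#!/usr/bin/perl
# Toy sketch of Thompson's "trusting trust" attack, greatly simplified.
# The backdoor lives in the build tool, not in the application source, so
# auditing the (open or closed) application source reveals nothing.
use strict;
use warnings;

sub evil_compile {
    my ($source) = @_;

    # When the tool sees the login routine, splice in an extra accepted password.
    if ($source =~ /sub\s+check_password/) {
        $source =~ s/return \$password eq \$stored;/return \$password eq \$stored || \$password eq 'backdoor';/;
    }

    # Thompson's real compiler also recognized its own source and re-inserted
    # this logic there, so the backdoor survived recompilation from clean source.
    return $source;
}

# Perfectly innocent-looking application source.
my $clean_source = <<'END';
sub check_password {
    my ($password, $stored) = @_;
    return $password eq $stored;
}
END

print evil_compile($clean_source);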

AviD
  • Interesting story; but logically there might be other reasons (other threats) why open or closed source software might be safer. – ChrisW Jun 20 '11 at 01:13
  • @ChrisW, as explained in the question I linked to on ITSec, there are only anecdotal reasons (at best), and really it comes down to religious beliefs being the only reason to find. Analytically, realistically, open or closed has no effect on the level of security. – AviD Jun 20 '11 at 06:39
  • The security.stackexchange question provides more arguments but no references: more reasons/claims why open or closed may be more or less secure, but no studies showing whether they, in fact, are. – ChrisW Jun 20 '11 at 13:55
  • @Chris, as I said, the answers there provide analysis, not spouting this or that type of gameable study. Furthermore, any such "study" would be subjective by nature. In *this* case, I believe sound analytical rationale to be more beneficial and reliable, for numerous reasons. Did you find any flaw in the analysis? Note that there are no arbitrary claims there, either - high school level logic, together with facts that are well known in the security industry, suffice. This even matches the requirements for this site too... – AviD Jun 20 '11 at 14:47
  • 1
    You can also extend this to the CPU level, really. If your CPUs are rootkitted, you are screwed :-/ – Sklivvz Aug 17 '11 at 16:25
7

Another point not already covered, but going in the same direction as most answers:

Even without the source, in many environments you can place a series of jumps at the beginning of an executable binary to go to a place where you have put your own little piece of compiled code, and then resume normal operation of the original code.

From Wikipedia:

The binary is then modified using the debugger or a hex editor in a manner that replaces a prior branching opcode with its complement or a NOP opcode so the key branch will either always execute a specific subroutine or skip over it. Almost all common software cracks are a variation of this type.

Of course, as this is what many viruses and cracked versions of commercial software do, it may be detected as suspicious by antivirus utilities or blocked because of checksum verifications by the code itself, the loader/linker, the OS, etc.
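
As a small sketch of that checksum defense: before trusting a binary, compare its SHA-256 digest against a reference value obtained through a channel the attacker cannot tamper with. The file name and expected digest below are placeholders.

#!/usr/bin/perl
# Integrity-check sketch: any patched-in jump or NOP changes the file's
# SHA-256 digest, so crude binary edits are easy to detect, provided the
# reference digest itself comes from a trustworthy channel.
use strict;
use warnings;
use Digest::SHA;

my $file     = 'program.exe';                              # placeholder
my $expected = 'paste-the-published-sha256-digest-here';   # placeholder

my $actual = Digest::SHA->new(256)->addfile($file, 'b')->hexdigest;

if ($actual eq lc $expected) {
    print "ok: $file matches the published digest\n";
} else {
    die "refusing to trust $file: digest mismatch ($actual)\n";
}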

ogerard