<HTML
><HEAD
><TITLE
>Is Open Source Good for Security?</TITLE
><META
NAME="GENERATOR"
CONTENT="Modular DocBook HTML Stylesheet Version 1.7"><LINK
REL="HOME"
TITLE="Secure Programming for Linux and Unix HOWTO"
HREF="index.html"><LINK
REL="UP"
TITLE="Background"
HREF="background.html"><LINK
REL="PREVIOUS"
TITLE="Why do Programmers Write Insecure Code?"
HREF="why-write-insecure.html"><LINK
REL="NEXT"
TITLE="Types of Secure Programs"
HREF="types-of-programs.html"></HEAD
><BODY
CLASS="SECT1"
BGCOLOR="#FFFFFF"
TEXT="#000000"
LINK="#0000FF"
VLINK="#840084"
ALINK="#0000FF"
><DIV
CLASS="NAVHEADER"
><TABLE
SUMMARY="Header navigation table"
WIDTH="100%"
BORDER="0"
CELLPADDING="0"
CELLSPACING="0"
><TR
><TH
COLSPAN="3"
ALIGN="center"
>Secure Programming for Linux and Unix HOWTO</TH
></TR
><TR
><TD
WIDTH="10%"
ALIGN="left"
VALIGN="bottom"
><A
HREF="why-write-insecure.html"
ACCESSKEY="P"
>Prev</A
></TD
><TD
WIDTH="80%"
ALIGN="center"
VALIGN="bottom"
>Chapter 2. Background</TD
><TD
WIDTH="10%"
ALIGN="right"
VALIGN="bottom"
><A
HREF="types-of-programs.html"
ACCESSKEY="N"
>Next</A
></TD
></TR
></TABLE
><HR
ALIGN="LEFT"
WIDTH="100%"></DIV
><DIV
CLASS="SECT1"
><H1
CLASS="SECT1"
><A
NAME="OPEN-SOURCE-SECURITY"
></A
>2.4. Is Open Source Good for Security?</H1
><P
>There's been a lot of debate by security practitioners
about the impact of open source approaches on security.
One of the key issues is that open source exposes the source code
to examination by everyone, both the attackers and defenders,
and reasonable people disagree about the ultimate impact of this situation.
(Note - you can get the latest version of this essay by going to the
main website for this book,
<A
HREF="http://www.dwheeler.com/secure-programs"
TARGET="_top"
>http://www.dwheeler.com/secure-programs</A
>.)</P
><DIV
CLASS="SECT2"
><H2
CLASS="SECT2"
><A
NAME="OPEN-SOURCE-SECURITY-EXPERTS"
></A
>2.4.1. View of Various Experts</H2
><P
>First, let's examine what security experts have to say.</P
><P
>Bruce Schneier is a well-known expert on computer security and cryptography.
He argues that smart engineers should ``demand
open source code for anything related to security'' [Schneier 1999],
and he also discusses some of the preconditions which must be met to make
open source software secure.
Vincent Rijmen, a developer of the winning Advanced Encryption Standard (AES)
encryption algorithm, believes that
the open source nature of Linux
provides a superior vehicle for making security vulnerabilities easier
to spot and fix, ``Not only because more people can look at it, but,
more importantly, because the model forces people to write more clear
code, and to adhere to standards. This in turn facilitates security review''
[Rijmen 2000].</P
><P
>Elias Levy (Aleph1) is the former moderator of one of the most
popular security discussion groups - Bugtraq.
He discusses some of the problems in making open source
software secure in his article
<A
HREF="http://www.securityfocus.com/commentary/19"
TARGET="_top"
>"Is Open Source
Really More Secure than Closed?"</A
>.
His summary is:
<A
NAME="AEN209"
></A
><BLOCKQUOTE
CLASS="BLOCKQUOTE"
><P
>So does all this mean Open Source Software is no better than closed
source software when it comes to security vulnerabilities? No. Open
Source Software certainly does have the potential to be more secure
than its closed source counterpart.
But make no mistake, simply being open source is no guarantee of security.</P
></BLOCKQUOTE
></P
><P
>Whitfield Diffie is the
co-inventor of public-key cryptography (the basis of all Internet security)
and chief security officer and senior staff engineer at Sun Microsystems.
In his 2003 article
<A
HREF="http://zdnet.com.com/2100-1107-980938.html"
TARGET="_top"
>Risky business: Keeping security a secret</A
>,
he argues that proprietary vendors' claims that their software
is more secure because it's secret are nonsense.
He identifies and then counters two main claims made by proprietary vendors:
(1) that release of code benefits attackers more than anyone else because
a lot of hostile eyes can also look at open-source code, and
(2) that a few expert eyes are better than several random ones.
He first notes that while giving programmers access to a piece of software
doesn't guarantee they will study it carefully,
there is a group of programmers who can be expected to care deeply:
Those who either use the software personally or work for an
enterprise that depends on it.
"In fact, auditing the programs on which an enterprise depends for
its own security is a natural function of the enterprise's own
information-security organization."
He then counters the second argument, noting that
"As for the notion that open source's usefulness to opponents
outweighs the advantages to users, that argument flies in
the face of one of the most important principles in security:
A secret that cannot be readily changed should be regarded as a vulnerability."
He closes noting that
<A
NAME="AEN213"
></A
><BLOCKQUOTE
CLASS="BLOCKQUOTE"
><P
>"It's simply unrealistic to depend on secrecy for security in
computer software.
You may be able to keep the exact workings of the program out of general
circulation, but can you prevent the code from being
reverse-engineered by serious opponents? Probably not."</P
></BLOCKQUOTE
>
&#13;</P
><P
>John Viega's article
<A
HREF="http://dev-opensourceit.earthweb.com/news/000526_security.html"
TARGET="_top"
>"The Myth of Open Source Security"</A
> also discusses
these issues, and summarizes things this way:
<A
NAME="AEN217"
></A
><BLOCKQUOTE
CLASS="BLOCKQUOTE"
><P
>Open source software projects can be more secure than closed
source projects. However, the very things that can make open
source programs secure -- the availability of the source code,
and the fact that large numbers of users are available to look
for and fix security holes -- can also lull people into a false
sense of security. </P
></BLOCKQUOTE
></P
><P
><A
HREF="http://www.linuxworld.com/linuxworld/lw-1998-11/lw-11-ramparts.html"
TARGET="_top"
>Michael H. Warfield's "Musings on open source security"</A
> is
very positive about the impact of open source software on security.
In contrast,
Fred Schneider doesn't believe that open source helps security, saying
``there is no reason to believe that the many eyes inspecting (open)
source code would be successful in identifying bugs that allow
system security to be compromised'' and claiming that
``bugs in the code are not the dominant means of attack'' [Schneider 2000].
He also claims that open source rules out control of the construction
process, though in practice there is such control - all major open source
programs have one or a few official versions with ``owners'' with
reputations at stake.
Peter G. Neumann discusses ``open-box'' software (in which source code
is available, possibly only under certain conditions), saying
``Will open-box software really improve system security?
My answer is not by itself, although the potential is considerable''
[Neumann 2000].
TruSecure Corporation, under sponsorship by Red Hat (an open source company),
has developed a paper on why they believe open source is more
effective for security [TruSecure 2001].
<A
HREF="http://www-106.ibm.com/developerworks/linux/library/l-oss.html?open&I=252,t=gr,p=SeclmpOS"
TARGET="_top"
>Natalie Walker Whitlock's IBM DeveloperWorks article</A
>
discusses the pros and cons as well.
Brian Witten, Carl Landwehr, and Michael Caloyannides [Witten 2001]
published in IEEE Software an article tentatively concluding that
having source code available should work in the favor of system security;
they note:
<A
NAME="AEN222"
></A
><BLOCKQUOTE
CLASS="BLOCKQUOTE"
><P
>``We can draw four additional conclusions from this discussion. First,
access to source code lets users improve system security -- if they have
the capability and resources to do so. Second, limited tests indicate that
for some cases, open source life cycles produce systems that are less
vulnerable to nonmalicious faults. Third, a survey of three operating
systems indicates that one open source operating system experienced less
exposure in the form of known but unpatched vulnerabilities over a 12-month
period than was experienced by either of two proprietary counterparts.
Last, closed and proprietary system development models face disincentives
toward fielding and supporting more secure systems as long as less secure
systems are more profitable. Notwithstanding these conclusions, arguments
in this important matter are in their formative stages and in dire need of
metrics that can reflect security delivered to the customer.''</P
></BLOCKQUOTE
></P
><P
>Scott A. Hissam and Daniel Plakosh's
<A
HREF="http://www.ics.uci.edu/~wscacchi/Papers/New/IEE_hissam.pdf"
TARGET="_top"
>``Trust and Vulnerability in Open Source Software''</A
>
discusses the pluses and minuses of open source software.
As with other papers, they note that just because the software
is open to review, it should not automatically follow that
such a review has actually been performed.
Indeed, they note that this is a general problem for all software,
open or closed - it is often questionable if many people examine any
given piece of software.
One interesting point is that they demonstrate that
attackers can learn about a
vulnerability in a closed source program (Windows)
from patches made to an OSS/FS program (Linux).
In this example,
Linux developers fixed a vulnerability before attackers tried to attack it,
and attackers correctly surmised that a similar problem might still be in
Windows (and it was).
Unless OSS/FS programs are forbidden, this kind of learning is difficult
to prevent.
Therefore, the existence of an OSS/FS program can reveal the vulnerabilities
of both the OSS/FS and proprietary program performing the same function -
but in this example, the OSS/FS program was fixed first.</P
></DIV
><DIV
CLASS="SECT2"
><H2
CLASS="SECT2"
><A
NAME="OPEN-SOURCE-SECURITY-NOHALT"
></A
>2.4.2. Why Closing the Source Doesn't Halt Attacks</H2
><P
>It's been argued that a
system without source code is more secure because,
since there's less information available for an attacker, it should
be harder for an attacker to find the vulnerabilities.
This argument has a number of weaknesses, however, because
although source code is extremely important when trying to add
new capabilities to a program,
attackers generally don't need source code to find a vulnerability.</P
><P
>First, it's important to distinguish between ``destructive'' acts
and ``constructive'' acts. In the real world, it is much easier to
destroy a car than to build one. In the software world, it is
much easier to find and exploit a vulnerability than to
add significant new functionality to that software.
Attackers have many advantages against defenders because of this difference.
Software developers must try to have no security-relevant mistakes
anywhere in their code, while attackers only need to find one.
Developers are primarily paid to get their programs to work...
attackers don't need to make the program work, they only need to
find a single weakness. And as I'll describe in a moment, it takes
less information to attack a program than to modify one.</P
><P
>Generally attackers (against both open and closed programs) start by
knowing about the general kinds of security problems programs have.
There's no point in hiding this information; it's already out, and
in any case, defenders need that kind of information to defend
themselves.
Attackers then use techniques to try to find those problems;
I'll group the techniques into ``dynamic'' techniques (where you
run the program) and ``static'' techniques (where you examine
the program's code - be it source code or machine code).</P
><P
>In ``dynamic'' approaches, an attacker runs the program,
sending it data (often problematic data), and sees
if the program's response indicates a common vulnerability.
Open and closed programs have no difference here, since the attacker isn't
looking at code.
Attackers may also look at the code, the ``static'' approach.
For open source software, they'll
probably look at the source code and search it for patterns.
For closed source software, they might search the machine code
(usually presented in assembly language format to simplify the
task) for essentially the same patterns.
They might also use tools called
``decompilers'' that turn the machine code back into source code
and then search the source code for the vulnerable patterns
(the same way they would search for vulnerabilities in open source software).
See Flake [2001] for one discussion of how closed code can still be examined
for security vulnerabilities (e.g., using disassemblers).
This point is important:
even if an attacker wanted to use source code to find a vulnerability,
a closed source program has no advantage, because the attacker
can use a decompiler to re-create the source code of the product.</P
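><P
>To make the ``static'' approach concrete, here is a minimal sketch
(in Python, purely illustrative, and not drawn from any particular attack
tool) of the kind of pattern search described above: it runs the standard
objdump disassembler over a closed, dynamically linked binary and flags
calls to libc functions that are common sources of buffer overflows.
The short list of ``risky'' functions and the command-line interface are
assumptions made for this example; real attackers and auditors use longer
lists and more capable tools.</P
><PRE
CLASS="PROGRAMLISTING"
>#!/usr/bin/env python3
# Illustrative sketch only: scan a disassembly for calls to commonly
# misused libc functions.  Assumes objdump(1) is installed and that the
# target is a dynamically linked ELF binary.
import re
import subprocess
import sys

RISKY = ("strcpy", "strcat", "sprintf", "gets")   # classic overflow suspects

def risky_calls(path):
    """Return (address, function) pairs for calls to the RISKY functions."""
    asm = subprocess.run(["objdump", "-d", path],
                         capture_output=True, text=True, check=True).stdout
    hits = []
    for line in asm.splitlines():
        # Typical objdump output:  "  401136: e8 ... call 401030 &lt;strcpy@plt&gt;"
        m = re.search(r"call.*&lt;(\w+)@plt&gt;", line)
        if m and m.group(1) in RISKY:
            hits.append((line.split(":")[0].strip(), m.group(1)))
    return hits

if __name__ == "__main__":
    for addr, func in risky_calls(sys.argv[1]):
        print(f"possible weak spot at {addr}: call to {func}")</PRE
><P
>Nothing in this sketch requires the original source code, which is the
point: the same pattern search works whether or not the source was ever
published.</P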
><P
>Non-developers might ask ``if decompilers can create source code
from machine code, then why do developers say they need
source code instead of just machine code?''
The problem is that although developers don't need source
code to find security problems, developers do need source code to make
substantial improvements to the program.
Although decompilers can turn machine code back into a
``source code'' of sorts, the resulting source code
is extremely hard to modify. Typically most understandable names are
lost, so instead of variables like ``grand_total'' you get
``x123123'', instead of methods like ``display_warning'' you get
``f123124'', and the code itself may have smatterings of
assembly in it.
Also, _ALL_ comments and design information are lost.
This isn't a serious problem for finding security problems, because
generally you're searching for patterns indicating vulnerabilities,
not for internal variable or method names.
Thus, decompilers can be useful for finding ways to attack programs,
but aren't helpful for updating programs.</P
><P
>Thus, developers will say ``source code is vital''
when they intend to add functionality, but the fact that the source
code for closed source programs is hidden doesn't protect the program
very much.</P
></DIV
><DIV
CLASS="SECT2"
><H2
CLASS="SECT2"
><A
NAME="OPEN-SOURCE-SECURITY-SECRETS"
></A
>2.4.3. Why Keeping Vulnerabilities Secret Doesn't Make Them Go Away</H2
><P
>Sometimes it's noted that a vulnerability that exists but is unknown
can't be exploited, so the system is ``practically secure.''
In theory this is true, but the problem is that once someone finds
the vulnerability, the finder may just exploit
the vulnerability instead of helping to fix it.
Having unknown vulnerabilities doesn't really make the vulnerabilities go away;
it simply means that the vulnerabilities are a time bomb, with no
way to know when they'll be exploited.
Fundamentally, the problem of someone exploiting a vulnerability they
discover is a problem for both open and closed source systems.</P
><P
>One related claim sometimes made
(though not as directly related to OSS/FS)
is that people should not post warnings about
vulnerabilities and discuss them.
This sounds good in theory, but the problem is that attackers already
distribute information about vulnerabilities through a large number
of channels.
In short, such approaches would leave
defenders vulnerable, while doing nothing to inhibit attackers.
In the past, companies actively tried to prevent disclosure of vulnerabilities,
but experience showed that, in general, companies didn't fix vulnerabilities
until they were widely known to their users (who could then insist that
the vulnerabilities be fixed).
This is all part of the argument for ``full disclosure.''
Gartner Group has a blunt commentary in a CNET.com article titled
``Commentary: Hype is the real issue - Tech News.''
They stated:
<A
NAME="AEN238"
></A
><BLOCKQUOTE
CLASS="BLOCKQUOTE"
><P
> The comments of Microsoft's Scott Culp, manager of the company's
security response center, echo a common refrain in a long, ongoing
battle over information. Discussions of morality regarding the
distribution of information go way back and are very familiar. Several
centuries ago, for example, the church tried to squelch Copernicus'
and Galileo's theory of the sun being at the center of the solar
system...
Culp's attempt to blame "information security professionals" for the
recent spate of vulnerabilities in Microsoft products is at best
disingenuous. Perhaps, it also represents an attempt to deflect
criticism from the company that built those products...
[The] efforts of all parties contribute to a continuous
process of improvement. The more widely vulnerabilities become known,
the more quickly they get fixed.</P
></BLOCKQUOTE
>&#13;</P
></DIV
><DIV
CLASS="SECT2"
><H2
CLASS="SECT2"
><A
NAME="OPEN-SOURCE-SECURITY-TROJANS"
></A
>2.4.4. How OSS/FS Counters Trojan Horses</H2
><P
>It's sometimes argued that open source programs, because there's no
enforced control by a single company, permit people to insert Trojan
Horses and other malicious code.
Trojan horses can be inserted into open source code, true, but they
can also be inserted into proprietary code.
A disgruntled or bribed employee can insert malicious code, and
in many organizations it's much less likely to be found than in an
open source program.
After all,
no one outside the organization can review the source code, and few
companies review their code internally (or, even if they do, few can
be assured that the reviewed code is actually what is used).
And the notion that a closed-source company can be sued later has little
evidence; nearly all licenses disclaim all warranties, and courts have
generally not held software development companies liable.</P
><P
>Borland's InterBase server is an interesting case in point.
Some time between 1992 and 1994, Borland inserted an intentional
``back door'' into their database server, ``InterBase''.
This back door allowed any local or remote user to
manipulate any database object and install arbitrary programs, and
in some cases could lead to controlling the machine as ``root''.
This vulnerability stayed in the product for at least 6 years - no one else
could review the product, and Borland had no incentive to remove the
vulnerability.
Then Borland released its source code in July 2000.
The "Firebird" project began working with the source code, and
uncovered this serious security problem
with InterBase in December 2000.
By January 2001 the CERT announced the existence of this back door as
<A
HREF="http://www.cert.org/advisories/CA-2001-01.html"
TARGET="_top"
>CERT
advisory CA-2001-01</A
>.
What's discouraging is that the backdoor can be easily found simply by
looking at an ASCII dump of the program (a common cracker trick).
Once this problem was found by open source developers reviewing
the code, it was patched quickly.
You could argue that, by keeping the password unknown,
the program stayed safe, and that opening the source made
the program less secure.
I think this is nonsense, since ASCII dumps are trivial to do and well-known
as a standard attack technique, and not all attackers have sudden
urges to announce vulnerabilities - in fact, there's no way to be
certain that this vulnerability has not been exploited many times.
It's clear that after the source was opened, the source code was
reviewed over time, and the vulnerabilities found and fixed.
One way to characterize this is to say that the original code was
vulnerable, its vulnerabilities became easier to exploit
when it was first made open source,
and then finally these vulnerabilities were fixed.</P
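><P
>To illustrate what ``looking at an ASCII dump'' means in practice, here is
a minimal sketch (in Python, illustrative only) that prints the runs of
printable ASCII characters found in a binary, roughly what the standard
strings(1) utility does.  Hard-coded account names and passwords, like the
InterBase back door credentials, show up in exactly this kind of listing.
The minimum run length of 6 is an arbitrary choice for the example.</P
><PRE
CLASS="PROGRAMLISTING"
>#!/usr/bin/env python3
# Illustrative sketch only: print runs of printable ASCII bytes found in
# a binary file, roughly what the strings(1) utility reports.
import string
import sys

PRINTABLE = set(string.printable.encode()) - set(b"\t\n\r\x0b\x0c")
MIN_LEN = 6   # arbitrary threshold for this example

def ascii_runs(path, min_len=MIN_LEN):
    """Yield each run of at least min_len printable ASCII bytes in the file."""
    run = bytearray()
    with open(path, "rb") as f:
        for byte in f.read():
            if byte in PRINTABLE:
                run.append(byte)
            else:
                if len(run) &gt;= min_len:
                    yield run.decode("ascii")
                run.clear()
    if len(run) &gt;= min_len:
        yield run.decode("ascii")

if __name__ == "__main__":
    for s in ascii_runs(sys.argv[1]):
        print(s)</PRE
><P
>No special skill or insider knowledge is needed to run this kind of scan,
which is why relying on an embedded secret staying secret is so risky.</P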
></DIV
><DIV
CLASS="SECT2"
><H2
CLASS="SECT2"
><A
NAME="OPEN-SOURCE-SECURITY-OTHER"
></A
>2.4.5. Other Advantages</H2
><P
>The advantages of having source code open extend not just to software
that is being attacked, but also to vulnerability assessment
scanners.
Vulnerability assessment scanners intentionally look for vulnerabilities
in configured systems.
A recent Network Computing evaluation found that the best scanner
(which, among other things, found the most legitimate vulnerabilities)
was Nessus, an open source scanner [Forristal 2001].</P
></DIV
><DIV
CLASS="SECT2"
><H2
CLASS="SECT2"
><A
NAME="OPEN-SOURCE-SECURITY-BOTTOM-LINE"
></A
>2.4.6. Bottom Line</H2
><P
>So, what's the bottom line?
I personally believe that when a program begins as closed source and
is then first made open source, it
often starts less secure for any users (through exposure of
vulnerabilities), and over time (say a few years) it has
the potential to be much more secure than a closed program.
If the program began as open source software, the public scrutiny is
more likely to improve its security before it's ready for use by
significant numbers of users, but there are several caveats to this
statement (it's not an ironclad rule).
Just making a program open source doesn't suddenly make it secure,
and being open source alone does not guarantee security:
<P
></P
><UL
><LI
><P
>First, people have to actually review the code.
This is one of the key points of debate - will people really
review code in an open source project?
All sorts of factors can reduce the amount of review:
being a niche or rarely-used product (where there are few potential reviewers),
having few developers, and use of a rarely-used computer language.
Clearly, a program that has a single developer and no other contributors
of any kind doesn't have this kind of review.
On the other hand, a program that has a primary author and many other
people who occasionally examine the code and contribute suggests that there
are others reviewing the code (at least to create contributions).
In general, if there are more reviewers, there's a higher likelihood
that someone will identify a flaw - this is the basis of the
``many eyeballs'' theory.
Note that, for example, the OpenBSD project continuously examines
programs for security flaws, so the components in its innermost parts
have certainly undergone a lengthy review.
Since OSS/FS discussions are often held publicly, this level of
review is something that potential users can judge for themselves.</P
><P
>One factor that can particularly reduce review likelihood is not actually
being open source.
Some vendors like to position their ``disclosed source''
(also called ``source available'') programs as
being open source, but since the program owner has extensive exclusive rights,
others will have far less incentive to work ``for free'' for the owner
on the code.
Even open source licenses which have unusually
asymmetric rights (such as the MPL) have this problem.
After all, people are less likely to voluntarily participate
if someone else will have rights to their results that they don't have
(as Bruce Perens says, ``who wants to be someone else's unpaid employee?'').
In particular,
since the reviewers with the most incentive tend to be people trying to modify
the program, this disincentive to participate reduces the number of
``eyeballs''.
Elias Levy made this mistake in his article about open source
security; his examples of software that had been broken into
(e.g., TIS's Gauntlet) were not, at the time, open source.</P
></LI
><LI
><P
>Second, at least some of the people developing and
reviewing the code must know how to write secure programs.
Hopefully the existence of this book will help.
Clearly, it doesn't matter if there are ``many eyeballs'' if none of the
eyeballs know what to look for.
Note that it's not necessary for everyone to know how to write
secure programs, as long as those who do know how are examining the
code changes.</P
></LI
><LI
><P
>Third, once found, these problems need to be fixed quickly
and their fixes distributed.
Open source systems tend to fix the problems quickly, but the distribution
is not always smooth.
For example, the OpenBSD developers do an excellent job of reviewing code for
security flaws - but they don't always report the identified
problems back to the original developer.
Thus, it's quite possible for there to be a fixed version in one system,
but for the flaw to remain in another.
I believe this problem is lessening over time, since no one
``downstream'' likes to repeatedly fix the same problem.
Of course, ensuring that security patches are actually installed on
end-user systems is a problem for both open source and closed source software.</P
></LI
></UL
>
Another advantage of open source is that, if you find a problem, you can
fix it immediately.
This really doesn't have any counterpart in closed source.</P
><P
>In short, the effect on security of open source software
is still a major debate in the security community, though a large number
of prominent experts believe that it has great potential to be
more secure.</P
></DIV
></DIV
><DIV
CLASS="NAVFOOTER"
><HR
ALIGN="LEFT"
WIDTH="100%"><TABLE
SUMMARY="Footer navigation table"
WIDTH="100%"
BORDER="0"
CELLPADDING="0"
CELLSPACING="0"
><TR
><TD
WIDTH="33%"
ALIGN="left"
VALIGN="top"
><A
HREF="why-write-insecure.html"
ACCESSKEY="P"
>Prev</A
></TD
><TD
WIDTH="34%"
ALIGN="center"
VALIGN="top"
><A
HREF="index.html"
ACCESSKEY="H"
>Home</A
></TD
><TD
WIDTH="33%"
ALIGN="right"
VALIGN="top"
><A
HREF="types-of-programs.html"
ACCESSKEY="N"
>Next</A
></TD
></TR
><TR
><TD
WIDTH="33%"
ALIGN="left"
VALIGN="top"
>Why do Programmers Write Insecure Code?</TD
><TD
WIDTH="34%"
ALIGN="center"
VALIGN="top"
><A
HREF="background.html"
ACCESSKEY="U"
>Up</A
></TD
><TD
WIDTH="33%"
ALIGN="right"
VALIGN="top"
>Types of Secure Programs</TD
></TR
></TABLE
></DIV
></BODY
></HTML
>