
The Cathedral and the Bazaar
Eric Steven Raymond

Thyrsus Enterprises [http://www.tuxedo.org/~esr/]

<[email protected]>

This is version 3.0
Copyright © 2000 Eric S. Raymond

Copyright

Permission is granted to copy, distribute and/or modify this document under the terms of the Open Publication
License, version 2.0.

$Date: 2002/08/02 09:02:14 $
Revision History
Revision 1.57 11 September 2000 esr
New major section “How Many Eyeballs Tame Complexity”.

Revision 1.52 28 August 2000 esr
MATLAB is a reinforcing parallel to Emacs. Corbató & Vyssotsky got it in 1965.

Revision 1.51 24 August 2000 esr
First DocBook version. Minor updates to Fall 2000 on the time-sensitive material.

Revision 1.49 5 May 2000 esr
Added the HBS note on deadlines and scheduling.

Revision 1.51 31 August 1999 esr
This is the version that O’Reilly printed in the first edition of the book.

Revision 1.45 8 August 1999 esr
Added the endnotes on the Snafu Principle, (pre)historical examples of bazaar development, and originality
in the bazaar.
Revision 1.44 29 July 1999 esr
Added the “On Management and the Maginot Line” section, some insights about the usefulness of bazaars

for exploring design space, and substantially improved the Epilog.
Revision 1.40 20 Nov 1998 esr
Added a correction of Brooks based on the Halloween Documents.

Revision 1.39 28 July 1998 esr
I removed Paul Eggert’s ’graph on GPL vs. bazaar in response to cogent arguments from RMS on 28 July 1998.

Revision 1.31 February 10 1998 esr
Added “Epilog: Netscape Embraces the Bazaar!”

Revision 1.29 February 9 1998 esr
Changed “free software” to “open source”.

Revision 1.27 18 November 1997 esr
Added the Perl Conference anecdote.

Revision 1.20 7 July 1997 esr
Added the bibliography.

Revision 1.16 21 May 1997 esr
First official presentation at the Linux Kongress.

I anatomize a successful open-source project, fetchmail, that was run as a deliberate test of the surprising
theories about software engineering suggested by the history of Linux. I discuss these theories in terms of two
fundamentally different development styles, the “cathedral” model of most of the commercial world versus the
“bazaar” model of the Linux world. I show that these models derive from opposing assumptions about the nature
of the software-debugging task. I then make a sustained argument from the Linux experience for the proposition
that “Given enough eyeballs, all bugs are shallow”, suggest productive analogies with other self-correcting systems
of selfish agents, and conclude with some exploration of the implications of this insight for the future of software.

Table of Contents

The Cathedral and the Bazaar
The Mail Must Get Through
The Importance of Having Users
Release Early, Release Often
How Many Eyeballs Tame Complexity
When Is a Rose Not a Rose?
Popclient becomes Fetchmail
Fetchmail Grows Up
A Few More Lessons from Fetchmail
Necessary Preconditions for the Bazaar Style
The Social Context of Open-Source Software
On Management and the Maginot Line
Epilog: Netscape Embraces the Bazaar
Notes
Bibliography
Acknowledgements

The Cathedral and the Bazaar
Linux is subversive. Who would have thought even five years ago (1991) that a world-class operating system
could coalesce as if by magic out of part-time hacking by several thousand developers scattered all over the
planet, connected only by the tenuous strands of the Internet?

Certainly not I. By the time Linux swam onto my radar screen in early 1993, I had already been involved in
Unix and open-source development for ten years. I was one of the first GNU contributors in the mid-1980s. I had
released a good deal of open-source software onto the net, developing or co-developing several programs (nethack,
Emacs’s VC and GUD modes, xlife, and others) that are still in wide use today. I thought I knew how it was done.

Linux overturned much of what I thought I knew. I had been preaching the Unix gospel of small tools, rapid
prototyping and evolutionary programming for years. But I also believed there was a certain critical complexity
above which a more centralized, a priori approach was required. I believed that the most important software
(operating systems and really large tools like the Emacs programming editor) needed to be built like cathedrals,
carefully crafted by individual wizards or small bands of mages working in splendid isolation, with no beta to be
released before its time.

Linus Torvalds’s style of development—release early and often, delegate everything you can, be open to the point
of promiscuity—came as a surprise. No quiet, reverent cathedral-building here—rather, the Linux community
seemed to resemble a great babbling bazaar of differing agendas and approaches (aptly symbolized by the Linux
archive sites, who’d take submissions from anyone) out of which a coherent and stable system could
seemingly emerge only by a succession of miracles.

The fact that this bazaar style seemed to work, and work well, came as a distinct shock. As I learned my way
around, I worked hard not just at individual projects, but also at trying to understand why the Linux world not
only didn’t fly apart in confusion but seemed to go from strength to strength at a speed barely imaginable to
cathedral-builders.

By mid-1996 I thought I was beginning to understand. Chance handed me a perfect way to test my theory, in
the form of an open-source project that I could consciously try to run in the bazaar style. So I did—and it was a
significant success.

This is the story of that project. I’ll use it to propose some aphorisms about effective open-source development.
Not all of these are things I first learned in the Linux world, but we’ll see how the Linux world gives them
particular point. If I’m correct, they’ll help you understand exactly what it is that makes the Linux community
such a fountain of good software—and, perhaps, they will help you become more productive yourself.

The Mail Must Get Through
Since 1993 I’d been running the technical side of a small free-access Internet service provider called Chester
County InterLink (CCIL) in West Chester, Pennsylvania. I co-founded CCIL and wrote our unique multiuser
bulletin-board software—you can check it out by telnetting to locke.ccil.org [telnet://locke.ccil.org]. Today it
supports almost three thousand users on thirty lines. The job allowed me 24-hour-a-day access to the net through
CCIL’s 56K line—in fact, the job practically demanded it!

I had gotten quite used to instant Internet email. I found having to periodically telnet over to locke to check my
mail annoying. What I wanted was for my mail to be delivered on snark (my home system) so that I would be
notified when it arrived and could handle it using all my local tools.

The Internet’s native mail forwarding protocol, SMTP (Simple Mail Transfer Protocol), wouldn’t suit, because it
works best when machines are connected full-time, while my personal machine isn’t always on the Internet, and
doesn’t have a static IP address. What I needed was a program that would reach out over my intermittent dialup
connection and pull across my mail to be delivered locally. I knew such things existed, and that most of them used
a simple application protocol called POP (Post Office Protocol). POP is now widely supported by most common
mail clients, but at the time, it wasn’t built in to the mail reader I was using.

I needed a POP3 client. So I went out on the Internet and found one. Actually, I found three or four. I used one of
them for a while, but it was missing what seemed an obvious feature, the ability to hack the addresses on fetched
mail so replies would work properly.

The problem was this: suppose someone named ‘joe’ on locke sent me mail. If I fetched the mail to snark and
then tried to reply to it, my mailer would cheerfully try to ship it to a nonexistent ‘joe’ on snark. Hand-editing
reply addresses to tack on <@ccil.org> quickly got to be a serious pain.

This was clearly something the computer ought to be doing for me. But none of the existing POP clients knew
how! And this brings us to the first lesson:

1. Every good work of software starts by scratching a developer’s personal itch.

Perhaps this should have been obvious (it’s long been proverbial that “Necessity is the mother of invention”) but
too often software developers spend their days grinding away for pay at programs they neither need nor love.
But not in the Linux world—which may explain why the average quality of software originated in the Linux
community is so high.

So, did I immediately launch into a furious whirl of coding up a brand-new POP3 client to compete with the
existing ones? Not on your life! I looked carefully at the POP utilities I had in hand, asking myself “Which one is
closest to what I want?” Because:

2. Good programmers know what to write. Great ones know what to rewrite (and reuse).

While I don’t claim to be a great programmer, I try to imitate one. An important trait of the great ones is
constructive laziness. They know that you get an A not for effort but for results, and that it’s almost always
easier to start from a good partial solution than from nothing at all.

Linus Torvalds [http://www.tuxedo.org/~esr/faqs/linus], for example, didn’t actually try to write Linux from
scratch. Instead, he started by reusing code and ideas from Minix, a tiny Unix-like operating system for PC
clones. Eventually all the Minix code went away or was completely rewritten—but while it was there, it provided
scaffolding for the infant that would eventually become Linux.

In the same spirit, I went looking for an existing POP utility that was reasonably well coded, to use as a
development base.

The source-sharing tradition of the Unix world has always been friendly to code reuse (this is why the GNU
project chose Unix as a base OS, in spite of serious reservations about the OS itself). The Linux world has taken
this tradition nearly to its technological limit; it has terabytes of open sources generally available. So spending
time looking for someone else’s almost-good-enough is more likely to give you good results in the Linux world than
anywhere else.

And it did for me. With those I’d found earlier, my second search made up a total of nine candidates—fetchpop,
PopTart, get-mail, gwpop, pimp, pop-perl, popc, popmail and upop. The one I first settled on was ‘fetchpop’ by
Seung-Hong Oh. I put my header-rewrite feature in it, and made various other improvements which the author
accepted into his 1.9 release.

A few weeks later, though, I stumbled across the code for popclient by Carl Harris, and found I had a problem.
Though fetchpop had some good original ideas in it (such as its background-daemon mode), it could only handle
POP3 and was rather amateurishly coded (Seung-Hong was at that time a bright but inexperienced programmer,
and both traits showed). Carl’s code was better, quite professional and solid, but his program lacked several
important and rather tricky-to-implement fetchpop features (including those I’d coded myself).

Stay or switch? If I switched, I’d be throwing away the coding I’d already done in exchange for a better
development base.

A practical motive to switch was the presence of multiple-protocol support. POP3 is the most commonly used of
the post-office server protocols, but not the only one. Fetchpop and the other competition didn’t do POP2, RPOP,
or APOP, and I was already having vague thoughts of perhaps adding IMAP [http://www.imap.org] (Internet
Message Access Protocol, the most recently designed and most powerful post-office protocol) just for fun.

But I had a more theoretical reason to think switching might be a good idea as well, something I learned long
before Linux.

3. “Plan to throw one away; you will, anyhow.” (Fred Brooks, The Mythical Man-Month, Chapter 11)

Or, to put it another way, you often don’t really understand the problem until after the first time you implement a
solution. The second time, maybe you know enough to do it right. So if you want to get it right, be ready to start
over at least once [JB].

Well (I told myself) the changes to fetchpop had been my first try. So I switched.

After I sent my first set of popclient patches to Carl Harris on 25 June 1996, I found out that he had basically
lost interest in popclient some time before. The code was a bit dusty, with minor bugs hanging out. I had many
changes to make, and we quickly agreed that the logical thing for me to do was take over the program.

Without my actually noticing, the project had escalated. No longer was I just contemplating minor patches to an
existing POP client. I took on maintaining an entire one, and there were ideas bubbling in my head that I knew
would probably lead to major changes.

In a software culture that encourages code-sharing, this is a natural way for a project to evolve. I was acting out
this principle:

4. If you have the right attitude, interesting problems will find you.

But Carl Harris’s attitude was even more important. He understood that

5. When you lose interest in a program, your last duty to it is to hand it off to a competent successor.

Without ever having to discuss it, Carl and I knew we had a common goal of having the best solution out there.
The only question for either of us was whether I could establish that I was a safe pair of hands. Once I did that, he
acted with grace and dispatch. I hope I will do as well when it comes my turn.

The Importance of Having Users
And so I inherited popclient. Just as importantly, I inherited popclient’s user base. Users are wonderful things to
have, and not just because they demonstrate that you’re serving a need, that you’ve done something right. Properly
cultivated, they can become co-developers.

Another strength of the Unix tradition, one that Linux pushes to a happy extreme, is that a lot of users are hackers
too. Because source code is available, they can be effective hackers. This can be tremendously useful for
shortening debugging time. Given a bit of encouragement, your users will diagnose problems, suggest fixes, and
help improve the code far more quickly than you could unaided.

6. Treating your users as co-developers is your least-hassle route to rapid code improvement and effective debugging.

The power of this effect is easy to underestimate. In fact, pretty well all of us in the open-source world drastically
underestimated how well it would scale up with number of users and against system complexity, until Linus
Torvalds showed us differently.

In fact, I think Linus’s cleverest and most consequential hack was not the construction of the Linux kernel itself,
but rather his invention of the Linux development model. When I expressed this opinion in his presence once,
he smiled and quietly repeated something he has often said: “I’m basically a very lazy person who likes to get
credit for things other people actually do.” Lazy like a fox. Or, as Robert Heinlein famously wrote of one of his
characters, too lazy to fail.

In retrospect, one precedent for the methods and success of Linux can be seen in the development of the GNU
Emacs Lisp library and Lisp code archives. In contrast to the cathedral-building style of the Emacs C core and most
other GNU tools, the evolution of the Lisp code pool was fluid and very user-driven. Ideas and prototype modes
were often rewritten three or four times before reaching a stable final form. And loosely-coupled collaborations
enabled by the Internet, a la Linux, were frequent.

Indeed, my own most successful single hack previous to fetchmail was probably Emacs VC (version control)
mode, a Linux-like collaboration by email with three other people, only one of whom (Richard Stallman, the
author of Emacs and founder of the Free Software Foundation [http://www.fsf.org]) I have met to this day. It was
a front-end for SCCS, RCS and later CVS from within Emacs that offered “one-touch” version control operations.
It evolved from a tiny, crude sccs.el mode somebody else had written. And the development of VC succeeded
because, unlike Emacs itself, Emacs Lisp code could go through release/test/improve generations very quickly.

The Emacs story is not unique. There have been other software products with a two-level architecture and a two-
tier user community that combined a cathedral-mode core and a bazaar-mode toolbox. One such is MATLAB, a
commercial data-analysis and visualization tool. Users of MATLAB and other products with a similar structure
invariably report that the action, the ferment, the innovation mostly takes place in the open part of the tool where
a large and varied community can tinker with it.

Release Early, Release Often
Early and frequent releases are a critical part of the Linux development model. Most developers (including me)
used to believe this was bad policy for larger than trivial projects, because early versions are almost by definition
buggy versions and you don’t want to wear out the patience of your users.

This belief reinforced the general commitment to a cathedral-building style of development. If the overriding
objective was for users to see as few bugs as possible, why then you’d only release a version every six months
(or less often), and work like a dog on debugging between releases. The Emacs C core was developed this way.
The Lisp library, in effect, was not—because there were active Lisp archives outside the FSF’s control, where you
could go to find new and development code versions independently of Emacs’s release cycle [QR].

The most important of these, the Ohio State Emacs Lisp archive, anticipated the spirit and many of the features
of today’s big Linux archives. But few of us really thought very hard about what we were doing, or about what
the very existence of that archive suggested about problems in the FSF’s cathedral-building development model. I
made one serious attempt around 1992 to get a lot of the Ohio code formally merged into the official Emacs Lisp
library. I ran into political trouble and was largely unsuccessful.

But by a year later, as Linux became widely visible, it was clear that something different and much healthier was
going on there. Linus’s open development policy was the very opposite of cathedral-building. Linux’s Internet
archives were burgeoning, multiple distributions were being floated. And all of this was driven by an unheard-of
frequency of core system releases.

Linus was treating his users as co-developers in the most effective possible way:

7. Release early. Release often. And listen to your customers.

Linus’s innovation wasn’t so much in doing quick-turnaround releases incorporating lots of user feedback
(something like this had been Unix-world tradition for a long time), but in scaling it up to a level of intensity
that matched the complexity of what he was developing. In those early times (around 1991) it wasn’t unknown
for him to release a new kernel more than once a day! Because he cultivated his base of co-developers and
leveraged the Internet for collaboration harder than anyone else, this worked.

But how did it work? And was it something I could duplicate, or did it rely on some unique genius of Linus
Torvalds?

I didn’t think so. Granted, Linus is a damn fine hacker. How many of us could engineer an entire production-
quality operating system kernel from scratch? But Linux didn’t represent any awesome conceptual leap forward.
Linus is not (or at least, not yet) an innovative genius of design in the way that, say, Richard Stallman or James
Gosling (of NeWS and Java) are. Rather, Linus seems to me to be a genius of engineering and implementation,
with a sixth sense for avoiding bugs and development dead-ends and a true knack for finding the minimum-
effort path from point A to point B. Indeed, the whole design of Linux breathes this quality and mirrors Linus’s
essentially conservative and simplifying design approach.

So, if rapid releases and leveraging the Internet medium to the hilt were not accidents but integral parts of Linus’s
engineering-genius insight into the minimum-effort path, what was he maximizing? What was he cranking out of
the machinery?

Put that way, the question answers itself. Linus was keeping his hacker/users constantly stimulated and
rewarded—stimulated by the prospect of having an ego-satisfying piece of the action, rewarded by the sight of
constant (even daily) improvement in their work.

Linus was directly aiming to maximize the number of person-hours thrown at debugging and development, even
at the possible cost of instability in the code and user-base burnout if any serious bug proved intractable. Linus
was behaving as though he believed something like this:

8. Given a large enough beta-tester and co-developer base, almost every problem will be characterized quickly and the fix obvious to someone.

Or, less formally, “Given enough eyeballs, all bugs are shallow.” I dub this: “Linus’s Law”.

My original formulation was that every problem “will be transparent to somebody”. Linus demurred that the
person who understands and fixes the problem is not necessarily or even usually the person who first characterizes
it. “Somebody finds the problem,” he says, “and somebody else understands it. And I’ll go on record as saying
that finding it is the bigger challenge.” That correction is important; we’ll see how in the next section, when we
examine the practice of debugging in more detail. But the key point is that both parts of the process (finding and
fixing) tend to happen rapidly.

In Linus’s Law, I think, lies the core difference underlying the cathedral-builder and bazaar styles. In the cathedral-
builder view of programming, bugs and development problems are tricky, insidious, deep phenomena. It takes
months of scrutiny by a dedicated few to develop confidence that you’ve winkled them all out. Thus the long
release intervals, and the inevitable disappointment when long-awaited releases are not perfect.

In the bazaar view, on the other hand, you assume that bugs are generally shallow phenomena—or, at least, that
they turn shallow pretty quickly when exposed to a thousand eager co-developers pounding on every single new
release. Accordingly you release often in order to get more corrections, and as a beneficial side effect you have
less to lose if an occasional botch gets out the door.

And that’s it. That’s enough. If “Linus’s Law” is false, then any system as complex as the Linux kernel, being
hacked over by as many hands as that kernel was, should at some point have collapsed under the weight of
unforeseen bad interactions and undiscovered “deep” bugs. If it’s true, on the other hand, it is sufficient to explain
Linux’s relative lack of bugginess and its continuous uptimes spanning months or even years.

Maybe it shouldn’t have been such a surprise, at that. Sociologists years ago discovered that the averaged opinion
of a mass of equally expert (or equally ignorant) observers is quite a bit more reliable a predictor than the opinion
of a single randomly-chosen one of the observers. They called this the Delphi effect. It appears that what Linus has
shown is that this applies even to debugging an operating system—that the Delphi effect can tame development
complexity even at the complexity level of an OS kernel. [CV]

One special feature of the Linux situation that clearly helps along the Delphi effect is the fact that the contributors
for any given project are self-selected. An early respondent pointed out that contributions are received not from a
random sample, but from people who are interested enough to use the software, learn about how it works, attempt
to find solutions to problems they encounter, and actually produce an apparently reasonable fix. Anyone who
passes all these filters is highly likely to have something useful to contribute.

Linus’s Law can be rephrased as “Debugging is parallelizable”. Although debugging requires debuggers to
communicate with some coordinating developer, it doesn’t require significant coordination between debuggers.
Thus it doesn’t fall prey to the same quadratic complexity and management costs that make adding developers
problematic.

In practice, the theoretical loss of efficiency due to duplication of work by debuggers almost never seems to be
an issue in the Linux world. One effect of a “release early and often” policy is to minimize such duplication by
propagating fed-back fixes quickly [JH].

Brooks (the author of The Mythical Man-Month) even made an off-hand observation related to this: “The total cost
of maintaining a widely used program is typically 40 percent or more of the cost of developing it. Surprisingly this
cost is strongly affected by the number of users. More users find more bugs.” [emphasis
added].

More users find more bugs because adding more users adds more different ways of stressing the program. This
effect is amplified when the users are co-developers. Each one approaches the task of bug characterization with a
slightly different perceptual set and analytical toolkit, a different angle on the problem. The “Delphi effect” seems
to work precisely because of this variation. In the specific context of debugging, the variation also tends to reduce
duplication of effort.

So adding more beta-testers may not reduce the complexity of the current “deepest” bug from the developer’s
point of view, but it increases the probability that someone’s toolkit will be matched to the problem in such a way
that the bug is shallow to that person.

Linus coppers his bets, too. In case there are serious bugs, Linux kernel versions are numbered in such a way
that potential users can make a choice either to run the last version designated “stable” or to ride the cutting edge
and risk bugs in order to get new features. This tactic is not yet systematically imitated by most Linux hackers,
but perhaps it should be; the fact that either choice is available makes both more attractive. [HBS]

How Many Eyeballs Tame Complexity
It’s one thing to observe in the large that the bazaar style greatly accelerates debugging and code evolution.
It’s another to understand exactly how and why it does so at the micro-level of day-to-day developer and tester
behavior. In this section (written three years after the original paper, using insights by developers who read it
and re-examined their own behavior) we’ll take a hard look at the actual mechanisms. Non-technically inclined
readers can safely skip to the next section.

One key to understanding is to realize exactly why it is that the kind of bug report non–source-aware users normally
turn in tends not to be very useful. Non–source-aware users tend to report only surface symptoms; they take their
environment for granted, so they (a) omit critical background data, and (b) seldom include a reliable recipe for
reproducing the bug.

The underlying problem here is a mismatch between the tester’s and the developer’s mental models of the program;
the tester, on the outside looking in, and the developer on the inside looking out. In closed-source development
they’re both stuck in these roles, and tend to talk past each other and find each other deeply frustrating.

Open-source development breaks this bind, making it far easier for tester and developer to develop a shared
representation grounded in the actual source code and to communicate effectively about it. Practically, there is
a huge difference in leverage for the developer between the kind of bug report that just reports externally-visible
symptoms and the kind that hooks directly to the developer’s source-code–based mental representation of the
program.

Most bugs, most of the time, are easily nailed given even an incomplete but suggestive characterization of their
error conditions at source-code level. When someone among your beta-testers can point out, “there’s a boundary
problem in line nnn”, or even just “under conditions X, Y, and Z, this variable rolls over”, a quick look at the
offending code often suffices to pin down the exact mode of failure and generate a fix.

Thus, source-code awareness by both parties greatly enhances both good communication and the synergy between
what a beta-tester reports and what the core developer(s) know. In turn, this means that the core developers’ time
tends to be well conserved, even with many collaborators.

Another characteristic of the open-source method that conserves developer time is the communication structure
of typical open-source projects. Above I used the term “core developer”; this reflects a distinction between the
project core (typically quite small; a single core developer is common, and one to three is typical) and the project
halo of beta-testers and available contributors (which often numbers in the hundreds).

The fundamental problem that traditional software-development organization addresses is Brooks’s Law: “Adding
more programmers to a late project makes it later.” More generally, Brooks’s Law predicts that the complexity
and communication costs of a project rise with the square of the number of developers, while work done only rises
linearly.

Brooks’s Law is founded on experience that bugs tend strongly to cluster at the interfaces between code written
by different people, and that communications/coordination overhead on a project tends to rise with the number
of interfaces between human beings. Thus, problems scale with the number of communications paths between
developers, which scales as the square of the number of developers (more precisely, according to the formula
N*(N – 1)/2 where N is the number of developers).

The Brooks’s Law analysis (and the resulting fear of large numbers in development groups) rests on a hidden
assumption: that the communications structure of the project is necessarily a complete graph, that everybody
talks to everybody else. But on open-source projects, the halo developers work on what are in effect separable
parallel subtasks and interact with each other very little; code changes and bug reports stream through the core
group, and only within that small core group do we pay the full Brooksian overhead. [SU]
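A toy calculation makes the contrast concrete. Assuming, purely for illustration, that halo contributors talk only to the core while core members all talk to each other, the path counts diverge sharply:

    /* Illustrative arithmetic only: complete-graph communication paths
     * versus a hub-and-spoke structure with a small core. */
    #include <stdio.h>

    static long complete_graph_paths(long n)   /* everybody talks to everybody */
    {
        return n * (n - 1) / 2;
    }

    static long core_plus_halo_paths(long core, long halo)
    {
        /* full Brooksian overhead inside the core, plus one path from
         * each halo contributor to each core member (an assumption) */
        return complete_graph_paths(core) + core * halo;
    }

    int main(void)
    {
        printf("50 developers, complete graph: %ld paths\n", complete_graph_paths(50));    /* 1225 */
        printf("3-person core plus 47 halo:    %ld paths\n", core_plus_halo_paths(3, 47)); /* 144 */
        return 0;
    }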

There are still more reasons that source-code–level bug reporting tends to be very efficient. They center around
the fact that a single error can often have multiple possible symptoms, manifesting differently depending on details
of the user’s usage pattern and environment. Such errors tend to be exactly the sort of complex and subtle bugs
(such as dynamic-memory-management errors or nondeterministic interrupt-window artifacts) that are hardest to
reproduce at will or to pin down by static analysis, and which do the most to create long-term problems in software.

A tester who sends in a tentative source-code–level characterization of such a multi-symptom bug (e.g. “It looks
to me like there’s a window in the signal handling near line 1250” or “Where are you zeroing that buffer?”) may
give a developer, otherwise too close to the code to see it, the critical clue to a half-dozen disparate symptoms.
In cases like this, it may be hard or even impossible to know which externally-visible misbehaviour was caused
by precisely which bug—but with frequent releases, it’s unnecessary to know. Other collaborators will be likely
to find out quickly whether their bug has been fixed or not. In many cases, source-level bug reports will cause
misbehaviours to drop out without ever having been attributed to any specific fix.

Complex multi-symptom errors also tend to have multiple trace paths from surface symptoms back to the actual
bug. Which of the trace paths a given developer or tester can chase may depend on subtleties of that person’s
environment, and may well change in a not obviously deterministic way over time. In effect, each developer and
tester samples a semi-random set of the program’s state space when looking for the etiology of a symptom. The
more subtle and complex the bug, the less likely that skill will be able to guarantee the relevance of that sample.

For simple and easily reproducible bugs, then, the accent will be on the “semi” rather than the “random”; debugging
skill and intimacy with the code and its architecture will matter a lot. But for complex bugs, the accent will be
on the “random”. Under these circumstances many people running traces will be much more effective than a few
people running traces sequentially—even if the few have a much higher average skill level.

This effect will be greatly amplified if the difficulty of following trace paths from different surface symptoms
back to a bug varies significantly in a way that can’t be predicted by looking at the symptoms. A single developer
sampling those paths sequentially will be as likely to pick a difficult trace path on the first try as an easy one. On
the other hand, suppose many people are trying trace paths in parallel while doing rapid releases. Then it is likely
one of them will find the easiest path immediately, and nail the bug in a much shorter time. The project maintainer
will see that, ship a new release, and the other people running traces on the same bug will be able to stop before
having spent too much time on their more difficult traces [RJ].

When Is a Rose Not a Rose?
Having studied Linus’s behavior and formed a theory about why it was successful, I made a conscious decision to
test this theory on my new (admittedly much less complex and ambitious) project.

But the first thing I did was reorganize and simplify popclient a lot. Carl Harris’s implementation was very sound,
but exhibited a kind of unnecessary complexity common to many C programmers. He treated the code as central
and the data structures as support for the code. As a result, the code was beautiful but the data structure design
ad-hoc and rather ugly (at least by the high standards of this veteran LISP hacker).

I had another purpose for rewriting besides improving the code and the data structure design, however. That was
to evolve it into something I understood completely. It’s no fun to be responsible for fixing bugs in a program you
don’t understand.

For the first month or so, then, I was simply following out the implications of Carl’s basic design. The first serious
change I made was to add IMAP support. I did this by reorganizing the protocol machines into a generic driver
and three method tables (for POP2, POP3, and IMAP). This and the previous changes illustrate a general principle
that’s good for programmers to keep in mind, especially in languages like C that don’t naturally do dynamic
typing:

9. Smart data structures and dumb code works a lot better than the other way around.

Brooks, Chapter 9: “Show me your flowchart and conceal your tables, and I shall continue to be mystified.
Show me your tables, and I won’t usually need your flowchart; it’ll be obvious.” Allowing for thirty years of
terminological/cultural shift, it’s the same point.
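(For the curious, here is a minimal sketch in C of the “generic driver plus method tables” shape described above; the field names and stub functions are invented for illustration and do not mirror fetchmail’s actual internals:)

    /* One dispatch table per protocol; the driver is deliberately dumb
     * and only calls through the table. */
    #include <stdio.h>

    struct method {
        const char *name;
        int         default_port;
        int  (*count_messages)(void);     /* how many messages are waiting */
        int  (*fetch_message)(int num);   /* retrieve one message */
        void (*logout)(void);
    };

    /* stubs standing in for real POP3 protocol code */
    static int  pop3_count(void)   { return 2; }
    static int  pop3_fetch(int n)  { printf("POP3: RETR %d\n", n); return 0; }
    static void pop3_logout(void)  { printf("POP3: QUIT\n"); }

    static const struct method pop3_methods = {
        "POP3", 110, pop3_count, pop3_fetch, pop3_logout
    };

    /* The generic driver never needs to know which protocol it is driving. */
    static void drive(const struct method *m)
    {
        int i, n = m->count_messages();
        printf("%s on port %d: %d messages\n", m->name, m->default_port, n);
        for (i = 1; i <= n; i++)
            m->fetch_message(i);
        m->logout();
    }

    int main(void)
    {
        drive(&pop3_methods);   /* a POP2 or IMAP table plugs in the same way */
        return 0;
    }

All the protocol-specific intelligence lives in the tables; the driver stays dumb, which is the point of the rule above.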

At this point (early September 1996, about six weeks from zero) I started thinking that a name change might be in
order—after all, it wasn’t just a POP client any more. But I hesitated, because there was as yet nothing genuinely
new in the design. My version of popclient had yet to develop an identity of its own.

That changed, radically, when popclient learned how to forward fetched mail to the SMTP port. I’ll get to that in
a moment. But first: I said earlier that I’d decided to use this project to test my theory about what Linus Torvalds
had done right. How (you may well ask) did I do that? In these ways:

• I released early and often (almost never less often than every ten days; during periods of intense development,
once a day).

• I grew my beta list by adding to it everyone who contacted me about fetchmail.

• I sent chatty announcements to the beta list whenever I released, encouraging people to participate.

• And I listened to my beta-testers, polling them about design decisions and stroking them whenever they sent
in patches and feedback.

The payoff from these simple measures was immediate. From the beginning of the project, I got bug reports of a
quality most developers would kill for, often with good fixes attached. I got thoughtful criticism, I got fan mail, I
got intelligent feature suggestions. Which leads to:

10. If you treat your beta-testers as if they’re your most valuable resource, they will respond by becoming your most valuable resource.

One interesting measure of fetchmail’s success is the sheer size of the project beta list, fetchmail-friends. At the
time of latest revision of this paper (November 2000) it has 287 members and is adding two or three a week.

Actually, when I revised in late May 1997 I found the list was beginning to lose members from its high of close
to 300 for an interesting reason. Several people have asked me to unsubscribe them because fetchmail is working
so well for them that they no longer need to see the list traffic! Perhaps this is part of the normal life-cycle of a
mature bazaar-style project.

Popclient becomes Fetchmail
The real turning point in the project was when Harry Hochheiser sent me his scratch code for forwarding mail
to the client machine’s SMTP port. I realized almost immediately that a reliable implementation of this feature
would make all the other mail delivery modes next to obsolete.

For many weeks I had been tweaking fetchmail rather incrementally while feeling like the interface design was
serviceable but grubby—inelegant and with too many exiguous options hanging out all over. The options to dump
fetched mail to a mailbox file or standard output particularly bothered me, but I couldn’t figure out why.

(If you don’t care about the technicalia of Internet mail, the next two paragraphs can be safely skipped.)

What I saw when I thought about SMTP forwarding was that popclient had been trying to do too many things.
It had been designed to be both a mail transport agent (MTA) and a local delivery agent (MDA). With SMTP
forwarding, it could get out of the MDA business and be a pure MTA, handing off mail to other programs for local
delivery just as sendmail does.

Why mess with all the complexity of configuring a mail delivery agent or setting up lock-and-append on a mailbox
when port 25 is almost guaranteed to be there on any platform with TCP/IP support in the first place? Especially
when this means retrieved mail is guaranteed to look like normal sender-initiated SMTP mail, which is really what
we want anyway.
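(Still in the technicalia: the sketch below shows, in C, roughly what handing a message to the local SMTP listener on port 25 involves. The hostnames and addresses are placeholders, error checking is minimal, and a real MTA would parse each reply code rather than just reading and discarding it.)

    /* Bare-bones SMTP hand-off sketch: connect to localhost:25 and push
     * one message through.  Illustrative only. */
    #include <stdio.h>
    #include <string.h>
    #include <unistd.h>
    #include <arpa/inet.h>
    #include <netinet/in.h>
    #include <sys/socket.h>

    static void send_line(int fd, const char *line)
    {
        char buf[1024];
        snprintf(buf, sizeof buf, "%s\r\n", line);
        write(fd, buf, strlen(buf));
        read(fd, buf, sizeof buf);      /* a real MTA would check the reply code */
    }

    int main(void)
    {
        struct sockaddr_in smtp;
        char greeting[512];
        int fd = socket(AF_INET, SOCK_STREAM, 0);

        memset(&smtp, 0, sizeof smtp);
        smtp.sin_family      = AF_INET;
        smtp.sin_port        = htons(25);                  /* local SMTP listener */
        smtp.sin_addr.s_addr = htonl(INADDR_LOOPBACK);
        if (fd < 0 || connect(fd, (struct sockaddr *)&smtp, sizeof smtp) < 0)
            return 1;

        read(fd, greeting, sizeof greeting);               /* server greeting */
        send_line(fd, "HELO snark");                       /* placeholder client name */
        send_line(fd, "MAIL FROM:<joe@ccil.org>");
        send_line(fd, "RCPT TO:<esr@localhost>");
        send_line(fd, "DATA");
        send_line(fd, "Subject: fetched mail\r\n\r\nmessage body\r\n.");
        send_line(fd, "QUIT");
        close(fd);
        return 0;
    }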

(Back to a higher level….)

Even if you didn’t follow the preceding technical jargon, there are several important lessons here. First, this SMTP-
forwarding concept was the biggest single payoff I got from consciously trying to emulate Linus’s methods. A
user gave me this terrific idea—all I had to do was understand the implications.

11. The next best thing to having good ideas is recognizing good ideas from your users. Sometimes the latter is better.

Interestingly enough, you will quickly find that if you are completely and self-deprecatingly truthful about how
much you owe other people, the world at large will treat you as though you did every bit of the invention yourself
and are just being becomingly modest about your innate genius. We can all see how well this worked for Linus!

(When I gave my talk at the first Perl Conference in August 1997, hacker extraordinaire Larry Wall was in the
front row. As I got to the last line above he called out, religious-revival style, “Tell it, tell it, brother!”. The whole
audience laughed, because they knew this had worked for the inventor of Perl, too.)

After a very few weeks of running the project in the same spirit, I began to get similar praise not just from my
users but from other people to whom the word leaked out. I stashed away some of that email; I’ll look at it again
sometime if I ever start wondering whether my life has been worthwhile :-).

But there are two more fundamental, non-political lessons here that are general to all kinds of design.

12. Often, the most striking and innovative solutions come from realizing that your concept of the problem was wrong.

I had been trying to solve the wrong problem by continuing to develop popclient as a combined MTA/MDA with
all kinds of funky local delivery modes. Fetchmail’s design needed to be rethought from the ground up as a pure
MTA, a part of the normal SMTP-speaking Internet mail path.

When you hit a wall in development—when you find yourself hard put to think past the next patch—it’s often time
to ask not whether you’ve got the right answer, but whether you’re asking the right question. Perhaps the problem
needs to be reframed.

Well, I had reframed my problem. Clearly, the right thing to do was (1) hack SMTP forwarding support into the
generic driver, (2) make it the default mode, and (3) eventually throw out all the other delivery modes, especially
the deliver-to-file and deliver-to-standard-output options.

I hesitated over step 3 for some time, fearing to upset long-time popclient users dependent on the alternate delivery
mechanisms. In theory, they could immediately switch to .forward files or their non-sendmail equivalents to
get the same effects. In practice the transition might have been messy.

But when I did it, the benefits proved huge. The cruftiest parts of the driver code vanished. Configuration got
radically simpler—no more grovelling around for the system MDA and user’s mailbox, no more worries about
whether the underlying OS supports file locking.

Also, the only way to lose mail vanished. If you specified delivery to a file and the disk got full, your mail got
lost. This can’t happen with SMTP forwarding because your SMTP listener won’t return OK unless the message
can be delivered or at least spooled for later delivery.

Also, performance improved (though not so you’d notice it in a single run). Another not insignificant benefit of
this change was that the manual page got a lot simpler.

Later, I had to bring delivery via a user-specified local MDA back in order to allow handling of some obscure
situations involving dynamic SLIP. But I found a much simpler way to do it.

The moral? Don’t hesitate to throw away superannuated features when you can do it without loss of effectiveness.
Antoine de Saint-Exupéry (who was an aviator and aircraft designer when he wasn’t authoring classic children’s
books) said:

13. “Perfection (in design) is achieved not when there is nothing more to add, but rather when there is nothing more to take away.”

When your code is getting both better and simpler, that is when you know it’s right. And in the process, the
fetchmail design acquired an identity of its own, different from the ancestral popclient.

It was time for the name change. The new design looked much more like a dual of sendmail than the old popclient
had; both are MTAs, but where sendmail pushes then delivers, the new popclient pulls then delivers. So, two
months off the blocks, I renamed it fetchmail.

There is a more general lesson in this story about how SMTP delivery came to fetchmail. It is not only debugging
that is parallelizable; development and (to a perhaps surprising extent) exploration of design space is, too.
When your development mode is rapidly iterative, development and enhancement may become special cases
of debugging—fixing ‘bugs of omission’ in the original capabilities or concept of the software.

Even at a higher level of design, it can be very valuable to have lots of co-developers random-walking through the
design space near your product. Consider the way a puddle of water finds a drain, or better yet how ants find food:
exploration essentially by diffusion, followed by exploitation mediated by a scalable communication mechanism.
This works very well; as with Harry Hochheiser and me, one of your outriders may well find a huge win nearby
that you were just a little too close-focused to see.

Fetchmail Grows Up
There I was with a neat and innovative design, code that I knew worked well because I used it every day, and a
burgeoning beta list. It gradually dawned on me that I was no longer engaged in a trivial personal hack that might
happen to be useful to a few other people. I had my hands on a program that every hacker with a Unix box and a
SLIP/PPP mail connection really needs.

With the SMTP forwarding feature, it pulled far enough in front of the competition to potentially become a
“category killer”, one of those classic programs that fills its niche so competently that the alternatives are not just
discarded but almost forgotten.

I think you can’t really aim or plan for a result like this. You have to get pulled into it by design ideas so powerful
that afterward the results just seem inevitable, natural, even foreordained. The only way to try for ideas like that is
by having lots of ideas—or by having the engineering judgment to take other people’s good ideas beyond where
the originators thought they could go.

Andy Tanenbaum had the original idea to build a simple native Unix for IBM PCs, for use as a teaching tool (he
called it Minix). Linus Torvalds pushed the Minix concept further than Andrew probably thought it could go—and
it grew into something wonderful. In the same way (though on a smaller scale), I took some ideas by Carl Harris
and Harry Hochheiser and pushed them hard. Neither of us was ‘original’ in the romantic way people think is
genius. But then, most science and engineering and software development isn’t done by original genius, hacker
mythology to the contrary.

The results were pretty heady stuff all the same—in fact, just the kind of success every hacker lives for! And they
meant I would have to set my standards even higher. To make fetchmail as good as I now saw it could be, I’d have
to write not just for my own needs, but also include and support features necessary to others but outside my orbit.
And do that while keeping the program simple and robust.

The first and overwhelmingly most important feature I wrote after realizing this was multidrop support—the ability
to fetch mail from mailboxes that had accumulated all mail for a group of users, and then route each piece of mail
to its individual recipients.

I decided to add the multidrop support partly because some users were clamoring for it, but mostly because I
thought it would shake bugs out of the single-drop code by forcing me to deal with addressing in full generality.
And so it proved. Getting RFC 822 [http://info.internet.isi.edu:80/in-notes/rfc/files/rfc822.txt] address parsing
right took me a remarkably long time, not because any individual piece of it is hard but because it involved a pile
of interdependent and fussy details.

But multidrop addressing turned out to be an excellent design decision as well. Here’s how I knew:

14. Any tool should be useful in the expected way, but a truly great tool lends itself to uses you never expected.

The unexpected use for multidrop fetchmail is to run mailing lists with the list kept, and alias expansion done, on
the client side of the Internet connection. This means someone running a personal machine through an ISP
account can manage a mailing list without continuing access to the ISP’s alias files.

Another important change demanded by my beta-testers was support for 8-bit MIME (Multipurpose Internet Mail
Extensions) operation. This was pretty easy to do, because I had been careful to keep the code 8-bit clean (that is,
to not press the 8th bit, unused in the ASCII character set, into service to carry information within the program).
Not because I anticipated the demand for this feature, but rather in obedience to another rule:

15. When writing gateway software of any kind, take pains to disturb the data stream as little as possible—and never throw away information unless the recipient forces you to!

Had I not obeyed this rule, 8-bit MIME support would have been difficult and buggy. As it was, all I had to do was
read the MIME standard (RFC 1652 [http://info.internet.isi.edu:80/in-notes/rfc/files/rfc1652.txt]) and add a trivial
bit of header-generation logic.
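(A tiny, hypothetical illustration of the difference, not fetchmail code: an 8-bit-clean path copies each byte through untouched, while a non-clean path masks off the high bit and mangles anything beyond ASCII.)

    #include <stdio.h>

    /* 8-bit clean: the byte goes out exactly as it came in */
    static int pass_through(int c)   { return c; }

    /* NOT 8-bit clean: the high bit is thrown away, mangling Latin-1 and MIME */
    static int strip_high_bit(int c) { return c & 0x7F; }

    int main(void)
    {
        int c = 0xE9;   /* 'é' in ISO Latin-1 */
        printf("clean: 0x%02X   stripped: 0x%02X\n", pass_through(c), strip_high_bit(c));
        return 0;
    }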

Some European users bugged me into adding an option to limit the number of messages retrieved per session
(so they can control costs from their expensive phone networks). I resisted this for a long time, and I’m still not
entirely happy about it. But if you’re writing for the world, you have to listen to your customers—this doesn’t
change just because they’re not paying you in money.

A Few More Lessons from Fetchmail
Before we go back to general software-engineering issues, there are a couple more specific lessons from the
fetchmail experience to ponder. Nontechnical readers can safely skip this section.

The rc (control) file syntax includes optional ‘noise’ keywords that are entirely ignored by the parser. The English-
like syntax they allow is considerably more readable than the traditional terse keyword-value pairs you get when
you strip them all out.
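(A toy sketch of the mechanism; the noise words and the sample line below are invented for illustration rather than taken from fetchmail’s real grammar. The scanner simply throws the noise words away, so the parser never sees them:)

    #include <stdio.h>
    #include <string.h>

    /* illustrative noise words the scanner discards outright */
    static const char *noise[] = { "with", "and", "has", "options", NULL };

    static int is_noise(const char *word)
    {
        int i;
        for (i = 0; noise[i]; i++)
            if (strcmp(word, noise[i]) == 0)
                return 1;
        return 0;
    }

    int main(void)
    {
        char line[] = "poll mail.example.org with protocol pop3";
        char *tok = strtok(line, " ");
        while (tok) {
            if (!is_noise(tok))
                printf("token: %s\n", tok);  /* only meaningful keywords survive */
            tok = strtok(NULL, " ");
        }
        return 0;
    }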

These started out as a late-night experiment when I noticed how much the rc file declarations were beginning to
resemble an imperative minilanguage. (This is also why I changed the original popclient “server” keyword to
“poll”).

It seemed to me that trying to make that imperative minilanguage more like English might make it easier to use.
Now, although I’m a convinced partisan of the “make it a language” school of design as exemplified by Emacs
and HTML and many database engines, I am not normally a big fan of “English-like” syntaxes.

Traditionally programmers have tended to favor control syntaxes that are very precise and compact and have no
redundancy at all. This is a cultural legacy from when computing resources were expensive, so parsing stages
had to be as cheap and simple as possible. English, with about 50% redundancy, looked like a very inappropriate
model then.

This is not my reason for normally avoiding English-like syntaxes; I mention it here only to demolish it. With
cheap cycles and core, terseness should not be an end in itself. Nowadays it’s more important for a language to be
convenient for humans than to be cheap for the computer.

There remain, however, good reasons to be wary. One is the complexity cost of the parsing stage—you don’t want
to raise that to the point where it’s a significant source of bugs and user confusion in itself. Another is that trying
to make a language syntax English-like often demands that the “English” it speaks be bent seriously out of shape,
so much so that the superficial resemblance to natural language is as confusing as a traditional syntax would have
been. (You see this bad effect in a lot of so-called “fourth generation” and commercial database-query languages.)

The fetchmail control syntax seems to avoid these problems because the language domain is extremely restricted.
It’s nowhere near a general-purpose language; the things it says simply are not very complicated, so there’s little
potential for confusion in moving mentally between a tiny subset of English and the actual control language. I
think there may be a broader lesson here:

16. When your language is nowhere near Turing-complete, syntactic sugar can be your friend.

Another lesson is about security by obscurity. Some fetchmail users asked me to change the software to store
passwords encrypted in the rc file, so snoopers wouldn’t be able to casually see them.

I didn’t do it, because this doesn’t actually add protection. Anyone who’s acquired permissions to read your rc
file will be able to run fetchmail as you anyway—and if it’s your password they’re after, they’d be able to rip the
necessary decoder out of the fetchmail code itself to get it.

All .fetchmailrc password encryption would have done is give a false sense of security to people who don’t
think very hard. The general rule here is:

17. A security system is only as secure as its secret. Beware of pseudo-secrets.

Necessary Preconditions for the Bazaar Style
Early reviewers and test audiences for this essay consistently raised questions about the preconditions for
successful bazaar-style development, including both the qualifications of the project leader and the state of code
at the time one goes public and starts to try to build a co-developer community.

It’s fairly clear that one cannot code from the ground up in bazaar style [IN]. One can test, debug and improve in
bazaar style, but it would be very hard to originate a project in bazaar mode. Linus didn’t try it. I didn’t
either. Your nascent developer community needs to have something runnable and testable to play with.

When you start community-building, what you need to be able to present is a plausible promise.
Your program doesn’t have to work particularly well. It can be crude, buggy, incomplete, and poorly documented.
What it must not fail to do is (a) run, and (b) convince potential co-developers that it can be evolved into something
really neat in the foreseeable future.

Linux and fetchmail both went public with strong, attractive basic designs. Many people thinking about the bazaar
model as I have presented it have correctly considered this critical, then jumped from that to the conclusion that a
high degree of design intuition and cleverness in the project leader is indispensable.

But Linus got his design from Unix. I got mine initially from the ancestral popclient (though it would later change
a great deal, much more proportionately speaking than has Linux). So does the leader/coordinator for a bazaar-
style effort really have to have exceptional design talent, or can he get by through leveraging the design talent of
others?

I think it is not critical that the coordinator be able to originate designs of exceptional brilliance, but it is absolutely
critical that the coordinator be able to recognize good design ideas from others.

Both the Linux and fetchmail projects show evidence of this. Linus, while not (as previously discussed) a
spectacularly original designer, has displayed a powerful knack for recognizing good design and integrating it
into the Linux kernel. And I have already described how the single most powerful design idea in fetchmail (SMTP
forwarding) came from somebody else.

Early audiences of this essay complimented me by suggesting that I am prone to undervalue design originality in
bazaar projects because I have a lot of it myself, and therefore take it for granted. There may be some truth to this;
design (as opposed to coding or debugging) is certainly my strongest skill.

But the problem with being clever and original in software design is that it gets to be a habit—you start reflexively
making things cute and complicated when you should be keeping them robust and simple. I have had projects
crash on me because I made this mistake, but I managed to avoid this with fetchmail.

So I believe the fetchmail project succeeded partly because I restrained my tendency to be clever; this argues (at
least) against design originality being essential for successful bazaar projects. And consider Linux. Suppose Linus
Torvalds had been trying to pull off fundamental innovations in operating system design during the development;
does it seem at all likely that the resulting kernel would be as stable and successful as what we have?

A certain base level of design and coding skill is required, of course, but I expect almost anybody seriously
thinking of launching a bazaar effort will already be above that minimum. The open-source community’s internal
market in reputation exerts subtle pressure on people not to launch development efforts they’re not competent to
follow through on. So far this seems to have worked pretty well.

There is another kind of skill not normally associated with software development which I think is as important as
design cleverness to bazaar projects—and it may be more important. A bazaar project coordinator or leader must
have good people and communications skills.

This should be obvious. In order to build a development community, you need to attract people, interest them in
what you’re doing, and keep them happy about the amount of work they’re doing. Technical sizzle will go a long
way towards accomplishing this, but it’s far from the whole story. The personality you project matters, too.

It is not a coincidence that Linus is a nice guy who makes people like him and want to help him. It’s not a
coincidence that I’m an energetic extrovert who enjoys working a crowd and has some of the delivery and instincts
of a stand-up comic. To make the bazaar model work, it helps enormously if you have at least a little skill at
charming people.

The Social Context of Open-Source Software
It is truly written: the best hacks start out as personal solutions to the author’s everyday problems, and spread
because the problem turns out to be typical for a large class of users. This takes us back to the matter of rule 1,
restated in a perhaps more useful way:

18. To solve an interesting problem, start by finding a problem that is interesting to you.

So it was with Carl Harris and the ancestral popclient, and so with me and fetchmail. But this has been understood
for a long time. The interesting point, the point that the histories of Linux and fetchmail seem to demand we
focus on, is the next stage—the evolution of software in the presence of a large and active community of users and
co-developers.

In The Mythical Man-Month, Fred Brooks observed that programmer time is not fungible; adding developers to a
late software project makes it later. As we’ve seen previously, he argued that the complexity and communication
costs of a project rise with the square of the number of developers, while work done only rises linearly. Brooks’s
Law has been widely regarded as a truism. But we’ve examined in this essay a number of ways in which the
process of open-source development falsifies the assumptions behind it—and, empirically, if Brooks’s Law were
the whole picture Linux would be impossible.
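
As a back-of-the-envelope illustration (my own toy model, not Brooks's own arithmetic): give each developer one
unit of output and charge a small coordination cost for every pair of developers. Net output then peaks and
eventually collapses as the team grows, which is the quantitative core of Brooks's Law; the open-source
counter-argument developed below is that cheap peer review shrinks the per-pair constant enough to push that peak
far out.

    # Toy model of Brooks's Law: work scales linearly with developers, while
    # coordination cost scales with the number of pairs, n*(n-1)/2.
    def net_output(n: int, pair_cost: float = 0.02) -> float:
        return n - pair_cost * n * (n - 1) / 2

    for n in (1, 5, 10, 25, 50, 75, 100):
        print(f"{n:3d} developers -> net output {net_output(n):6.1f}")

    # With pair_cost = 0.02 the curve peaks around 50 developers and falls to
    # roughly zero at 100; halving the per-pair cost pushes the peak out to ~100.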

Gerald Weinberg’s classic The Psychology of Computer Programming supplied what, in hindsight, we can see
as a vital correction to Brooks. In his discussion of “egoless programming”, Weinberg observed that in shops
where developers are not territorial about their code, and encourage other people to look for bugs and potential
improvements in it, improvement happens dramatically faster than elsewhere. (Recently, Kent Beck’s ‘extreme
programming’ technique of deploying coders in pairs looking over one another’s shoulders might be seen as an
attempt to force this effect.)

Weinberg’s choice of terminology has perhaps prevented his analysis from gaining the acceptance it
deserved—one has to smile at the thought of describing Internet hackers as “egoless”. But I think his argument
looks more compelling today than ever.

The bazaar method, by harnessing the full power of the “egoless programming” effect, strongly mitigates the
effect of Brooks’s Law. The principle behind Brooks’s Law is not repealed, but given a large developer population
and cheap communications its effects can be swamped by competing nonlinearities that are not otherwise visible.
This resembles the relationship between Newtonian and Einsteinian physics—the older system is still valid at low
energies, but if you push mass and velocity high enough you get surprises like nuclear explosions or Linux.

The history of Unix should have prepared us for what we’re learning from Linux (and what I’ve verified
experimentally on a smaller scale by deliberately copying Linus’s methods [EGCS]). That is, while coding remains
an essentially solitary activity, the really great hacks come from harnessing the attention and brainpower of entire
communities. The developer who uses only his or her own brain in a closed project is going to fall behind
the developer who knows how to create an open, evolutionary context in which feedback exploring the design
space, code contributions, bug-spotting, and other improvements come from hundreds (perhaps thousands)
of people.

But the traditional Unix world was prevented from pushing this approach to the ultimate by several factors. One
was the legal constraints of various licenses, trade secrets, and commercial interests. Another (in hindsight) was
that the Internet wasn’t yet good enough.

Before cheap Internet, there were some geographically compact communities where the culture encouraged
Weinberg’s “egoless” programming, and a developer could easily attract a lot of skilled kibitzers and co-
developers. Bell Labs, the MIT AI and LCS labs, UC Berkeley—these became the home of innovations that
are legendary and still potent.

Linux was the first project for which a conscious and successful effort to use the entire world as its talent
pool was made. I don’t think it’s a coincidence that the gestation period of Linux coincided with the birth of the
World Wide Web, and that Linux left its infancy during the same period in 1993–1994 that saw the takeoff of the
ISP industry and the explosion of mainstream interest in the Internet. Linus was the first person who learned how
to play by the new rules that pervasive Internet access made possible.

While cheap Internet was a necessary condition for the Linux model to evolve, I think it was not by itself a
sufficient condition. Another vital factor was the development of a leadership style and set of cooperative customs
that could allow developers to attract co-developers and get maximum leverage out of the medium.

But what is this leadership style and what are these customs? They cannot be based on power relationships—and
even if they could be, leadership by coercion would not produce the results we see. Weinberg quotes the
autobiography of the 19th-century Russian anarchist Pyotr Alexeyvich Kropotkin, Memoirs of a Revolutionist,
to good effect on this subject:

Having been brought up in a serf-owner’s family, I entered active life, like all young men of my time, with a great
deal of confidence in the necessity of commanding, ordering, scolding, punishing and the like. But when, at an early
stage, I had to manage serious enterprises and to deal with [free] men, and when each mistake would lead at once to
heavy consequences, I began to appreciate the difference between acting on the principle of command and discipline
and acting on the principle of common understanding. The former works admirably in a military parade, but it is
worth nothing where real life is concerned, and the aim can be achieved only through the severe effort of many
converging wills.

The “severe effort of many converging wills” is precisely what a project like Linux requires—and the “principle of
command” is effectively impossible to apply among volunteers in the anarchist’s paradise we call the Internet. To
operate and compete effectively, hackers who want to lead collaborative projects have to learn how to recruit
and energize effective communities of interest in the mode vaguely suggested by Kropotkin’s “principle of
understanding”. They must learn to use Linus’s Law.[SP]

Earlier I referred to the “Delphi effect” as a possible explanation for Linus’s Law. But more powerful analogies
to adaptive systems in biology and economics also irresistibly suggest themselves. The Linux world behaves
in many respects like a free market or an ecology, a collection of selfish agents attempting to maximize utility
which in the process produces a self-correcting spontaneous order more elaborate and efficient than any amount
of central planning could have achieved. Here, then, is the place to seek the “principle of understanding”.

The “utility function” Linux hackers are maximizing is not classically economic, but is the intangible of their own
ego satisfaction and reputation among other hackers. (One may call their motivation “altruistic”, but this ignores
the fact that altruism is itself a form of ego satisfaction for the altruist). Voluntary cultures that work this way
are not actually uncommon; one other in which I have long participated is science fiction fandom, which unlike
hackerdom has long explicitly recognized “egoboo” (ego-boosting, or the enhancement of one’s reputation among
other fans) as the basic drive behind volunteer activity.

Linus, by successfully positioning himself as the gatekeeper of a project in which the development is mostly
done by others, and nurturing interest in the project until it became self-sustaining, has shown an acute grasp of
Kropotkin’s “principle of shared understanding”. This quasi-economic view of the Linux world enables us to see
how that understanding is applied.

We may view Linus’s method as a way to create an efficient market in “egoboo”—to connect the selfishness of
individual hackers as firmly as possible to difficult ends that can only be achieved by sustained cooperation. With
the fetchmail project I have shown (albeit on a smaller scale) that his methods can be duplicated with good results.
Perhaps I have even done it a bit more consciously and systematically than he.

Many people (especially those who politically distrust free markets) would expect a culture of self-directed egoists
to be fragmented, territorial, wasteful, secretive, and hostile. But this expectation is clearly falsified by (to
give just one example) the stunning variety, quality, and depth of Linux documentation. It is a hallowed given
that programmers hate documenting; how is it, then, that Linux hackers generate so much documentation?
Evidently Linux’s free market in egoboo works better to produce virtuous, other-directed behavior than the
massively-funded documentation shops of commercial software producers.

Both the fetchmail and Linux kernel projects show that by properly rewarding the egos of many other hackers, a
strong developer/coordinator can use the Internet to capture the benefits of having lots of co-developers without
having a project collapse into a chaotic mess. So to Brooks’s Law I counter-propose the following:

19. Provided the development coordinator has a communications medium at least as good as the Internet, and knows
how to lead without coercion, many heads are inevitably better than one.

I think the future of open-source software will increasingly belong to people who know how to play Linus’s game,
people who leave behind the cathedral and embrace the bazaar. This is not to say that individual vision and
brilliance will no longer matter; rather, I think that the cutting edge of open-source software will belong to people
who start from individual vision and brilliance, then amplify it through the effective construction of voluntary
communities of interest.

Perhaps this is not only the future of open-source software. No closed-source developer can match
the pool of talent the Linux community can bring to bear on a problem. Very few could afford even to hire the
more than 200 (1999: 600, 2000: 800) people who have contributed to fetchmail!

Perhaps in the end the open-source culture will triumph not because cooperation is morally right or software
“hoarding” is morally wrong (assuming you believe the latter, which neither Linus nor I do), but simply because
the closed-source world cannot win an evolutionary arms race with open-source communities that can put orders
of magnitude more skilled time into a problem.

On Management and the Maginot Line
The original Cathedral and Bazaar paper of 1997 ended with the vision above—that of happy networked hordes of
programmer/anarchists outcompeting and overwhelming the hierarchical world of conventional closed software.

A good many skeptics weren’t convinced, however; and the questions they raise deserve a fair engagement. Most
of the objections to the bazaar argument come down to the claim that its proponents have underestimated the
productivity-multiplying effect of conventional management.

Traditionally-minded software-development managers often object that the casualness with which project groups
form and change and dissolve in the open-source world negates a significant part of the apparent advantage of
numbers that the open-source community has over any single closed-source developer. They would observe that
in software development it is really sustained effort over time and the degree to which customers can expect
continuing investment in the product that matters, not just how many people have thrown a bone in the pot and left
it to simmer.

There is something to this argument, to be sure; in fact, I have developed the idea that expected future service value
is the key to the economics of software production in the essay The Magic Cauldron
[http://www.tuxedo.org/~esr/writings/magic-cauldron/].

But this argument also has a major hidden problem: its implicit assumption that open-source development cannot
deliver such sustained effort. In fact, there have been open-source projects that maintained a coherent direction
and an effective maintainer community over quite long periods of time without the kinds of incentive structures
or institutional controls that conventional management finds essential. The development of the GNU Emacs
editor is an extreme and instructive example; it has absorbed the efforts of hundreds of contributors over 15 years
into a unified architectural vision, despite high turnover and the fact that only one person (its author) has been
continuously active during all that time. No closed-source editor has ever matched this longevity record.

This suggests a reason for questioning the advantages of conventionally-managed software development that is
independent of the rest of the arguments over cathedral vs. bazaar mode. If it’s possible for GNU Emacs to
express a consistent architectural vision over 15 years, or for an operating system like Linux to do the same over 8
years of rapidly changing hardware and platform technology; and if (as is indeed the case) there have been many
well-architected open-source projects of more than 5 years duration — then we are entitled to wonder what, if
anything, the tremendous overhead of conventionally-managed development is actually buying us.

Whatever it is, it certainly doesn’t include reliable execution by deadline, or on budget, or to all features of the
specification; it’s a rare ‘managed’ project that meets even one of these goals, let alone all three. It also does not
appear to be the ability to adapt to changes in technology and economic context during the project lifetime, either;
the open-source community has proven far more effective on that score (as one can readily verify, for example, by
comparing the 30-year history of the Internet with the short half-lives of proprietary networking technologies—or
the cost of the 16-bit to 32-bit transition in Microsoft Windows with the nearly effortless upward migration of
Linux during the same period, not only along the Intel line of development but to more than a dozen other hardware
platforms, including the 64-bit Alpha as well).

One thing many people think the traditional mode buys you is somebody to hold legally liable and potentially
recover compensation from if the project goes wrong. But this is an illusion; most software licenses are written to
disclaim even warranty of merchantability, let alone performance—and cases of successful recovery for software
nonperformance are vanishingly rare. Even if they were common, feeling comforted by having somebody to sue
would be missing the point. You didn’t want to be in a lawsuit; you wanted working software.

So what is all that management overhead buying?

In order to understand that, we need to understand what software development managers believe they do. A
woman I know who seems to be very good at this job says software project management has five functions:

• To define goals and keep everybody pointed in the same direction

• To monitor and make sure crucial details don’t get skipped

• To motivate people to do boring but necessary drudgework

• To organize the deployment of people for best productivity

• To marshal resources needed to sustain the project

Apparently worthy goals, all of these; but under the open-source model, and in its surrounding social context, they
can begin to seem strangely irrelevant. We’ll take them in reverse order.

My friend reports that a lot of resource marshalling is basically defensive; once you have
your people and machines and office space, you have to defend them from peer managers competing for the same
resources, and from higher-ups trying to allocate the most efficient use of a limited pool.

But open-source developers are volunteers, self-selected for both interest and ability to contribute to the projects
they work on (and this remains generally true even when they are being paid a salary to hack open source.) The
volunteer ethos tends to take care of the ‘attack’ side of resource-marshalling automatically; people bring their
own resources to the table. And there is little or no need for a manager to ‘play defense’ in the conventional sense.

Anyway, in a world of cheap PCs and fast Internet links, we find pretty consistently that the only really limiting
resource is skilled attention. Open-source projects, when they founder, essentially never do so for want of
machines or links or office space; they die only when the developers themselves lose interest.

That being the case, it’s doubly important that open-source hackers organize themselves for
maximum productivity by self-selection—and the social milieu selects ruthlessly for competence. My friend,
familiar with both the open-source world and large closed projects, believes that open source has been successful
partly because its culture only accepts the most talented 5% or so of the programming population. She spends
most of her time organizing the deployment of the other 95%, and has thus observed first-hand the well-known
variance of a factor of one hundred in productivity between the most able programmers and the merely competent.

The size of that variance has always raised an awkward question: would individual projects, and the field as a
whole, be better off without more than 50% of the least able in it? Thoughtful managers have understood for a
long time that if conventional software management’s only function were to convert the least able from a net loss
to a marginal win, the game might not be worth the candle.

The success of the open-source community sharpens this question considerably, by providing hard evidence that
it is often cheaper and more effective to recruit self-selected volunteers from the Internet than it is to manage
buildings full of people who would rather be doing something else.

Which brings us neatly to the question of motivation. An equivalent and often-heard way to state
my friend’s point is that traditional development management is a necessary compensation for poorly motivated
programmers who would not otherwise turn out good work.

This answer usually travels with a claim that the open-source community can only be relied on to do work that
is ‘sexy’ or technically sweet; anything else will be left undone (or done only poorly) unless it’s churned out by
money-motivated cubicle peons with managers cracking whips over them. I address the psychological and social
reasons for being skeptical of this claim in Homesteading the Noosphere [http://www.tuxedo.org/~esr/magic-cauldron/].
For present purposes, however, I think it’s more interesting to point out the implications of accepting
it as true.

If the conventional, closed-source, heavily-managed style of software development is really defended only by
a sort of Maginot Line of problems conducive to boredom, then it’s going to remain viable in each individual
application area for only so long as nobody finds those problems really interesting and nobody else finds any way
to route around them. Because the moment there is open-source competition for a ‘boring’ piece of software,
customers are going to know that it was finally tackled by someone who chose that problem to solve because of a
fascination with the problem itself—which, in software as in other kinds of creative work, is a far more effective
motivator than money alone.

Having a conventional management structure solely in order to motivate, then, is probably good tactics but bad
strategy; a short-term win, but in the longer term a surer loss.

So far, conventional development management looks like a bad bet now against open source on two points
(resource marshalling, organization), and like it’s living on borrowed time with respect to a third (motivation).
And the poor beleaguered conventional manager is not going to get any succour from the monitoring
issue; the strongest argument the open-source community has is that decentralized peer review trumps all the
conventional methods for trying to ensure that details don’t get slipped.

Can we save defining goals as a justification for the overhead of conventional software project
management? Perhaps; but to do so, we’ll need good reason to believe that management committees and corporate
roadmaps are more successful at defining worthy and widely shared goals than the project leaders and tribal elders
who fill the analogous role in the open-source world.

That is on the face of it a pretty hard case to make. And it’s not so much the open-source side of the balance (the
longevity of Emacs, or Linus Torvalds’s ability to mobilize hordes of developers with talk of “world domination”)
that makes it tough. Rather, it’s the demonstrated awfulness of conventional mechanisms for defining the goals of
software projects.

One of the best-known folk theorems of software engineering is that 60% to 75% of conventional software projects
either are never completed or are rejected by their intended users. If that range is anywhere near true (and I’ve
never met a manager of any experience who disputes it) then more projects than not are being aimed at goals that
are either (a) not realistically attainable, or (b) just plain wrong.

This, more than any other problem, is the reason that in today’s software engineering world the very phrase
“management committee” is likely to send chills down the hearer’s spine—even (or perhaps especially) if the
hearer is a manager. The days when only programmers griped about this pattern are long past; Dilbert cartoons
hang over executives’ desks now.

Our reply, then, to the traditional software development manager, is simple—if the open-source community has
really underestimated the value of conventional management, why do so many of you display contempt for your
own process?

Once again the example of the open-source community sharpens this question considerably—because we have
fun doing what we do. Our creative play has been racking up technical, market-share, and mind-share successes
at an astounding rate. We’re proving not only that we can do better software, but that joy is an asset.

Two and a half years after the first version of this essay, the most radical thought I can offer to close with is no
longer a vision of an open-source–dominated software world; that, after all, looks plausible to a lot of sober people
in suits these days.

Rather, I want to suggest what may be a wider lesson about software (and probably about every kind of creative
or professional work). Human beings generally take pleasure in a task when it falls in a sort of optimal-challenge
zone; not so easy as to be boring, not too hard to achieve. A happy programmer is one who is neither
underutilized nor weighed down with ill-formulated goals and stressful process friction. Enjoyment predicts
efficiency.

Relating to your own work process with fear and loathing (even in the displaced, ironic way suggested by hanging
up Dilbert cartoons) should therefore be regarded in itself as a sign that the process has failed. Joy, humor, and
playfulness are indeed assets; it was not mainly for the alliteration that I wrote of “happy hordes” above, and it is
no mere joke that the Linux mascot is a cuddly, neotenous penguin.

It may well turn out that one of the most important effects of open source’s success will be to teach us that play is
the most economically efficient mode of creative work.

Epilog: Netscape Embraces the Bazaar
It’s a strange feeling to realize you’re helping make history….

On January 22 1998, approximately seven months after I first published The Cathedral and the Bazaar,
Netscape Communications, Inc. announced plans to give away the source for Netscape Communicator
[http://www.netscape.com/newsref/pr/newsrelease558.html]. I had had no clue this was going to happen before the day
of the announcement.

Eric Hahn, executive vice president and chief technology officer at Netscape, emailed me shortly afterwards as
follows: “On behalf of everyone at Netscape, I want to thank you for helping us get to this point in the first place.
Your thinking and writings were fundamental inspirations to our decision.”

The following week I flew out to Silicon Valley at Netscape’s invitation for a day-long strategy conference (on 4
Feb 1998) with some of their top executives and technical people. We designed Netscape’s source-release strategy
and license together.

A few days later I wrote the following:

Netscape is about to provide us with a large-scale, real-world test of the bazaar model in the commercial world.
The open-source culture now faces a danger; if Netscape’s execution doesn’t work, the open-source concept may
be so discredited that the commercial world won’t touch it again for another decade.

On the other hand, this is also a spectacular opportunity. Initial reaction to the move on Wall Street and elsewhere
has been cautiously positive. We’re being given a chance to prove ourselves, too. If Netscape regains substantial
market share through this move, it just may set off a long-overdue revolution in the software industry.

The next year should be a very instructive and interesting time.

And indeed it was. As I write in mid-2000, the development of what was later named Mozilla has been only
a qualified success. It achieved Netscape’s original goal, which was to deny Microsoft a monopoly lock on the
browser market. It has also achieved some dramatic successes (notably the release of the next-generation Gecko
rendering engine).

However, it has not yet garnered the massive development effort from outside Netscape that the Mozilla founders
had originally hoped for. The problem here seems to be that for a long time the Mozilla distribution actually broke
one of the basic rules of the bazaar model; it didn’t ship with something potential contributors could easily run
and see working. (Until more than a year after release, building Mozilla from source required a license for the
proprietary Motif library.)

Most negatively (from the point of view of the outside world) the Mozilla group didn’t ship a production-quality
browser for two and a half years after the project launch—and in 1999 one of the project’s principals caused a
bit of a sensation by resigning, complaining of poor management and missed opportunities. “Open source,” he
correctly observed, “is not magic pixie dust.”

And indeed it is not. The long-term prognosis for Mozilla looks dramatically better now (in November 2000) than
it did at the time of Jamie Zawinski’s resignation letter—in the last few weeks the nightly releases have finally
passed the critical threshold to production usability. But Jamie was right to point out that going open will not
necessarily save an existing project that suffers from ill-defined goals or spaghetti code or any of software
engineering’s other chronic ills. Mozilla has managed to provide an example simultaneously of how open source
can succeed and how it could fail.

In the mean time, however, the open-source idea has scored successes and found backers elsewhere. Since the
Netscape release we’ve seen a tremendous explosion of interest in the open-source development model, a trend
both driven by and driving the continuing success of the Linux operating system. The trend Mozilla touched off
is continuing at an accelerating rate.

Notes
[JB] In Programming Pearls, the noted computer-science aphorist Jon Bentley comments on Brooks’s observation
with “If you plan to throw one away, you will throw away two.” He is almost certainly right. The point of
Brooks’s observation, and Bentley’s, isn’t merely that you should expect your first attempt to be wrong, it’s that
starting over with the right idea is usually more effective than trying to salvage a mess.

[QR] Examples of successful open-source, bazaar development predating the Internet explosion and unrelated
to the Unix and Internet traditions have existed. The development of the info-Zip
[http://www.cdrom.com/pub/infozip/] compression utility during 1990–1992, primarily for DOS machines, was one such example.
Another was the RBBS bulletin board system (again for DOS), which began in 1983 and developed a sufficiently
strong community that there have been fairly regular releases up to the present (mid-1999) despite the huge
technical advantages of Internet mail and file-sharing over local BBSs. While the info-Zip community relied
to some extent on Internet mail, the RBBS developer culture was actually able to base a substantial on-line
community on RBBS that was completely independent of the TCP/IP infrastructure.

[CV] That transparency and peer review are valuable for taming the complexity of OS development turns out,
after all, not to be a new concept. In 1965, very early in the history of time-sharing operating systems, Corbató
and Vyssotsky, co-designers of the Multics operating system, wrote [http://www.multicians.org/fjcc1.html]:

It is expected that the Multics system will be published when it is operating substantially… Such publication is
desirable for two reasons: First, the system should withstand public scrutiny and criticism volunteered by interested
readers; second, in an age of increasing complexity, it is an obligation to present and future system designers to
make the inner operating system as lucid as possible so as to reveal the basic system issues.

[JH] John Hasler has suggested an interesting explanation for the fact that duplication of effort doesn’t seem
to be a net drag on open-source development. He proposes what I’ll dub “Hasler’s Law”: the costs of duplicated
work tend to scale sub-quadratically with team size—that is, more slowly than the planning and management
overhead that would be needed to eliminate them.

This claim actually does not contradict Brooks’s Law. It may be the case that total complexity overhead and
vulnerability to bugs scales with the square of team size, but that the costs from duplicated work are
nevertheless a special case that scales more slowly. It’s not hard to develop plausible reasons for this, starting with
the undoubted fact that it is much easier to agree on functional boundaries between different developers’ code that
will prevent duplication of effort than it is to prevent the kinds of unplanned bad interactions across the whole
system that underlie most bugs.

The combination of Linus’s Law and Hasler’s Law suggests that there are actually three critical size regimes in
software projects. On small projects (I would say one to at most three developers) no management structure more
elaborate than picking a lead programmer is needed. And there is some intermediate range above that in which the
cost of traditional management is relatively low, so its benefits from avoiding duplication of effort, bug-tracking,
and pushing to see that details are not overlooked actually net out positive.

Above that, however, the combination of Linus’s Law and Hasler’s Law suggests there is a large-project range in
which the costs and problems of traditional management rise much faster than the expected cost from duplication
of effort. Not the least of these costs is a structural inability to harness the many-eyeballs effect, which (as
we’ve seen) seems to do a much better job than traditional management at making sure bugs and details are not
overlooked. Thus, in the large-project case, the combination of these laws effectively drives the net payoff of
traditional management to zero.
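
A toy comparison makes the three regimes visible. The functional forms below are my own guesses for illustration;
the note itself only claims that duplication cost is sub-quadratic while management and complexity overhead are
roughly quadratic.

    import math

    # Toy comparison of the two overheads in the argument above.  The functional
    # forms are illustrative guesses: the note only claims duplication cost is
    # sub-quadratic while management/complexity overhead is roughly quadratic.
    def duplication_cost(n: int, a: float = 1.0) -> float:
        return a * n * math.log2(n + 1)            # assumed sub-quadratic growth

    def management_cost(n: int, b: float = 0.05, setup: float = 20.0) -> float:
        return setup + b * n * (n - 1) / 2         # fixed setup plus quadratic term

    for n in (2, 5, 20, 100, 500, 1000):
        dup, mgmt = duplication_cost(n), management_cost(n)
        verdict = "management nets out positive" if mgmt < dup else "not worth managing"
        print(f"n={n:4d}  duplication={dup:9.1f}  management={mgmt:9.1f}  {verdict}")

    # Small teams: the overhead of managing exceeds the little duplication there is.
    # Mid-sized teams: management is cheaper than the duplication it prevents.
    # Large teams: the quadratic term dominates and the net payoff goes to zero.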

[HBS] The split between Linux’s experimental and stable versions has another function related to, but
distinct from, hedging risk. The split attacks another problem: the deadliness of deadlines. When programmers
are held both to an immutable feature list and a fixed drop-dead date, quality goes out the window and there
is likely a colossal mess in the making. I am indebted to Marco Iansiti and Alan MacCormack of the Harvard
Business School for showing me evidence that relaxing either one of these constraints can make scheduling
workable.

One way to do this is to fix the deadline but leave the feature list flexible, allowing features to drop off if not
completed by deadline. This is essentially the strategy of the “stable” kernel branch; Alan Cox (the stable-kernel
maintainer) puts out releases at fairly regular intervals, but makes no guarantees about when particular bugs will
be fixed or what features will be back-ported from the experimental branch.

The other way to do this is to set a desired feature list and deliver only when it is done. This is essentially the
strategy of the “experimental” kernel branch. De Marco and Lister cited research showing that this scheduling
policy (“wake me up when it’s done”) produces not only the highest quality but, on average, shorter delivery times
than either “realistic” or “aggressive” scheduling.

I have come to suspect (as of early 2000) that in earlier versions of this essay I severely underestimated the
importance of the “wake me up when it’s done” anti-deadline policy to the open-source community’s productivity
and quality. General experience with the rushed GNOME 1.0 release in 1999 suggests that pressure for a premature
release can neutralize many of the quality benefits open source normally confers.

It may well turn out to be that the process transparency of open source is one of three co-equal drivers of its quality,
along with “wake me up when it’s done” scheduling and developer self-selection.

[SU] It’s tempting, and not entirely inaccurate, to see the core-plus-halo organization characteristic of
open-source projects as an Internet-enabled spin on Brooks’s own recommendation for solving the N-squared
complexity problem, the “surgical-team” organization—but the differences are significant. The constellation
of specialist roles such as “code librarian” that Brooks envisioned around the team leader doesn’t really exist;
those roles are executed instead by generalists aided by toolsets quite a bit more powerful than those of Brooks’s
day. Also, the open-source culture leans heavily on strong Unix traditions of modularity, APIs, and information
hiding—none of which were elements of Brooks’s prescription.

[RJ] The respondent who pointed out to me the effect of widely varying trace path lengths on the difficulty
of characterizing a bug speculated that trace-path difficulty for multiple symptoms of the same bug varies
“exponentially” (which I take to mean on a Gaussian or Poisson distribution, and agree seems very plausible).
If it is experimentally possible to get a handle on the shape of this distribution, that would be extremely valuable
data. Large departures from a flat equal-probability distribution of trace difficulty would suggest that even solo
developers should emulate the bazaar strategy by bounding the time they spend on tracing a given symptom before
they switch to another. Persistence may not always be a virtue…
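
A quick simulation of that suggestion (my own toy model: the lognormal distribution and its parameters are
assumptions, since the note only speculates about the shape) shows why time-boxing helps when trace difficulty is
heavy-tailed: a developer who abandons a stubborn symptom after a fixed budget and switches to a fresh one avoids
the long tail entirely.

    import random, statistics

    random.seed(42)

    # Assumptions (mine): each symptom's trace time is an independent draw from a
    # heavy-tailed lognormal, and the bug shows enough distinct symptoms that a
    # developer can always switch to a fresh one.
    def trace_time() -> float:
        return random.lognormvariate(0.0, 2.0)     # median 1 unit, mean about 7.4

    def stubborn() -> float:
        """Pick one symptom and trace it to the bitter end."""
        return trace_time()

    def time_boxed(budget: float = 1.0) -> float:
        """Spend at most `budget` on a symptom; if it isn't cracked, switch."""
        total = 0.0
        while True:
            t = trace_time()
            if t <= budget:
                return total + t
            total += budget

    N = 100_000
    print("stubborn   :", statistics.mean(stubborn() for _ in range(N)))
    print("time-boxed :", statistics.mean(time_boxed() for _ in range(N)))
    # The time-boxed strategy finishes several times faster on average here,
    # because it never gets stuck in the distribution's long tail.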

[IN] An issue related to whether one can start projects from zero in the bazaar style is whether the bazaar
style is capable of supporting truly innovative work. Some claim that, lacking strong leadership, the bazaar can
only handle the cloning and improvement of ideas already present at the engineering state of the art, but is unable
to push the state of the art. This argument was perhaps most infamously made by the Halloween Documents
[http://www.opensource.org/halloween/], two embarrassing internal Microsoft memoranda written about the open-
source phenomenon. The authors compared Linux’s development of a Unix-like operating system to “chasing
taillights”, and opined that “(once a project has achieved ‘parity’ with the state-of-the-art), the level of management
necessary to push towards new frontiers becomes massive.”

There are serious errors of fact implied in this argument. One is exposed when the Halloween authors themselves
later observe that “often […] new research ideas are first implemented and available on Linux before they are
available / incorporated into other platforms.”

If we read “open source” for “Linux”, we see that this is far from a new phenomenon. Historically, the open-
source community did not invent Emacs or the World Wide Web or the Internet itself by chasing taillights or
being massively managed—and in the present, there is so much innovative work going on in open source that
one is spoiled for choice. The GNOME project (to pick one of many) is pushing the state of the art in GUIs and
object technology hard enough to have attracted considerable notice in the computer trade press well outside the
Linux community. Other examples are legion, as a visit to Freshmeat [http://freshmeat.net/] on any given day will
quickly prove.

But there is a more fundamental error in the implicit assumption that the cathedral model (or the
bazaar model, or any other kind of management structure) can somehow make innovation happen reliably. This
is nonsense. Gangs don’t have breakthrough insights—even volunteer groups of bazaar anarchists are usually
incapable of genuine originality, let alone corporate committees of people with a survival stake in some status quo
ante. Insight comes from individuals. The most their surrounding social machinery can ever hope to do is to be
responsive to breakthrough insights—to nourish and reward and rigorously test them instead of squashing them.

Some will characterize this as a romantic view, a reversion to outmoded lone-inventor stereotypes. Not so; I
am not asserting that groups are incapable of developing breakthrough insights once they have been
hatched; indeed, we learn from the peer-review process that such development groups are essential to producing
a high-quality result. Rather I am pointing out that every such group development starts from—is necessarily
sparked by—one good idea in one person’s head. Cathedrals and bazaars and other social structures can catch that
lightning and refine it, but they cannot make it on demand.

Therefore the root problem of innovation (in software, or anywhere else) is indeed how not to squash it—but, even
more fundamentally, it is how to grow lots of people who can have insights in the first place.

To suppose that cathedral-style development could manage this trick but the low entry barriers and process fluidity
of the bazaar cannot would be absurd. If what it takes is one person with one good idea, then a social milieu in
which one person can rapidly attract the cooperation of hundreds or thousands of others with that good idea is
going inevitably to out-innovate any in which the person has to do a political sales job to a hierarchy before he can
work on his idea without risk of getting fired.

And, indeed, if we look at the history of software innovation by organizations using the cathedral model, we
quickly find it is rather rare. Large corporations rely on university research for new ideas (thus the Halloween
Documents authors’ unease about Linux’s facility at coopting that research more rapidly). Or they buy out small
companies built around some innovator’s brain. In neither case is the innovation native to the cathedral culture;
indeed, many innovations so imported end up being quietly suffocated under the “massive level of management”
the Halloween Documents’ authors so extol.

That, however, is a negative point. The reader would be better served by a positive one. I suggest, as an experiment,
the following:

• Pick a criterion for originality that you believe you can apply consistently. If your definition is “I know it when
I see it”, that’s not a problem for purposes of this test.

• Pick any closed-source operating system competing with Linux, and a best source for accounts of current
development work on it.

• Watch that source and Freshmeat for one month. Every day, count the number of release announcements on
Freshmeat that you consider ‘original’ work. Apply the same definition of ‘original’ to announcements for
that other OS and count them.

• Thirty days later, total up both figures.

The day I wrote this, Freshmeat carried twenty-two release announcements, of which three appeared as though they
might push the state of the art in some respect. This was a slow day for Freshmeat, but I will be astonished if any
reader reports as many as three likely innovations a month in any closed-source channel.

[EGCS] We now have history on a project that, in several ways, may provide a more indicative test of the
bazaar premise than fetchmail; EGCS [http://egcs.cygnus.com/], the Experimental GNU Compiler System.

This project was announced in mid-August of 1997 as a conscious attempt to apply the ideas in the early public
versions of The Cathedral and the Bazaar. The project founders felt that the development of GCC, the Gnu
C Compiler, had been stagnating. For about twenty months afterwards, GCC and EGCS continued as parallel
products—both drawing from the same Internet developer population, both starting from the same GCC source
base, both using pretty much the same Unix toolsets and development environment. The projects differed only in
that EGCS consciously tried to apply the bazaar tactics I have previously described, while GCC retained a more
cathedral-like organization with a closed developer group and infrequent releases.

This was about as close to a controlled experiment as one could ask for, and the results were dramatic. Within
months, the EGCS versions had pulled substantially ahead in features: better optimization, better support for
FORTRAN and C++. Many people found the EGCS development snapshots to be more reliable than the most
recent stable version of GCC, and major Linux distributions began to switch to EGCS.

In April of 1999, the Free Software Foundation (the official sponsors of GCC) dissolved the original GCC
development group and officially handed control of the project to the EGCS steering team.

[SP] Of course, Kropotkin’s critique and Linus’s Law raise some wider issues about the cybernetics of social
organizations. Another folk theorem of software engineering suggests one of them; Conway’s Law—commonly
stated as “If you have four groups working on a compiler, you’ll get a 4-pass compiler”. The original statement
was more general: “Organizations which design systems are constrained to produce designs which are copies of
the communication structures of these organizations.” We might put it more succinctly as “The means determine
the ends”, or even “Process becomes product”.

It is accordingly worth noting that in the open-source community organizational form and function match on
many levels. The network is everything and everywhere: not just the Internet, but the people doing the work
form a distributed, loosely coupled, peer-to-peer network that provides multiple redundancy and degrades very
gracefully. In both networks, each node is important only to the extent that other nodes want to cooperate with it.

The peer-to-peer part is essential to the community’s astonishing productivity. The point Kropotkin was trying to
make about power relationships is developed further by the ‘SNAFU Principle’: “True communication is possible
only between equals, because inferiors are more consistently rewarded for telling their superiors pleasant lies
than for telling the truth.” Creative teamwork utterly depends on true communication and is thus very seriously
hindered by the presence of power relationships. The open-source community, effectively free of such power
relationships, is teaching us by contrast how dreadfully much they cost in bugs, in lowered productivity, and in
lost opportunities.

Further, the SNAFU principle predicts in authoritarian organizations a progressive disconnect between decision-
makers and reality, as more and more of the input to those who decide tends to become pleasant lies. The way this
plays out in conventional software development is easy to see; there are strong incentives for the inferiors to hide,
ignore, and minimize problems. When this process becomes product, software is a disaster.

Bibliography
I quoted several bits from Frederick P. Brooks’s classic The Mythical Man-Month because, in many respects, his
insights have yet to be improved upon. I heartily recommend the 25th Anniversary edition from Addison-Wesley
(ISBN 0-201-83595-9), which adds his 1986 “No Silver Bullet” paper.

The new edition is wrapped up by an invaluable 20-years-later retrospective in which Brooks forthrightly admits
to the few judgements in the original text which have not stood the test of time. I first read the retrospective
after the first public version of this essay was substantially complete, and was surprised to discover that Brooks
attributed bazaar-like practices to Microsoft! (In fact, however, this attribution turned out to be mistaken. In 1998
we learned from the Halloween Documents [http://www.opensource.org/halloween/] that Microsoft’s internal
developer community is heavily balkanized, with the kind of general source access needed to support a bazaar
not even truly possible.)

Gerald M. Weinberg’s The Psychology Of Computer Programming (New York, Van Nostrand Reinhold 1971)
introduced the rather unfortunately-labeled concept of “egoless programming”. While he was nowhere near the
first person to realize the futility of the “principle of command”, he was probably the first to recognize and argue
the point in particular connection with software development.

Richard P. Gabriel, contemplating the Unix culture of the pre-Linux era, reluctantly argued for the superiority of
a primitive bazaar-like model in his 1989 paper “LISP: Good News, Bad News, and How To Win Big”. Though
dated in some respects, this essay is still rightly celebrated among LISP fans (including me). A correspondent
reminded me that the section titled “Worse Is Better” reads almost as an anticipation of Linux. The paper is
accessible on the World Wide Web at http://www.naggum.no/worse-is-better.html.

De Marco and Lister’s Peopleware: Productive Projects and Teams (New York: Dorset House, 1987; ISBN 0-
932633-05-6) is an underappreciated gem which I was delighted to see Fred Brooks cite in his retrospective.
While little of what the authors have to say is directly applicable to the Linux or open-source communities, the
authors’ insight into the conditions necessary for creative work is acute and worthwhile for anyone attempting to
import some of the bazaar model’s virtues into a commercial context.

Finally, I must admit that I very nearly called this essay “The Cathedral and the Agora”, the latter term being
the Greek for an open market or public meeting place. The seminal “agoric systems” papers by Mark Miller and
Eric Drexler, by describing the emergent properties of market-like computational ecologies, helped prepare me
to think clearly about analogous phenomena in the open-source culture when Linux rubbed my nose in them five
years later. These papers are available on the Web at http://www.agorics.com/agorpapers.html.

Acknowledgements
This essay was improved by conversations with a large number of people who helped debug it. Particular thanks to
Jeff Dutky <[email protected]>, who suggested the “debugging is parallelizable” formulation, and helped
develop the analysis that proceeds from it. Also to Nancy Lebovitz <[email protected]> for
her suggestion that I emulate Weinberg by quoting Kropotkin. Perceptive criticisms also came from Joan Eslinger
<[email protected]> and Marty Franz <[email protected]> of the General
Technics list. Glen Vandenburg <[email protected]> pointed out the importance of self-selection in
contributor populations and suggested the fruitful idea that much development rectifies ‘bugs of omission’; Daniel
Upper <[email protected]> suggested the natural analogies for this. I’m grateful to the members of PLUG, the
Philadelphia Linux User’s group, for providing the first test audience for the first public version of this essay. Paula
Matuszek <[email protected]> enlightened me about the practice of software management.
Phil Hudson <[email protected]> reminded me that the social organization of the hacker culture
mirrors the organization of its software, and vice-versa. John Buck <[email protected]>
pointed out that MATLAB makes an instructive parallel to Emacs. Russell Johnston <[email protected]>
brought me to consciousness about some of the mechanisms discussed in “How Many Eyeballs Tame Complexity.”
Finally, Linus Torvalds’s comments were helpful and his early endorsement very encouraging.

1996. Pp. 109-128 in Computer-Mediated Communication: Linguistic, Social, and Cross-Cultural Perspectives, edited by Susan Herring.
Amsterdam: John Benjamins.

Managing the Virtual Commons:
Cooperation and Conflict in Computer Communities

Peter Kollock and Marc Smith

University of California, Los Angeles [1]

1. The Problem of Cooperation

Computer-mediated communication systems are believed to have powerful effects on social relationships. Many
claim that this new form of social interaction encourages wider participation, greater candor, and an emphasis on
merit over status. In short, the belief is that social hierarchies are dissolved and that flatter, more egalitarian
social organizations emerge. Networked communications, it is argued, will usher in a renewed era of democratic
participation and revitalized community. But as with earlier technologies that promised freedom and power, the
central problems of social relationships remain, although in new and possibly more challenging forms.

One of the most basic questions in the social sciences is the problem of cooperation. In the face of temptations to
behave selfishly, how might a group of people ever manage to establish or maintain cooperative relations? The
character and qualities of this problem are different when groups use computer-mediated communication to
interact, but the differences do not guarantee a uniformly positive effect or resolve many of the long standing
problems of cooperation. Indeed, we will show that there is a double edge to computer-mediated interaction:
many of its central qualities make it easier both to cooperate and to behave selfishly. Thus, computer-mediated
interaction raises political, practical, and sociological problems in new ways and with new stakes.

At the root of the problem of cooperation is the fact that there is often a tension between individual and
collective rationality. This is to say that in many situations, behavior that is reasonable and justifiable for the
individual leads to a poorer outcome for all. Such situations are termed social dilemmas and underlie many of
the most serious social problems we face.[2] One of the most famous models of social dilemmas is the “tragedy
of the commons” (Hardin 1968). Hardin described a group of herders having open access to a common parcel of
land on which they could let their cows graze. It is in each herder’s interest to put as many cows as possible onto
the land, even if the commons is damaged as a result. The herder receives all the benefits from the additional
cows and the damage to the commons is shared by the entire group. Yet if all herders make this individually
reasonable decision the commons is destroyed and all will suffer.
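
The arithmetic behind Hardin's story can be made explicit with a toy payoff model (the numbers below are
illustrative only, not Hardin's or the authors'): each extra cow earns its owner a fixed private benefit while the
grazing damage it causes is split across all the herders, so adding a cow is always individually rational even though
universal restraint would leave everyone better off.

    # Toy payoff arithmetic for Hardin's commons (illustrative numbers, not Hardin's).
    HERDERS = 10
    BENEFIT_PER_COW = 10     # private gain the owner captures from one more cow
    DAMAGE_PER_COW = 30      # total grazing damage one more cow inflicts on the commons

    def payoff_change(adds_cow: bool, others_adding: int) -> float:
        """Change in one herder's payoff, given how many of the OTHER herders add a cow."""
        cows_added = others_adding + (1 if adds_cow else 0)
        my_gain = BENEFIT_PER_COW if adds_cow else 0
        my_share_of_damage = DAMAGE_PER_COW * cows_added / HERDERS
        return my_gain - my_share_of_damage

    for others in (0, 9):
        print(f"{others} others add a cow: "
              f"add -> {payoff_change(True, others):+.1f}, "
              f"refrain -> {payoff_change(False, others):+.1f}")

    # Whatever the others do, adding a cow leaves the individual 7 units better off
    # (gain 10, personal share of the extra damage only 3).  Yet if all ten herders
    # add one, each ends up 20 units worse off than if all had refrained.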

A related model of the tension between individual and collective rationality is the challenge of providing public
goods. A public good is a resource from which all may benefit, regardless of whether they have helped create the
good (e.g., public television or a community improvement project).[3] The temptation is to enjoy a public good
without contributing to its production, but if all reach this decision, the good is never created and all suffer.

The tragedy of the commons and the challenge of providing public goods share a common feature:

At the heart of each of these models is the free-rider problem. Whenever one person cannot be excluded from
the benefits that others provide, each person is motivated not to contribute to the joint effort, but to free-ride on
the efforts of others. If all participants choose to free-ride, the collective benefit will not be produced. The
temptation to free-ride, however, may dominate the decision process and thus all will end up where no one
wanted to be. (Ostrom 1990: 6)

In the face of the free-rider problem, how is cooperation possible? The pessimistic conclusion of many
researchers (e.g., Hardin 1968; 1974) is that coercion by a strong external authority is necessary in order to
insure cooperation. But other researchers (e.g., Fox 1985) have argued that an external authority may not be
necessary and may even make the situation worse. The question becomes, to what extent can group members
regulate themselves, providing collective goods and managing common resources without recourse to external
authorities? Given the new possibilities that emerge in computer-mediated interaction, cyberspace provides an
important research site to explore this fundamental question of social order.

Thus, the free-rider problem and the ability of a group to overcome it is our focus for this chapter. We apply the
logic of social dilemmas to a portion of cyberspace known as the Usenet — a collection of several thousand
discussion groups that is distributed and maintained in a decentralized fashion. In sections 2 and 3 we describe
the Usenet and discuss the major social dilemmas that members of the Usenet face. In order to explore how
these problems might be solved in the Usenet, in section 4 we make use of the innovative work by Ostrom
(1990), who studied a wide variety of communities in order to determine what features of a group contribute to
its success or failure in managing collective goods. The set of cases she examined include common forest and
grazing grounds in Swiss and Japanese villages, fisheries in Canada and Sri Lanka, and irrigation systems in
Spain and the Philippines. She identified a set of design principles that are features of communities which have
successfully met the challenge of producing and maintaining collective goods despite the temptation to free-ride
and without recourse to an external authority. We discuss each of these principles and ask to what extent they are
present in the Usenet and whether their relevance changes when groups interact via computer networks. Thus,
our goal is to contribute both to the study of computer-mediated interaction and to research on cooperation and
social dilemmas.

Given the space constraints here, we are severely restricted in the amount of detail and number of examples we
can present. We are in the process of completing a book-length study in which we will go into much greater
depth in our analysis of the issues of social interaction and order in cyberspace (Kollock and Smith,
forthcoming).

2. The USENET

The Usenet is one of the largest computer-mediated communication systems in existence. Developed in 1981 as
an alternative to services available through the ARPANET, the Usenet has grown exponentially and currently
consists of several thousand discussion groups (termed newsgroups). Recent estimates suggest that roughly two
million people from all around the world participate in some way, with further increases expected. The Usenet is
similar in many ways to conferencing systems, often referred to as Bulletin Board Systems (BBSs), and to
e-mail distribution lists. It shares many qualities with these forms of computer-mediated communication, but
differs in significant ways. No central authority manages the Usenet, although considerable cooperation exists
around the definition of standards that determine the technical organization of the distribution system. It is
distributed in the sense that there is no central repository for Usenet postings; each contribution is passed
throughout the system of interconnected hosts — systems that receive and pass along each contribution they
receive. The Usenet is not a commercial product; it is distributed through connections that are often informally
maintained.

The Usenet is accessed via a variety of tools that alter the way in which groups and messages can be selected
and read. However, a theme common to most of the tools used is that one or more newsgroups are selected or
“subscribed” to, each of which contains one or more threads, or series of postings and responses (and the

responses to responses) on a common subject. There are roughly 4500 newsgroups in current wide circulation
covering a diverse range of topics. The topics of newsgroups are displayed in the name of the group and are
designed to advertise the focus of the group. For example, comp.sys.mac.hardware focuses on issues concerning
the Macintosh computer’s hardware. The Usenet has institutionalized eight general thematic categories[4] and
has developed a range of conventions to describe and delineate the kinds of activity and contributions that each group
considers desirable and appropriate.[5] The names serve not only to identify what is desired in a group, but also what
is inappropriate. Thus, discussion of IBM PCs, foreign affairs, film, or even Macintosh software is not
wanted in comp.sys.mac.hardware. Newsgroups provide a forum for individuals with esoteric interests to find
one another, thus providing the service of a “Schelling” point.[6]
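
To make the naming convention concrete, the short Python sketch below is our own illustration; the function name, the sample data, and the notion of an “official” hierarchy set are our assumptions and are not part of any Usenet software. It simply decomposes a newsgroup name into its top-level hierarchy and the increasingly specific topic components described in footnote 4.

    # Hypothetical sketch: decompose a Usenet newsgroup name into its
    # top-level hierarchy and its increasingly specific topic components.

    OFFICIAL_HIERARCHIES = {"news", "soc", "talk", "misc", "sci", "comp", "rec"}

    def describe_newsgroup(name: str) -> dict:
        parts = name.split(".")
        hierarchy = parts[0]
        return {
            "name": name,
            "hierarchy": hierarchy,
            "official": hierarchy in OFFICIAL_HIERARCHIES,  # "alt" and others are unofficial
            "topic_path": parts[1:],                        # each element narrows the scope
        }

    if __name__ == "__main__":
        print(describe_newsgroup("comp.sys.mac.hardware"))
        # {'name': 'comp.sys.mac.hardware', 'hierarchy': 'comp', 'official': True,
        #  'topic_path': ['sys', 'mac', 'hardware']}

Run on comp.sys.mac.hardware, the sketch yields the hierarchy “comp” and the topic path sys, mac, hardware, which is exactly the information a reader uses to judge whether a group is relevant.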

A number of newsgroups are centered around technical subjects, such as programming languages, operating
systems, and kinds of computer hardware. However, less technical subjects are the basis for many newsgroups
as well. For example, sci.lang.japan contains discussions about the Japanese language, and a collection of
groups starting with the name alt.current-events have focused on issues ranging from the World Trade Center
bombing to the Los Angeles earthquake. Many newsgroups focus on cultural or recreational activities, such as
soc.culture.bangladesh and rec.arts.movies. There are newsgroups, like alt.barney.die.die.die or
alt.swedish.chef.bork.bork.bork, that are intended to provide a venue for humorous and whimsical discussion.
Other newsgroups cover subjects that rarely get candid public discussion in any other forum, such as the alt.sex
groups. There are also newsgroups, like alt.sexual.abuse.recovery, that are specifically created to provide
support for their members.

Newsgroups often contain requests for information, replies to requests, discussions of the validity and accuracy
of replies, and further questions prompted by the discussion. Newsgroups can and often do have dozens of
threads running simultaneously, some referring to one another, some cross-posted to other newsgroups.[7]

Figure 1 illustrates how a collection of threads is displayed in the newsgroup comp.org.eff.talk, a newsgroup
sponsored by the Electronic Frontier Foundation (EFF), which is dedicated to the discussion of the legal,
political and economic issues and problems raised by new information technologies. The first column provides a
menu letter for each thread (typing this letter selects the thread listed next to it), the next column lists the authors’
names (or usernames)[8] for each response in that thread. The third column indicates the number of messages in
each thread (which can be as many as hundreds of messages), and the final column displays the thread’s subject.
The “>” character indicates that this is a reply to a message with the same subject.

comp.org.eff.talk                                117 articles

a  Tom Miller          1  >NET system
b  Joe Cipale          4  >A chance to repeal the DAT tax
   John Henders
   Don Reid
   John A Sigmon       1  >Where can I find HI FI World in S.Bay
d  J Heitkoetter       1  Big Dummy’s Guide in Texinfo, etc….
e  Bob Smart           1  *FLASH* Moby SUBPOENA served
f  Stephen Savitzky    1  NSA, meet NRA — If s/w is a munition…

(Mail) — Select threads — 47% [>Z] —

Figure 1. Display of threads in a sample Usenet newsgroup

This newsgroup, like many others, is a forum used to provide information and news about issues of relevance to
the EFF and hosts extended discussions and debates. Selecting a thread causes the messages stored within it to
be displayed.
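
The thread index shown in Figure 1 can be approximated by grouping postings whose subject lines match once reply prefixes are stripped. The following Python sketch is a simplified illustration of that grouping; the posting data and formatting are invented, and real newsreaders also use other header information that we omit here.

    # Hypothetical sketch: build a Figure 1-style thread index by grouping
    # postings whose subjects match once "Re:" prefixes are stripped.
    import re
    from collections import OrderedDict
    from string import ascii_lowercase

    def normalize(subject: str) -> str:
        # Drop any leading "Re:" markers so replies join the original thread.
        return re.sub(r"^(\s*re:\s*)+", "", subject, flags=re.IGNORECASE).strip()

    def thread_index(postings):
        threads = OrderedDict()  # normalized subject -> list of (author, is_reply)
        for author, subject in postings:
            key = normalize(subject)
            threads.setdefault(key, []).append((author, key != subject.strip()))
        for letter, (subject, msgs) in zip(ascii_lowercase, threads.items()):
            marker = ">" if any(is_reply for _, is_reply in msgs) else " "
            first_author = msgs[0][0]
            print(f"{letter} {first_author:<18} {len(msgs):>3} {marker}{subject}")

    if __name__ == "__main__":
        thread_index([
            ("Joe Cipale", "Re: A chance to repeal the DAT tax"),
            ("John Henders", "Re: A chance to repeal the DAT tax"),
            ("Bob Smart", "*FLASH* Moby SUBPOENA served"),
        ])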

comp.org.eff.talk #16743 (52 + 61 more)
Newsgroups: comp.org.eff.talk,sci.crypt,alt.security.pgp,talk.politics.crypto
[1] Re: *FLASH* Moby SUBPOENA served
From: [email protected] (Bob Smart)
Date: Mon Sep 20 8:00:08 PDT 1993
Distribution: inet
Organization: Citicorp+TTI
Nntp-Posting-Host: bsmart.tti.com
Lines: 30
[A graphical thread tree, showing this message’s position among the replies in the thread, appears to the right of the header in the original display.]

In article <[email protected]>, [email protected] (TedDunning) writes:

> no. ecpa-86 only prohibits recordings made without the permission
> of either party. if one party to the conversation consents, then
> the tap is legal. thus you can record your own conversations.

That’s not necessarily the whole story, though: some states require that ALL parties to a
conversation must consent to any recording. At a minimum, you need to know whether you’re in a
two-party or a one-party state before you proceed.

[…]

———

A fanatic is someone who does what he knows that God would do if God knew the facts of the case.

Some mailers apparently munge my address; you might have to use [email protected] — or if that
fails, fall back to [email protected]. Ain’t UNIX grand?

Figure 2. A sample posting to a Usenet newsgroup

Figure 2 illustrates an excerpt from one of the postings listed in Figure 1. This post is typical of many found in
the Usenet. First, the top block of lines contains header information, such as the names of the newsgroups to
which the message should be added, the subject line (which is used to construct threads), the date and author of
the post, and information concerning each of the machines that passed along the message. Opposite the header is
a thread tree, generated by some newsreaders, that provides a graphical representation of where in the numerous
turns in a thread this message is located. Messages that copy the subject line from this message are represented
as branches below this message. Below both the header and thread tree is the body of the message. The body of
this message is typical of many Usenet messages in that it contains “quoted” material, often from a message
posted earlier in the thread. Here, for example, the quoted text is preceded by “>” characters with a line
attributing the source of the quote above it. This cycle of quoting and then commenting can go on for many

rounds and sometimes results in postings that are several pages long, but contain very little new text. Finally, the
last few lines are a signature, often referred to as a sig. Sigs frequently serve a combination of the functions of
bumper stickers and business cards; quotes and jokes are common, along with return addresses and phone
numbers.
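
The structure just described (a header block, quoted material marked with “>”, newly written text, and a trailing sig) can be pulled apart with a few lines of code. The Python sketch below is illustrative only; it assumes the common but not universal convention of a separator line such as “--” before the signature, and the sample article is invented.

    # Hypothetical sketch: split a Usenet article into its header block,
    # quoted lines, newly written lines, and trailing signature.

    def parse_article(raw: str):
        header_text, _, body = raw.partition("\n\n")   # a blank line ends the header
        headers = {}
        for line in header_text.splitlines():
            if ":" in line:
                field, value = line.split(":", 1)
                headers[field.strip()] = value.strip()

        body_lines = body.splitlines()
        sig = []
        if "--" in body_lines:                          # a common (not universal) separator
            cut = body_lines.index("--")
            body_lines, sig = body_lines[:cut], body_lines[cut + 1:]

        quoted = [l for l in body_lines if l.startswith(">")]
        new_text = [l for l in body_lines if not l.startswith(">")]
        return headers, quoted, new_text, sig

    if __name__ == "__main__":
        sample = ("From: [email protected] (Bob Smart)\n"
                  "Newsgroups: comp.org.eff.talk\n"
                  "Subject: Re: *FLASH* Moby SUBPOENA served\n"
                  "\n"
                  "> thus you can record your own conversations.\n"
                  "Some states require that ALL parties consent.\n"
                  "--\n"
                  "A fanatic is someone who ...\n")
        headers, quoted, new, sig = parse_article(sample)
        print(headers["From"], len(quoted), len(new), len(sig))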

Contributing to a Usenet newsgroup is a simple matter. A post can be written immediately after reading another
post, and the contents of any post can be copied into the reply. Sending the post is similar to sending e-mail;
however, the message sent is copied to the newsgroup(s) specified by the sender, and so will be read by all
participants of the newsgroup rather than by just a single person.

Having described the Usenet, we turn in the next two sections to a discussion of the free-rider problem in this
part of cyberspace and the design principles of successful communities. We base the analysis that follows on
extended observations of the daily workings of the Usenet. It is important to note that Usenet postings, like
audio recordings of telephone conversations, have the advantage of capturing everything that was publicly
available to the participants in that setting. The copies of postings we drew from the Usenet are exact copies of
what others who read them saw. Usenet postings also have the advantage that one can observe patterns of
interactions without affecting those patterns. But as with telephone conversations, there is much that is beyond
the spoken word or string of ASCII; Usenet postings cannot capture the private meanings people may intend or
take from messages. Further, even more than records of spoken interaction, postings have an ambiguous tone.
While a variety of textual practices have been developed to convey the subtleties of communication that are
normally carried by tone, posture, gesture, and a host of other indicators of nuance, this medium remains
particularly open to multiple interpretations. In addition, members of the Usenet have a multitude of back-
channels of communication that often escape our examination. Participants in the Usenet may e-mail each other
directly, avoiding the public arena of a newsgroup, or may even telephone, write or meet each other without
evidence of this appearing in a newsgroup. While these limitations should caution against over-ambitious claims,
similar constraints exist for all forms of observation. The fact that the postings we use to ground our claims are
available for examination by others provides a useful check on distorted interpretations.

3. Social Dilemmas in Cyberspace

There is a layer of cooperation and coordination in the details of communication, conversation, and interaction
that is unacknowledged by most researchers. An important exception is work by ethnomethodologists and
conversational analysts, who have shown how orderly processes of interaction are founded upon an immense
amount of collaborative work which is ordinarily taken for granted. The tension between individual and group
outcomes can be seen here as well. There is a sense, for example, in which the conversational “floor” constitutes
a commons: if access to the floor is allocated in an ordered way by speakers exchanging “turns”, each has the
opportunity to accomplish his or her interactional goals, but if all crowd in, the communication breaks down.
Similarly, the interactional work that is necessary to keep a conversation going is a kind of public good in the
sense that it is possible to free-ride on others’ efforts, using and abusing the conversation without contributing to
its maintenance. While there are many important ways in which spoken conversation differs from interaction on
the Usenet, similar challenges exist there as well.

Despite the great potential of the Usenet to provide collective goods, it is often the case that this potential is not
realized. The endemic tension between individual and collective rationality is as present in Usenet newsgroups
as it is in shared pasture lands. In the Usenet, the key common resource is not an open pasture, but bandwidth.
The term refers to “the volume of information per unit time that a computer, person, or transmission medium can
handle.” (Raymond 1993) Thus, bandwidth refers to both the limited capacity of the Usenet in terms of its
technical capacity to carry and store information, and the capacity of its members to attend to and consume that
information. A great concern on the Usenet is using the available bandwidth wisely, which is to say, refraining
from posting unnecessary information. Among the actions that are usually considered an inappropriate use of
bandwidth are: posting extremely long articles; reproducing long sections of text from a previous post rather
than summarizing or excerpting only the relevant passages; including long signatures full of comments and

diagrams at the end of a post; and posting the same message to many newsgroups instead of one or a small, well-
chosen set.
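
Because the forms of bandwidth misuse just listed are largely mechanical, a posting program could warn about them before an article is sent. The sketch below is our own illustration; the thresholds are arbitrary assumptions chosen for demonstration, not Usenet rules.

    # Hypothetical sketch: flag common forms of bandwidth misuse before posting.
    # All thresholds are arbitrary assumptions chosen for illustration.

    def bandwidth_warnings(body_lines, sig_lines, newsgroups,
                           max_lines=200, max_quote_ratio=0.5,
                           max_sig_lines=4, max_groups=3):
        warnings = []
        if len(body_lines) > max_lines:
            warnings.append("article is unusually long")
        quoted = sum(1 for l in body_lines if l.startswith(">"))
        if body_lines and quoted / len(body_lines) > max_quote_ratio:
            warnings.append("more quoted text than new text; consider excerpting")
        if len(sig_lines) > max_sig_lines:
            warnings.append("signature is long")
        if len(newsgroups) > max_groups:
            warnings.append("cross-posted to many newsgroups")
        return warnings

    if __name__ == "__main__":
        print(bandwidth_warnings(["> old"] * 8 + ["new text"], ["sig"] * 6,
                                 ["comp.sys.mac.hardware", "rec.arts.movies",
                                  "alt.bbs", "misc.misc"]))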

If members exhibit restraint in their use of bandwidth, the Usenet benefits everyone by being an effective and
efficient means of exchanging information and carrying on discussions. Unfortunately, an individual member
looking out on the huge capacity of the Usenet can reason (with some justification) that his or her individual use
of bandwidth does not appreciably affect what is available for others, and so use this common resource without
restraint. The collective outcome of too many people reaching this individually rational decision is, of course,
disaster. Here then is a crucial way in which a participant of the Usenet might free-ride on the efforts of other
members: using the available bandwidth without restraint while others regulate their own behavior.

Overusing bandwidth is not the only social dilemma members of the Usenet face. Whatever the goal of the
newsgroup, its success depends on the active and ongoing contributions of those who choose to participate in it.
If the goal of the newsgroup is to exchange information and answer questions about a particular topic (e.g.,
alt.comp.sys.gateway-2000), participants must be willing to answer questions raised by others, summarize and
post replies to queries they have made themselves, and pass along information that is relevant to the group. If the
goal of the newsgroup is to discuss a current event or social issue (e.g., soc.veterans), participants need to
contribute to the discussion and to encourage its development. Once again there is the temptation to free-ride:
asking questions but not answering them; gathering information but not distributing it; or reading ongoing
discussions without contributing to them (termed lurking). Some newsgroups successfully meet these
challenges, others start well and then degrade, and still other newsgroups fail at the beginning of their existence,
never managing to attract a critical mass of participants.

Wise use of bandwidth and the active participation of its members are not enough to ensure the success of a
newsgroup. One of the most important collective goods that the Usenet provides is a system for coordinating the
exchange of information. By providing the means for maintaining a set of several thousand topics, as well as
more specific threads within each topic, the Usenet allows individuals with common interests to find and interact
with each other. Given the huge amount of information that is transferred through the Usenet, it is critical that
members respect the focus of a newsgroup and of the various threads within a newsgroup by sticking to the topic
that is being discussed. Being off-topic threatens the coordination of discussion that the Usenet rests on. The
logic of social dilemmas is present here as well. If no one worried about being on-topic, meaningful interaction
would be impossible on the Usenet, but as long as most people are careful to make comments that are relevant to
the newsgroup and thread, others can free-ride on this restraint by posting their opinions widely and
indiscriminately to many groups, without concern for their relevance. Users who do post to many newsgroups
without regard to the topic are said to be grandstanding, a violation that highlights both the erosion of the
organizational boundaries that enable the Usenet to remain a coherent place and the moral and practical limits on
the use of another’s attention.

Finally, a successful newsgroup depends on its members following rules of decorum. What counts as acceptable
behavior can, of course, vary tremendously from newsgroup to newsgroup: a hostile, provocative post (termed
flaming) is an etiquette breach in most newsgroups, but not in alt.flaming, where violating decorum would mean
engaging in a sober, restrained discussion. Often the cultural rules that define what is and is not appropriate are
implicit or poorly understood and articulated, which can itself lead to conflict as participants with different
expectations attempt to interact. Whatever the local rules of decorum, it is important that most participants
follow them. However, there is the temptation to free-ride on others’ efforts to maintain norms of civility while
violating those norms oneself, saying whatever one wants to without any self-regulation.

Ideally, members of the Usenet would make efficient use of bandwidth, participate actively in newsgroups,
ensure that their comments are posted only to relevant newsgroups, and abide by the local norms and culture that
govern decorum. Everyone is better off if all behave in such a manner, but there is the temptation to free-ride on
the efforts of others. Thus, some participants post articles that are unnecessarily long, or lurk rather than
contributing to the give and take that is the essential feature of any newsgroup, or post articles that are off-topic,
or violate the local rules of decorum. The more people free-ride, the more difficult it is to produce useful
information and interaction. In the language of the Usenet, the signal-to-noise ratio deteriorates. The challenge

becomes how a group of individuals can “organize and govern themselves to obtain collective benefits in
situations where the temptations to free-ride and to break commitments are substantial” (Ostrom 1990: 27).

4. Managing the Virtual Commons

To address this issue, Ostrom (1990) studied a wide range of communities which had a long history of
successfully producing and maintaining collective goods. She also studied a number of communities which had
failed partially or completely in meeting this challenge. In comparing the communities, Ostrom found that
groups which are able to organize and govern themselves are marked by the following design principles:

1. Group boundaries are clearly defined
2. Rules governing the use of collective goods are well matched to local needs and conditions
3. Most individuals affected by these rules can participate in modifying the rules
4. The rights of community members to devise their own rules are respected by external authorities
5. A system for monitoring members’ behavior exists; this monitoring is undertaken by the community members themselves
6. A graduated system of sanctions is used
7. Community members have access to low-cost conflict resolution mechanisms[9]

We use these design principles as a way of organizing our discussion of the Usenet. Our analysis extends
Ostrom’s original points and applies them to the kinds of organization found in the Usenet. We have grouped the
various design principles under three general headings: group size and boundaries (in which we discuss the first
principle and the related issue of group size); rules and institutions (in which we discuss the second, third and
fourth principles); and monitoring and sanctioning (in which the last three principles are discussed). In each case
we ask to what extent these design principles can be found in the Usenet and whether the relevance and costs
and benefits of these design principles change in this new form of social interaction.

4.1. Group Size and Boundaries

One of the most common and accepted tenets in the literature on cooperation is that “the larger the group, the
less it will further its common interests” (Olson 1965: 36). Researchers have identified a number of reasons why
cooperation may be more difficult as group size increases. First, as the group becomes larger, the costs of an
individual’s decision to free-ride are spread over a greater number of people (Dawes 1980). If an individual’s
action does not appreciably affect others, the temptation to free-ride increases. More generally, the larger the
group, the more difficult it may be to affect others’ outcomes by one’s own actions. Thus, an individual may be
discouraged from cooperating if his or her actions do not affect others in a noticeable way. Second, it is often the
case that as group size increases, anonymity becomes increasingly possible and an individual can free-ride
without others noticing his or her actions (Dawes 1980). Third, the costs of organizing are likely to increase
(Olson 1965), i.e., it becomes more difficult to communicate with others and coordinate the activities of
members in order to provide collective goods and discourage free-riding.

Does this logic hold in the Usenet? In many ways it does not because the costs and effectiveness of defection,
social control and coordination in the Usenet are very different from those in groups that interact without
computer-mediated communication. A key difference is that one’s behavior in a newsgroup is visible to every other
participant of the newsgroup, whether there are 10 participants or 10,000. Thus, the costs of free-riding by, for
example, being off-topic, posting huge articles, or violating decorum, are not diffused as the number of
participants in the newsgroup increases. Indeed, one could argue that the effects of free-riding increase as
newsgroup membership increases because there are a greater number of participants to be inconvenienced or
angered by such actions. This characteristic of the Usenet creates new challenges for those wishing to establish

cooperative communities, but also new possibilities. The fact that every individual’s behavior is visible and
identifiable discourages free-riding among those who only free-ride when they can do so anonymously. This
same visibility can make monitoring people’s actions easier.

Another important difference is that the Usenet can reduce the costs of communication and coordination, in
some cases allowing groups to produce and maintain collective goods that would otherwise be too expensive. In
particular: the challenge of finding people with similar interests is greatly reduced; the usual problems of
meeting in a common time and place are eliminated; communicating with a thousand people involves essentially
the same personal costs as sending a message to a single individual; a great number of members can participate
in discussions involving numerous topics without overloading participants; and a historical record of members’
interactions is automatically produced. Thus, there may be the potential to sustain cooperation in much larger
groups than is possible without computer-mediated communication. For example, the
comp.sys.ibm.pc.games.action newsgroup provides several thousand people scattered around the planet with
access to each other, detailed information about where to find games for the IBM PC, strategies for playing those
games, and reports of problems and patches for fixing bugs. While this group could exist by meeting face-to-
face, or could publish a paper newsletter, by interacting via the Usenet, participants can interact more frequently,
at less cost, and among a larger and more widespread group than could be sustained otherwise.

However, these features of the Usenet do not by themselves guarantee a cooperative community, as is readily
apparent to any participant in the Usenet. There are other design principles that also seem to be necessary if a
community is to work well.

Ostrom found that one of the most important features of successful communities is that they have clearly defined
boundaries: “Without defining the boundaries of the [collective good] and closing it to ‘outsiders,’ local
appropriators face the risk that any benefits they produce by their efforts will be reaped by others who have not
contributed to those efforts. At the least, those who invest in the [collective good] may not receive as high a
return as they expected. At the worst, the actions of others could destroy the resource itself” (Ostrom 1990: 91).
Boundaries are also important in that they encourage frequent, ongoing interaction among group members. This
is critical because repeated interaction is perhaps the single most important factor in encouraging cooperation
(Axelrod 1984). If individuals are not likely to interact in the future, there is a huge temptation to behave
selfishly and free-ride. On the other hand, knowing that one will be interacting with others on a continual basis
can lead to the creation of reputations and serve as a powerful deterrent to short-run, selfish behavior.

One of the greatest challenges to cooperation in the Usenet is that its boundaries are often both undefendable and
undefined and cannot sufficiently ward off those who would exploit the collective goods produced by others.
While there are many resources to construct boundaries in the Usenet, many of these boundaries exist only by
voluntary compliance and are easily violated.[10]

In many ways, a newsgroup’s name is one of its most effective means of defining a boundary: by announcing its
contents it attracts the interested and repels the disinterested. But within this boundary a newsgroup’s
membership can be extremely fluid. Some newsgroups do attract and hold a fairly stable group, but many do not.
To the extent membership in a newsgroup isn’t stable and its boundaries are not clearly defined, cooperation will
be more difficult.

One way of increasing the stability of a group is by actively restricting its membership. The overwhelming
majority of newsgroups in the Usenet are potentially open to anyone.[11] However, there is no technological
reason why restricted newsgroups cannot be created, just as there are e-mail distribution lists that one must ask
to join or private conferences on bulletin board systems.[12] In the Usenet, there are two broad types of
boundaries that are relevant: barriers to access to the content of the newsgroup and barriers to posting to the
newsgroup. Thus, one possible type of restricted newsgroup might allow anyone to read a discussion but permit
only admitted members to contribute to it. Alternatively, both reading and posting could be limited to group
members.

There is, however, a technical device called the kill file or bozo filter that an individual can use to create a kind
of customized personal boundary. If someone’s actions in the Usenet are considered objectionable, an individual

can put this person in his or her kill file, which filters out any future posting by this person. In some ways a kill
file reduces a member’s reliance on the larger group’s ability to define and defend a boundary. This offers both
individuals and groups greater flexibility — the effects of some sorts of violations of the commons can be
minimized without the costs of restraining the offending activity. It also illustrates the kinds of powerful
interaction tools that can be built in cyberspace — imagine a conversation in which one could make invisible any
objectionable person. While this capacity might be longed for in many situations, it has some practical problems:
even though the person using the filter won’t see the offending party’s postings, other participants in the
newsgroup will see future postings and comment on them. Thus, one must continue to deal with the reactions to
the posting even if the original postings are kept from one’s eyes.
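
The essential logic of a kill file can be stated in a few lines. The Python sketch below is a schematic illustration of that logic, not the configuration syntax of any actual newsreader; the class name and sample addresses are invented.

    # Hypothetical sketch: a personal kill file that hides postings from
    # senders (or subjects) an individual reader has chosen to ignore.

    class KillFile:
        def __init__(self):
            self.killed_senders = set()
            self.killed_subjects = set()

        def kill_sender(self, address):
            self.killed_senders.add(address.lower())

        def visible(self, article):
            # article is a dict carrying at least "From" and "Subject" headers
            if article["From"].lower() in self.killed_senders:
                return False
            if article["Subject"] in self.killed_subjects:
                return False
            return True

    if __name__ == "__main__":
        kf = KillFile()
        kf.kill_sender("[email protected]")
        articles = [{"From": "[email protected]", "Subject": "MAKE MONEY FAST"},
                    {"From": "[email protected]", "Subject": "Re: DAT tax"}]
        print([a["Subject"] for a in articles if kf.visible(a)])

Note that, as discussed above, such a filter only hides the offender’s postings from the person who maintains it; everyone else in the newsgroup still sees them.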

Although it does not yet exist in the Usenet, one way of addressing this limitation would be to create a
community kill file. In other words, members of a newsgroup could decide (via majority voting, consensus, etc.)
to place an offending individual in a shared, newsgroup-specific kill file such that the individual would be
prevented from posting to the newsgroup in the future. Note that this is a different approach to group boundaries
than the idea of a private, restricted newsgroup discussed above. A community kill file allows anyone to join a
newsgroup but provides a mechanism for banishing people. In contrast, the emphasis in a private newsgroup is
making it difficult to join in the first place.
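
Since the community kill file does not exist in the Usenet, any implementation is necessarily hypothetical. Purely as a thought experiment, the mechanism might look something like the following, with a poster banned once a majority of a newsgroup’s members vote for it; every name here is invented.

    # Hypothetical sketch of the proposed (non-existent) community kill file:
    # members of a newsgroup vote, and a majority vote bans a poster.

    class CommunityKillFile:
        def __init__(self, members):
            self.members = set(members)
            self.votes = {}        # offender -> set of members voting to ban
            self.banned = set()

        def vote_to_ban(self, voter, offender):
            if voter not in self.members:
                return
            self.votes.setdefault(offender, set()).add(voter)
            if len(self.votes[offender]) > len(self.members) / 2:
                self.banned.add(offender)

        def may_post(self, poster):
            return poster not in self.banned

    if __name__ == "__main__":
        ckf = CommunityKillFile(["ann", "bob", "cai", "dee", "eli"])
        for voter in ["ann", "bob", "cai"]:
            ckf.vote_to_ban(voter, "spammer")
        print(ckf.may_post("spammer"))   # False: 3 of 5 members voted to ban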

4.2. Rules and Institutions

Any successful community will have a set of rules — whether they are implicit or explicit — that govern how
common resources should be used and who is responsible for producing and maintaining collective goods.
However, it is important that the rules are tailored to the specific needs and circumstances of the group. Ostrom
identifies this as another design principle that is a feature of cooperative communities: there is a good match
between the goals and local conditions of a group and the rules that govern the actions of the group’s members.
Her research indicates that there is often great variation from community to community in the details of the rules
for managing collective goods. One lesson is that it is dangerous to take the specific rules of a successful group
and apply them blindly to other groups.

Ostrom also found that an additional characteristic of successful communities is that most of the individuals
affected by the rules governing the use of common resources can participate in modifying those rules. She
argued that this feature results in better designed rules because the individuals with the knowledge of the day to
day workings of the group and the challenges the group faces could modify the rules over time to better fit local
conditions. In contrast, rules that were created and forced upon a community by outside authorities often failed
miserably because the rules did not take into account knowledge of local conditions or because the same set of
rules were applied in a procrustean fashion to many communities despite important differences between them.
Indeed, another design principle that marked successful communities was that external government authorities
recognized (at least to some extent) the rights of communities to devise their own rules and respected those rules
as legitimate.

Are these features present, and are the issues underlying them relevant in the Usenet? A well-crafted set of rules
for managing collective resources is certainly important for newsgroups, and some progress has been made in
defining those rules. Rules and institutions exist on a global and local level throughout the Usenet. At the global
level, there are some concerns that are common to all newsgroups, and a set of documents exist which chart out
rules that should govern participation. Six key documents have been grouped together in what is described as a
“mandatory course” for new users.[13] These documents discuss rules of etiquette, suggestions for using the
Usenet efficiently, cautions against wasting bandwidth or being off-topic, and many other issues.

On the local level, and consistent with the principle that rules should be tailored to local conditions, many
newsgroups have also established a body of information about the newsgroup, complete with prescriptions and
proscriptions, that is known as a Frequently-Asked Questions file, or FAQ. However, there are problems: not
every newsgroup has a FAQ (indeed, the creation of a FAQ is often the first sign that a group has resolved some

of the hurdles of collective organization); some FAQ’s do not address critical issues or do so ambiguously;
some newsgroups do not have a clear sense of their goals or the challenges they face; and many participants in
the Usenet (especially new members) do not bother reading FAQ’s and other related documents. Finally, these
documents contain no specific recommendations for dealing with violations of their rules; all enforcement in the
Usenet remains an informal process (this is discussed in the following section).

These points raise the issue of socialization. Even if a community has developed a good set of rules, there is the
task of teaching new members about those rules. The logic of social dilemmas exists here as well. All benefit if
all members have learned the information and rules necessary to carry on interaction in a newsgroup, but long-
time members are tempted to ignore questions from neophytes (termed newbies) and to not contribute to the
creation or maintenance of FAQ files. New members are tempted to wade into a newsgroup without first
learning the local culture by reading the documents that have been prepared by other members and by observing
the group for a period of time before attempting to participate.

The production of FAQ’s illustrates the ways in which local rules are produced and modified endogenously, by
the members themselves. However, participation in creating and modifying the rules that govern a community
does not necessarily mean that every member is involved in every decision. A FAQ may be produced by a single
entrepreneurial member of a newsgroup or may be the product of many individual contributions.[14]

Even in newsgroups that have produced a FAQ, many of the rules and institutions that are present remain
informal, undocumented and difficult to enforce.[15] As a result, there are certain chronic problems that are
difficult to resolve through these informal means. In some of these cases, groups have decided to deal with a
social dilemma by turning over authority for the management of a collective good to a particular member or
group of members, trusting these leaders to manage the resource well. This is, in a broad sense, Hobbes’ classic
solution of Leviathan: people give up part of their personal freedom to an authority in exchange for some
measure of social order. While Leviathan conjures up visions of a fascist, totalitarian state, a milder version of
this solution can be found in the Usenet in the form of moderated groups. “These are groups which usually have
one or more individuals … who must approve articles before they are published to the net. … [Moderated groups
are often] derived from regular groups with such a high volume that it is hard for the average reader to keep up,
… [or] from regular groups that have often been abused” (Spafford et al. 1993b). Since each contribution is
evaluated for its appropriateness to the newsgroup, a moderated group avoids many of the problems of
unrestrained participation. But it resolves the problem of collective organization by depending on the willingness
of a moderator to invest significant time and effort in managing the newsgroup. And for the majority of
newsgroups that cannot find someone to make such a contribution, or that oppose ceding control to a central
authority, the problem of self-organization remains. Moderated groups are one of the rare examples of a formal
and enforceable institution in the Usenet.
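
The approval flow of a moderated group can be sketched abstractly as a queue from which only the moderator can release articles. The following is our own simplified illustration, not the actual software used to moderate Usenet groups; the group name and data are invented.

    # Hypothetical sketch: a moderation queue in which submissions are held
    # until the moderator approves them for publication to the group.

    class ModeratedGroup:
        def __init__(self, name, moderator):
            self.name = name
            self.moderator = moderator
            self.pending = []      # submissions awaiting review
            self.published = []    # articles visible to readers

        def submit(self, article):
            self.pending.append(article)   # nothing appears until approval

        def review(self, reviewer, article, approve):
            if reviewer != self.moderator or article not in self.pending:
                return
            self.pending.remove(article)
            if approve:
                self.published.append(article)

    if __name__ == "__main__":
        group = ModeratedGroup("comp.example.moderated", moderator="mod")
        group.submit({"Subject": "On-topic question"})
        group.submit({"Subject": "MAKE MONEY FAST"})
        group.review("mod", {"Subject": "On-topic question"}, approve=True)
        print(len(group.published), len(group.pending))   # 1 published, 1 pending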

Finally, in its present state, the Usenet is not subject to much interference from external authorities. This has the
advantage of allowing newsgroups to fashion their own rules and institutions. However, increased government
regulation is a possibility in the future. There are political pressures to regulate cyberspace, and external
interference, despite its dangers and limitations, is sometimes necessary if communities are unable to solve their
own social dilemmas. To the extent the Usenet successfully manages its collective resources, and retains its
distributed, decentralized structure, it can avoid the need for external regulation and resist outside pressures
encouraging external regulation.

4.3. Monitoring and Sanctioning

Each of the successful communities studied by Ostrom was marked by clearly defined group boundaries and a
set of well-designed rules. Because community members participated in refining the rules and the rules were
well-matched to local conditions, most members believed in the rules and were committed to following them.
However, this does not seem to be enough to ensure cooperative relations. Some type of system to monitor and
sanction members’ actions was a feature of every successful community.

Monitoring and sanctioning is important not simply as a way of punishing rule-breakers, but also as a way of
assuring members that others are doing their part in using common resources wisely. Ostrom and other
researchers (Levi 1988) have argued that many individuals are willing to comply with a set of rules governing
collective goods if they believe the rules are efficacious and if they believe most others are complying with the
rules. That is, many people are contingent cooperators, willing to cooperate as long as most others do. Thus,
monitoring and sanctioning serves the important function of providing information about other persons’ actions.

In every successful community studied, the monitoring and sanctioning of people’s behavior was undertaken by
the community members themselves rather than by external authorities. Another common pattern was that
cooperative communities employed a graduated system of sanctions. While sanctions could be as severe as
banishment from the group, the initial sanction for breaking a rule was often very low. Community members
realized that even a well-intentioned person might break the rules when facing an unusual situation or extreme
hardship. Severely punishing such a person might alienate him or her from the community, causing greater
problems:

A large monetary fine imposed on a person facing an unusual problem may produce resentment and
unwillingness to conform to the rules in the future. Graduated punishments ranging from insignificant fines all
the way to banishment, applied in settings in which the sanctioners know a great deal about the personal
circumstances of the other appropriators and the potential harm that could be created by excessive sanctions,
may be far more effective than a major fine imposed on a first offender. (Ostrom 1990, p. 98)

Interaction in the Usenet makes monitoring much easier, but poses special problems for sanctioning others.
Because of the nature of computer-mediated communication, it becomes possible to monitor others more
thoroughly and more cheaply than has heretofore been possible in groups. Most forms of free-riding in the
Usenet, such as using the bandwidth unwisely, being off-topic, or violating norms of decorum, are seen by all
other participants of the newsgroup, and one’s actions are usually identifiable because each posting is
accompanied by the person’s e-mail address.[16] Further, because an exact record of every participant’s actions
is kept (at least for a few weeks), it is possible to “go back into history” and recover a sequence of interaction.
On the Usenet, unlike most interactional settings, the claim “I didn’t say that” had better be truthful, because
anyone can call up the exact words.
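
Checking a claim like “I didn’t say that” against the record amounts to a simple search over the retained postings. A minimal Python sketch, using invented archive data:

    # Hypothetical sketch: recover what a participant actually wrote by
    # searching an archive of retained postings for their exact words.

    def postings_by(archive, author, phrase=None):
        hits = [p for p in archive if p["From"] == author]
        if phrase is not None:
            hits = [p for p in hits if phrase in p["Body"]]
        return hits

    if __name__ == "__main__":
        archive = [
            {"From": "[email protected]", "Body": "The DAT tax should be repealed."},
            {"From": "[email protected]", "Body": "I never discussed the DAT tax."},
        ]
        # The denial is easy to check against the record:
        print(postings_by(archive, "[email protected]", phrase="DAT tax"))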

While monitoring can be accomplished at a very low cost (almost as a side effect of regular interaction),
sanctioning participants’ behavior in the Usenet is more of a challenge. There are some types of sanctions that
are simply impossible: threats of physical violence are necessarily empty threats[17], and no system exists to
levy and collect monetary fines (though such a system is technically possible). Indeed, it is very difficult to force
anyone to do anything — this is both the charm and frustration of the Usenet.

What participants can do is use a variety of informal sanctions to try to shape behavior. Free-riders might be
insulted, parodied, or simply informed that their actions are undesirable. Often the response is both intense and
voluminous, in part because of the effortlessness with which one can comment on others’ actions.[18] In this
sense, informal sanctions are easier to carry out in the Usenet than in many other settings. However, enforcing
social order is made more difficult by the fact that many newsgroups have no clear common understanding of
what should and should not occur in their interactions.

Nonetheless, some actions step clearly out of the bounds of acceptability. For example, recent discussions of
cruel acts to cats in the rec.pets.cats newsgroup were recognized as a clear violation of decorum. A post with the
subject “**** MAKE MONEY FAST ****” containing an invitation to participate in a classic pyramid scheme
was recently widely cross-posted throughout the Usenet and also drew widespread sanctions.[19] Responses
ranged from cautions against participating to expressions of extreme irritation and personal insults directed to
the poster. In addition, there were some calls for a coordinated collective response: “Remember people — Just
ignore it and it will go away. If you have to write something, do it via e-mail. … Behavior modification in action:
Don’t bother flaming them — attention is their reward. Just ignore them. They’ll get bored and go away.”[20]
These kinds of informal social control mechanisms depend upon moral suasion to have an effect — they lack any
capacity to actually restrict deviant behavior, they can only discourage it. Nevertheless, many people report that
informal sanctions do have a significant effect on their behavior.

More severe sanctions are possible but rarely carried out. In extreme situations a participant might have his or
her computer account revoked by the institution that controls the physical hardware. This occurs rarely, can
provoke widespread outrage, and is ultimately not a fool-proof way of banishing someone from the Usenet
because of the many alternate routes of getting access.

No set of rules is perfectly designed, and there will always be ambiguity in applying a particular rule.
Consequently, it is important to have some method to resolve the conflicts that will inevitably arise. This is the
final design principle Ostrom identifies as common to successful communities: access to low-cost conflict
resolution mechanisms. The need for these mechanisms in the Usenet is clear: for the reasons already discussed,
conflicts in newsgroups are fairly common. In fact, some newsgroups seem to be dedicated entirely to on-going
conflicts. However, formal methods for dealing with these conflicts have yet to develop — there is no Usenet
court system or even a place to engage in arbitration. While the Usenet has survived without these institutions
for many years, as the size and diversity of the Usenet population increases, these institutions may become
increasingly necessary. Other forms of social organization in cyberspace have already developed such
institutions. For example, some MOO’s and MUD’s have developed councils and judiciary systems to resolve
conflicts.[21] In contrast, the Usenet relies on the principle that most conflicts die out after a period of time, if
for no other reason than the combatants become exhausted.

5. Conclusions

As computer-mediated communication increasingly becomes the medium through which public discourse takes
place, the ways in which that discourse is socially organized become more consequential. While systems like
the Usenet are continuously changing, their present form has implications for the future nature of a society
increasingly woven together by these technologies. Computers are being used, in effect, to manage networks of
relationships between people, changing the costs and benefits of cooperation.

Cooperation is an accomplishment, and in the Usenet cooperation must occur without recourse to external
authorities. That it occurs at all is somewhat amazing. As Olson (1965: 1) observed in his classic work on
collective action, “if the members of some group have a common interest or objective, and if they would all be
better off if that objective were achieved, it [does not necessarily follow] that the individuals in that group would
… act to achieve that objective.” For all its declared faults, the Usenet has developed into a remarkably robust
institution: it has endured more than a decade while it has grown exponentially to include millions of
participants.

For all of this cooperation, however, there remain significant shortcomings. Many newsgroups remain relatively
uncooperative places, filled with noise and argument. The Usenet may not need to resolve these problems; it
may simply become the public space in cyberspace where the balance between order and autonomy is decided in
favor of the latter. Other institutions in cyberspace may, however, learn the lessons the Usenet can teach and
provide alternatives that satisfy a wide range of desires.

One of the broad lessons that we draw from the social organization of the Usenet is that cyberspace has a double
edge: monitoring the behavior of others becomes easier while sanctioning undesirable behavior becomes more
difficult; the costs of communication between members of a large group are decreased while the effects of
defecting are often amplified; and the existence of several thousand newsgroups makes it easy for individuals to
find others who share specific interests and goals, but also makes it easier for those who want to disrupt those
groups to find them. Thus, there is no simple conclusion to this story, and one-note predictions of either a utopian or
dystopian future must be considered suspect.

To deepen our knowledge of the ways in which computer-mediated communications technologies alter the
economies of cooperation, we propose to engage in an extended ethnographic exploration of newsgroups,
charting their development and interviewing their participants to uncover the emergence of norms and
expectations concerning acceptable use and appropriate behavior. To supplement this research we are preparing

a network-based survey instrument to gather basic but as yet unavailable information about the demographics
and common experiences and practices of members of the Usenet.

References

Applegate, Lynda M. (Harvard Business School). 1993. “Computer Links Erode Hierarchical Nature of Workplace Culture.” Wall Street Journal (John R. Wilke), 9 December.

Axelrod, Robert. 1984. The Evolution of Cooperation. New York: Basic Books.

Curtis, Pavel. 1991. “Mudding: Social Phenomena in Text-Based Virtual Reality.” Electronic document. (FTP: parcftp.xerox.com).

Dawes, Robyn. 1980. “Social Dilemmas.” Annual Review of Psychology 31:169-193.

Fox, Dennis R. 1985. “Psychology, Ideology, Utopia, and the Commons.” American Psychologist 40(1):48-58.

Hardin, Garrett. 1968. “The Tragedy of the Commons.” Science 162:1243-48. Reprinted in Managing the Commons, edited by Garrett Hardin and John Baden (1977, pp. 16-30). San Francisco: Freeman.

Hardin, Garrett. 1974. “Living on a Lifeboat.” BioScience 24. Reprinted in Managing the Commons, edited by Garrett Hardin and John Baden (1977, pp. 261-79). San Francisco: Freeman.

Horton, Mark, et al. 1993. “Rules for posting to Usenet.” Electronic document. (FTP: rtfm.mit.edu).

Kollock, Peter, and Marc Smith. 1995 (Forthcoming). The Sociology of Cyberspace: Social Interaction and Order in Computer Communities. Thousand Oaks, CA: Pine Forge Press.

Levi, Margaret. 1988. Of Rule and Revenue. Berkeley: University of California Press.

Messick, David M., and Marilynn B. Brewer. 1983. “Solving Social Dilemmas.” Pp. 11-44 in Review of Personality and Social Psychology (Vol. 4), edited by L. Wheeler and P. Shaver. Beverly Hills, CA: Sage.

Offutt, A. Jeff, et al. 1992. “Hints on writing style for Usenet.” Electronic document. (FTP: rtfm.mit.edu).

Olson, Mancur. 1965. The Logic of Collective Action: Public Goods and the Theory of Groups. Cambridge, MA: Harvard University Press.

Ostrom, Elinor. 1990. Governing the Commons: The Evolution of Institutions for Collective Action. New York: Cambridge University Press.

Raymond, Eric (editor). 1993. “The On-Line Hacker Jargon File” (ver. 3.0.0). Electronic document. (FTP: rtfm.mit.edu). Also published as “The New Hacker’s Dictionary” (2nd ed.). Cambridge, MA: MIT Press.

Salzenberg, Chip, et al. 1992. “What is Usenet?” Electronic document. (FTP: rtfm.mit.edu).

Schelling, Thomas. 1960. The Strategy of Conflict. Cambridge, MA: Harvard University Press.

Schwarz, Jerry, et al. 1993. “Answers to Frequently Asked Questions about Usenet.” Electronic document. (FTP: rtfm.mit.edu).

Spafford, Gene, et al. 1993a. “List of Active Newsgroups (Parts I & II).” Electronic document. (FTP: rtfm.mit.edu).

Spafford, Gene, et al. 1993b. “List of Moderators for Usenet.” Electronic document. (FTP: rtfm.mit.edu).

Taylor, Michael. 1987. The Possibility of Cooperation. Cambridge: Cambridge University Press.

Templeton, Brad. 1991. “Emily Postnews Answers Your Questions on Netiquette.” Electronic document. (FTP: rtfm.mit.edu).

Von Rospach, Chuq, et al. 1993. “A Primer on How to Work With the Usenet Community.” Electronic document. (FTP: rtfm.mit.edu).

Footnotes:

1. Direct correspondence to Peter Kollock, Department of Sociology, University of California, Los Angeles, CA
90024-1551 ([email protected]). Order of authorship is alphabetical to indicate equal contributions. We wish to
thank Ronald Obvious for comments on an earlier draft of this paper.

2. For general reviews of the research on social dilemmas, see Messick and Brewer (1983); Dawes (1980).

3. Public good is sometimes defined in a more restricted sense (see Taylor 1987: 5-8). Here we use the term public
good (or collective good) simply to refer to resources that are in some degree non-excludable.

4. Usenet newsgroups are named according to a loose convention. Groups are to start with one of eight main
hierarchy names and then add words separated by periods that increasingly narrow the scope of the group. There
are seven broad official classifications of Usenet newsgroups: “news”, “soc”, “talk”, “misc”, “sci”, “comp” and
“rec”. As Spafford et al. (1993a) describe them: “Each of these classifications is organized into groups and
subgroups according to topic: ‘comp’ [contains] topics of interest to both computer professionals and hobbyists,
including topics in computer science, software source, and information on hardware and software systems; ‘sci’
[contains] discussions marked by special and usually practical knowledge, relating to research in or application
of the established sciences; ‘misc’ [contains] groups addressing themes not easily classified under any of the
other headings or which incorporate themes from multiple categories; ‘soc’ [contains] groups primarily
addressing social issues and socializing; ‘talk’ [contains] groups largely debate-oriented and tending to feature
long discussions without resolution and without appreciable amounts of generally useful information; ‘news’
[contains] groups concerned with the news network and software themselves; ‘rec’ [contains] groups oriented
towards the arts, hobbies and recreational activities.” Finally, the “alt” hierarchy contains “alternative”
newsgroups that are less regulated.

5. For example, newsgroups with suffixes of “.d” are intended as places for meta-commentary on the antecedent
newsgroup.

6. In The Strategy of Conflict, Schelling (1960) wrote of features on a landscape that permit tacit coordination.
For example, there are points in a city that provide natural spaces for finding others, such as the clock in Grand
Central Station.

7. Cross-posting is the practice of posting the same message to multiple newsgroups. This is intended to allow
items of interest to be easily shared by more than one group. In practice, it is often the source of annoyance and
conflict as items of limited relevance are cross-posted to a number of groups.

8. Usernames are labels that identify the machine and user a message originates from. “Real” identity is
sometimes difficult to determine from usernames. This is due in part to usernames like IZZY3046. But even a
username like [email protected] UCLA.EDU conveys a minimum amount of information about its
owner.

9. Ostrom identified an eighth design principle that is relevant in complex social systems: monitoring,
sanctioning, and other governance activities are organized in multiple layers of nested enterprises. Note also that
Ostrom considers this list to be a first, speculative attempt to isolate what is required to successfully manage a
common resource. She and her colleagues are currently involved in a large research project to further develop
and refine this list.

10. Social boundaries are never hermetic; their value to a group is often based on what they let in and let out as
much as they keep in and keep out. Further, it is a mistake to conceive of boundaries as singular forces. Instead,
boundaries are erected and maintained by a variety of practices and tools, some of which have conflicting
effects.

11. Note, however, that there are de facto barriers that can keep people out of the Usenet in general. Some people
do not have access to or cannot afford the hardware necessary to connect to Usenet. Others may have access to
the hardware but do not have the necessary knowledge in order to participate — they may not know how to use
newsreading software or may not even be aware of Usenet’s existence. These barriers are likely to decrease in
the future as access becomes both simpler and cheaper.

12. There has been limited experimentation with restricted newsgroups through the use of coded messages that
can only be decoded by members who have been provided with a key. Another example is the Clarinet
newsgroups, which provide information from commercial news providers to paying subscribers only. Legal
recourse provides Clarinet with a major element of its boundary.

13. The course consists of: “A Primer on How to Work With the Usenet Community” (Von Rospach et al. 1993),
“Answers to Frequently Asked Questions about Usenet” (Schwarz et al. 1993), “Emily Postnews Answers Your
Questions on Netiquette” (Templeton 1991), “Hints on writing style for Usenet” (Offutt et al. 1992), “Rules for
posting to Usenet” (Horton et al. 1993), and “What is Usenet?” (Salzenberg et al. 1992).

14. Note that boundaries and rules are interrelated: Having members of a group participate in the design of rules
to govern the group makes sense if the members all have experience in the group, knowledge about the
challenges the group faces, and an investment in the group (i.e., they intend to stay in the group and value their
membership in the group). But if the boundaries of a group are not well defined so that there are many
participants who have little knowledge about the group or little investment in it, involving all affected
participants in the modification of rules can result in poorly designed institutions.

15. By “institutions” we mean “…the sets of working rules that are used to determine who is eligible to make
decisions in some arena [and] what actions are allowed or constrained….” (Ostrom 1990: 51)

16. However, there has been increasing use of services that provide a form of anonymity or pseudo-anonymity
for users of e-mail and the Usenet. “Anonymous name servers” accept e-mail or Usenet postings, strip all
identifying information from them, assign a pseudonym (such as [email protected]), and redirect them to
the person or newsgroup to which they are addressed. The effects anonymity has on the social organization of
groups based on computer-mediated communication bears close investigation but goes beyond the scope of this
chapter.
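
The core operation of such an anonymous name server, stripping identifying headers and substituting a stable pseudonym, can be sketched as follows. This is an illustration of the general idea only; the header list, pseudonym scheme, and domain are our assumptions, not a description of any actual service.

    # Hypothetical sketch: the core of an anonymizing "name server" that
    # strips identifying headers and substitutes a stable pseudonym.
    import hashlib

    def anonymize(article, domain="anon.example.org"):
        # Derive a repeatable pseudonym from the original sender address.
        pseudonym = "an" + hashlib.sha256(article["From"].encode()).hexdigest()[:6]
        cleaned = {k: v for k, v in article.items()
                   if k not in ("From", "Sender", "Reply-To", "Organization",
                                "Nntp-Posting-Host")}
        cleaned["From"] = f"{pseudonym}@{domain}"
        return cleaned

    if __name__ == "__main__":
        post = {"From": "[email protected]", "Subject": "support question",
                "Body": "..."}
        print(anonymize(post)["From"])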

17. Although the very real instances of stalking that have been accomplished through the use of networks
highlight the fact that the Usenet can become a means by which real violence is carried out.

18. Ironically, the sanction itself can consume more bandwidth than the original violation, but the sanction may
still make sense if it encourages wiser use of this common resource in the future. A similar logic can be seen in
the action of the agents of the I.R.S., who sometimes spend more finding and prosecuting a tax offender than
they collect in back taxes and fines.

19. This post was sent to a set of unrelated groups including: comp.sys.powerpc, rec.motorcycles,
cmu.misc.market, alt.astrology, alt.bbs.internet, alt.bbs, alt.best.of.internet, rec.games.video.arcade, alt.asian-
movies.

20. From: alt.best.of.internet, message id# 3211, 23 January 1994.

21. MUDs and MOOs are real-time text-based social worlds. For a detailed description of MUDs and MOOs
see Curtis (1991).

What is a Reflection Paper?

Reflection papers are written expressions of how a specific article or set of articles has shaped
your understanding of a given topic. The reflection papers are required to tie together all the
assigned readings, exploring how they complement or refute each other.
They should take the form of a brief critical essay. Quality over quantity counts! However, at
the graduate level you are expected to properly cite your in-text sources as well as provide a
proper cited sources list at the end of each assignment. Points will be deducted for improper
citation format and grammar errors. Be sure to proofread your work before your final
submission.
You can explore many styles of writing reflection papers, depending on the topic of the
week, but you can organize your views around questions such as:

• What is the overarching theme that ties the readings together?
• What is their significance to the discipline of strategic communication?
• How has reading the assigned works shaped your views?
• Why are these articles important, and how do they contribute to your understanding of the
issue?

Present the most critical issues from the readings, such as:

• What contrasting positions can be taken?
• What do you think about the core argument of the paper?
• How do you support your idea? Etc.

Reflection Paper format when submitting your assignments:

• Follow APA style formatting
• Double-spaced
• Properly cite your in-text sources and provide a works cited page if you use someone
else’s thoughts, ideas, or words!

The first section of the outline is the introduction, which identifies the subject and gives an
overview of your reaction to it. The introduction paragraph ends with your thesis statement,
which identifies whether your expectations were met and what you learned. The thesis
statement serves as the focal point of your paper. It also provides a transition to the body of the
paper and will be revisited in your conclusion.

The body of your paper identifies the three (or more, depending on the length of your paper)
major points that support your thesis statement. Each paragraph in the body should start with a
topic sentence. The rest of each paragraph supports your topic sentence. Keep in mind that a
transition sentence at the end of each paragraph creates a paper that flows logically and is easy
to read. When creating the outline, identify the topic sentence for each paragraph, and add the
supporting statements, evidence, and your own experiences or reactions to the subject
underneath.

The conclusion wraps up your essay, serving as the other bookend in stating and proving your
thesis statement. In outlining the conclusion, identify the thesis statement and add the main
points from the body paragraphs as a recap. Don’t add new information to the conclusion and
be sure to identify the closing statement of your reflection paper.

I. Introduction
   A. Identify and explain subject
   B. State your reaction to the subject
      1. Agree/disagree?
      2. Did you change your mind?
      3. Did the subject meet your expectations?
      4. What did you learn?
   C. Thesis Statement

II. Body Paragraph 1
   A. Topic Sentence
      1. Supporting evidence 1
      2. Supporting evidence 2
      3. Supporting evidence 3

III. Body Paragraph 2
   A. Topic Sentence
      1. Supporting evidence 1
      2. Supporting evidence 2
      3. Supporting evidence 3

IV. Body Paragraph 3
   A. Topic Sentence
      1. Supporting evidence 1
      2. Supporting evidence 2
      3. Supporting evidence 3

V. Conclusion
   A. Recap thesis statement
   B. Recap Paragraph 1
   C. Recap Paragraph 2
   D. Recap Paragraph 3
   E. Conclusion statement