of
a giver's bid for status is delicately dependent on the critical judgement
of peers. Another peculiarity is the relative purity of the open-source
culture. Most gift cultures are compromised -- either by exchange-economy
relationships such as trade in luxury goods, or by command-economy relationships
such as family or clan groupings. No significant analogues of these exist
in the open-source culture; thus, ways of gaining status other than by
peer repute are virtually absent.

-------------

9. Ownership Rights and Reputation Incentives

We are now in a position to pull together the previous
analyses into a coherent account of hacker ownership customs. We
understand the yield from homesteading the noosphere now; it is peer repute
in the gift culture of hackers, with all the secondary gains and side-effects
that implies. From this understanding, we can analyze the Lockean property
customs of hackerdom as a means of maximizing reputation incentives; of
ensuring that peer credit goes where it is due and does not go where it
is not due. The three taboos we observed above make perfect sense under
this analysis. One's reputation can suffer unfairly if someone else misappropriates
or mangles one's work; these taboos (and related customs) attempt to prevent
this from happening. (Or, to put it more pragmatically, hackers generally
refrain from forking or rogue-patching others' projects in order to be able
to deny legitimacy to the same behavior practiced against themselves.)
Forking projects is bad because it exposes pre-fork contributors to a reputation
risk they can only control by being active in both child projects simultaneously
after the fork. (This would generally be too confusing or difficult to
be practical.) Distributing rogue patches (or, much worse, rogue binaries)
exposes the owners to an unfair reputation risk. Even if the official code
is perfect, the owners will catch flak from bugs in the patches (but see
[RP]). Surreptitiously filing someone's name off a project is, in cultural
context, one of the ultimate crimes. It steals the victim's gift to be
presented as the thief's own. Of course, forking a project or distributing
rogue patches for it also directly attacks the reputation of the original
developer's group. If I fork or rogue-patch your project, I am saying:
"you made a wrong decision [by failing to take the project where I am taking
it]"; and anyone who uses my forked variation is endorsing this challenge.
But this in itself would be a fair challenge, albeit extreme; it's the
sharpest end of peer review. It's therefore not sufficient in itself to
account for the taboos, though it doubtless contributes force to them.
All three of these taboo behaviors inflict global harm on the open-source
community as well as local harm on the victim(s). Implicitly they damage
the entire community by decreasing each potential contributor's perceived
likelihood that gift/productive behavior will be rewarded. It's important
to note that there are alternate candidate explanations for two of these
three taboos. First, hackers often explain their antipathy to forking projects
by bemoaning the wasteful duplication of work it would imply as the child
products evolved in more-or-less parallel into the future. They may also
observe that forking tends to split the co-developer community, leaving
both child projects with fewer brains to work with than the parent. A respondent
has pointed out that it is unusual for more than one offspring of a fork
to survive with significant `market share' into the long term. This strengthens
the incentives for all parties to cooperate and avoid forking, because
it's hard to know in advance who will be on the losing side and see a lot
of their work either disappear entirely or languish in obscurity. Dislike
of rogue patches is often explained by observing that they can complicate
bug-tracking enormously, and inflict work on maintainers who have quite
enough to do catching their own mistakes. There is considerable truth to
these explanations, and they certainly do their bit to reinforce the Lockean
logic of ownership. But while intellectually attractive, they fail to explain
why so much emotion and territoriality gets displayed on the infrequent
occasions that the taboos get bent or broken -- not just by the injured
parties, but by bystanders and observers who often react quite harshly.
Cold-blooded concerns about duplication of work and maintenance hassles
simply do not sufficiently explain the observed behavior. Then, too, there
is the third taboo. It's hard to see how anything but the reputation-game
analysis can explain this. The fact that this taboo is seldom analyzed
much more deeply than ``It wouldn't be fair'' is revealing in its own way,
as we shall see in the next section.

-----------

10. The Problem of Ego
At the beginning of the paper I mentioned that the unconscious adaptive
knowledge of a culture is often at odds with its conscious ideology. We've
seen one major example of this already in the fact that Lockean ownership
customs have been widely followed despite the fact that they violate the
stated intent of the standard licenses. I have observed another interesting
example of this phenomenon when discussing the reputation-game analysis
with hackers. This is that many hackers resisted the analysis and showed
a strong reluctance to admit that their behavior was motivated by a desire
for peer repute or, as I incautiously labeled it at the time, `ego satisfaction'.
This illustrates an interesting point about the hacker culture. It consciously
distrusts and despises egotism and ego-based motivations; self-promotion
tends to be mercilessly criticized, even when the community might appear
to have something to gain from it. So much so, in fact, that the culture's
`big men' and tribal elders are required to talk softly and humorously
deprecate themselves at every turn in order to maintain their status. How
this attitude meshes with an incentive structure that apparently runs almost
entirely on ego cries out for explanation. A large part of it, certainly,
stems from the generally negative Europo-American attitude towards `ego'.
The cultural matrix of most hackers teaches them that desiring ego satisfaction
is a bad (or at least immature) motivation; that ego is at best an eccentricity
tolerable only in prima-donnas and often an actual sign of mental pathology.
Only sublimated and disguised forms like ``peer repute'', ``self-esteem'',
``professionalism'' or ``pride of accomplishment'' are generally acceptable.
I could write an entire other essay on the unhealthy roots of this part
of our cultural inheritance, and the astonishing amount of self-deceptive
harm we do by believing (against all the evidence of psychology and behavior)
that we ever have truly `selfless' motives. Perhaps I would, if Friedrich
Wilhelm Nietzsche and Ayn Rand had not already done an entirely competent
job (whatever their other failings) of deconstructing `altruism' into unacknowledged
kinds of self-interest. But I am not doing moral philosophy or psychology
here, so I will simply observe one minor kind of harm done by the belief
that ego is evil, which is this: it has made it emotionally difficult for
many hackers to consciously understand the social dynamics of their own
culture! But we are not quite done with this line of investigation. The
surrounding culture's taboo against visibly ego-driven behavior is so much
intensified in the hacker (sub)culture that one must suspect it of having
some sort of special adaptive function for hackers. Certainly the taboo
is weaker (or nonexistent) among many other gift cultures, such as the
peer cultures of theater people or the very wealthy!

-------

11. The Value of Humility

Having established that prestige is central to the hacker culture's
reward mechanisms, we now need to understand why it has seemed so important
that this fact remain semi-covert and largely unadmitted. The contrast
with the pirate culture is instructive. In that culture, status-seeking
behavior is overt and even blatant. These crackers seek acclaim for releasing
``zero-day warez'' (cracked software redistributed on the day of the original
uncracked version's release) but are closemouthed about how they do it.
These magicians don't like to give away their tricks. And, as a result,
the knowledge base of the cracker culture as a whole increases only slowly.
In the hacker community, by contrast, one's work is one's statement. There's
a very strict meritocracy (the best craftsmanship wins) and there's a strong
ethos that quality should (indeed must) be left to speak for itself. The
best brag is code that ``just works'', and that any competent programmer
can see is good stuff. Thus, the hacker culture's knowledge base increases
rapidly. The taboo against ego-driven posturing therefore increases productivity.
But that's a second-order effect; what is being directly protected here
is the quality of the information in the community's peer-evaluation system.
That is, boasting or self-importance is suppressed because it behaves like
noise tending to corrupt the vital signals from experiments in creative
and cooperative behavior. For very similar reasons, attacking the author
rather than the code is not done. There is an interesting subtlety here
that reinforces the point; hackers feel very free to flame each other over
ideological and personal differences, but it is unheard of for any hacker
to publicly attack another's competence at technical work (even private
criticism is unusual and tends to be muted in tone). Bug-hunting and criticism
are always project-labeled, not person-labeled. Furthermore, past bugs
are not automatically held against a developer; the fact that a bug has
been fixed is generally considered more important than the fact that one
used to be there. As one respondent observed, one can gain status by fixing
`Emacs bugs', but not by fixing `Richard Stallman's bugs' -- and it would
be considered extremely bad form to criticize Stallman for old Emacs bugs
that have since been fixed. This makes an interesting contrast with many
parts of academia, in which trashing putatively defective work by others
is an important mode of gaining reputation. In the hacker culture, such
behavior is rather heavily tabooed -- so heavily, in fact, that the absence
of such behavior did not present itself to me as a datum until that one
respondent with an unusual perspective pointed it out nearly a full year
after this paper was first published! The taboo against attacks on competence
(not shared with academia) is even more revealing than the (shared) taboo
on posturing, because we can relate it to a difference between academia
and hackerdom in their communications and support structures. The hacker
culture's medium of gifting is intangible, its communications channels
are poor at expressing emotional nuance, and face-to-face contact among
its members is the exception rather than the rule. This gives it a lower
tolerance of noise than most other gift cultures, and goes a long way to
explain the taboo against attacks on competence. Any significant incidence
of flames over hackers' competence would intolerably disrupt the culture's
reputation scoreboard. The same vulnerability to noise also explains the
exemplary public humility required of the hacker community's tribal
elders. They must be seen to be free of boast and posturing so the taboo
against dangerous noise will hold. [DC] Talking softly is also functional
if one aspires to be a maintainer of a successful project; one must convince
the community that one has good judgement, because most of the maintainer's
job is going to be judging other people's code. Who would be inclined to
contribute work to someone who clearly can't judge the quality of their
own code, or whose behavior suggests they will attempt to unfairly hog
the reputation return from the project? Potential contributors want project
leaders with enough humility and class to be able to say, when objectively
appropriate, ``Yes, that does work better than my version, I'll use it''
-- and to give credit where credit is due. Yet another reason for humble
behavior is that in the open source world, you seldom want to give the
impression that a project is `done'. This might lead a potential contributor
not to feel needed. The way to maximize your leverage is to be humble about
the state of the program. If one does one's bragging through the code,
and then says ``Well shucks, it doesn't do x, y, and z, so it can't be
that good'', patches for x, y, and z will often swiftly follow. Finally,
I have personally observed that the self-deprecating behavior of some leading
hackers reflects a real (and not unjustified) fear of becoming the object
of a personality cult. Linus Torvalds and Larry Wall both provide clear
and numerous examples of such avoidance behavior. Once, on a dinner expedition
with Larry Wall, I joked ``You're the alpha hacker here -- you get to pick
the restaurant''. He flinched audibly. And rightly so; failing to distinguish
their shared values from the personalities of their leaders has ruined
a good many voluntary communities, a pattern of which Larry and Linus cannot
fail to be fully aware. On the other hand, most hackers would love to have
Larry's problem, if they could but bring themselves to admit it.

-------------

12. Global Implications of the Reputation-Game Model

The reputation-game
analysis has some more implications that may not be immediately obvious.
Many of these derive from the fact that one gains more prestige from founding
a successful project than from cooperating in an existing one. One also
gains more from projects which are strikingly innovative, as opposed to
being `me, too' incremental improvements on software that already exists.
On the other hand, software that nobody but the author understands or has
a need for is a non-starter in the reputation game, and it's often easier
to attract good notice by contributing to an existing project than it is
to get people to notice a new one. Finally, it's much harder to compete
with an already successful project than it is to fill an empty niche. Thus,
there's an optimum distance from one's neighbors (the most similar competing
projects). Too close and one's product will be a ``me, too!'' of limited
value, a poor gift (one would be better off contributing to an existing
project). Too far away, and nobody will be able to use, understand, or
perceive the relevance of one's effort (again, a poor gift). This creates
a pattern of homesteading in the noosphere that rather resembles that of
settlers spreading into a physical frontier -- not random, but like a diffusion-limited
fractal. Projects tend to get started to fill functional gaps near the
frontier (see [NO] for further discussion of the lure of novelty). Some
very successful projects become `category killers'; nobody wants to homestead
anywhere near them because competing against the established base for the
attention of hackers would be too hard. People who might otherwise found
their own distinct efforts end up, instead, adding extensions for these
big, successful projects. The classic `category killer' example is GNU
Emacs; its variants fill the ecological niche for a fully-programmable
editor so completely that no competitor has gotten much beyond the one-man
project stage since the early 1980s. Instead, people write Emacs modes.
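The optimum-distance argument can be sketched as a toy model. Everything here is an illustrative assumption rather than a measurement: a `novelty' curve that rises with distance from the nearest existing project, an `audience' curve that falls with it, and a gift value that is their product, peaking somewhere in between.

```python
import math

# Toy model of the "optimum distance" tradeoff in the noosphere.
# Both curves below are invented for illustration; only the hump-shaped
# product, not the particular formulas, reflects the argument in the text.

def novelty(d):
    # Too close to an existing project: a "me, too" effort of little value.
    return 1 - math.exp(-d)

def audience(d):
    # Too far from anything familiar: nobody can use or evaluate the work.
    return math.exp(-d / 2)

def gift_value(d):
    return novelty(d) * audience(d)

# Scan candidate distances from one's nearest neighbor and pick the best.
candidates = [i / 10 for i in range(1, 100)]
best = max(candidates, key=gift_value)
```

On this grid the best distance comes out near 1.1; the exact number is meaningless, but the shape of the tradeoff is the point: both a clone of an existing tool and an utterly alien one make poor gifts.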
Globally, these two tendencies (gap-filling and category-killers) have
driven a broadly predictable trend in project starts over time. In the
1970s most of the open source that existed was toys and demos. In the 1980s
the push was in development and Internet tools. In the 1990s the action
shifted to operating systems. In each case, a new and more difficult level
of problems was attacked when the possibilities of the previous one had
been nearly exhausted. This trend has interesting implications for the
near future. In early 1998, Linux looks very much like a category-killer
for the niche `open-source operating systems' -- people who might otherwise
write competing operating systems are now writing Linux device drivers
and extensions instead. And most of the lower-level tools the culture ever
imagined having as open-source already exist. What's left? Applications.
As the year 2000 approaches, it seems safe to predict that open-source
development effort will increasingly shift towards the last virgin territory
-- programs for non-techies. A clear early indicator is the development
of GIMP, the Photoshop-like image workshop that is open source's first
major application with the kind of end-user-friendly GUI interface considered
de rigueur in commercial applications for the last decade. Another is the
amount of buzz surrounding application-toolkit projects like KDE and GNOME.
A respondent to this paper has pointed out that the homesteading analogy
also explains why hackers react with such visceral anger to Microsoft's
``embrace and extend'' policy of complexifying and then closing up Internet
protocols. The hacker culture can coexist with most closed software; the
existence of Adobe Photoshop, for example, does not make the territory
near GIMP (its open-source equivalent) significantly less attractive. But
when Microsoft succeeds at de-commoditizing [HD] a protocol so that only
Microsoft's own programmers can write software for it, they do not merely
harm customers by extending their monopoly. They also reduce the amount
and quality of noosphere available for hackers to homestead and cultivate.
No wonder hackers often refer to Microsoft's strategy as ``protocol pollution'';
they are reacting exactly like farmers watching someone poison the river
they water their crops with! Finally, the reputation-game analysis explains
the oft-cited dictum that you do not become a hacker by calling yourself
a hacker -- you become a hacker when other hackers call you a hacker. A
`hacker', considered in this light, is somebody who has shown (by contributing
gifts) that he or she both has technical ability and understands how the
reputation game works. This judgement is mostly one of awareness and acculturation,
and can only be delivered by those already well inside the culture.

----------------

13. How Fine a Gift?

There are consistent patterns in the way the hacker
culture values contributions and returns peer esteem for them. It's not
hard to observe the following rules:

1. If it doesn't work as well as I
have been led to expect it will, it's no good -- no matter how clever and
original it is. Note the `led to expect'. This rule is not a demand for
perfection; beta and experimental software is allowed to have bugs. It's
a demand that the user be able to accurately estimate risks from the stage
of the project and the developers' representations about it. This rule
underlies the fact that open-source software tends to stay in beta for
a long time, and not get even a 1.0 version number until the developers
are very sure it will not hand out a lot of nasty surprises. In the closed-source
world, Version 1.0 means ``Don't touch this if you're prudent''; in the
open-source world it reads something more like ``The developers are willing
to bet their reputations on this.''

2. Work that extends the noosphere
is better than work that duplicates an existing piece of functional territory.
The naive way to put this would have been: Original work is better than
duplicating the functions of existing software. But it's not actually quite
that simple. Duplicating the functions of existing closed software counts
as highly as original work if by doing so you break open a closed protocol
or format and make that territory newly available. Thus, for example, one
of the highest-prestige projects in the present open-source world is Samba
-- the code that allows Unix machines to act as clients or servers for
Microsoft's proprietary SMB file-sharing protocol. There is very little
creative work to be done here; it's mostly an issue of getting the reverse-engineered
details right. Nevertheless, the members of the Samba group are perceived
as heroes because they neutralize a Microsoft effort to lock in whole user
populations and cordon off a big section of the noosphere.

3. Work that
makes it into a major distribution is better than work that doesn't. Work
carried in all major distributions is most prestigious. The major distributions
include not just the big Linux distributions like Red Hat, Debian, Caldera,
and S.u.S.E., but other collections that are understood to have reputations
of their own to maintain and thus implicitly certify quality -- like BSD
distributions or the Free Software Foundation source collection.

4. Utilization
is the sincerest form of flattery -- and category killers are better than
also-rans. Trusting the judgment of others is basic to the peer-review
process. It's necessary because nobody has time to review all possible
alternatives. So work used by lots of people is considered better than
work used by a few. To have done work so good that nobody cares to use
the alternatives any more is therefore to have earned huge prestige. The
most possible peer esteem comes from having done widely popular, category-killing
original work that is carried by all major distributions. People who have
pulled this off more than once are half-seriously referred to as `demigods'.
5. Continued devotion to hard, boring work (like debugging, or writing
documentation) is more praiseworthy than cherrypicking the fun and easy
hacks. This norm is how the community rewards necessary tasks that hackers
would not naturally incline towards. It is to some extent contradicted
by:

6. Nontrivial extensions of function are better than low-level patches
and debugging. The way this seems to work is that on a one-shot basis,
adding a feature is likely to get more reward than fixing a bug -- unless
the bug is exceptionally nasty or obscure, such that nailing it is itself
a demonstration of unusual skill and cleverness. But when these behaviors
are extended over time, a person with a long history of paying attention
to and nailing even ordinary bugs may well outrank someone who has spent a
similar amount of effort adding easy features. A respondent has pointed
out that these rules interact in interesting ways and do not necessarily
reward highest possible utility all the time. Ask a hacker whether he's
likely to become better known for a brand new tool of his own or for extensions
to someone else's and the answer ``new tool'' will not be in doubt. But
ask about (a) a brand new tool which is only used a few times a day invisibly
by the OS but which rapidly becomes a category killer versus (b) several
extensions to an existing tool which are neither especially novel nor category-killers,
but are daily used and daily visible to a huge number of users and you
are likely to get some hesitation before the hacker settles on (a). These
alternatives are about evenly stacked. Said respondent gave this question
point for me by adding ``Case (a) is fetchmail; case (b) is your many Emacs
extensions, like vc.el and gud.el.'' And indeed he is correct; I am more
likely to be tagged `the author of fetchmail' than `author of a boatload
of Emacs modes', even though the latter probably have had higher total
utility over time. What may be going on here is simply that work with a
novel `brand identity' gets more notice than work aggregated to an existing
`brand'. Elucidation of these rules, and what they tell us about the hacker
culture's scoreboarding system, would make a good topic for further investigation.
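As a thought experiment, the six rules can be caricatured as an explicit scoring function. Every weight and field name below is invented for illustration (the culture keeps no such tally); the one structural claim taken from the text is that rule 1 acts as a gatekeeper rather than a bonus.

```python
import math

# Hypothetical scoreboard for the six esteem rules above; all weights
# are made-up illustrations, not observed quantities.

def esteem_score(work):
    if not work["works_as_represented"]:
        return 0.0                                     # rule 1: no credit at all
    score = 2.0 if work["extends_noosphere"] else 0.5  # rule 2
    score += 0.5 * len(work["major_distributions"])    # rule 3
    score += 0.3 * math.log1p(work["users"])           # rule 4, diminishing returns
    score += 0.4 * work["years_of_grunt_work"]         # rule 5
    if work["nontrivial_extension"]:                   # rule 6
        score += 1.0
    return score

# A Samba-like protocol opener versus a `me, too' clone of existing software.
opener = {"works_as_represented": True, "extends_noosphere": True,
          "major_distributions": ["Red Hat", "Debian", "S.u.S.E."],
          "users": 100000, "years_of_grunt_work": 3,
          "nontrivial_extension": True}
clone = {"works_as_represented": True, "extends_noosphere": False,
         "major_distributions": [], "users": 200,
         "years_of_grunt_work": 0, "nontrivial_extension": False}
```

Under these made-up weights the protocol opener handily outscores the clone, and any work that fails rule 1 scores nothing regardless of its other virtues.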
14. Noospheric Property and the Ethology of Territory

To understand the
causes and consequences of Lockean property customs, it will help us to
look at them from yet another angle; that of animal ethology, specifically
the ethology of territory. Property is an abstraction of animal territoriality,
which evolved as a way of reducing intra-species violence. By marking his
bounds, and respecting the bounds of others, a wolf diminishes his chances
of being in a fight that could weaken or kill him and make him less reproductively
successful. Similarly, the function of property in human societies is to
prevent inter-human conflict by setting bounds that clearly separate peaceful
behavior from aggression. It is fashionable in some circles to describe
human property as an arbitrary social convention, but this is dead wrong.
Anybody who has ever owned a dog who barked when strangers came near its
owner's property has experienced the essential continuity between animal
territoriality and human property. Our domesticated cousins of the wolf
know, instinctively, that property is no mere social convention or game,
but a critically important evolved mechanism for the avoidance of violence.
(This makes them smarter than a good many human political theorists.) Claiming
property (like marking territory) is a performative act, a way of declaring
what boundaries will be defended. Community support of property claims
is a way to minimize friction and maximize cooperative behavior. These
things remain true even when the ``property claim'' is much more abstract
than a fence or a dog's bark, even when it's just the statement of the
project maintainer's name in a README file. It's still an abstraction of
territoriality, and (like other forms of property) based in territorial
instincts evolved to assist conflict resolution. This ethological analysis
may at first seem very abstract and difficult to relate to actual hacker
behavior. But it has some important consequences. One is in explaining
the popularity of World Wide Web sites, and especially why open-source
projects with websites seem so much more `real' and substantial than those
without them. Considered objectively, this seems hard to explain. Compared
to the effort involved in originating and maintaining even a small program,
a web page is easy, so it's hard to consider a web page evidence of substance
or unusual effort. Nor are the functional characteristics of the Web itself
sufficient explanation. The communication functions of a web page can be
as well or better served by a combination of an FTP site, a mailing list,
and Usenet postings. In fact it's quite unusual for a project's routine
communications to be done over the Web rather than via a mailing list or
newsgroup. Why, then, the popularity of Web sites as project homes? The
metaphor implicit in the term `home page' provides an important clue. While
founding an open-source project is a territorial claim in the noosphere
(and customarily recognized as such) it is not a terribly compelling one
on the psychological level. Software, after all, has no natural location
and is instantly reduplicable. It's assimilable to our instinctive notions
of `territory' and `property', but only after some effort. A project home
page concretizes an abstract homesteading in the space of possible programs
by expressing it as `home' territory in the more spatially-organized realm
of the World Wide Web. Descending from the noosphere to `cyberspace' doesn't
get us all the way to the real world of fences and barking dogs yet, but
it does hook the abstract property claim more securely to our instinctive
wiring about territory. And this is why projects with web pages seem more
`real'. This point is much strengthened by hyperlinks and the existence
of good search engines. A project with a web page is much more likely to
be noticed by somebody exploring its neighborhood in the noosphere; others
will link to it, searches will find it. A web page is therefore a better
advertisement, a more effective performative act, a stronger claim on territory.
This ethological analysis also encourages us to look more closely at mechanisms
for handling conflict in the open-source culture. It leads us to expect
that, in addition to maximizing reputation incentives, ownership customs
should also have a role in preventing and resolving conflicts.

16. Project Structures and Ownership

The trivial case is that in which the project
has a single owner/maintainer. In that case there is no possible conflict.
The owner makes all decisions and collects all credit and blame. The only
possible conflicts are over succession issues -- who gets to be the new
owner if the old one disappears or loses interest. The community also has
an interest, under issue (C), in preventing forking. These interests are
expressed by a cultural norm that an owner/maintainer should publicly hand
title to someone if he or she can no longer maintain the project. The simplest
non-trivial case is when a project has multiple co-maintainers working
under a single `benevolent dictator' who owns the project. Custom favors
this mode for group projects; it has been shown to work on projects as
large as the Linux kernel or Emacs, and solves the ``who decides'' problem
in a way that is not obviously worse than any of the alternatives. Typically,
a benevolent-dictator organization evolves from an owner-maintainer organization
as the founder attracts contributors. Even if the owner stays dictator,
it introduces a new level of possible disputes over who gets credited for
what parts of the project. In this situation, custom places an obligation
on the owner/dictator to credit contributors fairly (through, for example,
appropriate mentions in README or history files). In terms of the Lockean
property model, this means that by contributing to a project you earn part
of its reputation return (positive or negative). Pursuing this logic, we
see that a `benevolent dictator' does not in fact own his entire project
unqualifiedly. Though he has the right to make binding decisions, he in
effect trades away shares of the total reputation return in exchange for
others' work. The analogy with sharecropping on a farm is almost irresistible,
except that a contributor's name stays in the credits and continues to
`earn' to some degree even after that contributor is no longer active.
As benevolent-dictator projects add more participants, they tend to develop
two tiers of contributors: ordinary contributors and co-developers. A typical
path to becoming a co-developer is taking responsibility for a major subsystem
of the project. Another is to take the role of `lord high fixer', characterizing
and fixing many bugs. In this way or others, co-developers are the contributors
who make a substantial and continuing investment of time in the project.
The subsystem-owner role is particularly important for our analysis and
deserves further examination. Hackers like to say that `authority follows
responsibility'. A co-developer who accepts maintenance responsibility
for a given subsystem generally gets to control both the implementation
of that subsystem and its interfaces with the rest of the project, subject
only to correction by the project leader (acting as architect). We observe
that this rule effectively creates enclosed properties on the Lockean model
within a project, and has exactly the same conflict-prevention role as
other property boundaries. By custom, the `dictator' or project leader
in a project with co-developers is expected to consult with those co-developers
on key decisions. This is especially so if the decision concerns a subsystem
which a co-developer `owns' (that is, has invested time in and taken responsibility
for). A wise leader, recognizing the function of the project's internal
property boundaries, will not lightly interfere with or reverse decisions
made by subsystem owners. Some very large projects discard the `benevolent
dictator' model entirely. One way to do this is to turn the co-developers
into a voting committee (as with Apache). Another is rotating dictatorship,
in which control is occasionally passed from one member to another within
a circle of senior co-developers; the Perl developers organize themselves
this way. Such complicated arrangements are widely considered unstable
and difficult. Clearly this perceived difficulty is largely a function
of the known hazards of design-by-committee, and of committees themselves;
these are problems the hacker culture consciously understands. However,
I think some of the visceral discomfort hackers feel about committee or
rotating-chair organizations is because they're hard to fit into the unconscious
Lockean model hackers use for reasoning about the simpler cases. It's problematic,
in these complex organizations, to do an accounting of either ownership
in the sense of control or ownership of reputation returns. It's hard to
see where the internal boundaries are, and thus hard to avoid conflict
unless the group enjoys an exceptionally high level of harmony and trust.
17. Conflict and Conflict Resolution We've seen that within projects, an
increasing complexity of roles is expressed by a distribution of design
authority and partial property rights. While this is an efficient way to
distribute incentives, it also dilutes the authority of the project leader
-- most importantly, it dilutes the leader's authority to squash potential
conflicts. While technical arguments over design might seem the most obvious
risk for internecine conflict, they are seldom a serious cause of strife.
These are usually relatively easily resolved by the territorial rule that
authority follows responsibility. Another way of resolving conflicts is
by seniority -- if two contributors or groups of contributors have a dispute,
and the dispute cannot be resolved objectively, and neither owns the territory
of the dispute, the side that has put the most work into the project as
a whole (that is, the side with the most property rights in the whole project)
wins. (Equivalently, the side with the least invested loses. Interestingly,
this happens to be the same heuristic by which many relational database engines
resolve deadlocks. When two transactions are deadlocked over resources, the
one with the least invested in its current work is selected as
the deadlock victim and terminated. This usually leaves the longest-running,
more senior transaction as the victor.) These rules generally
suffice to resolve most project disputes. When they do not, fiat of the
project leader usually suffices. Disputes that survive both these filters
are rare. Conflicts do not as a rule become serious unless these two criteria
("authority follows responsibility" and "seniority wins") point in different
directions, and the authority of the project leader is weak or absent.
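The deadlock-victim heuristic mentioned above can be sketched in a few lines. This is a toy illustration of the "least invested loses" rule, not any particular engine's implementation; the `rows_written` measure of invested work is an assumption made for the example:

```python
# Toy sketch of the deadlock-victim heuristic: when two transactions
# deadlock, abort the one with the least work invested, so the more
# "senior" transaction survives. rows_written stands in for whatever
# cost measure a real engine would track.

from dataclasses import dataclass

@dataclass
class Transaction:
    name: str
    rows_written: int  # proxy for work invested so far

def pick_deadlock_victim(a: Transaction, b: Transaction) -> Transaction:
    """Return the transaction to abort: the one with less invested."""
    return a if a.rows_written < b.rows_written else b

# The long-running transaction wins; the newcomer is rolled back.
victim = pick_deadlock_victim(Transaction("reindex", 10_000),
                              Transaction("quick_update", 3))
```

Here `victim` is the `quick_update` transaction, mirroring the seniority rule: the side with the most accumulated investment keeps the territory.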
The most obvious case in which this may occur is a succession dispute following
the disappearance of the project lead. I have been in one fight of this
kind. It was ugly, painful, protracted, only resolved when all parties
became exhausted enough to hand control to an outside person, and I devoutly
hope I am never anywhere near anything of the kind again. Ultimately, all
of these conflict-resolution mechanisms rest on the wider hacker community's
willingness to enforce them. The only available enforcement mechanisms
are flaming and shunning -- public condemnation of those who break custom,
and refusal to cooperate with them after they have done so. 18. Acculturation
Mechanisms and the Link to Academia An early version of this paper posed
the following research question: How does the community inform and instruct
its members as to its customs? Are the customs self-evident or self-organizing
at a semi-conscious level, are they taught by example, are they taught
by explicit instruction? Teaching by explicit instruction is clearly rare,
if only because few explicit descriptions of the culture's norms have existed
to be used up to now. Many norms are taught by example. To cite one very
simple case, there is a norm that every software distribution should have
a file called README or READ.ME that contains first-look instructions for
browsing the distribution. This convention has been well established since
at least the early 1980s; it has even, occasionally, been written down.
But one normally derives it from looking at many distributions. On the
other hand, some hacker customs are self-organizing once one has acquired
a basic (perhaps unconscious) understanding of the reputation game. Most
hackers never have to be taught the three taboos I listed earlier in this
paper, or at least would claim if asked that they are self-evident rather
than transmitted. This phenomenon invites closer analysis -- and perhaps
we can find its explanation in the process by which hackers acquire knowledge
about the culture. Many cultures use hidden clues (more precisely `mysteries'
in the religio/mystical sense) as an acculturation mechanism. These are
secrets which are not revealed to outsiders, but are expected to be discovered
or deduced by the aspiring newbie. To be accepted inside, one must demonstrate
that one both understands the mystery and has learned it in a culturally
approved way. The hacker culture makes unusually conscious and extensive
use of such clues or tests. We can see this process operating at at least
three levels: Password-like specific mysteries. As one example, there is
a USENET newsgroup called alt.sysadmin.recovery that has a very explicit
such secret; you cannot post without knowing it, and knowing it is considered
evidence you are fit to post. The regulars have a strong taboo against
revealing this secret. The requirement of initiation into certain technical
mysteries. One must absorb a good deal of technical knowledge before one
can give valued gifts (e.g. one must know at least one of the major computer
languages). This requirement functions in the large in the way hidden clues
do in the small, as a filter for qualities (such as capability for abstract
thinking, persistence, and mental flexibility) which are necessary to function
in the culture. Social-context mysteries. One becomes involved in the culture
through attaching oneself to specific projects. Each project is a live
social context of hackers which the would-be contributor has to investigate
and understand socially as well as technically in order to function. (Concretely,
a common way one does this is by reading the project's Web pages and/or
email archives.) It is through these project groups that newbies experience
the behavioral example of experienced hackers. In the process of acquiring
these mysteries, the would-be hacker picks up contextual knowledge which
(after a while) does make the three taboos and other customs seem `self-evident'.
One might, incidentally, argue that the structure of the hacker gift culture
itself is its own central mystery. One is not considered acculturated (concretely:
no one will call you a hacker) until one demonstrates a gut-level understanding
of the reputation game and its implied customs, taboos, and usages. But
this is trivial; all cultures demand such understanding from would-be joiners.
Furthermore the hacker culture evinces no desire to have its internal logic
and folkways kept secret -- or, at least, nobody has ever flamed me for
revealing them! Respondents to this paper too numerous to list have pointed
out that hacker ownership customs seem intimately related to (and may derive
directly from) the practices of the academic world, especially the scientific
research community. This research community has similar problems in mining
a territory of potentially productive ideas, and exhibits very similar
adaptive solutions to those problems in the ways it uses peer review and
reputation. Since many hackers have had formative exposure to academia
(it's common to learn how to hack while in college) the extent to which
academia shares adaptive patterns with the hacker culture is of more than
casual interest in understanding how these customs are applied. Obvious
parallels with the hacker `gift culture' as I have characterized it abound
in academia. Once a researcher achieves tenure, there is no need to worry
about survival issues. (Indeed, the concept of tenure can probably be traced
back to an earlier gift culture in which ``natural philosophers'' were
primarily wealthy gentlemen with time on their hands to devote to research.)
In the absence of survival issues, reputation enhancement becomes the driving
goal, which encourages sharing of new ideas and research through journals
and other media. This makes objective functional sense because scientific
research, like the hacker culture, relies heavily on the idea of `standing
upon the shoulders of giants', and not having to rediscover basic principles
over and over again. Some have gone so far as to suggest that hacker customs
are merely a reflection of the research community's folkways and have actually
(in most cases) been acquired there by individual hackers. This probably
overstates the case, if only because hacker custom seems to be readily
acquired by intelligent high-schoolers! 19. Gift Outcompetes Exchange There
is a more interesting possibility here. I suspect academia and the hacker
culture share adaptive patterns not because they're genetically related,
but because they've both evolved the one most optimal social organization
for what they're trying to do, given the laws of nature and the instinctive
wiring of human beings. The verdict of history seems to be that free-market
capitalism is the globally optimal way to cooperate for economic efficiency;
perhaps, in a similar way, the reputation-game gift culture is the globally
optimal way to cooperate for generating (and checking!) high-quality creative
work. Support for this theory comes from a large body of psychological
studies on the interaction between art and reward [GNU]. These studies
have received less attention than they should, in part perhaps because
their popularizers have shown a tendency to overinterpret them into general
attacks against the free market and intellectual property. Nevertheless,
their results do suggest that some kinds of scarcity-economics rewards
actually decrease the productivity of creative workers such as programmers.
Psychologist Theresa Amabile of Brandeis University, cautiously summarizing
the results of a 1984 study of motivation and reward, observed ``It may
be that commissioned work will, in general, be less creative than work
that is done out of pure interest''. Amabile goes on to observe that ``The
more complex the activity, the more it's hurt by extrinsic reward.'' Interestingly,
the studies suggest that flat salaries don't demotivate, but piecework
rates and bonuses do. Thus, it may be economically smart to give performance
bonuses to people who flip burgers or dig ditches, but it's probably smarter
to decouple salary from performance in a programming shop and let people
choose their own projects (both trends that the open-source world takes
to their logical conclusions). Indeed, these results suggest that the only
time it is a good idea to reward performance in programming is when the
programmer is so motivated that he or she would have worked without the
reward! Other researchers in the field are willing to point a finger straight
at the issues of autonomy and creative control that so preoccupy hackers.
``To the extent one's experience of being self-determined is limited,''
said Richard Ryan, associate psychology professor at the University of
Rochester, ``one's creativity will be reduced as well.'' In general, presenting
any task as a means rather than an end in itself seems to demotivate. Even
winning a competition with others or gaining peer esteem can be demotivating
in this way if it is experienced as work for reward (which may explain
why hackers are culturally prohibited from explicitly seeking or claiming
that esteem). To complicate the management problem further, controlling
verbal feedback seems to be just as demotivating as piecework payment.
Ryan found that corporate employees who were told, ``Good, you're doing
as you should'' were ``significantly less intrinsically motivated than
those who received feedback informationally.'' It may still be intelligent
to offer incentives, but they have to come without attachments to avoid
gumming up the works. There is a critical difference (Ryan observes) between
saying, ``I'm giving you this reward because I recognize the value of your
work'' and ``You're getting this reward because you've lived up to my standards.''
The first does not demotivate; the second does. In these psychological
observations we can ground a case that an open-source development group
will be substantially more productive (especially over the long term, in
which creativity becomes more critical as a productivity multiplier) than
an equivalently sized and skilled group of closed-source programmers (de)motivated
by scarcity rewards. This suggests from a slightly different angle one
of the speculations in The Cathedral And The Bazaar; that, ultimately,
the industrial/factory mode of software production was doomed to be outcompeted
from the moment capitalism began to create enough of a wealth surplus that
many programmers could live in a post-scarcity gift culture. Indeed, it
seems the prescription for highest software productivity is almost a Zen
paradox; if you want the most efficient production, you must give up trying
to make programmers produce. Handle their subsistence, give them their
heads, and forget about deadlines. To a conventional manager this sounds
crazily indulgent and doomed -- but it is exactly the recipe with which
the open-source culture is now clobbering its competition. 20. Conclusion:
From Custom to Customary Law We have examined the customs which regulate
the ownership and control of open-source software. We have seen how they
imply an underlying theory of property rights homologous to the Lockean
theory of land tenure. We have related that to an analysis of the hacker
culture as a `gift culture' in which participants compete for prestige
by giving time, energy, and creativity away. We have examined the implications
of this analysis for conflict resolution in the culture. The next logical
question to ask is "Why does this matter?" Hackers developed these customs
without conscious analysis and (up to now) have followed them without conscious
analysis. It's not immediately clear that conscious analysis has gained
us anything practical -- unless, perhaps, we can move from description
to prescription and deduce ways to improve the functioning of these customs.
We have found a close logical analogy for hacker customs in the theory
of land tenure under the Anglo-American common-law tradition. Historically
[Miller], the European tribal cultures that invented this tradition improved
their dispute-resolution systems by moving from a system of unarticulated,
semi-conscious custom to a body of explicit customary law memorized by
tribal wisemen -- and eventually, written down. Perhaps, as our population
rises and acculturation of all new members becomes more difficult, it is
time for the hacker culture to do something analogous -- to develop written
codes of good practice for resolving the various sorts of disputes that
can arise in connection with open-source projects, and a tradition of arbitration
in which senior members of the community may be asked to mediate disputes.
The analysis in this paper suggests the outlines of what such a code might
look like, making explicit that which was previously implicit. No such
codes could be imposed from above; they would have to be voluntarily adopted
by the founders or owners of individual projects. Nor could they be completely
rigid, as the pressures on the culture are likely to change over time.
Finally, for enforcement of such codes to work, they would have to reflect
a broad consensus of the hacker tribe. I have begun work on such a code,
tentatively titled the "Malvern Protocol" after the little town where I
live. If the general analysis in this paper becomes sufficiently widely
accepted, I will make the Malvern Protocol publicly available as a model
code for dispute resolution. Parties interested in critiquing and developing
this code, or just offering feedback on whether they think it's a good
idea or not, are invited to contact me by email. 21. Questions for Further
Research The culture's (and my own) understanding of large projects that
don't follow a benevolent-dictator model is weak. Most such projects fail.
A few become spectacularly successful and important (Perl, Apache, KDE).
Nobody really understands where the difference lies. There's a vague sense
abroad that each such project is sui generis and stands or falls on the
group dynamic of its particular members, but is this true or are there
replicable strategies a group can follow? 22. Bibliography [Miller] Miller,
William Ian; Bloodtaking and Peacemaking: Feud, Law, and Society in Saga
Iceland; University of Chicago Press 1990, ISBN 0-226-52680-1. A fascinating
study of Icelandic folkmoot law, which both illuminates the ancestry of
the Lockean theory of property and describes the later stages of a historical
process by which custom passed into customary law and thence to written
law. [Mal] Malaclypse the Younger; Principia Discordia, or How I Found
Goddess and What I Did To Her When I Found Her; Loompanics, ISBN 1-55950-040-9.
There is much enlightening silliness to be found in Discordianism. Amidst
it, the `SNAFU principle' provides a rather trenchant analysis of why command
hierarchies don't scale well. There's a browseable HTML version. [BCT]
J. Barkow, L. Cosmides, and J. Tooby (Eds.); The adapted mind: Evolutionary
psychology and the generation of culture. New York: Oxford University Press
1992. An excellent introduction to evolutionary psychology. Some of the
papers bear directly on the three cultural types I discuss (command/exchange/gift),
suggesting that these patterns are wired into the human psyche fairly deep.
[MHG] Goldhaber, Michael K.; The Attention Economy and the Net. I discovered
this paper after my version 1.7. It has obvious flaws (Goldhaber's argument
for the inapplicability of economic reasoning to attention does not bear
close examination), but Goldhaber nevertheless has funny and perceptive
things to say about the role of attention-seeking in organizing behavior.
The prestige or peer repute I have discussed can fruitfully be viewed as
a particular case of attention in his sense. ---------------------
The Circus Midget and the Fossilized Dinosaur Turd -or- "What up with that
software industry?" A Treatise on Free Software Development. With apologies
to Eric S. Raymond. ---- By Martin Hock (oxymoron@bigsky.net) Copyright
1998. This is a parody. It is completely fictitious. I assume no liability.
Please don't hurt me. Yes, there's an actual point to this. I went down
to the Ethnic Quarter of the Montanan "city" I live in today, which normally
consists of approximately three black people. Today, however, was different.
Not only were there the normal three black people, but there were a couple
of weird Europeans who had apparently gotten lost. On my way into the Cheap
Legal Drugs Mart, I happened to overhear their conversation, which went
approximately as follows: "You looka at the state ofa the software industry
today, my frien, anda what do you see? You see a biga ball of the shit.
That'sa what you see." The other guy didn't say anything, probably because
he was too busy staring at a woman across the street. Still, it got me
thinking. What up with that software industry, anyway? As I went home that
night, I couldn't shake the image of the slobbering man from my mind. While
I watched for the umpteenth time the Juiceman Juicer infomercial formed
by a beam of electrons refreshing half the screen 60 times a second, I
suddenly realized that I could make money off this concept if I went around
the country making speeches about what up with that software industry.
I looked at the room around me. Filled with empty beer bottles and crinkled
pornography magazines dating back to the late 1970's, I realized that sinking
all of my money into the simple pleasures in life brought me all the satisfaction
that I ever needed. Oh, right, the software part. Yeah, anyway, I thought
back to when I was a little kid and how I used to love the circus. I didn't
like the lions, or the stupid gymnasts, or the evil foul-smelling clowns.
What I liked were the freaks. They helped remind me that there were people
in the world who were even more pathetic than myself. I especially liked
the midget. His bulging little eyes used to follow me around my room, his
stained leotard a constant reminder to the audience that bladder control
is essential to functioning as a part of society. I wondered what that
little man got paid. Probably sub-minimum wage. My parents used to feel
guilty when they walked by him. He had a little tattered hat next to him
with a small card taped in front that simply stated, "Donations." It was
always empty, except for a couple of pennies. "The horrible way that circus
treats that poor man," my mother always said. "If he didn't like it, he'd
work somewhere else," my father would respond gruffly, his mono-brow dipped
downward in the middle. They never put anything in the hat. Other days,
we used to go to the museum. There were many things to look at when we
went there, but the ones I most liked to observe were the dinosaurs. They
were so huge and fierce. They reminded me that there were forces in life
stronger even than parents. The big, bony structures didn't really tell
me much, though. What I really liked to look at were the turds. They were
these gigantic, ellipsoid masses. I could almost touch them except for
a thin pane of Plexiglas. The small brass plate called it "excrement" or
"feces" but I knew better; it was a turd, nothing less. I would dream about
going in there at night, shattering the barrier, and taking the mass home
with me. It wasn't scatological or anything. What I really wanted to do
was drop it on a car from the overpass. Those cinder blocks did hardly
any damage on the hardtops and hitting the windshield was nearly impossible
from such an angle. The midget was a lot like free software. True, getting
into the carnival wasn't free, so I guess that's like the hardware. But
you could look at the midget all you liked. You could take pictures of
the midget and bring them home. He modified himself sometimes; you'd see
a new stain every time the carnival came to town. He'd get a little older,
a little uglier. Back when I was a kid it was really cool, but if I went
there today to see the midget, I wouldn't even care. There are better things
to do with one's afternoon than to go look at a midget. The fossilized
dinosaur turd was a lot like commercial software. It was big and robust.
It was well supported by a velveteen cushion. It even had a nice layer
of security instated by the Plexiglas. I could have stolen it, but there
would be potential repercussions. I know that I could have taken the midget
with me, but what would be the point? Also, the turd has a lot of potential
uses. You could drop it on a car, a bus, or even a pedestrian. That's what
I call adaptive. I could have modified the midget by feeding him lead shot
over a course of several weeks, but this would have been time consuming.
Why waste your time when the turd is already there, ready for use? So that's
what I have to say about software development. You wanna give me my money
now? Oh, I suppose you'd expect a little more than that for ten grand.
All right, I'll continue. Look at the midget. It is feeble and weak compared
to the dinosaur turd. It is the undiscovered, the lost. There was no banner
trumpeting the arrival of the midget in town. However, it is alive. The
dinosaur turd, though famous and strong, is dead. It has little hope for
improvement, as the dinosaur that laid it is long extinct. Young dinosaurs
may have frolicked in the field of turds, but a thick dust cloud ended
all hopes of survival. A dust cloud, you might notice, made up of thousands
of tiny particles, all working in unison. The midget stands alone, hoping
for support, but the dust particles, all driven by the jurassic breeze,
manage to topple even the largest dinosaur. Only the small, well-protected
creatures remain. So what of the dust? Ah, it is the proletariat rebellion,
waiting to happen, to conquer the bourgeois beast! It is inevitable, but
we can bring it on ourselves if we work hard enough. We must employ thousands
of workers at equal wages to create a giant fan fit for the ages. Then,
we make a solar-powered generator, which allows for the falling away of
the state since we won't have to turn the crank ourselves. Then, we just
sit back and relax as the winds blow the dust and blissful anarchy sets
in. But what of the tiny creatures? Ah, these are the seeds of a new generation!
These will grow up one day to form factions, which can only be prevented
from taking over the government if we implement plenty of checks and balances...
Oh, I'm done now? I get the check already? But I have another nine and
a half hours... ------------ Fame? Ego? Oversimplification! (I originally
wrote this 14 July 1998 in response to a thread on Slashdot.) Many messages
appearing on Slashdot in the last couple of days have made me wince pretty
hard...and consider whether, in fact, I was really wise to try to haul
the social dynamics of hackerdom out into the light. What's bothering me
the most is some of the people who have gotten enthusiastic about the analysis
I presented in The Cathedral and the Bazaar (CatB) and Homesteading The
Noosphere (HtN), but, in their enthusiasm, are arguing something like a
bad parody of it. I don't use the word `fame' at all in either paper, except
once in reporting on Fare Rideau's critique of an early version of HtN.
(The reference has since been removed; Fare reworded his critique after
reading this essay.) This is not an accident. `Fame' is a vulgar, brassy,
and shallow thing when compared to the earned and considered esteem of
one's peers. Believe me on this, because I've had quite a bit of both (especially
lately) and I know which one feels like a cheap high with a bad hangover
and which one is food for the soul. And so, I think, do most hackers. It
oversimplifies my work and (much more importantly) insults the people and
culture my work describes to imply that most hackers have some inner fantasy
of tickertape parades, talk-show appearances, and hordes of adoring groupies.
But that is exactly what the word `fame' connotes -- and the way people
have been flinging it around in disagreement and (worse) agreement with
me suggests that a lot of them need to think carefully about the difference
between `fame' and `peer repute'. That difference is crucial to understanding
our culture. Because `fame' is a mob phenomenon, essentially an emotional
response. It's irrational and self-reinforcing. There are people who are
famous for being famous. The photographer who took the pictures for my
People interview back in 1996 during my pre-CatB first fifteen minutes
of fame called them `face people'. Often, there's nothing behind the face.
Peer repute, on the other hand, is a much subtler and solider thing. The
earned and considered approbation of one's peers has to come from accomplishment,
from productivity. Often those peers are few, and this becomes more true
as one becomes more accomplished. Higher levels of it, unlike fame, become
progressively harder to earn because one's own standards for who is a fit
peer keep rising. Linus said "I am your God" at Linux Expo on stage and
brought down the house. The line was ironic and hilarious precisely because
what he has is not `fame', not uncritical adoration, not the masses gazing
up at him in awe, but rather a rational peer response to real achievement.
He knows that; and he knows that we know it. I thought most of us did,
anyway. The last day or two of Slashdot makes me wonder. So, in case it
needs saying again, don't confuse `peer repute' with `fame'. And if you've
interpreted CatB and HtN as assertions that `fame' is the only significant
motive for hackers, think again. Reality, as usual, is more subtle and
complex than that. ----------------------- Raymond on 9/11:
Decentralism Against Terrorism ----- (I wrote this on September
11th, 2001, hours after learning that the World Trade Center had been destroyed,
with thousands of lives lost, by terrorists who hijacked two jetliners
using carpet knives.) Some friends have asked me to step outside my normal
role as a technology evangelist today, to point out in public that a political
panic reaction to the 9/11 terrorist attack could do a great deal more
damage than the attack itself. Today will not have been a victory for terrorism
unless we make it one. If we reward in any way the Palestinians who are
now celebrating this hideous crime in the streets of the West Bank, that
will have been a victory for terrorism. If we accept "anti-terrorism" measures
that do further damage to our Constitutional freedoms, that will have been
a victory for terrorism. But if we learn the right lessons, if we make
policies that preserve freedom and offer terrorists no result but a rapid
and futile death, that will have been a victory for the rest of us. We
have learned today that airport security is not the answer. At least four
separate terror teams were able to sail right past all the elaborate obstacles
-- the demand for IDs, the metal detectors, the video cameras, the X-ray
machines, the gunpowder sniffers, the gate agents and security people trained
to spot terrorists by profile. There have been no reports that any other
terror units were successfully prevented from achieving their objectives
by these measures. In fact, the early evidence is that all these police-state-like
impositions on freedom were exactly useless -- and in the smoldering ruins
of the World Trade Center lies the proof of their failure. We have learned
today that increased surveillance is not the answer. The FBI's "Carnivore"
tap on the U.S.'s Internet service providers didn't spot or prevent this
disaster; nor did the NSA's illegal Echelon wiretaps on international telecommunications.
Video monitoring of public areas could have accomplished exactly nothing
against terrorists taking even elementary concealment measures. If we could
somehow extend airport-level security to the entire U.S., it would be just
as useless against any determined and even marginally competent enemy.
We have learned today that trying to keep civilian weapons out of airplanes
and other areas vulnerable to terrorist attack is not the answer either
-- indeed, it is arguable that the lawmakers who disarmed all the non-terrorists
on those four airplanes, leaving them no chance to stop the hijackers,
bear part of the moral responsibility for this catastrophe. I expect that
in the next few months, far too many politicians and pundits will press
for draconian "anti-terrorist" laws and regulations. Those who do so will
be, whether intentionally or not, cooperating with the terrorists in their
attempt to destroy our way of life -- and we should all remember that fact
come election time. As an Internet technologist, I have learned that distributed
problems require distributed solutions -- that centralization of power,
the first resort of politicians who feed on crisis, is actually worse than
useless, because centralizers regard the more effective coping strategies
as threats and act to thwart them. Perhaps it is too much to hope that
we will respond to this shattering tragedy as well as the Israelis, who
have a long history of preventing similar atrocities by encouraging their
civilians to carry concealed weapons and to shoot back at criminals and
terrorists. But it is in that policy of a distributed response to a distributed
threat, with every single citizen taking personal responsibility for the
defense of life and freedom, that our best hope for preventing recurrences
of today's mass murders almost certainly lies. If we learn that lesson,
perhaps today's deaths will not have been in vain.

---------------------

The Biology of Promiscuity

Why do human beings screw around when it complicates
our lives so much? Why do we preach fidelity at each other and then, so
often, practice adultery? The cheap and obvious answer, "because it feels
too good to stop" isn't a good one, as it turns out. Evolutionary biology
teaches us that human beings, like other animals, are adaptive machines;
"feels good" is simply instinct's way to steer us towards behaviors that
were on average successful for our ancestors. So that answer simply sets
up another question: why has our species history favored behavior that
is (as the agony columns, bitter ballads, tragic plays and venereal-disease
statistics inform us) often destructive to all parties involved? This question
has extra force for humans, because human sex and childbirth are riskier
than they are for most of our near relatives. Human infants have huge
heads, enough to make giving birth a chancy matter -- and even so, the
period during which they remain dependent on nurturing is astonishingly
long and requires a lot of parental investment. If we were redesigning
humans to cope with the high investment requirement, one obvious way would
be to rewire our instincts such that we pair-bond exclusively for life.
It's certainly possible to imagine an evolved variant of humanity in which
"infidelity" is never an issue because mated pairs imprint on each other
so specifically that nobody else is sexually interesting. Some birds are
like this. So why aren't we like this? Why haven't promiscuity and adultery
been selected out? What adaptive function do they serve that balances out
the risk to offspring from unstable matings? The route to an answer lies
in remembering that evolutionary selection is not a benign planner that
tries to maximize group survival but rather a blind competition between
individual genetic lines. We need to look more closely at the conflicting
strategies used by competing players in the reproduction game. Male promiscuity
has always been relatively easy to understand. While total parental investment
needs to be pretty intense, men have a dramatically lower minimum energy
and risk investment in children than women do; one index of the difference
is that women not infrequently died in childbirth under pre-modern conditions.
This means genetic lines propagating through us hairy male types have an
optimum strategy that tilts us a little more towards "have lots of offspring
and don't nurture much", while women tilt towards "have few offspring,
work hard at making sure they survive to breed". This also explains why
cultures that have not developed an explicit ideology of sexual equality
invariably take female adultery much more seriously than male adultery.
A man who fails to take a grave view of his mate's "unfaithfulness" is
risking a much larger fraction of his reproductive potential than a woman
who ignores her husband's philandering. Indeed, there is a sense in which
a man who is always "faithful" is under-serving his genes -- and the behavioral
tendency to do that will be selected against. His optimal strategy is to
be promiscuous enough to pick up opportunities to have his reproductive
freight partly paid by other men, while not being so "faithless" that potential
mates will consider him a bad risk (e.g. for running off with another woman
and abandoning the kids). What nobody had a good theory for until the mid-1990s
was why women cooperate in this behavior. Early sociobiological models
of human sexual strategy predicted that women should grab the best provider
they could attract and then bend heaven and earth to keep him faithful,
because if he screwed around some of his effort would be likely to be directed
towards providing for children by other women. In these theories, female
abstinence before marriage and fidelity during it was modeled as a trade
offered men to keep them faithful in turn; an easy trade, because nobody
had noticed any evolutionary incentives for women to cheat on the contract.
In retrospect, the resemblance of the female behavior predicted by these
models to conventional moral prescriptions should have raised suspicions
about the models themselves -- because they failed to predict the actual
pervasiveness of female promiscuity and adultery even in observable behavior,
let alone concealed. Start with a simple counterexample: if the trade-your-fidelity-for-his
strategy were really a selective optimum, singles bars wouldn't exist,
because genotypes producing women with singles-bar behavior would have
been selected out long ago. But there's an even bigger whammy... Actual
paternity/maternity-marker studies in urban populations done under guarantees
that one's spouse and others won't see the results have found that the
percentage of children conceived in adultery by married women with ready access
to other men can be startlingly high, often in the 25% to 45% range. In
most cases, the father has no idea and the mother, in the nature of things,
was unsure before the assay. These statistics cry out for explanation --
and it turns out women do have an evolutionary incentive to screw around.
The light began to dawn during studies of chimpanzee populations. Female
chimps who spurn low-status bachelor males from their own band are much
more willing to have sex with low-status bachelor males from other bands.
That turned out to be the critical clue. There may be other incentives
we don't understand, but it turns out that women genetically "want" both
to keep an alpha male faithful and to capture maximum genetic variation
in their offspring. Maximum genetic variation increases the chance that
some offspring will survive the vicissitudes of rapidly-changing environmental
stresses, of which a notably important one is co-evolving parasites and
pathogens. Assume Jane can keep Tarzan around and raise four children.
Her best strategy isn't to have all four by Tarzan -- it's to have three
by Tarzan and one by some romantic stranger, a bachelor male from another
pack. As long as Tarzan doesn't catch them at it, the genes conditioning
Jane's sexual strategy get 50% of the reproductive payoff regardless of
who the biological father is. If the stranger is a fitter male than the
best mate she could keep faithful, so much the better. Her kids will win.
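The hedging logic in Jane's strategy can be made concrete with a toy simulation. Everything in it is an illustrative assumption of mine (the one-pathogen-per-generation model, the genotype pool of 10, the brood of four), not data from the essay; it merely shows why sibling genetic variation lowers the odds of losing every offspring at once, even though the expected number of survivors is unchanged:

```python
import random

# Assumed parameters for illustration only: each generation one pathogen
# sweeps through and kills every child whose paternal immune genotype it
# matches, drawn from an arbitrary pool of 10 genotypes in the population.
POP_GENOTYPES = 10

def wipeout_probability(sire_genotypes, trials=100_000):
    """Estimate the chance that a single pathogen kills *all* children."""
    wipeouts = 0
    for _ in range(trials):
        pathogen = random.randrange(POP_GENOTYPES)
        if all(g == pathogen for g in sire_genotypes):
            wipeouts += 1
    return wipeouts / trials

same_sire  = [0, 0, 0, 0]   # four children by Tarzan (genotype 0)
mixed_sire = [0, 0, 0, 1]   # three by Tarzan, one by a stranger

print(wipeout_probability(same_sire))   # ~0.10: one bad draw loses the whole brood
print(wipeout_probability(mixed_sire))  # 0.0: no single pathogen hits both lines
```

Note that the expected number of surviving children is identical under both strategies (each child independently survives 9 draws in 10); what the mixed brood buys, in this toy model, is insurance against total reproductive loss.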
And this isn't just a human strategy either. Similar behavior has been
observed in other species with high parental investment, notably among
birds. So. The variation effect predicts that mated women should have a
fairly strong genetic incentive to sneak off into the bushes with romantic
strangers -- that is, other men who are (a) from outside their local breeding
population, and (b) are physically attractive or talented or intelligent,
or (c) show other, socially-mediated signs of high fitness (such as wealth
or fame). It may also explain why polyamory is only now emerging as a
social movement, after women's liberation, and why its most energetic partisans
tend to be women. Our instincts don't know about contraceptive intervention;
from our genes' point of view sexual access is equivalent to reproductive
use. As our instincts see it, polyamory (the ideology of open marriage)
enables married women to have children with bachelor males without risking
losing their husband's providership for any children. Men gain less from
the change, because they trade away a claim on exclusive use of their wives'
scarce reproductive capacity for what may be only a marginal increase in
access to other women (relative to the traditional system combining closed
marriage and high rates of covert adultery). This model may not please
prudes and Victorians very much, but at least it explains her cheatin'
heart as well as his. (Thanks to Gale Pedowitz for the email discussion
that stimulated this essay.) In "The evolution of human mating: Trade-offs
and strategic pluralism", Steven W. Gangestad and Jeffry A. Simpson have
explored some similar themes, focusing on within-sex variation in mating
strategies and the idea that there may be tradeoffs between fitness-to-mate
and willingness-to-nurture signals.