This essay describes the facts and merits of the decentralized
form of Linux development and support. It suggests some ways that the
continued development and growth of free software such as Linux can be
encouraged. Although Linux is used here as the specific instance of a
"free" OS, the discussion is not intended to be overly Linux-specific;
the principles should largely be applicable to other initiatives such
as the free BSD projects, and to the many projects directed towards
applications rather than OS platforms.

Many people have complained over the last few years that there
should be some sort of "central" Linux organization.

Some common reasons that I see include:

It would be good for there to be some "central"
Linux presence on the Web.

It would be good for there to be some organization
holding the Linux trademark to prevent situations like one that recently
occurred where an individual named Della Croce claimed a trademark on
the name "Linux," and began demanding royalties from anyone using the
name.

Items "in public trust" need to remain in public trust.

Many companies want and need some sort of central
"Linux Authority."

Companies want a rather more formalized support mechanism than
"Go ask questions on the Internet." They want a formal system of Linux
Support so that users having problems have some sort of help desk to
call. This could involve telephone support as well as more formal
consulting.

Linux could use some sort of centralized advocacy
organization for marketing purposes. The point here being that other
operating systems have marketing machines, while
Linux doesn't.

A less favorable reason is that many people apparently
like to be herded by others.

There is little agreement as to what
organization should be "king," but there is desire for a "king"
nonetheless.

I would like to argue that while there are imperfections in the
support presently available for Linux, this does not mandate
the creation of an authoritative central Linux organization.

I would argue furthermore that the decentralization Linux
displays represents a strength in that it allows support to
grow simultaneously in many areas, unhindered by any
particular controlling agency.

There are disadvantages to the decentralized nature of Linux
development, as it causes support arrangements to be somewhat
fragmented. Linux does not have a single organization offering all of
the sorts of things on the list below, as is the case for most other
operating systems. There is no single organization responsible for the
various support roles which are presently distributed across various
organizations in the Linux community:

There are many Linux Web sites, none being
authoritatively the home of Linux.

The same can be said for FTP sites where Linux
software resides.

There is a lot of duplication of effort in
documentation maintenance.

There are many marketing efforts coming from different
directions and organizations, none really organized to help
The Linux Community.

This is an intractable problem: because the interested parties
are so diverse, the needs and desires of the community are themselves
diverse.

There are a number of kernels and other "base"
system utilities that are of interest as possible application targets.

These range from the version 1.2.x "series," which has been
stable for so long as to be considered "ancient," through the more
modern 2.0.x series, which is also quite stable but changes every so
often, to the 2.1.x experimental series, which changes on roughly a
weekly basis.

In addition, there have been a number of sets of C libraries
(with the recent transition from LIBC 5.x to the FSF "GLIBC"),
multiple binary formats (with the transition from a.out to ELF), and, of
late, three C/C++ compilers to worry about (GCC 2.7.x, GCC 2.8.x, EGCS).
Transitions between libraries have sometimes been painful.

Critics of Linux often overstate these problems. Most programs
do not need to be aware of the distinctions between these system
components, and typically do not even need to be recompiled in order to
function on a system using a newer kernel/library. Those few that
do need coding changes tend to attract a great deal
of publicity.
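
For the few programs that genuinely must behave differently across C
library generations, compile-time feature tests offer a simple escape
hatch. Here is a minimal sketch, relying on the fact that glibc defines
the __GLIBC__ macro while the older Linux libc 5 does not:

    /* A minimal sketch of compile-time C library detection.
     * glibc defines __GLIBC__ (in <features.h>, which the
     * standard headers pull in); Linux libc 5 does not.
     */
    #include <stdio.h>

    int main(void)
    {
    #ifdef __GLIBC__
        printf("Built against glibc %d.%d\n",
               __GLIBC__, __GLIBC_MINOR__);
    #else
        printf("Built against a pre-glibc C library\n");
    #endif
        return 0;
    }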

It can still be difficult for developers to select an appropriate
target kernel or a special-purpose library, particularly when some
specialized or experimental feature such as resource locking, threading,
or multiprocessing is needed.
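
When a program does depend on such a feature, it can at least verify
the running kernel before enabling it. Here is a minimal sketch using
the standard uname(2) interface; the 2.1 version threshold is purely
illustrative, standing in for whatever kernel series first offers the
feature in question:

    /* A minimal sketch: check the running kernel version before
     * enabling an experimental feature.  A real program would
     * ideally test for the specific feature it needs.
     */
    #include <stdio.h>
    #include <sys/utsname.h>

    int main(void)
    {
        struct utsname u;
        int major = 0, minor = 0;

        if (uname(&u) != 0) {
            perror("uname");
            return 1;
        }
        sscanf(u.release, "%d.%d", &major, &minor);
        if (major > 2 || (major == 2 && minor >= 1))
            printf("kernel %s: experimental feature enabled\n",
                   u.release);
        else
            printf("kernel %s: using stable fallback\n", u.release);
        return 0;
    }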

Just how many libraries are there out there
for GUI development? Which one should I use?

There are many graphics libraries for X that run on Linux, including Xt, Motif, Tk, Qt,
Gtk, and GNUstep, to name only the most popular ones. This problem is
of course not restricted to Linux. It has roughly the same impact on
applications running on both "free" and commercial Unix-like OSes.

This problem also plagues popular non-Unix-like platforms. The
various Microsoft Windows variants support quite a wide variety of
graphic APIs and libraries that come from a variety of vendors.

Organizations that use Linux would prefer to have one
authoritative Linux organization to look to for support.

Despite there being some problems with decentralization, I believe
that there is a net advantage for Linux in having the variety
of independent organizations fulfilling their various roles. We may
view the independent organizations in aggregate as a sort of "virtual
corporation."

Here are a number of ways that decentralization
benefits the development of the system:

Independent organizations can act in their
own best interests.

So long as these various organizations remain legally
and truly independent, Linux will not run into some problems
that have beset IBM and Apple.

Apple has the "software or hardware?" problem. Its software
operations might make better decisions if they did not need to care
about the health of the hardware side, and vice versa. And this might
well benefit the overall organization.

IBM has long been noted for spending millions of dollars
developing new products, and then dropping them before even trying to
release them, as soon as another division determined that the new
product would cannibalize its own sales. OS/2 is a good case in point:

The PC division benefited more from selling computers
running MS-DOS and Microsoft Windows, and has never stood behind OS/2.
It could be argued that they should have been required to
bundle OS/2 with every computer IBM sold, with other third-party
operating systems as "extra cost options."

OS/2 could not be permitted to be a "threat" to the
mainframe or "mini" (e.g. AS/400) divisions.

The AIX/Unix group would also be unhappy about competition
from OS/2.

Finding real advocates of OS/2 within the company was a problem;
there were ample reasons for these and other IBM divisions to actively
oppose increased use of OS/2. There are, no doubt, times when
Unix-related sales efforts threaten mainframe sales, which raises the
question of which "threat" to prefer. OS/2 evidently didn't receive
the support needed to compete against Microsoft's operating system
products.

Note that Brett
Fleisch has written a paper on the failure of IBM's "Workplace
OS" project that puts a somewhat different slant on this. If things
were as he indicates, OS/2 was, in fact, strategic to IBM; the problem
was that IBM's main efforts went into building the Workplace OS for
which OS/2 would have been a primary "personality." In effect, OS/2
(in the form that we've seen it) was put on the "back burner." Had
Workplace OS turned out as expected, there would have been a "new,
improved" OS/2 on top of it. In effect, the failure of Workplace OS
doomed OS/2 to a niche existence in the marketplace.

I expect that both my thesis and his have merit: his paper makes
many events at IBM over the last five years make a lot more sense, while
I believe my thesis to be reasonably representative of how decisions
were likely made in the personal computing division...

Having independent organizations reduces the incidence
of bottlenecks.

The Free Software
Foundation has had the problem that its leader, Richard M. Stallman
(commonly known as "RMS"), appears to think that it ought to have
control over the entire body of free/GPL (GNU General Public License)
software. Ignoring the consideration that others may not want to be so
controlled, the FSF simply doesn't have enough staff to manage all of
the projects that are active.

There is the potential for free software to represent a
billion-dollar-a-year sort of effort. The Free Software Foundation is
certainly not large
enough to manage the results that come from that level of activity. I'm
not sure that they could grow large enough to oversee it all.

Linux developers have nobody stopping them from introducing
something new. Nobody is waiting for permission from Linus Torvalds for
much of anything.

Moreover, Linux developers can and typically do use standardized
protocols that allow projects to work independently, which means we're
seldom forced to wait for any particular resource. Since,
for instance, the X11 protocol used to implement the X Window System is a defined graphical
standard that is quite stable, development can proceed independently on
such things as X servers, GUI libraries, the Linux kernel, and other
system components both large and small.
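
As an illustration, here is a minimal sketch of an Xlib client. It
depends only on the stable X11 protocol, as exposed through the
standard Xlib calls, so it runs unchanged whether the X server, window
manager, or surrounding toolkits are old or new. It can be built with
something like "cc xhello.c -o xhello -lX11":

    /* A minimal Xlib client: it speaks only the stable X11
     * protocol, so it works against X servers it was never
     * tested with.  It maps a small window and exits on any
     * key press.
     */
    #include <X11/Xlib.h>
    #include <stdio.h>

    int main(void)
    {
        Display *dpy = XOpenDisplay(NULL);
        Window win;
        XEvent ev;

        if (dpy == NULL) {
            fprintf(stderr, "cannot open display\n");
            return 1;
        }
        win = XCreateSimpleWindow(dpy, DefaultRootWindow(dpy),
                                  0, 0, 200, 100, 1,
                                  BlackPixel(dpy, DefaultScreen(dpy)),
                                  WhitePixel(dpy, DefaultScreen(dpy)));
        XSelectInput(dpy, win, KeyPressMask);
        XMapWindow(dpy, win);
        for (;;) {
            XNextEvent(dpy, &ev);
            if (ev.type == KeyPress)
                break;
        }
        XCloseDisplay(dpy);
        return 0;
    }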

In contrast, MS-Windows system updates/upgrades have often been
painful because system components and applications are deeply
interlinked. This is good for the careers of consultants who want to
have continuing work cleaning up after the problems, but it is expensive
for the global community on which the changes are inflicted.

Unix-like OSes and Microsoft's OSes can be contrasted nicely from
a technical perspective in this fashion:

Microsoft Windows (in its various versions) enforces
relatively little separation between the different system components.
Programs using the Win32 API fairly directly access all pieces of the
system, from file systems to GUI to memory management.

This has the unfortunate result that if the way any
system component functions is changed, all programs typically
need to be changed to conform.

Unix-like systems enforce a clear separation between
kernel and user processes, and furthermore separate users from one
another. The X Window System graphical infrastructure is also clearly
separated from both the kernel and user programs.

On Unix systems, it is typical for component upgrades not to
have adverse effects on programs (a sketch following this list
illustrates the stable system-call boundary that makes this possible).
There are counterexamples, but they are the exception rather than the
rule.

Linux kernel upgrades normally only directly affect
the functioning of a small number of programs.

Upgrading the X Window System from version X11R4
to X11R5 to X11R6 to whatever new revision comes next has not normally
caused significant disturbance to programs written for the older
versions; there may be benefit in rewriting programs to use
functionality from the new libraries, but that is not necessary, and it
is not normally even necessary to recompile. Existing programs
normally continue to function as if their environment had not
changed.

Moreover, running a program using the "new" Gtk
GUI does not make the Xt applications stop working. This also
demonstrates a valuable and novel property of the X Window System: it
can simultaneously accommodate applications that use different
graphical user interfaces.
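
As promised above, here is a minimal sketch of the sort of program
that this separation protects: a trivial cat-like filter written purely
against the POSIX system-call boundary. It knows nothing of kernel
internals, which is precisely why a kernel upgrade does not normally
disturb it:

    /* A minimal cat-like filter using only POSIX system calls.
     * (Short writes are treated as errors, for brevity.)
     */
    #include <unistd.h>

    int main(void)
    {
        char buf[4096];
        ssize_t n;

        while ((n = read(STDIN_FILENO, buf, sizeof buf)) > 0)
            if (write(STDOUT_FILENO, buf, (size_t)n) != n)
                return 1;
        return (n < 0) ? 1 : 0;
    }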

Distributing ownership of Linux among many people around the
world discourages attempts to privatize Linux as any one person or
organization's asset.

The situation where Della Croce laid claim to the trademark
"Linux" pointed out the importance of this. He temporarily had
legal control over the name, which caused much concern in the Linux
community. A
Petition to Cancel was filed against this Linux Trademark. The
matter was settled out of court; Linus Torvalds is now the holder of
the trademark to Linux.

In contrast, since Linus Torvalds started accepting changes to
Linux under the GNU General Public License, nobody has had, or will
have, sole
control from a legal standpoint over the source code to Linux. In
order to legally privatize the source code to Linux, thousands of
contributors to Linux would need to agree to this. There are enough
that are likely to disagree that this is extremely unlikely to happen.

Independent organizations can agree to disagree,
when necessary.

If Linux remains somewhat fragmented in its support network,
this does mean that there may be some problems that fall through the
cracks, which is unfortunate.

The NetBSD Project (building a
"free" operating system based on 4.4BSD-Lite) seems to have fractured
due to internal disagreements, where people with dissenting opinions
left and created OpenBSD. It is difficult (without entering into great
controversy) to establish precise causes. Regardless of the causes,
the split has hurt the credibility of both groups.

But by not having a monolithic
organization, Linux people are free to work as independently as they
wish to. They can agree to disagree, and support may grow
independently of the political limitations on any
of the organizations that are involved.

The GGI graphics project is a good
example of why it is good to have a degree of independence. It has
been ongoing for several years, with occasional stated intentions of
replacing X.

In a "fully-integrated" Linux Organization, such projects
would either become:

Critical path projects which, since they're not done
yet, prevent further development of anything else that somehow depends
on them (bringing us back to a bottleneck), or

Projects discarded because Linus Torvalds did not
agree that they should go into the official kernel.

There are disagreements over the wisdom of these particular efforts;
the fact that they're going on independently means that at present,
they disturb no one that doesn't want to be so disturbed.

For instance, while I would personally like to see GGI
"succeed," at least as a platform on which X could be run very fast, I
think that the (former) "Berlin Project" efforts have been misguided.
My feelings don't necessarily hinder either effort; in the
decentralized world of Linux development, they are free to succeed or
fail independently of what I want or think. Some developments may be a
waste of time, but they ultimately do not critically hurt other Linux
efforts.


Were the efforts centralized, failure of a critical development
project would injure everyone. For instance, if I were the authority,
and I, "Great Tyrant Chris," required that the GGI graphical
infrastructure be adopted, this would leave continuing work on many
other Linux-based applications and subsystems vulnerable to the
possibility that GGI may not become stable and useful in a timely
fashion.

With the highly distributed development model, Linux is not
generally vulnerable in this fashion.

There simply does not exist some monolithic or
otherwise easily characterized body known as The Linux
Community.

In other words, there is no fixed centre to Linux, although
Linus Torvalds and a number of other "core developers" certainly exert
influence.

Linux has a whole variety of users with a wide variety of
desires, needs, wants, and abilities.

Some would describe it as a server OS, just like the other Unix operating systems. Others deny that,
claiming that it's the greatest "PC" OS, and that demanding that
applications support multiple users is a foolish notion.

Linux is a very powerful system that can indeed be used in many
ways. It is clearly unwise to write applications that limit themselves
to a particular "mode" of operation when they could readily be extended
to provide added flexibility and power. With Linux you can, very
often, have your cake and eat it too.

All of this could be taken to imply that the process of
developing software for Linux takes place in some sort of "anarchistic
utopia" where everyone simply does what they feel like doing, and
somehow it all just works out.

That is not in fact the case. In various places around the
system, there are developers that hold "dictatorial" powers. Many
"neat ideas" for developing the kernel have been considered and
discarded because Linus Torvalds did not approve of them. Decisions
concerning how the C libraries should be implemented are
effectively made by a relatively small group of highly
competent developers. The code base may be open for anyone to examine,
but the "authoritative" kernel sources come from Linus Torvalds and
Alan Cox, and changes are filtered through these people.

This has not caused serious organizational problems thus far, and
the "free" licensing approach means that should some core developer
take some ridiculous stand, someone with a more palatable answer and
superior code could replace the offending developer.

As a good case in point, the people responsible for the graphical
infrastructure called GGI
have been in a fairly longstanding disagreement with Linus on the merits
of including their code in the kernel. Their code is most definitely
not yet ready for inclusion in "stable" kernel releases.
GGI is (somewhat arguably) also too experimental for general release as
part of the "experimental" releases, which, while not deemed "stable,"
are still expected to be fairly stable.

If the GGI people can come up with a sufficiently mature, stable
release of their code, this might convince the Linux community to
overrule Linus' objections and include GGI in an "official" kernel
release. This has not happened, which is, thus far, a perfectly fair
outcome.

(As preparations occur for the Version 2.2 kernel release, there
seems now to be a "frame buffer" scheme that may be used with the
kernel that certainly parallels what the GGI folk would want; it is not
yet clear whether or not this represents a subsystem that supports the
kernel functionality they need...)

Three companies that have seen fairly spectacular levels of
growth come out of an "allow independence" approach are:

SAP AG

This German enterprise software company has not
tried to provide all the consulting services needed for the care and
feeding of its (exceedingly complex) software, but has actually
encouraged the use of external consultants. R/3 has received
tremendous support from "Big Six" consulting firms as a clear and
direct result.

SAP AG and consulting firms actually do work
cooperatively because they have all found this cooperation to be
profitable.

It is more common for sales organizations to pay lip service to
such cooperation while working against it in reality. Oracle's
application
"suite" has not done so well in the marketplace, and I believe that
this can be attributed in part to Oracle being less cooperative.

Hewlett-Packard

The independence of the various components of Hewlett-Packard is
quite remarkable from organizational, technical, and economic
perspectives. The current dominance of their printing unit in the
global market displays the importance and value of
independence.

Microsoft - Personal Computers

Many years ago, corporate information systems were largely run by
IBM. They provided robust systems, but systems development was
time-consuming and expensive, and user departments were often unhappy
with the service provided by MIS departments.

Personal computers running MS-DOS and PC-DOS gave departments the
opportunity to own and control their own computing environments,
independent of central MIS management, at relatively low cost. PCs
certainly didn't provide the robustness and scalability of mainframe
systems. But it was fairly easy to make PCs do useful things without too
much effort using prepackaged word processing, spreadsheet, and database
software. From a political perspective, software licenses could be
acquired relatively cheaply at the departmental level rather than having
to go through the MIS department. This gave departments more
power.

The fact that these personal computers were neither particularly
reliable nor particularly manageable compared to the mainframes (and
this can lead to nightmarish problems to this day as organizations try
to scale up PC LANs) does not contradict this. PCs have been "good
enough" to prove useful and valuable.

It is a commonly-held view that having multiple implementations
of things is a waste of effort. I do not agree; let me count the
ways...

Reducing the Risks of Experimentation

Having multiple implementations of things allows people to more
safely experiment with new approaches without the risk that failure
will have disastrous effects.

Several Linux projects are exploring new sorts of file systems;
since they are not working on the only file system, they can take more
substantial risks within their individual projects without thereby
hampering all Linux development.

Diverse participation

Having multiple small projects allows individuals to participate
in whatever they want to work on.

Coping with conflict

If I were to have a severe "personality" conflict with Linus,
it might well be that there would not be room for us to work together
on the same operating system kernel. If Linux were the only "free" OS
kernel out there being developed, that would leave us with a dilemma;
only one of us could do this sort of work.

Fortunately, there is no such apparent conflict, but this sort of
thing has happened before on many projects. It has on occasion caused
"splits" that have resulted in multiple viable projects. (NetBSD and
OpenBSD come to mind...)

More common is the situation where there are substantially
different positions held as to what direction development efforts should
take. That is a good place for the people with
irreconcilably different positions to go in those different directions.
The results will establish which directions were wise or otherwise.

Cutting off Proprietary Directions

Many users have not made substantial use of the Debian
Linux distribution.

I am glad it is there, however, and would argue that its simple
existence benefits me as well as many others that may never use it.

At one point in time, Linux distributions were regarded as
"works" that could usefully be copyrighted, and for which use could be
restricted. Thus, even though substantially all the components of the
Blue Fedora Distribution may have been GPLed software, and
thus, by one metric, freely redistributable, the collection,
being suitably copyrighted, might itself require licensing fees.

Enter Debian. It is a useful distribution that provides sets of
applications similar to those of common commercial distributions like
Red Hat's, SuSE's, and Caldera's. Presto! Prices fall.

You can get Debian free of licensing fees, and it
contains substantially the same software as provided by the "expensive
guys." I argue that this has cut off the option of the "other guys"
charging substantial amounts for building their
distributions.

They are still free to charge for service and for third-party
commercial (non-free) software; they can't realistically charge a
whole lot for collecting together the software on CD.

Similarly, the existence of multiple X implementations has
brought prices down and encouraged common development of useful
functionality.

This can also be applied to the KDE versus GNOME Controversy.
The fact that there are two independent projects
substantially defuses the possible problems that could result from the
more paranoid interpretations of Qt licensing.

If Red Hat Software were to head off down any
particularly proprietary paths, the existence of Debian
represents a ready alternative.

If Troll Tech, producers of Qt, were to do anything
particularly proprietary, the existence of GNOME provides a (not quite
ready yet...) alternative.

If Linus Torvalds were to move to Microsoft, and could
(by some legal bizarreness) make Linux proprietary, the existence of
FreeBSD, OpenBSD, NetBSD, and others represent potential alternatives.

I would thus conclude that it is not, in general, a
waste of time to have multiple implementations of things.

If anything, I would suggest that there may be some places where
additional projects could be useful. There do exist certain
"bottlenecks" where there is dependence on a single tool:

GCC/EGCS

There do exist other substantially different compiler efforts
such as TenDRA and LCC that have seen very little attention.

Improvements could result from taking substantially different
approaches from those used by GCC/EGCS.