Rethinking the Linux Distribution

This article ties together a number of exciting ideas in the Free/Open Source (FOSS) community, to suggest a new direction for the Linux distribution. Many of these ideas are also applicable to BSD-based systems.

Although there are several mature, high quality distributions available, Linux has had a very hard time breaking through in certain markets, such as the desktop. In addition, the internet, which has already dramatically transformed the environment for other content-creating industries, may now alter the established methods for software packaging and installation.

The activities around Web 2.0 are giving rise to Software as a Service (SaaS). For example, Google claims that more than 100,000 small businesses, as well as a few large ones, have signed up for Google Apps. Of course, Microsoft is trying to build its own SaaS offering, Windows Live. Meanwhile, many categories of Web applications are already mature, including email, social networking, and e-commerce. The next step is the Web OS: a race that, in the end, may go to a startup company.

As I hope to demonstrate in this article, FOSS tools are the right technology to define the post-PC software era, and not merely as a backend platform for someone else's proprietary SaaS suite. Today's typical Linux distribution, however, follows a design that resembles a legacy Unix system with a Windows-style front end bolted on. This is a competitor to products such as Vista, which may actually be the last of its kind, even for Microsoft. It would be unfortunate indeed to suddenly find ourselves stuck with yesterday's business model.

It is important to realize that the current approach to packaging FOSS is not the only possibility. The FOSS universe is by far the most diverse codebase in the world, and a capacity for diversity is the key to resilience as well as innovation. This is true in ecosystems, economies, investment portfolios, and even the skills of an individual. So, let us see how we can rearrange the traditional Linux distribution, to meet the challenge of the emerging new era.

The central purpose of this article is to try and start a genuinely fruitful discussion on the future of the Linux (or *BSD) distribution, a discussion which may help inform the direction of existing projects, and perhaps spawn new ones. Please continue this discussion by contributing ideas, references to other articles, and especially links to anything that includes actual code (even tiny examples).

Above all, I hope we treat this as an opportunity for shared exploration, not a contest about who is the "smartest guy in the room." This is a matter of proper governance (see the "A Note on Governance" section below), which is perhaps of equal importance to the future of FOSS as actual running code.

Here is a list of topics covered in the article.

Reconsidering system administration and related tools, the "glue" that holds a distribution together.

Combining local and remote applications under a single UI.

The emergence of a free Web OS.

A model for governance of FOSS projects, and online collaboration generally.

In addition, two appendices cover important supplementary topics for Linux distribution developers and SaaS providers. The first concerns packaging, hosting and delivery of Linux distributions. The second proposes a strategy to deal with the costly (and environmentally damaging) waste resulting from large server farms.

Now, let us begin our (re)exploration of the Linux distribution.

Cleaning Up

Before you can build for the future, you must take care of the past. Years ago, a wonderful toolset evolved around Unix. awk, sed, find and others, used together in shell scripts, could accomplish a great deal of work very quickly. Compare this approach to writing everything in C!

Today, however, the state of the art has advanced considerably. High-level languages (of which Perl, Python, and Ruby are the most popular) are far more capable than the old patchwork of small tools bound together with shell scripts. They incorporate elegant programming constructs, supply comprehensive standard libraries, and support rapidly growing communities of extension projects.

Yet, system administration is still tied to the shell and the old toolset, despite the astounding advantages of moving to a modern high-level language. Programming with such languages is faster and less error prone. The code is much more reusable, portable and upgradeable. Developers can write readable programs, which makes collaboration easier. Likewise, finding and training developers is a simpler task. Customizing server farms, clusters, corporate desktop rollouts and even novice tinkering all benefit.

Any one of Perl, Python, or Ruby (as well as a few others) could become the primary system administration tool, displacing the shell. In my experience, however, Python's highly readable, compact and consistent syntax makes it the ideal choice for this sort of work.
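To make the comparison with shell scripting concrete, here is a minimal sketch of a typical administration task, summing resident memory per user, done in plain Python instead of a ps/awk pipeline. The sample output and field positions are illustrative, not taken from any particular system.

```python
# A sketch of replacing a shell pipeline such as
#   ps aux | awk '{sum[$1] += $6} END {for (u in sum) print u, sum[u]}'
# with plain Python. The sample input stands in for live `ps aux` output.

sample_ps = """\
USER       PID %CPU %MEM    VSZ   RSS TTY      STAT START   TIME COMMAND
root         1  0.0  0.1   2048   648 ?        Ss   10:00   0:01 init
www-data   842  1.2  2.0  51200 10240 ?        S    10:01   0:05 apache2
www-data   843  0.8  1.5  49152  7680 ?        S    10:01   0:03 apache2
alice     1021  5.0  3.2  81920 16384 pts/0    S+   10:05   0:02 python
"""

def rss_by_user(ps_output):
    """Sum resident set size (RSS, the sixth column) per user."""
    totals = {}
    for line in ps_output.splitlines()[1:]:      # skip the header line
        fields = line.split()
        user, rss = fields[0], int(fields[5])
        totals[user] = totals.get(user, 0) + rss
    return totals

print(rss_by_user(sample_ps))
```

To run against live data, one would obtain the text with something like subprocess.check_output(["ps", "aux"]) instead of the sample string; the parsing and aggregation, the part that is awkward in the shell, stays the same.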

Among the high-level languages, Python seemed the best choice, since we already use it in many places: package build scripts, the package manager, control panel modules, and the YALI installer program. Python's source code is small and clean, its standard library is full of useful modules, and its learning curve is gentle; most of the developers on our team picked up the language in a few days, without prior experience.

Certainly, there exists a requirement for backwards compatibility with the old shell-based code. It is not difficult, however, to make these legacy tools available as optional packages, to ease the transition to the new system. Even here, augmentation with high-level languages provides significant benefits. The IPython project is an interactive Python shell that also retains much traditional shell functionality. In addition, IPython supports the Matplotlib graphing package.

Figure 1 shows an example of what IPython and Matplotlib can do, in about nine lines of code (excluding comments). Each circular patch on the graph represents a single process; larger patches indicate greater virtual memory size (the vsz format specifier to the ps command). Percent CPU use (%cpu) is on the vertical axis; you can see that most processes are using little CPU. More red color in a patch indicates a process whose resident size uses a greater percentage of the machine's memory (%mem).

To generate a graph similar to Figure 1 on your system, download the IPython log and replay it with ipython -pylab -p pysh -logplay <filename>. Covering the installation of IPython and related tools is beyond the scope of this article, but it should not be difficult; your distribution may already have packages available.
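As an illustration (this is a sketch, not the original nine-line IPython log), the following shows the data preparation behind a plot like Figure 1. The sample process data is fabricated, and the plotting step is guarded so the sketch runs even where Matplotlib is not installed.

```python
# A sketch of the data preparation behind a plot like Figure 1:
# one circle per process, circle size from VSZ, vertical position
# from %CPU, and redness from %MEM. The column layout follows the
# vsz, %cpu, and %mem format specifiers mentioned in the text.

sample = """\
  PID %CPU %MEM    VSZ
    1  0.0  0.1   2048
  842  1.2  2.0  51200
 1021  5.0  3.2  81920
"""

def scatter_args(ps_output, scale=0.01):
    """Return (x, y, sizes, colors) suitable for a pyplot scatter()."""
    xs, ys, sizes, colors = [], [], [], []
    for line in ps_output.splitlines()[1:]:      # skip the header line
        pid, cpu, mem, vsz = line.split()
        xs.append(int(pid))
        ys.append(float(cpu))                    # %CPU on the vertical axis
        sizes.append(float(vsz) * scale)         # larger VSZ -> bigger patch
        colors.append((min(float(mem) / 10.0, 1.0), 0.0, 0.0))  # redder = more %MEM
    return xs, ys, sizes, colors

xs, ys, sizes, colors = scatter_args(sample)

# With Matplotlib available (for instance inside `ipython -pylab`):
try:
    import matplotlib
    matplotlib.use("Agg")                        # render without a display
    import matplotlib.pyplot as plt
    plt.scatter(xs, ys, s=sizes, c=colors)
except ImportError:
    pass                                         # the data preparation above still works
```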

Even basic system administration tasks, using the simplest of traditional commands, benefit from supplementation with a high-level language.

The Browser Is the Desktop

There is definite enthusiasm for Firefox as a platform. The Web is already an essential tool; just about everyone has a browser open all the time. The browser is our interface for doing research, catching up on email, booking an airline ticket, and performing banking transactions. As mentioned at the beginning of this article, many SaaS applications are now mature, and more are on the way. Online tools can even handle photo editing. In fact, SaaS may already fulfill all the needs of some users.

Firefox, however, with its tabbed interface and long list of powerful extensions, makes even the wilderness of the Web seem like home. Could it not do the same for local applications? What if Firefox became a portal that mashed up plain web sites, remote SaaS applications, and local programs with each other? Is this the real future of the Web OS?

Surprisingly, even currently mature technology is nearly sufficient to make Firefox the primary user interface. The browser itself can perform the role of an X window manager (see below); its tabs provide an elegant alternative to traditional windows. After all, if users are spending more and more time on the Web, what could be more natural than just opening another tab to use an application, even if that application happens to be local? In fact, Mac OS X has a tool "which extends the tab browsing experience to the Desktop." The Ion X window manager also uses this approach.

Figure 2 shows Firefox running without a window manager. The tabs work as expected, and even dialog boxes are largely usable. For now, a minimalist window manager (blackbox or ratpoison, for example) would improve some functionality, such as the dialog boxes.

Figure 2: Firefox without a window manager; the "Preferences" dialog box is functional, despite the lack of a window frame

Of course, a Firefox tab is ideally suited to focus on just one application. Figure 4 shows OpenOffice, running with the ratpoison window manager. The VNC-based integration with Firefox is nearly identical to that in Figure 3. If Firefox included some window manager features, it could control applications directly.

A window manager designed specifically for the VNC-based integration illustrated here could replace or augment any window management facilities built directly into Firefox. With multiple tabs open to the same desktop, such a window manager could automatically launch and maximize the desired application, depending on which Firefox tab the user selects.

It should be relatively straightforward to complete the required command path. After all, a VNC applet running inside a tab can already operate a window manager, via the VNC server. This design would allow the browser to control multiple remote applications, as well as local programs.
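As a sketch of that command path, the following assembles the pieces for one application per tab. The helper name and return shape are invented for illustration; the only real conventions relied on are that X clients honor the DISPLAY environment variable and that VNC servers traditionally serve their Java applet on HTTP port 5800 plus the display number.

```python
# A hypothetical sketch of launching one application per Firefox tab
# over VNC: an Xvnc server, a minimal window manager (ratpoison) to
# maximize the application, the application itself, and the applet URL
# for the browser tab to load.

def vnc_session(display, app, geometry="1024x768"):
    """Return (env, server, wm, client, url) for a one-app VNC session."""
    env = {"DISPLAY": ":%d" % display}           # X clients pick the display from here
    server = ["Xvnc", ":%d" % display, "-geometry", geometry]
    wm = ["ratpoison"]                           # run with DISPLAY set from `env`
    client = [app]                               # likewise started under `env`
    # VNC's browser applet is conventionally served on port 5800 + display
    url = "http://localhost:%d/" % (5800 + display)
    return env, server, wm, client, url

env, server, wm, client, url = vnc_session(2, "ooffice")
print(url)
```

In a real launcher, each command list would be handed to subprocess.Popen with the DISPLAY variable merged into the environment, and the URL would be what the Firefox tab opens.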

The specialized window manager could also maintain support for a traditional desktop view (as shown in Figure 3), or a tool similar to Skippy could provide this functionality. In addition, a compositing feature could generate window controls in the form of transparent HUD-style overlays (such as those used in aircraft) instead of traditional window decoration. The idea is not to compete with Compiz or Beryl for visual sophistication, but to provide a very compact interface, which focuses the user's attention on just a single underlying task. Firefox itself would be the desktop environment, coordinating multiple tasks.

There are also alternatives to VNC (in various stages of development) for integrating X-based applications with Firefox. The WeirdX Java-based X server is one possibility. The WeirdMind project integrates WeirdX with the MindTerm SSH client (also in Java), giving encrypted shell and X access via the browser. Unfortunately, this project only supports the SSH1 protocol, and there are no recent updates.

In summary, a complete Firefox desktop is perhaps not ready today, but with some work on integrating the components, it should be possible to produce one quickly. This would be an important step toward a consistent interface between web applications and traditional local programs. It also translates well to future portable devices, where a tabbed interface would ease navigation and make efficient use of the smaller screen space. Since the web browser will almost certainly be a central part of such devices, it is a good choice for implementing the overall user environment.

Overall, the Firefox desktop should encourage a move toward mashup suites of online and installed code, instead of monolithic, standalone software of either type. Now, the distinction between local and remote applications really begins to disappear.

The Free Web OS

The logical conclusion of the above refactorings may well be the Web OS. It would, however, be a free (FOSS) Web OS, not a proprietary one. This, in turn, is what may finally allow broad adoption of the technology. If only a few large suppliers offer such services, then the Web OS must overcome the steep reluctance of customers to give up control of their data, reluctance which may have doomed Microsoft's Hailstorm, and could yet harm Google Apps. The free Web OS, however, supports many more options for deployment, to better suit the needs of different users.

For example, ISPs know Linux well. They have provided managed hosting on this platform for a while; some are now adding virtual private servers as well. For them to offer the Web OS, reached via the route described here, is not much more difficult than installing another Linux distribution. Many small and medium organizations may get better personal service from a local, trusted ISP than from a large supplier.

Similar arguments apply inside the corporate datacenter. With generic Linux boxes running a free Web OS, organizations can get most of the benefits, while retaining full control of their data.

Of course, products from large suppliers, such as Windows Live and Google Apps, will continue to have an important role. They will, however, be part of a diverse Web OS universe, a fact that may ultimately make them more successful than if they stood alone.

How will the free Web OS work? To fully understand any Web OS, you must consider at least the client machine, in addition to the server to which it connects. From the client's perspective, it is possible to use software in three major ways:

Wholly local (such as the Linux kernel)

Some components local (e.g., Firefox plugins), others online

Wholly online

Variations include downloading local code in binary form or compiling it from source (the approach used by Gentoo), and online code designed for multiple browsers or a single browser (such as Firefox 3.0).

Clearly, a pure Web OS is impossible; a kernel must run wholly on the client, for example. The exact mixture of components, however, can be very flexible in a free Web OS. This should allow multiple system flavors to emerge. In the future, it may even be possible to just go to a trusted site, and start using any application right in the browser, even if some (or all) of the components will be installed locally at first launch.

Advanced users may choose to run more software on their workstations, and assemble suites of online components on their own. Other users may choose a single Web OS provider, and run that provider's set of applications. Just as it is readily possible to switch ISPs today, a competitive Web OS environment will allow customers to select services from different vendors, piece by piece.
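One way to picture these flavors is a small manifest that records the deployment mode of each component, from which an installer derives what to fetch locally and what to reach online. The component names, modes, and format here are hypothetical, invented purely for illustration.

```python
# A hypothetical per-user "flavor" manifest for a free Web OS,
# following the three deployment modes listed above: wholly local,
# mixed (some components local, some online), and wholly online.

MANIFEST = {
    "kernel":       "local",    # must run wholly on the client
    "office-suite": "online",   # provider-hosted SaaS
    "mail":         "mixed",    # local browser plugin plus an online service
}

def install_plan(manifest):
    """Split a manifest into components to install locally vs. use remotely."""
    local  = sorted(n for n, mode in manifest.items() if mode in ("local", "mixed"))
    online = sorted(n for n, mode in manifest.items() if mode in ("online", "mixed"))
    return local, online

local, online = install_plan(MANIFEST)
print(local, online)
```

An advanced user's manifest would mark more components "local"; a user relying on a single Web OS provider would mark most of them "online". The manifest itself is the same either way, which is what lets flavors vary piece by piece.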

A Note on Governance

As mentioned earlier, it is my hope that the arguments presented here will create a discussion, from which genuine improvements will emerge. To this end, it is necessary to mention a few important things about project governance and the nature of debate.

Proper governance, however, needs a sound philosophy, a model, in order to function. Of course, nothing could be accomplished without the community's desire for fairness and justice, but the model is also essential. The community is the musician; the model is the instrument. Interacting with others based on the wrong set of assumptions will lead to problems, including process paralysis, the emergence of "poisonous people", or worse.

When considering FOSS governance and online discussions such as this one on the future of the Linux distribution, the notion of a scientific inquiry is particularly suitable. Just as FOSS emerged from the academic world to transform other areas (including business), the basic practices of science can inform projects outside the realm of research.

Here are the best parts of the scientific approach.

A willingness to explore the unknown, including unusual ideas, in a rational search for truth.

An understanding that even negative results (failures) provide valuable information.

A commitment to openness; all decisions have a definite, logical basis.

Explicit recognition of every contributor.

An effort to find prior work, to better inform current efforts.

Emphasis on gathering hard evidence and reproducible results.

Note that the common FOSS practice of encouraging patches from newcomers is an example of the scientific approach. New ideas are welcome, but it is important to show something that actually works.

The recognition of the value of negative results is in contrast to the prevailing strategy of blaming individuals for failure. The latter approach is actually of limited corrective value, and often leads to the kind of hostile situations cited at the beginning of this section. The Aviation Safety Reporting System (ASRS) offers a striking example of the value of learning from mistakes. Operated by NASA, the ASRS encourages detailed, penalty-free reporting of errors in the field of aviation, then anonymizes and publishes the results. The idea has clear interdisciplinary potential. Kim Vicente's book, The Human Factor, describes how hospitals that adopted ASRS-style techniques in place of the traditional focus on individual guilt saw medical errors decline by 90 percent.

If we adopt the spirit of scientific inquiry, then the resulting discussion has a chance to stimulate the emergence of a new type of platform. The subject will likely turn out to be much deeper than the familiar "Linux on the Desktop" or even "Linux for the Enterprise" debates. After all, the vastness and openness of the FOSS codebase is good for more than just following the existing trends. It can also uncover new trails.

Further Reading

This section highlights some of the items referenced in the article, and provides supplementary materials. The URLs are fully specified in the text, so that they show up on any printed copies.

Appendix A: Special Delivery

The packaging and delivery of Linux distributions is a difficult problem today. Current distributions are expensive and complicated to host, due to their size. Last year, Mepis faced this problem. The highly regarded distribution (number five on DistroWatch over the past 12 months at the time of writing) violated the GPL by not hosting the source code for every binary which it included.

Of course, Mepis was not trying to hide anything (the complete source code was readily available elsewhere) but it was still contrary to the GPL. The fault here is not with Mepis or the FSF. It lies in the fact that existing hosting, mirroring, packaging and delivery systems (which have served us well in the past) are beginning to show their age. Now is the time to consider what comes next.

As mentioned at the beginning of this article, diversity is the strength of FOSS. It should be easier for new distributions to get started. At the same time, having lots of identical, and nearly identical, programs floating around on the internet is wasteful and confusing. Which version is the canonical one, released by the project's authors? How do the available variants differ from one another?

What if the package management utilities were to download only the canonical project sources, via BitTorrent? These utilities could then use patches to "personalize" the code for a particular distribution. This approach is similar to how Gentoo works today.

Using BitTorrent in the package manager shields the user from having to deal with torrents directly. It also makes it possible to create a package-level repository of torrent seeds, which many distributions could share.

For source distributions, the seeds would derive directly from the released code for each project. For binary distributions, perhaps the Linux Standard Base (LSB), which has made binary compatibility between distributions its goal, could operate the required seed repository.

Such a "FOSS seed bank" would consume minimal resources. Downloading via BitTorrent makes far better use of the network than the traditional host-and-mirror approach. In fact, the need for mirroring should decline significantly.
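A rough sketch of how such a shared seed bank could look from the package manager's side follows. Every identifier here, the torrent ids, patch names, and distribution names, is hypothetical; the point is only that two distributions' packages resolve to one canonical download, personalized afterward by each distribution's own patches.

```python
# A hypothetical "FOSS seed bank": many distributions' packages point
# at a single canonical upstream torrent, and each distribution then
# personalizes the source with its own patch list (much as Gentoo
# applies patches over pristine upstream tarballs today).

SEED_BANK = {
    ("bash", "3.2"): "torrent:bash-3.2-canonical",
}

DISTRO_PACKAGES = [
    {"distro": "alpha", "name": "bash", "version": "3.2",
     "patches": ["alpha-paths.patch"]},
    {"distro": "beta",  "name": "bash", "version": "3.2",
     "patches": ["beta-branding.patch", "beta-paths.patch"]},
]

def fetch_plan(packages, seed_bank):
    """For each package: the shared canonical download, plus local patches."""
    plan = []
    for pkg in packages:
        torrent = seed_bank[(pkg["name"], pkg["version"])]
        plan.append((pkg["distro"], torrent, pkg["patches"]))
    return plan

plan = fetch_plan(DISTRO_PACKAGES, SEED_BANK)
# Both distributions pull from the same seed; only the patches differ.
shared = {torrent for _, torrent, _ in plan}
print(shared)
```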

In general, we should pay more attention to the location and flow of FOSS code across the global internet. Better organization of the entire distributed FOSS codebase, with clearly visible lineage information between projects, would allow everyone to use this vast universe of resources much more effectively.

Appendix B: Green Software

Of course, there is evidence that FOSS, which requires fewer computing resources and less frequent upgrades, is already friendlier to the environment. What is needed is a specific effort towards green software, in parallel to the movement towards the Web OS. Programs which emphasize efficient use of computer resources (memory, CPU, disk, and bandwidth) delay obsolescence, while requiring fewer machines in the first place. This, in turn, reduces e-waste and electricity consumption. Saving computing resources translates directly into preservation of actual physical resources.

Neither kernel changes nor virtualization alone are sufficient to create green software. First of all, developers of such system-level software must already consider resource conservation, otherwise the resulting platform would be impractical for running application code. Therefore, rethinking resource use at the application level would likely uncover many more unrealized efficiency gains. Also, virtualization, which provides a great efficiency benefit by consolidating underutilized hardware, may actually hurt performance under peak loads.

The Linux distribution, however, is more than just a kernel, or system-level tools. It includes thousands of applications, assembled into entire environments. A "SaaS Ready" distribution can provide built-in examples, frameworks, network protocols, and benchmarking suites (complete with GUIs) to guide developers in writing more efficient applications. In turn, careful attention to conservation at the application level should reduce resource requirements (particularly electricity) that would hamper the adoption of SaaS and the emergence of the Web OS.
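As a tiny sketch of the sort of measurement such a benchmarking suite could provide, the following reports wall-clock time, CPU time, and peak memory for a workload, using only the Python standard library. Note that the units of ru_maxrss vary by platform (kilobytes on Linux, bytes on some other systems), and that a real suite would of course measure far more than this.

```python
# A minimal sketch of resource measurement for "green software"
# benchmarking: run a workload, report wall time, CPU time, and
# peak resident memory, so developers can compare implementations.

import resource
import time

def measure(workload):
    """Run `workload` and return (wall seconds, CPU seconds, peak RSS)."""
    start_wall = time.time()
    start_cpu = resource.getrusage(resource.RUSAGE_SELF)
    workload()
    end_cpu = resource.getrusage(resource.RUSAGE_SELF)
    cpu = ((end_cpu.ru_utime - start_cpu.ru_utime)
           + (end_cpu.ru_stime - start_cpu.ru_stime))
    peak = end_cpu.ru_maxrss          # kilobytes on Linux
    return time.time() - start_wall, cpu, peak

wall, cpu, peak = measure(lambda: sum(i * i for i in range(100000)))
print(wall, cpu, peak)
```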

George Belotsky
is a software architect who has done extensive work on high-performance internet servers, as well as hard real-time and embedded systems.