Modularity

In today's editorial, David Symonds shares his views on what's good
and what's bad about modularity, and suggests that more tools should take advantage of modular techniques.

The Present

Modularity is often referred to as a good thing. So what's so good
about it? And is it always good? Could modularity ever be bad?

The first thing most people ask about anything is: "What's in it for
me?" In the software world, there are generally two categories of
people: developers and users. For users, the convenience of being able
to swap and rearrange configurations is great; it also removes a fair
bit of dependence on the supplier. For developers, modularity can ease
the production of software by breaking a program into distinct, yet
interconnected, components.

Apache

The ubiquity of the Apache HTTP
server is a testament not only to the effectiveness of Open Source
development, but also to the benefits of modularity. Perhaps the two
most widely-used modules for Apache are mod_perl and mod_php. They're
both used for dynamically generating content, and both were developed
separately from the Apache Project. The obvious implication is that
without Apache's clearly defined module API, mod_perl and mod_php
would not have succeeded anywhere near as well as they have.
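For the record, wiring such a module into Apache 1.3 takes only a couple of lines in httpd.conf. The library paths below are illustrative and vary between distributions:

```apache
# httpd.conf -- paths are illustrative; actual locations differ per install
LoadModule perl_module libexec/libperl.so
LoadModule php4_module libexec/libphp4.so
AddModule  mod_perl.c
AddModule  mod_php4.c
```

Restart the server and the new functionality is live, with no recompile of Apache itself.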

Kernel Modules

Another example of the benefits of modularity comes in the form of kernel
modules (for example, in Linux). I, for one, use kernel modules
frequently. At the time of writing, I have 28 modules, and 4 of them are
currently loaded (for my sound card). When I connect to the Internet, 3
more are loaded (for PPP); after disconnection, they're unloaded. When I
need to do some printing, 3 modules are loaded (for parallel port access),
and unloaded automatically after a period of time. When I mount an Iomega
Zip disk, the ide-floppy.o module is loaded; after unmounting the disk,
the module is unloaded.

Since my hard disk is formatted using the ext2 filesystem, ext2
support is compiled directly into the kernel rather than built as a
module. I periodically have to read and write disks for exchange with
Windows 95 machines, so I keep the vfat.o and fat.o drivers as
modules; they stay unloaded until a FAT disk is actually mounted.

For me as a user, all this is an incredible convenience. Quickly
checking, I find that the 4 sound modules eat up 94.5K of memory. If I
load all 28 modules, this figure jumps to a whopping 431.8K. Although
that might not sound like much, it's huge when compared to my
kernel of 408K. If all those modules were compiled in, not only would
it take longer to start up, but more RAM would be taken unnecessarily
by the kernel, and it would be very difficult to reconfigure some
components on-the-fly. At present, if I wanted to, say, change which
IRQ the sound card uses, I simply edit the configuration file
(/etc/isapnp.conf), and restart the sound subsystem
(/etc/rc.d/init.d/sound restart). Modularity makes reconfiguring your
sound card as simple as reconfiguring Apache!
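The memory arithmetic above is easy to reproduce: lsmod reports each module's size in bytes, so a one-line awk script totals them. The module names and sizes below are a fabricated sample in the 2.2-era lsmod format, chosen so the total matches the article's 94.5K figure; real output will differ.

```shell
# Sum the sizes lsmod reports (column 2, in bytes) and print kilobytes.
# The sample data is made up to match the article's 94.5K figure.
sample='sound   60000  0
ad1848  16000  0
uart401 14000  0
opl3     6768  0'
total=$(printf '%s\n' "$sample" | awk '{ s += $2 } END { printf "%.1f", s / 1024 }')
echo "${total}K used by sound modules"
```

Against a live system, you would pipe `lsmod` itself (skipping its header line) into the same awk one-liner.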

OpenGL

One of the best things about OpenGL is the modularity of its pipeline
interface. When nVidia came out
with its GeForce line of chips with geometry transformation and
lighting in hardware, every OpenGL game instantly got a speed boost
without even needing to be recompiled. To feel the benefit, users
simply bought and installed the card, installed the drivers, and ran
their OpenGL games. That's the beauty of modularity
-- when well designed, it makes upgrades a matter of Plug and
Play.

The Dark Side of Modularity

Unfortunately, modularity isn't universally applicable. There are
several situations in which modularity can make things worse, either
by slowing things down, complicating matters, or providing security
holes. Since the reasons are mostly technically obscure, I'll simply
list two examples below:

Microkernels -- they compromise speed for the sake of stability and
security (though this is not necessarily a bad thing).

Some of the original SGI machines -- they were so modular that a thief
could simply open the side of the case and pull various components out
(even while the system was still running!).

The Future

Ok, so some things are modular. Where do we go from here?

If someone asked me this question, I'd ask him to think about it
himself; there are so many things that aren't modular. Unfortunately,
things are going to stay this way as long as backward compatibility is
deemed important. Freedom from that constraint is one of the primary
reasons BeOS is so efficient -- the designers
threw away the current conventions and designed the system from the
ground up. The result is exactly what it was designed to be: a
fantastic multimedia operating system.

Throwing the bath-water out with the baby

So what needs to be thrown away? At the risk of becoming a smoking
pile of debris (no flames please), the first thing would be the Unix
commandline pipe structure. Don't get me wrong; I live and breathe the
commandline, and have certain misgivings about X. But the pipeline is
too specific. The chief shortcomings I see: a pipe is strictly linear
(it cannot branch to feed two consumers at once) and strictly local
(it cannot carry a stream across the network without bolting on extra
tools).

In my fantasy world (where, of course, I'm using Linux 4.2.18), a
single pipeline would tar the 'myfiles' directory and simultaneously
gzip it to a
backup tarball (locally) while bzip2ing it and sending it to a remote
machine over the network. Not only would that be neat, it would be
useful for many applications.
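For what it's worth, the linear pipe can be bent this way today, just not gracefully. The sketch below uses tee to fork the tar stream; since "backup-host" is a hypothetical machine, the remote leg is shown as a comment and the stream is written locally instead, and the gzip copy is made in a second step because plain sh has no branching pipe syntax.

```shell
# Fan one tar stream out to two compressed copies. POSIX sh; the
# remote destination is hypothetical, so we write locally instead.
mkdir -p myfiles && echo "sample data" > myfiles/a.txt
tar cf - myfiles | tee backup.tar | bzip2 > remote-copy.tar.bz2
# in the fantasy version:           ... | ssh backup-host 'cat > backup.tar.bz2'
gzip -9 backup.tar        # yields backup.tar.gz, the local copy
```

It works, but the branching is bolted on with tee and a temporary file, which is exactly the kind of inconvenience the fantasy syntax would remove.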

Hardware

My favourite characters in movies have always been the snipers
(assuming, of course, the movie in question actually has a
sniper). They'd trek to their location (perhaps an abandoned building
with a tower), open their weapons cases, and put their sniping rifles
together from individual components. They'd grab the main firing
chamber, screw the silencer onto the front, clip on the scope, lock in
an ammo magazine, and shoot away.

Like that sniper, I prefer it when my stuff fits together. I like
being able to record MP3s to cassette simply by connecting the
speaker-out of my soundcard to the microphone jack of a tape deck,
hitting the deck's record button, and invoking mpg123. So why do I keep seeing KDE- or
GNOME-specific applications that could quite easily have been
generalized to use, for example, the plain Qt toolkit? Why should a cardgame
care whether it's running under KDE, GNOME, or plain X? Why does a
clock program require KDE?

Why are so many people so unbelievably misguided? As Martin Luther
King put it, "We have guided missiles, and misguided men". KDE is a
desktop environment; it's for application frontends, not
a basis for applications themselves.

Intel, AMD, Cyrix, Intel, Transmeta, Intel (oh, did I mention
Intel?)

Just three years ago, the standard connection between x86 processors and
the motherboard was the ubiquitous "Socket 7" ZIF (Zero Insertion Force)
socket. Then Intel brought out the Pentium II (May 1997), with a
radically different CPU-to-motherboard connector: "Slot 1". From there,
it's all gone downhill, with "Slot 2" (Intel; Pentium II Xeon, June
1998), "Socket 370" (Intel; Celeron PPGA, January 1999), and "Slot A"
(AMD; Athlon, August 1999).

Unfortunately, I don't think things are going to get better anytime soon,
especially with "The CPU Formerly Known As Merced" (Itanium) just around
the corner, and the Pentium 4 coming even sooner (late 2000). Can
anything be done? The answer to that question would be a whole
editorial in itself.


David Symonds (xoxus@usa.net) is a first year
Computer Science student at the University of Sydney, Australia. His
current career ambition is to find a job.


Recent comments

That the author wrote it to learn KDE was most probably the reason in this case. Another possible reason could be that the program wanted to use the look-and-feel of Qt/KDE.

KDE (and also GNOME) is more than just a widget set among many others.
Our 'problems' with them result from neither KDE nor GNOME being the standard desktop environment, and surely from several technical reasons as well. KDE was intended to be the desktop environment, so there would be no reason against using it in your application. Look at MS Windows: if you were programming for that system, would you mind using the Win32 GUI? Of course, with Linux, reality is different.

KDE and GNOME should join efforts to provide at least some sort of a common configuration (Start menu, MIME types, maybe even GUI appearance).

And I do not think only frontends should use KDE. Personally, the thing I always disliked with KDE was the absence of "real applications".

Trivial Program examples
I find the examples of essentially trivial programs (i.e., a clock) being linked against GNOME or KDE silly. Such things are almost certainly written by an author who wanted to learn KDE/GNOME programming, not someone who has spotted an otherwise unnoticed hole in the clock application market. As such, they are handy educational tools, both for the people who write them and the people who follow in their footsteps.

DLL Hell is a good bad example
For a good example of problems with modularity, look at Windows DLL Hell. Many applications depend on shared system (or semi-system) DLLs. They run fine with the versions of these DLLs they were tested with, but then installing another application replaces the shared DLLs with newer (and sometimes older!) versions. The new version has a new bug or requires stricter compliance with an interface contract, and the first application blows up.

Author's Response
I thought I'd better respond to some of these comments...

B!nej: True, there are different types of users, but it's entirely possible to set things up so that the "casual user" can just power-up and go, while allowing the "dedicated user" to flick a switch and get all the power. If the preset configuration is adequate for the casual user, they won't get frustrated and confused (cf Corel Linux); however, the dedicated user can still jump in and configure things relatively easily to their taste. Same things go for developers.

Microkernels: AFAIK, one of the selling points of microkernels has been the supposed stability due to the protection-ring structure preventing, say, floppy drivers from crashing the core kernel. Also, WinNT may say it uses a microkernel, but it doesn't. The whole kernel (and drivers) occupy the same address space, and can walk over each other quite easily.

The SGI box: That's exactly what they did; attached a big metal bar across the access plate. Sorry, probably a bad example.

Networking pipes: The point I was trying to make was the difference between functional and procedural programming languages.

Linking to KDE: But why does the clock program have to link to KDE at all?

samiam: There are so many specialised programs out there that don't need the other pieces GNOME adds. Clock programs barely need any configuration.

Priyadi: I'm not saying that the shell should support networking by itself; it could quite easily be configured to launch a given utility to do it for it. Also, good modularity would mean that I wouldn't have to care about which protocol to use, nor would the shell. The shell should simply hand control over to a program that can decide which protocol/port is the best to use (after considering speed, network load, security, etc).
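A hypothetical sketch of the hand-off described above: the shell pipes bytes into a helper, and the helper -- not the shell -- picks the transport. The "netpipe" name and its selection rule are inventions for illustration; this version only reports its choice rather than opening a connection.

```shell
# "netpipe" is a hypothetical helper: the shell hands it the stream,
# and it decides how to move the bytes. For illustration it merely
# reports the transport it would pick instead of connecting.
netpipe() {
  if command -v ssh >/dev/null 2>&1; then
    echo "would stream via ssh to $1 (encrypted, authenticated)"
  elif command -v nc >/dev/null 2>&1; then
    echo "would stream via nc to $1 (raw TCP, fine on a trusted LAN)"
  else
    echo "would spool to a local file for $1 (no transport found)"
  fi
}
```

A real helper would weigh speed, network load, and security as suggested above, and the invocation would read naturally: `tar cf - myfiles | bzip2 | netpipe backup-host`.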

Okay, I admit that the networkable-pipe example was probably a bad choice. I should have made it more clear that I know it's currently possible; it's just not convenient or simple.