The Front Page

bear has been The Editor Gal's teddy bear for some time now. He got into
Linux only recently, having looked up to xteddy and seen how much he
enjoyed the desktop. bear likes Fvwm button bars best (though large dock
icons are nice too) and has a snug $HOME in /usr/share/xteddy.

Heather got started in computing before she quite got started learning
English. By 8 she was a happy programmer, by 15 the system administrator
for the home... Dad had finally broken down and gotten one of those personal
computers, only to find it needed regular care and feeding like any other
pet. Except it wasn't a Pet: it was one of those brands we find most
everywhere today...

Heather is a hardware agnostic, but has spent more hours as a tech in
Windows related tech support than most people have spent with their computers.
(Got the pin, got the jacket, got about a zillion T-shirts.) When she
discovered Linux in 1993, it wasn't long before the home systems ran Linux
regardless of what was in use at work.

By 1995 she was training others in using Linux - and in charge of all the
"strange systems" at a (then) 90 million dollar company. Moving onwards, it's
safe to say, Linux has been an excellent companion and breadwinner... She
took over the HTML editing for "The Answer Guy" in issue 28, and has been
slowly improving the preprocessing scripts she uses ever since.

We have guidelines for asking and answering questions. Linux questions only, please.
We make no guarantees about answers, but you can be anonymous on request. See also: The Answer Gang's
Knowledge Base
and the LGSearch Engine

Contents:

Greetings from Heather Stern

Greetings, folks, and welcome once more to the world of The Answer Gang.

By the way, if you've any particularly marvelous answers (might we say, rants
of the enlightening and useful variety) - feel free to mail them to us - you
too could be part of The Answer Gang.

I often use this space to talk about whatever it is I'm up to. I have not
had a great month, really - you'd rather not know. Suffice to say I'm looking
forward to Chinese New Year on February 9th, so that I can declare the awful
mess as belonging to the previous year, and move on.

My Star Trek crew is expecting to run the Internet Lounge at my nearby music
convention,
Consonance, first weekend of March.
I do believe that our "Songs In The Key of Tux" series will be a big hit there
- thanks Jimmy!

Here's to hoping that you and yours are faring better than me and mine have.
If not, fear not - things do improve. Even if it's up to us at the last to
make it so - this is a world of choices, and malleable tools like Linux allow
us the freedom to make the most of them. 'Til next month -- Heather

Update vs Install, how best to manage /home?

From Edgar Howell

Answered By: Neil Youngman, Thomas Adam, Mike Orr, Benjamin Okopnik.

Before I go any further, here is the environment on the machine
in question, SuSE 9.2 on both drives, no other OS:

/dev/hda (non-Internet drive, system doesn't even know about a modem, /etc/fstab mounts /dev/hdb2)

1 swap
2 /
3 /home

/dev/hdb (the drive booted for Internet access, /etc/fstab has no information about /dev/hda)

1 swap
2 /

a) update vs install

In part because I tend to omit a couple of releases instead of just
blindly installing successive releases but also because I used to
install new software into a new partition and play with it for a
while before removing the previous version, in the past I have
always done a clean installation, with all that entailed: creating
the users again, /etc/hosts* (SOHO network), and the like.

[Thomas]
When you do a new install of a distro (whether a brand new one, or an
upgrade to a new point release of one you currently have) you can
instruct it not to touch certain partitions (like /home). This means you
don't have to worry about loss of data. You mentioned UIDs. Back up
/etc/{passwd,shadow} beforehand.

Recently I experimented with update. It worked well, avoided lots
of questions and seemed really neat. But I had again skipped a
couple of releases and ultimately discovered some problems.

[Thomas]
I can't see where the contention is. An upgrade saves a lot of time,
since all you're doing is upgrading the software, nothing more.

[Mike]
I haven't used SuSE much, but that's the general problem you get when
updating through multiple OS releases. Debian has special scripts
you're supposed to run to switch releases; they try to automate the
tricky stuff but sometimes they can't foresee what your configuration
has turned into. And they have to be run in order without skipping. If
you don't update packages frequently and don't follow the user
forums/newsletters where potential breakages are discussed, I would
stick with the clean install and copy your configuration. That way you
know everything has the latest recommended configuration, whatever that
is. It also provides a chance to clear out the cruft that has
accumulated from packages you installed but never used; cruft that may
leave droppings in the filesystem when uninstalled since it wasn't built
by a clueful developer.

Alternatively, back up your entire system onto your second drive and
make sure you can boot into it, then update your primary system. That
way if something breaks you can just blow it away and go back to where
you were.

/home isn't a big deal. If you have it on a separate partition like you
do, just let the fresh install create its own home directory in the
root partition. You'll have to do everything as root anyway while
you're installing, so just pretend home doesn't exist. Then when
everything's settled, delete the bogus home and mount your real /home
partition. Same for /usr/local if you store stuff in there. I keep a
/mnt/data partition with my home and local stuff, and use symlinks to
get to them. That also lets me have multiple OS versions installed, all
sharing the same local data. And I can unmount the local data when I'm
afraid an upgrade might hurt it.

Under the old version the first user ID was 500 and under 9.2 it is
1000. That of course caused problems in the above environment:
/dev/hdb under a completely new installation got new user IDs,
/dev/hda under the update inherited the old ones. It was fun to
re-boot into /dev/hdb after I wrote to it having booted from
/dev/hda...

[Mike]
The easiest way is to recreate the users with the same UIDs and GIDs
they previously had. You may have to run "useradd" manually to do it.
(Or "adduser" on some systems.) If your UID overlaps with an existing
UID on the new system, you'll have to compromise somehow. If you give
each user their own group rather than putting them all into "users",
you'll have to create the group first. On my Gentoo:
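Mike's example isn't reproduced here; on most systems the commands would be
along these lines (username, UID and GID invented for illustration, and
both commands need root, so this is a transcript rather than something to
paste blindly):

```shell
# create the per-user group first, then the user with its old UID/GID
groupadd -g 1000 edgar
useradd -u 1000 -g 1000 -d /home/edgar -s /bin/bash edgar
```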

This is the best way if you want to boot back and forth between OS
versions, you have files with unexpected owners inside the home
directories, or you have programs that refer to users by UID rather than
name.

Alternatively, you can just go with the new UIDs and switch the existing
home directories with "chown -R USER.GROUP /home/USER". (Note that
chown is going through a syntax change from "USER.GROUP" to
"USER:GROUP"; you'll have to see which syntax your version supports.)

[Ben]
Being the tool-using critter that I am, things like this (the word
"manually", specifically) bring a shudder to the spine and a script to
mind.
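A sketch of the kind of script Ben means (my reconstruction, not his
original): read a backed-up passwd file and print, rather than run, the
useradd commands that would recreate each user with its old UID and GID.
The sample data, the 500 cutoff, and the file paths are all invented for
illustration.

```shell
# a made-up backed-up /etc/passwd, for illustration only
cat > /tmp/old-passwd <<'EOF'
root:x:0:0:root:/root:/bin/bash
edgar:x:500:100:Edgar:/home/edgar:/bin/bash
heather:x:501:100:Heather:/home/heather:/bin/bash
EOF

# print (not run) the useradd commands that would recreate each
# regular user; review the result before feeding it to a root shell
awk -F: '$3 >= 500 {
    printf "useradd -u %s -g %s -d %s -s %s %s\n", $3, $4, $6, $7, $1
}' /tmp/old-passwd > /tmp/recreate-users.sh

cat /tmp/recreate-users.sh
```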

[Thomas]
This presupposes that the users were added with "adduser" to begin with
(note that UIDs from 1000+ are indicative of this). But on some
systems, UIDs > 499 are used as a valid starting place for normal user
IDs.

[Ben]
I was walking past a haberdashery and happened to see a nice hat in the
window, so I extracted a '1000' from it. };> The number would have to
come from examining the password file on the current system and adapting
the existing number range to what is available - obviously, there's no
single 'right' answer. Sorry I didn't make that more explicit.

Oh, and the '<N>' in the '/mnt/hda<N>' isn't the explicit version
either. :)

What does the Answer Gang recommend, update or clean installation?

[Thomas]
An update.

[Neil]
As you've noted a clean install requires you to set the whole system up again,
whereas a good update should be seamless. Again as you note, there may be
circumstances where the update leaves the seams showing. A clean install will
normally leave you with a nice consistent system, with any cruft that was in
your configuration cleaned out and everything shiny and sparkly.

Obviously, if you're changing distributions, rather than just changing
versions of the same distribution then a clean install is the way to go.

Personally I incline towards doing a clean install every so often. If you're
only taking every 3rd release or so, then a clean install may be worth the
effort, but if you're putting every release on, then I would alternate
upgrades and clean installs, or even keep the clean installs to every 3rd
release.

In practice, I tend to have a number of old releases lying around in separate
partitions, so I wipe an old partition and install there, then when I'm happy
I've got it set up the way I like it, I copy /home across and change my
default boot. This means I also have a number of old copies of my home
directory left lying around.

b) managing /home etc.

I have read recommendations about distributing the various
directories but assume that they only apply to environments with
different physical drives (load-balancing). In this specific
installation there is only one hard drive (at a time) involved.

[Thomas]
This "load balancing" concept is a marketing myth, a band-wagon term
that gets thrown around and that people latch on to.

[Neil]
Generally, I think it's about reliability more than load balancing. The point
being that if some eejit fills up one partition, e.g. /home, with junk,
there's still space in the other partitions, e.g. /var and /tmp, for the
system to go on about its business until the problem is rectified. If it's
all in one big partition then the whole system is likely to fail.

In practice that's more applicable to big IT departments than simple home
systems. At home I install everything in one big partition. It keeps things
simple and I've had no problems with reliability, but I wouldn't recommend it
for my work systems.

How can one best deal with update or install in order to avoid
having to back up /home, waste the drive, install the software
and then restore /home?

[Thomas]
(see above about partitions and installation)

[Mike]
For ease of use, put everything in one partition. To guard against disk
I/O errors or stupid admins who don't look before they "rm -rf", put
/boot, /home/ and /usr on separate partitions, make /boot and /usr
readonly, and don't mount /boot by default. The reason is that if a
disk sector goes bad, it may take the entire filesystem with it, but it
won't disturb other partitions. Likewise, users can't mess with stuff
that's mounted readonly. The down side is managing the partitions and
predicting how big to make them. If one gets full but another is mostly
empty, you'll have to repartition or use symlinks.
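A hypothetical /etc/fstab fragment for that scheme (device names and
filesystem types are invented; substitute your own partitions):

```
/dev/hda1  /boot  ext2  ro,noauto  0 2   # not mounted by default
/dev/hda2  /      ext3  defaults   0 1
/dev/hda3  /usr   ext3  ro         0 2   # remount rw only for upgrades
/dev/hda4  /home  ext3  defaults   0 2
```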

Here /home is on a different partition than the partition with the
software. Will users (IDs) be created with respect to a partition
/home?

[Thomas]
An update doesn't concern itself with changing critical information such
as that -- the only way {U,G}IDs would be affected is if shadow or login
were updated -- and even then, the config files are not touched as a
result.

[Mike]
User IDs (UIDs) are created in /etc/passwd according to the existing
entries in /etc/passwd. The smallest unused number is assigned, subject
to a minimum and maximum. Where /home is mounted doesn't matter. /home
doesn't have to be mounted at all if you don't use the -m option
(useradd), which creates the home directory.
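The "smallest unused number" rule can be illustrated with a few lines of
awk over a sample passwd file (made-up data; a real useradd reads
/etc/passwd and takes its minimum from /etc/login.defs):

```shell
# a made-up passwd file where 1000 and 1002 are taken but 1001 is free
cat > /tmp/passwd.sample <<'EOF'
root:x:0:0:root:/root:/bin/bash
alice:x:1000:1000::/home/alice:/bin/bash
carol:x:1002:1002::/home/carol:/bin/bash
EOF

# smallest unused UID at or above the minimum (1000 here): prints 1001
awk -F: '$3 >= 1000 {used[$3]} END {
    uid = 1000
    while (uid in used) uid++
    print uid
}' /tmp/passwd.sample > /tmp/freeuid
cat /tmp/freeuid
```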

OK, I've been wanting to do a completely new installation on
/dev/hda, let's just try it...

A little later: it turns out that things can be quite simple
after all.

YaST was so unhappy at being told not to format / that I backed
up the one non-system-directory on it and let YaST reformat it.
Other than that and my dictating the partitioning, it was a vanilla
install; I basically just accepted whatever was suggested.

[Thomas]
Of course it would be unhappy about that. /etc, /lib, /bin, /sbin -- these
are all directories the installer will need access to. It's highly
unlikely they're on their own partitions (you wouldn't need nor want them
to be), so they're under "/".

[Mike]
I've never seen a Linux installer that sniffled and sulked if it
couldn't format something, but I guess there's always a first.
Usually they have options to let you mount a preformatted partition.

What was really neat is that YaST had absolutely no trouble using
a previously available /home! True, I had to re-create the users,
but that is normal and in the process YaST notes the presence of
corresponding home directories and asks whether they are to be used.

That pretty much solves that problem, for me at least. The Germans
have a saying, roughly: trying something out is better than studying
it.

But I'd still appreciate comments. Is there a gotcha somewhere?
Hmmm, this SOHO doesn't have many users. And what about all the
settings in /etc and /boot? Would it have been possible to copy
/etc/passwd and /etc/shadow from a backup? Sounds like that
particular ice might be getting a bit thin...

[Neil]
Settings in /etc and /boot are best created from scratch when doing a clean
install IMO, especially when any of them are maintained by automated tools
like YaST. There's always the possibility of significant changes between
versions and just copying back your old settings can be a bit risky, although
9 times out of 10 you won't have a problem.

All IMO. Unlike many in the gang, I don't sysadmin, so others may have more
authoritative answers. NOTE: We do not guarantee a consensus ;-)

[Mike]
Copying the user entries from /etc/passwd is fine, as long as the
numbers don't overlap. Just make sure nothing else is editing
/etc/passwd simultaneously, or use "vipw" to lock it while you're
editing. /etc/shadow is probably fine too, just be aware that the
other distribution may have a different file location and different
syntax. If it
doesn't recognize the new password, you may have to restart... something.

(Actually, the UIDs can overlap if you really want two usernames treated
the same. Some people use this to have a second root login with a
static shell (/bin/sash). This is useful if you hose your dynamic
libraries; with a static shell you can repair the damage. Just copy
the root line, leave the UID 0, change the username to something else,
and set the password.)

after installing new kernel, running lilo crashes system

From Ridder, Peter, AGU

Answered By: Neil Youngman, John Karns

Hello,

I have a Knoppix 3.3 HD installation on a Dell Latitude 610.
The HD holds the following partitions:

After adding a new compiled kernel in /boot (/boot/bzImage-2.4.24 for example)
adding lines in /etc/lilo.conf like:

image=/boot/bzImage-2.4.24
label=Li_new

and running /sbin/lilo

results after a restart in: L 99 99 99 etc.

And the file /boot/boot-menu.b doesn't exist! Also there is no /boot/boot.b
Why does it work until I try to install another kernel?

[Neil]
According to the LILO man page

...............

"Errors 99 and 9A usually mean the map file (-m or map=) is not readable,
likely because LILO was not re-run after some system change, or there is a
geometry mis-match between what LILO used ('lilo -v3' to display) and what
is actually being used by the BIOS (one of the lilo diagnostic disks,
available in the source distribution, may be needed to diagnose this
problem)."

...............

[John]
I haven't taken the time to trace through the init scripts to track down
the specifics, but my hd install of knoppix 3.4 doesn't maintain the boot
partition mounted under the /boot dir, presumably unmounting it at some
point during system boot. It is accessible as an automounted partition
thereafter. The /boot directory you see is the mountpoint referenced by
lilo.conf.

You can also see those files by mounting the partition manually, or by
clicking on the hda6 desktop icon and looking at the files that come up
in konqueror as a result.

If you then see the "missing" files, then after having mounted the
partition under /boot as above, you can copy your compiled kernel there
and edit lilo.conf. It's usually a good idea to make a new entry in
lilo.conf for the new kernel, leaving the old one untouched for a fall-back
in case for some reason the system won't boot from the new kernel. Just
copy the existing lines referring to the original kernel, and paste them in a
new stanza above or below the original. Then edit them making name
changes as necessary to refer to the new kernel. After saving, you can
then run 'lilo' (also as root).

Assuming this all works for you, then 'umount /boot' and delete the kernel
you had put there before. It isn't seen when there is a partition mounted
at /boot, and just takes up disk space.
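Assuming /dev/hda6 is the boot partition (as the desktop icon John
mentions suggests), the whole procedure as a root transcript would be
roughly the following; the partition name and kernel filename are
assumptions, so don't paste this blindly:

```shell
mount /dev/hda6 /boot        # make the real boot partition visible
cp bzImage-2.4.24 /boot/     # copy in the newly compiled kernel
vi /etc/lilo.conf            # add a new stanza; keep the old one
lilo                         # rewrite the boot map
umount /boot                 # then remove the stray kernel copy
```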

finding then catting

I run a smallish mail server and am using SquirrelMail for web-based email. I
use spamassassin/procmail to move emails that are borderline spam to
/home/username/Trash.

My users have been instructed to occasionally log into SquirrelMail and look
through their Trash folder and empty it, even if they're popping
in with the dreaded Outlook Express. They of course don't, and it's becoming a
problem. I need to run a command that will find all the files called Trash in
the users' home directories and empty them. I can't simply delete this file as
that causes SquirrelMail to generate an error and I would get many many phone
calls, even though SquirrelMail will fix this problem on their next login.

This is my third attempt at automating this procedure and my third failure.

I can do this:

find /home -name Trash -exec ls -al {} \;

and this:

find /home -name Trash -exec rm {} \;

but not this:

find /home -name Trash -exec cat /dev/null > {} \;

[Neil]

It's the redirection that's the problem. If you quote the '>' thus:

find /hometest -name Trash -exec cat /dev/null '>' {} \;

it will work, with the caveat that you may still hit some "trash" files in
subdirectories.

Check where you ran the command. You will find an empty file called "{}",
created by your redirection. The command you ran was equivalent to

find /hometest -name Trash -exec cat /dev/null \; > {}

That will empty anything called Trash in subdirectories as well as in the
login directories. To only hit files in the login directories you should use
a for loop, e.g.

for file in /home/*/Trash
do
echo -n > $file
done

Before trying this, put another echo in front of the 'echo -n > $file'
line, so you can see the commands it will run and sanity-check them before
running it for real.
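Neil's loop can also be tried out safely in a scratch directory first;
/tmp/hometest here is a stand-in for the real /home:

```shell
# set up two fake home directories with non-empty Trash files
mkdir -p /tmp/hometest/alice /tmp/hometest/bob
echo spam > /tmp/hometest/alice/Trash
echo spam > /tmp/hometest/bob/Trash

for file in /tmp/hometest/*/Trash
do
    echo -n > "$file"    # truncate: the file stays, its contents go
done

ls -l /tmp/hometest/*/Trash    # both files still exist, now 0 bytes
```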

What errors are you getting? Do you have permissions to write to these files?

or this:

find /home -name Trash | xargs cat > /dev/null

[Neil]
That wouldn't work. You're just listing the files and directing the output
to /dev/null, which won't achieve what you want.

While root, when I do this:

find /hometest -name -Trash -exec cat /dev/null > {} \;

it runs and exits after a second giving me a new prompt (a carriage return)
and no error messages.

Your "for" script worked great and is short and sweet. I'm very grateful;
however, for my own information, I'd still like to understand what's wrong
with my find syntax/structure. If you guys post this solution on the website
you should put in the keywords "empty files". I've googled for all kinds of
crazy things and never found a solution.

[Jason]
Look carefully at your command.

find /hometest -name -Trash -exec cat /dev/null > {} \;

This runs "find /hometest -name -Trash -exec cat /dev/null" and
redirects the output to a file named "{}".

Quoting the '>' doesn't help since find doesn't use the shell to expand
commands given with -exec. (That is, if you quoted the ">", cat would be
run with three arguments. The first would be a file named "/dev/null".
The second would be a file named ">", which cat would probably complain
doesn't exist. It is possible you might actually have a file named ">",
but it's such a weird and confusing name that you probably don't. And
the third would be the name of the file you're trying to truncate.)

If, for some reason, you needed to use "find" (perhaps to only truncate
files with a certain mtime, or whatever), you could use a script like
this:
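A sketch of such a truncate.sh (a reconstruction, so treat the details as
assumptions; the numbered file test is the step Thomas discusses below):

```shell
# write the script out, then exercise it on a sample file
cat > /tmp/truncate.sh <<'EOF'
#!/bin/sh
for f in "$@"
do
    # 1. skip anything that isn't a regular file
    [ -f "$f" ] || continue
    # truncate to zero length; the file itself survives
    : > "$f"
done
EOF
chmod +x /tmp/truncate.sh

echo "some spam" > /tmp/sample-Trash
/tmp/truncate.sh /tmp/sample-Trash
```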

name it truncate.sh or something, make it executable, and save it
somewhere. Then you could do:

find /path/to/files -exec truncate.sh {} \;

...or use xargs, or whatever.

[Thomas]
There's nothing wrong with your implementation, but it is worth noting
that the test is simply going to add another "thing" for the script to
do. If the number of files is vast, this is just going to slow it down.
You could remove [1.] entirely and let find match the files beforehand:

find . -type f -exec ./truncate {} \;

[Jason]
Oh! I didn't think of that. That is better than silently dropping
non-existent and non-regular files.

[Thomas]
I could hash this argument out in any number of combinations involving
xargs, -exec, etc, with arguments as to whether you should use a shell
script, etc., etc.

[Jason]
Yes, and you probably would be wanting to use xargs if the number of
files is vast.

[Thomas]
Maybe. But that will still fail where a filename has spaces in it.
Example:

Ignoring the "ignore/" directory, I've got a file with spaces in the
filename [1], as well as a 'normal' file. If I wanted to truncate the
files in the CWD above, I might use:

find . -type f -maxdepth 1 -exec sh -c 'cat /dev/null > {}' \;

... which is fine, for the file with no spaces. Of course, the
truncate.sh script you wrote is fine for handling that (you actually
quoted the variable -- thousands do not). But just what is wrong with
that command above? Well, for each file that find finds, it has to spawn
a separate non-interactive shell to process it. That's slow.

xargs might improve things (I'll leave this as an exercise to the reader
to use 'time'):
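Presumably the command was something along these lines (a reconstruction,
demonstrated on a scratch directory rather than a real /home; note the
quoted "{}" inside the sh -c string):

```shell
# files with and without spaces in a scratch directory
mkdir -p /tmp/xdemo
printf spam > '/tmp/xdemo/with space'
printf spam > /tmp/xdemo/plain

# NUL-delimited names from find, quoted substitution in the shell command
find /tmp/xdemo -maxdepth 1 -type f -print0 |
    xargs -0 -I{} sh -c 'cat /dev/null > "{}"'
```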

Note the quoting. It's paramount that this is done, because even though
the '-print0' option to find splits file names ending '\0' (and xargs
re-interprets them again at the other end), we're still having to
quote the filename (this will still fail if the filename contains a
'"' character, though). Why? Because by the time it gets passed through to
the shell to handle it, we're back to our old tricks of: '"\"use\" more
quo\"t\"es'.

So is using find(1) any better than using a plain shell script that
globs a given directory for files to truncate? No. Because find blindly
exec()'s whatever we pass to it (and we're having to use shell
redirection) we must invoke the shell for it to work. The only advantage
to using find is that it would handle some strange files, nothing more
(in this particular application of it, anyway).
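The pipeline being discussed next is presumably of this shape: find
handing NUL-delimited names via xargs straight to a truncating script, so
there is no per-file shell and no redirection to quote. A self-contained
sketch (the script name and paths are invented):

```shell
mkdir -p /tmp/obdemo
printf spam > '/tmp/obdemo/with space'

# stand-in for the truncate script: no shell redirection per file,
# so no filename-quoting worries
cat > /tmp/obdemo-trunc <<'EOF'
#!/bin/sh
for f in "$@"; do : > "$f"; done
EOF
chmod +x /tmp/obdemo-trunc

find /tmp/obdemo -type f -print0 | xargs -0 /tmp/obdemo-trunc
```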

This obliterates the need to fork a subshell to perform any redirection
-- and as with any "find .. | xargs" combination, it'll be quite fast,
too. But the main reason for using it is that by avoiding any
shell-redirection-mangle-filename techniques, we don't have to worry
about quoting. The delimiter of '\0' via find and xargs should be enough
to protect it.

Also note that cat'ting /dev/null is nonsensical in this instance.

[1] Remember that there is nothing "illegal" about using such
characters. Any character is a valid one for filenames at the filesystem
level. What defines them as being a pain is the shell. Nothing more.

[Ben]
Not quite; '/' can't be used as a filename. Although "\n" can, which
(along with any high-bit characters) can create lots of pain for anyone
trying to work with them...

[Jason]
But ASCII NUL is an illegal character, right? So this will always work?

find -print0 | xargs -0 command

Jason Creighton

[Ben]
Right; you can't use a NUL or a '/'. Other than those two, anything is
fair game... well, not really.
Mostly, it's a REALLY good way to
screw yourself up; in general, it's not a good idea to use anything outside
of [a-zA-Z0-9_] as part of a filename.

But then, we're talking about us.
"What do you mean, I can't jump off
this cliff? It doesn't look all that high!"

Now let us assume that I only want to tar the files a,b,c and exclude
the ./foo{,2} stuff. What you really want is to preprocess your results
with find. You can exclude one directory from a list. Here's an example:

find . -path './foo' -prune -o -print

.. and note the syntax. The "." assumes that we're already in the same
directory that we want the search to start from. In this case the
"-path" option to find matches a pattern, treating "." and "/" as
literal characters to match against. The -prune option excludes it (it
assumes a -depth level, and doesn't descend into the path given). Then
"-o" says to match everything else, and -print the results [1].

Now the fun stuff. How do you get tar to use the results given to
produce a tar file? For ease of use, we'll modify our find command to
show the filenames with the full path name, rather than "./" (which
isn't at all helpful to us):

... but, there are two things wrong with this. One, is that it's
specifying "/tmp/tar" as a valid entry to our tar file. That's not what
we want -- we *don't* want that recursive nature to tar -- so already
that's put paid to the whole of the find command (more about that in a
minute).

The second problem is that each time that tar command runs, it's
replacing the tar file with the new file, rather than appending it.
Ouch! So if you were to look at that tar file now, all you would see is
"/tmp/tar/c" since that was the last file created in the tar file.

Tar supports the "-A" option -- to append to a tar file. But that
presupposes that the tar file is already in existence -- and the
assumption here is that it isn't. So we can't use it.

Also, using -exec on find is a terrible idea in this case, since it runs
a copy of the same command (tar in this case) for every file
encountered, and since the tar file is never created...

So, we'll use xargs. That builds up command-line input on a chain so
that when it is run, we'll see something like this:

tar -czvf ./foofile.tar /tmp/tar /tmp/tar/a /tmp/tar/b /tmp/tar/c

Which is exactly what we want. But we first have to ensure that we
further exclude that "/tmp/tar" entry. And there's an option to tar to
do that: "--no-recursion".

The other consideration to take into account is filenames. Even if
you're sure that the filenames are valid, etc., it is still good
practise to assume the worst. Modifying our initial find command, we can
tell it to split filenames based on '\0' (rather than what $IFS defines
it as). The "-print0" option to find defines this:

Which by itself is useless. But in this situation, we can tell xargs
to reinterpret that via "xargs -0", so that's not a problem. It's just
a means of protecting the filenames so that they're not mangled.

So if we piece my ramblings together the actual command you'll want to
use is:
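That final command is presumably built from the pieces above: the -prune,
the -print0 into xargs -0, and tar's --no-recursion. A reconstruction,
demonstrated here on a scratch /tmp/tar tree:

```shell
# build the example tree: files a, b, c plus foo/ and foo2/ to exclude
mkdir -p /tmp/tar/foo /tmp/tar/foo2
touch /tmp/tar/a /tmp/tar/b /tmp/tar/c /tmp/tar/foo/x /tmp/tar/foo2/y

# prune foo and foo2, pass everything else NUL-delimited to tar
find /tmp/tar -path '/tmp/tar/foo*' -prune -o -print0 |
    xargs -0 tar --no-recursion -czvf /tmp/foofile.tar

# member list should show tmp/tar/a, b and c, and nothing from foo*
tar -tzf /tmp/foofile.tar
```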

News Bytes

Contents:

Submitters, send your News Bytes items in
PLAIN TEXT
format. Other formats may be rejected without reading. You have been
warned! A one- or two-paragraph summary plus URL gets you a better
announcement than an entire press release. Submit items to
bytes@lists.linuxgazette.net

Legislation and More Legislation

Software Patents

The proposed European Union directive on the patentability
of computer-implemented inventions (software patents
directive) is still in more or less the same limbo it was in
when last discussed here in November. As mentioned in
November, Poland has expressed serious reservations about
the proposed measures and
has blocked the European Council of Ministers
from adopting the relatively pro-patent proposal
currently before it. This would have occurred, for
reasons of expediency, at a
meeting of the Council of Agriculture and Fisheries.

Tom Chance, writing at LWN,
has given
a clear and useful overview
of how these developments fit within the decision making
structure of the EU. He also presents a neat précis
of the realpolitik which will likely lead to Poland shelving
its objections in order to achieve agreement on matters
closer to its economic self-interest. Should this happen,
meaning the measure is adopted by the Council, it could
again be rejected by a Parliament keen to reinstate its
original amendments. However, under the relevant voting
rules this would require two thirds of all MEPs to vote
accordingly. This is unlikely to occur.

As we approach the time when decisions on the future of this
directive will be made, many individuals and organisations
are attempting to raise their political representatives'
awareness of these matters. Although it is valuable in this
process to achieve some understanding of the issues and
consequences of the proposed policy, it is perhaps even more
useful to impress on elected representatives that they will
be called to account in their own countries for their voting
and policy in Europe.

Linux Kernel

As of Christmas Eve 2004, the latest version of the stable
2.6.x series of
Linux Kernels is
2.6.10.
The new year brought an update to the older 2.4 series,
which has now been updated to version
2.4.29.

Distro News

Debian

The
Debian Project
has announced the release of the fourth update for the
current stable release of their GNU/Linux distribution.
Debian GNU/Linux 3.0 (woody) r4, released on January
1st 2005, comprises for the most part a collection of
security fixes accumulated (and addressed) over the past
months since the r3 release in October.

Knoppix

Screenshot tour
of
Games Knoppix, a Knoppix variant that comes loaded with
a selection of some of the best games available for
GNU/Linux, all on a live, bootable, CD.
There is a review of this distribution
at Linux.com.

LFS

The Linux From Scratch
2nd edition book is currently
on pre-order sale. This means you can order this book
for USD13.99, whereas in a couple of weeks when it starts
to ship it will be available at USD19.99.

Linux From Scratch provides a set of instructions allowing
you to build your GNU/Linux system entirely from source.
Even if you do not plan to use such a system, it is an
interesting exercise, and this book could provide useful
information and background to many non-LFS uses.

Ubuntu

Looking towards the next release of this distribution,
it has been announced
that the first milestone live-CD preview of the next Ubuntu
release (Hoary Hedgehog) has been let loose.

Xandros

NewsForge has
reviewed
Xandros Desktop OS 3 Deluxe Edition.
This Debian-based GNU/Linux distribution aims to provide a
comfortable and familiar experience to users more accustomed
to the use of GUI environments, and perhaps new to GNU/Linux.

IBM

Acrobat

Adobe has
released
version 7 of its
Acrobat software
for GNU/Linux.
The stand-alone Acrobat Reader has also
been updated to version 7.0, and this update too is
available for GNU/Linux as a no-cost download.

Originally hailing from Ireland, Michael is currently living in Baden,
Switzerland. There he works with ABB Corporate Research as a
Marie-Curie fellow, developing software for the simulation and design
of electrical power-systems equipment.

Before this, Michael worked as a lecturer in the Department of
Mechanical Engineering, University College Dublin; the same
institution that awarded him his PhD. The topic of this PhD research
was the use of Lamb waves in nondestructive testing. GNU/Linux has
been very useful in his past work, and Michael has a strong interest
in applying free software solutions to other problems in engineering.

Are Your Servers Secure???

In a word, no. No machine connected to the Internet is
100% secure. This doesn't mean that you are helpless, though. You
can take measures to deter attacks, but you cannot rule them out
completely. It is like a house: when the windows and doors are open,
the probability of a thief getting in is high; when they are closed
and locked, the probability of being robbed is lower, but still not
nil.

1 What is Information Security?

For our purposes, Information Security means the methods we use
to protect sensitive data from unauthorized users.

2 Why do we need Information Security?

The entire world
is rapidly becoming IT-enabled. Wherever you look, computer
technology has revolutionized the way things operate. Airports,
seaports, the telecommunications industry, and TV broadcasting
are all thriving as a result of the use of IT. "IT is
everywhere."

A lot of sensitive information passes through the Internet:
credit card data, mission-critical server passwords, and
important files. There is always a chance of someone viewing
and/or modifying the data while it is in transit. There are
countless horror stories of what happens when an outsider gets
someone's credit card or financial information: he or she can use
it in any way they like, and could even destroy you and your
business by taking or destroying your assets. As we all know,
"an ounce of prevention beats a pound of cure", so to avoid such
situations it is advisable to have a good security policy and a
sound security implementation.

3 Security Framework

The following framework illustrates the steps needed to build a
functioning security implementation:

This framework shows the basic steps in the life cycle of
securing a system. "Risk Analysis" deals with the risk associated
with the data in the server to be secured. "Business Requirements"
is the study which deals with the actual requirements for
conducting business. These two components cover the business
aspects of the security implementation.

The "Security Policy" covers 8 specific areas of the security
implementation, and is discussed in more detail in section 4
below. "Security Service, Mechanisms and Objects" is actually the
implementation part of security. "Security Management, Monitoring,
Detection and Response" is the operational face of security, where
we cover the specifics of how we find a security breach, and how we
react if a breach is found.

4 Security Policy

The Security Policy is a document which addresses the following
areas:

Authentication: This section deals with the methods used to
determine whether a user is genuine, which users can or cannot
access the system, the minimum allowed password length, how long
a user can be idle before being logged out, etc.

Authorization: This area deals with classifying user levels and
what each level is allowed to do on the system, which users can
become root, etc.

Data Protection: Data protection deals with the details like
what data should be protected and who can access which levels of
data on the system.

Internet Access: This area deals with which users have access
to the Internet and what they are allowed to do there.

Internet Services: This section deals with what services on the
server are accessible from the internet and which are not.

Security Audit: This area addresses how audits and reviews of
security-related areas and processes will be done.

Incident Handling: This area addresses the steps and measures
to be taken if there is a breach of security. This also covers the
steps to find out the actual culprit and the methods to prevent
future incidents.

Responsibilities: This part covers who will be contacted at any
given stage of an incident and the responsibilities of the
administrator(s) during and after the incident. This is a very
important area, since the operation of the incident handling
mechanism is dependent on it.
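
Parts of such a policy, the Authentication items in particular, map
directly onto system configuration. The sketch below shows hypothetical
settings; the file locations and values are assumptions to adapt to your
own policy, not recommendations:

```
# Sketch only: example values, not recommendations.
# /etc/login.defs (shadow suite) -- password aging and length
PASS_MAX_DAYS   90     # force a password change every 90 days
PASS_MIN_LEN    8      # minimum password length
PASS_WARN_AGE   7      # warn users a week before expiry

# /etc/profile -- log out idle shells; bash honors the TMOUT variable
TMOUT=900              # 15 minutes of idle time
readonly TMOUT
export TMOUT
```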

5 Types of Information Security

There are two types of security: (1) physical security / host
security and (2) network security. Each of these has three
parts:

Protection: Slow down or stop intrusions or damage

Detection: Alert someone if a breach (or attempted breach) of
security occurs, and quantify and qualify what sort of damage
occurred or would have occurred.

Recovery: Re-secure the system or data after the breach or
damage and where possible, undo whatever damage occurred

5.1 Host Security / Physical Security

Host security / physical security means securing the server from
unauthorized access. To that end we can set a BIOS password,
place the machine in a locked room where only authorized users
have access, apply OS security patches, and check the logs on a
regular basis for intrusions and attacks. Host security also
includes checking and correcting the permissions on all OS-related
files.
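
The file-permission check mentioned above can be scripted with standard
tools; here is a minimal sketch (the directories scanned are only
examples):

```shell
# Sketch: report world-writable files and SUID binaries in a few key trees.
# Run as root for complete results; review the output by hand.
find /etc /usr/bin /usr/sbin -xdev -type f -perm -0002 -print 2>/dev/null
find /usr/bin /usr/sbin -xdev -type f -perm -4000 -print 2>/dev/null
```

The -perm -0002 test matches anything the world can write to; -perm
-4000 matches set-UID programs, each of which should be individually
justified.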

5.2 Network security

Network security is one of the most important aspects of overall
security. As I mentioned earlier, no machine connected to the
internet is completely secure, so security administrators and server
owners need to be alert, and make sure that they are informed of all
new bugs and exploits that are discovered. Failure to keep up with
these may leave you at the mercy of some script kiddy.

5.3 Which operating system is the most secure?

Every OS has its own pros and cons. There are ways to make
Windows more secure, but the implementation is quite costly. Linux
is stable and reasonably secure, but many companies perceive it as
having little vendor support. My vote for the best OS for security
purposes goes to FreeBSD, another free Unix-like OS, but not many
people are aware of its existence.

6 Is a firewall the final solution to the Network Security problem?

No, a firewall is just one part of the security implementation.
Again, consider the house. All the windows and doors may be
closed, but if the lock on the front door is so poor that almost
any key-like object will open it, what is the use of the house
being all closed up? Similarly, a strong firewall policy will
restrict unauthorized access, but if the software running on the
box is outdated or full of bugs, crackers can use it to intrude
into the server and gain root access. A firewall is therefore
not the final solution; a planned security implementation is the
only real answer.

7 Security is a continuous process

Security is an ongoing process. Security
administrators can only work from the alerts and bugfixes
released up to the date of securing, so to incorporate the fixes
for the latest bugs, security work has to be done on a regular
basis.

Yes, a security implementation creates a small amount of overhead,
but it need not reduce overall performance drastically. A well-done
security implementation therefore includes an optimization phase,
in which the security administrator gives priority to both
performance and security: whenever we secure a piece of software,
we should do it in a way that preserves as much performance as
possible.

9 Security Audits - What Should be Checked

A security audit is a part of security implementation where we
try to find out the vulnerabilities of the system and suggest actions
to improve the security. In a normal audit, the points below should
be checked, and a report with the results of that audit should be
created.

Check for intrusions and rootkits, using chkrootkit or rkhunter
for this purpose.

Check for known bugs in the software installed on the server -
the kernel, openssl, openssh, etc.

Scan all network ports and find out which ports are open.
Report the ports that should not be open and what program is
listening on them.

Check whether /tmp is secured.

Check for hidden processes.

Check for bad disk blocks in all partitions. (This is just to
make sure that the system is reasonably healthy.)

Check for unsafe file permissions.

Check whether the kernel has a ptrace vulnerability.

Check the memory (Another system health check.)

Check if the server is an open e-mail relay.

Check if the partitions have enough free space.

Check the size of the log files; it is better if log sizes stay
in the megabyte range.
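
Several of these checks can be started from a short script. The
following is a minimal sketch; tool names such as ss and chkrootkit are
assumptions, so substitute whatever your distribution ships:

```shell
# Sketch of a few routine audit checks; run as root for complete results.
df -hP                                  # free space on every partition
mount | grep ' /tmp '                   # is /tmp a separate, restricted mount?
(ss -tln 2>/dev/null || netstat -tln)   # which TCP ports are listening
ls -sh /var/log/*.log 2>/dev/null       # log sizes at a glance
# chkrootkit                            # rootkit check, if installed
```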

10 How to know if you are being hacked?

To find out whether your box has been compromised, follow these
steps. These are the steps I use, and they will be handy in most
situations.

10.1 Check your box to see if performance has degraded or
if your machine is being overused.

For that, use the commands

vmstat

Displays information about memory, cpu and disk.

Ex: bash# vmstat 1 4 (where 1 is the
delay in seconds and 4 is the count)

mpstat

Displays statistics about CPU utilization. This will help you
see whether your CPU is overworked.

Ex: bash# mpstat 1 4 (where 1 is the
delay in seconds and 4 is the count)

iostat

This command displays statistics about the disk system.

Useful options:

-d - Gives the device utilization report.

-k - Display statistics in kilobytes per
second.

Ex: bash# iostat -dk 1 4 (where 1 is the
delay in seconds and 4 is the count)

sar

Displays overall system performance.

10.2 Check to see if your server has any hidden processes
running.

ps

Displays the status of all known processes.

lsof

Lists all open files. In Linux almost everything is treated as a
file, so you will be able to see nearly all of the activity on
your system with this command.
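
For example, a quick look for resource hogs and unexpected network
listeners might go like this (a sketch; lsof options vary slightly
between versions):

```shell
# Top CPU consumers -- anything unfamiliar near the top deserves a closer look.
ps aux --sort=-%cpu | head -6
# Processes holding network sockets; -n and -P skip DNS and port-name lookups.
lsof -i -n -P 2>/dev/null | head -10
```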

10.3 Use Intrusion Detection Tools

10.4 Check your machine's uptime.

If the uptime is less than it should be, this can mean that your
machine's resources are being used by someone. Linux doesn't crash
or reboot under normal conditions, because it is such a stable OS.
If your machine has been rebooted, try to find out the actual
reason behind it.

10.5 Determine what your unknown processes are and what they are
doing.

10.5.0.1 Use commands like the following to take apart unknown
programs

readelf

This command displays information about an ELF executable: its
headers, sections, and symbols.

ldd

This command will show the shared libraries used by an
executable.

strings

This command will display the printable strings in the binary.

strace

This command will display the system calls a program makes as
it runs.
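
Using /bin/ls as a stand-in for an unknown binary, a typical inspection
session might look like this sketch:

```shell
# ELF header: file type, target machine, entry point.
readelf -h /bin/ls
# Shared libraries the binary links against.
ldd /bin/ls
# Printable strings embedded in the binary -- watch for odd hostnames or paths.
strings /bin/ls | head -20
# Summarize the system calls the program makes while running.
strace -c ls /tmp
```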

11 Hardening Methodology

Read all security related sites and keep up to date. This is
one of the main things a security administrator or server owner
should do. Server owners should be made aware of security and its
importance. Security training is an important part of an overall
security package.

Create a good security policy. Conduct security audits on the
basis of this policy.

Keep your OS updated by applying all patches.

Install a custom kernel with all unwanted services removed and
patched with either grsecurity or openwall.

Disable all unwanted services, and harden the services you leave
running; change file and directory permissions so that security is
tightened.

Install and set up portsentry, and configure it to use iptables
to block IPs.

Install mod_security and mod_dosevasive to safeguard
Apache.

Delete files owned by no existing user or group (nouser and
nogroup).

Delete unwanted files and folders in htdocs, and disable
directory indexing.

Check for unwanted scripts in /root, /usr/local,
/var/spool/mbox.

Install BFD and FAF for additional security.

Disable open email relaying.

Submit a status report to management detailing all discovered
vulnerabilities and fixes.
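
The nouser/nogroup cleanup above can be done with find; a sketch
(always review the list before deleting anything):

```shell
# List files whose owning user or group no longer exists.
find / -xdev \( -nouser -o -nogroup \) -print 2>/dev/null
# Only after reviewing the list, re-own or remove them, e.g.:
# find / -xdev -nouser -exec chown root:root {} +
```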

12.5 Testing phase

Use tools like nessus, nikto, and nmap to do a penetration test
and see how well your server is secured. Also do a stress test.
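
A basic penetration-test pass with the tools named above might look like
the following sketch (localhost is a placeholder; scan only machines you
are authorized to test):

```shell
# TCP connect scan of the privileged ports on the target.
nmap -sT -p 1-1024 localhost
# Web-server vulnerability scan, if a web server is running.
nikto -h http://localhost/
```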

Security is of the utmost importance to a server; compromising
security is compromising the server itself. Hence, an understanding
of security is a prerequisite to server ownership and
administration.

My name is Blessen, and I prefer people to call me Bless. I got
interested in Linux when I joined the software firm Poornam Info Vision
Pvt Ltd, also known as Bobcares, which gave me my first exposure to Linux.

I have a B.Tech in Computer Science from the College of Engineering,
Chengannur. I graduated in 2001 and joined the company that year.
Through my work I became passionate about Linux security, and I look
forward to growing in that field.

My hobbies are browsing the net, learning new technologies, and helping
others. In my free time I also develop open source software; one such
project, called "Smart Mail", is a scaled-down version of formmail that
is more secure than the original.

Free as in Freedom: Part Two: Linux for the "Rest of Us"

I was fortunate to have found the perfect guide for my journey through
the politics of GNU/Linux past and present,
Ben Okopnik,
Editor-in-Chief of the Linux Gazette (LG). Yoga instructor and
practitioner, Unix instructor and practitioner, writer, editor and
Linux aficionado, Ben was open to new ideas, familiar with "old"
ones, and willing and able to point me in the various directions I
needed to go to "explain the GNU/Linux model" to mostly non-technical
"left" and "progressive" readers.

Okopnik wrote, "Linux is inextricably political - and deliberately so,
from its very inception. The OS itself is a tool, as sharp, bright, and
beautiful as it may be; creating a better world, in which human beings
cooperate rather than fight each other 'to achieve the same exact ends'
is, from my perspective, the goal."

It was Okopnik who urged me to publish this article under a GPL-like
license and pointed me to the website where this could be done in
minutes.

Okopnik wrote, "Please consider releasing this interview under the Open
Publication License (the OPL is available at http://www.opencontent.org/openpub/),
or something similar. It's not a condition of the interview, but I'd
strongly prefer it. This license respects your commercial rights and can be
'adjusted' to suit your exact purposes; it's the license under which all LG
articles are published."

[ This interview was originally slated for publication
in political venues rather than LG; however, I found the end result
interesting enough that I asked Adam to release it here as well. Besides,
I've never seen my name in print before, and wanted to seize a unique
opportunity. :) -- Ben ]

"If you release this interview under the OPL, you can
define whatever restrictions on distribution you choose; as an example,
Cory Doctorow, an excellent and highly-popular writer (see http://craphound.com/) recently released
several of his books under the OPL. He talks about his experience on the
site, and as a result of that experience has actually eased off on the
restrictions he originally imposed. Nor is he the first by far; take a look
at, e.g., MIT's OpenCourseWare site: http://ocw.mit.edu/index.html.
(Good resource to bookmark in any case.)"

"In order to license this article, I just have to declare it?" I asked.

"Yep, " wrote Okopnik. "Do a run-through of the 'wizard' at
http://creativecommons.org/license/
to make sure that you have exactly
the license you want - it only takes a few seconds - and it'll generate
the legal notice you need. Nice and neat."

I wrote, "I don't have to apply to anyone for approval?"

Okopnik wrote, "Nope; you're The Boss when it comes to licensing your
own stuff. Isn't that nice?"

Okopnik also gave me another perspective on the GNU revolution and its
major and minor aspects. For instance, the FSF's insistence on calling
"Linux" GNU/Linux, though valid, goes against people's tendency to
abbreviate (hence the many abbreviations and acronyms in the Unix/GNU/Linux
operating systems themselves); and according to software developer and
systems designer Paul Ford (author of the popular Ftrain website), the 'GNU/' prefix came too late
in the game. All the "Linux" books and CDs had gone to print, and the word
"Linux" came to mean both the kernel and the operating system, though Linus
Torvalds and his supporters developed only the kernel, and GNU the other
essentials of the OS.

"Looking through my 55 MB 'sent mail' archive (gads, but I do a lot of
writing! :), I find myself using the term 'GNU/Linux' exactly once,
back in 2000," wrote Okopnik.

Ford wrote, "Everyone I know calls it 'Linux.' Everyone appreciates
Richard
Stallman's extraordinary contributions. He's a genius, and has a
MacArthur genius grant to prove it. But the 'GNU/' prefix was added a
few years too late. I'll call it GNU/Linux in writing, sometimes,
though. Honestly, I don't care what people call it. That entire debate
seemed anathema to the open source ethos, too similar to someone
protecting their registered trademark."

Regardless of what it's called -- I'll call it GNU/Linux out of respect
for its GNU origins -- GNU/Linux is a political phenomenon, the
creation of user/developers for its own sake, or rather, for their own
sake. Rather than succumb to the Microsoft Monopoly, which places
Windows on virtually every PC sold in America, they created their own
free system and licensed it not for the benefit of an elite few, but
for anyone with the capability to alter the code, or learn how to alter
it (admittedly an elite group, but one based on merit and
intellectual, rather than corporate, "capital").

Okopnik describes the "typical" Linux user thus:

"I don't want to idealize Linuxers in my answers; none of us humans are
perfect saints, and the wild bunch who cleave to this OS and this
community are little different in that respect. However, just as
exaggeration in teaching is used to emphasize a point, isolating or
highlighting the common trends and traits in this case can provide an
interesting introduction. Keeping that in mind -

"The average Linux user, in my experience, is a product of a filtering
process - several of them, in fact. As in Darwin's description, the
selective pressure is not necessarily powerful, but is eventually
definitive; those who have, and propagate, the more effective traits are
better equipped to survive in a given environment.

"The first, and key, characteristic evident in Joe Linuxer is that he's
a maverick (Jane Linuxer is arguably even more so) - perhaps even somewhat
of a rebel; in any case, an individualist. The desire to not be one of the
herd, to not have to put up with chewing the same stale cud as everyone
else, often propels these people into experimentation with various facets
of their lives. Their computer-usage experience is just one of those
facets. Linux users tend strongly toward the non-usual - for all the good
and bad that this implies.

"Secondly, they tend to be capable -- not necessarily able to build a
spaceship out of a bicycle and cook a seven-course meal out of cupboard
scrapings, but willing to assume that they can do, build, repair,
"handle" things. (As a closely-coupled trait, they're often tinkerers,
always trying to unscrew the inscrutable. May the Great Ghu help the
Universe if one of us gets hold of the master key.) Larry Wall, the
creator of the Perl programming language, spoke of the three great
virtues of a programmer (Laziness, Impatience, and Hubris); impatience,
by Larry's definition, is the anger a programmer feels when the computer
is being lazy, which causes him to create long-term solutions rather
than short-term Band-Aids. Linuxers are often excellent examples of this
laudable kind of impatience - and Linux provides the opportunity to
exercise it effectively.

"As a result, Linuxers tend to feel a sense of ownership of
their system; they have not only gone out of the common path to install a
different OS but have tinkered with it, tweaked it, made it uniquely
"theirs" - and just as, e.g., long-distance sailors grant each other the
respect due to those who face the challenges of the ocean, Linuxers gather
in communities in which respect (specifically, respect for knowledge,
competence, and contribution to the community - but several other flavors
coexist as well) is the common thread as well as the coin of the realm."

KDE and GNOME: Linux for "The Rest of Us"

I first came to GNU/Linux in 1996, when it was already a world-wide
phenomenon, but still a 'techie/hacker' thing. I was on the command line
for six months before I installed the X Window System; nevertheless, I was
surprised at the number of alternatives a Unix-type system gave me,
especially, as a writer, for text formatting and manipulation: Emacs and
Vim, for instance. Though I didn't get into TeX or nroff, I used Applixware
to do the formatting one does in a WYSIWYG word-processor.

I wrote to Okopnik, "Because of various jobs I took as both a
free-lance and cubicle-bound ad-man and copywriter, I had to install
Windows merely to run 'Word' and 'Powerpoint', and gradually moved away from
Linux for about four years. When I came back to it with SuSE 9.0 in
December of 2003, I was astonished. KDE (KDE.org's free desktop environment), GNOME (GNU's
free desktop environment), Open Office (OpenOffice.org's free office suite), the
whole new GUI floored me."

Okopnik wrote, "This is not an uncommon reaction in this venue. The
rates of growth and development in Linux are simply phenomenal - and still
accelerating."

I wrote to Okopnik, "KDE and GNOME, especially KDE 3.1x, worked as
well as or better than the Win2000 I had installed four years ago, and
I've yet to experience a full crash -- that's par for the daily use of
Windows 2000. More significantly, I was turned onto the 'New Linux' by
someone who knew about as much about the GNU/Linux command-line as the
typical Windows user knows about Unix's retarded younger brother, DOS.

"Similarly, I turned someone else who was sick of Microsoft's shoddy
but expensive products to the mind-boggling array of free software programs
that run under GNU/Linux, though he had neither the time nor the
inclination to learn about the operating system. Like many users, all he
wanted and needed was a word-processor, a browser, a mail program, some
games, and as little trouble as possible. SuSE 9.0 provided him with all
of these things, and now, after a year on GNU/Linux, he knows slightly
more about the command line than he ever did about DOS. But he has no
desire to go back to Windows.

"Have you noticed more of an interest in Linux or an enhanced
readership for the Linux Gazette since Linux became both market and user
friendly? If so, are these new users less interested in the 'technical'
aspects than in having a stable GUI-based system to use for work and email
and net-surfing?"

Okopnik responded, "Actually, this is an issue that I brought up in an
involved discussion with the LG staff and the Answer Gangsters (The Answer Gang answers
Linux questions sent to us by our readers, and the discussions and the
answers become part of LG.) My viewpoint here is that it's actually a very
good thing - modulo the awareness that the command-line (CLI) exists. That
is, people are perfectly welcome to come to Linux and use only its GUI
capabilities as long as this serves their needs - but when the GUI proves
insufficient, the capabilities of the CLI are there, just underneath,
providing the perfect security blanket.

"In an article I wrote for Java Developers Journal, I related an example
of this. I had a client whose Web developer left them in the lurch with
several hundred HTML files without the '.html' extensions. This wouldn't
be too bad by itself - renaming a group of files isn't difficult - but
the thousands of HTML links within the files referred to those
extensionless names as well. With GUI-only tools, this is a
nearly-unsolvable disaster. From the CLI, it was a matter of a single
short line of code:

perl -i -wpe 's/(<a href="[^"]+)/$1.html/g' *

The readership of The Linux Gazette (LG) has certainly changed over
time. Where we used to get dozens of questions on fairly technical
topics in The Answer Gang, we now get only a few - and they tend to be
simpler, less technical. The email I get from our readers indicates that
there has indeed been a definite shift in the user base; the old Linuxer
who would bang on a problem for hours so that it could be reported (and
quickly fixed) is being... well, not replaced, but reduced,
percentage-wise, as the mainstay of the population. The new user is often
just that - a computer user who just wants that
email/web/document/spreadsheet processor and maybe a few games on the side.
There is, however, a cultural shift that occurs even in those users after a
while: you cannot live in a society based on a given moral premise and
ignore that premise, or even stop it from penetrating into your life (even
if you try to prevent it.) The original "hacker ethic" of Linux lives on,
strong as ever in those who use the full extent of this OS, and inherent
(and growing, however slowly) in those who use it even without that full
knowledge."

Paul Ford wrote, "I used to think there was too much emphasis in the
community on claiming the desktop, trying to compete with Windows, but the
latest GNOME is attractive and elegant, and works great, looks as good as
MacOS X, and doesn't feel like a thin skin over Unix at all. It's an
environment I could use every day. So I was wrong--the desktop was a good
aim, it just took a while to get things to a good point."

According to George Staikos, a KDE developer and spokesperson, "The KDE
project was formed by a small group of computer programmers and Linux and
UNIX users who were fed up with the lousy state of user interfaces
available for their operating systems. They wanted something that was
fast, powerful, featureful, and looked good. Notice that making money was
not one of the requirements. They set out to accomplish this task in the
most effective manner possible, which was to use the Qt toolkit (at the
time distributed free of charge for non-commercial use under a restrictive
license, but now distributed under the free GPL license for non-commercial
use). Because there were very many people around the world with similar
desires and compatible skills, because there was no risk of someone
hijacking the project and turning it into a business, and because there was
actually proof-of-concept working code already being produced, the project
quickly grew. After a few years, the core of the system was very solid and
new programmers could easily find a niche to work in, implementing that
feature they always wanted or fixing that bug that has bothered them for so
long. These individuals are what makes KDE work. They keep the project
evolving, bringing new ideas and new manpower. There is relatively no risk
involved in contributing, and the rewards are plenty. Developers
(including coders, translators, documenters, artists, and more) can
contribute whatever they have time for.

"Of course there are other requirements to keeping such a project
going. We need bandwidth, servers, funds for promotion and travel, and
more. This tends to come from corporate users who are kind enough to
contribute back to ensure the progress of the project, and from home
users who perhaps can't contribute in other ways. Some people also
contribute system administration time. This is all very vital to the
success of KDE.

"It's important to note, however, that KDE is indeed paid for, as much
as any other software is. KDE is paid for by individuals, and paid for
in a distributed manner. Our time, as KDE developers, is worth as much
money as any other software developer (More, if you ask me. KDE
developers tend to be one smart bunch!). KDE is indeed a very costly
project, and is paid for by society itself, as much as a result of the
lack of momentum of the commercial sector to create a useful solution
to existing problems.

"What is KDE 'worth'? The freely available SLOCCount tool gives me an
estimate of $22.6 million just for the KDE libraries alone, a small
fraction of what is KDE. Most of the code in the KDE libraries was
developed from 1999 through 2004, almost 6 years in total. Not
including the Qt toolkit, KDE must be worth well over $250 million.
This also doesn't include artwork, documentation, and language
translations, which KDE is well known for."

I wrote, "Was KDE originally supposed to be free-ware? I remember when
I first saw the specs in the late nineties, I thought it seemed too good to
believe. Yet it's here and working and continues to grow in functionality
and popularity. In fact, the SuSE 9.x package uses KDE as its Graphic
base. Can this go on without some serious cash flow?"

Staikos wrote, "Yes, KDE was definitely supposed to be free, both in
cost, and in terms of speech. KDE was to be available free of cost to all,
and available for modification and learning as desired as long as it was
not abused for commercial gain by others who weren't interested in
contributing back. That is to say, the licensing prohibits using KDE code
in a non-free project, though you may use KDE for that project. For
example, you cannot copy source code from KDE and embed it in your
commercial, closed-source application. The KDE license actually requires
you to release the source for your application as well. However, you may
make calls to the KDE libraries from your application. In short, free to
use, yes, free to steal from, no.

"Indeed KDE is growing rapidly in popularity. We do need to find new
ways to support the project in terms of getting the hardware we need, the
administration help we need, the legal work done, and paying for
conferences and developer meetings. It's an ongoing struggle.

"Making money is not a bad thing, and I think making money off of KDE
is a good thing for KDE. Stealing code from KDE is however not a good
thing, and that's what the GPL protects us from. Most of KDE is licensed
under the GPL, while the libraries tend to use the LGPL in order to permit
commercial KDE applications to be developed. Some portions of KDE are
under more liberal licenses such as the Berkeley Software Distribution (BSD)
license because the author did not have concerns with others using that
code in non-free software. KDE as a project maintains that our software
must be compatible with the GPL, but it need not be specifically licensed
under the GPL," wrote Staikos.

I wrote, "I know several people who, finally fed up with Windows, and
not wanting to deal with getting a new Mac, switched over to GNU/Linux even
though they know only the rudiments of command-line arguments and don't
plan on learning much more. Like many users, all they use their computers
for is word-processing, presentations, a web browser, and email. Since the
advent of KDE and GNOME, people can use GNU/Linux the way they use Mac or
Windows without spending the time and effort necessary to learn a Unix-like
OS. This would have been unthinkable a few years back, even with relatively
user-friendly Window Managers like Windowmaker and IceWM. One of the
people I'm interviewing for this article, the editor of the Linux Gazette,
confirmed this 'trend.' More and more of his readers are concerned with
'typical' user issues rather than the more technical aspects of Linux. Do
you think that with the advent of GUI-based Desktop Environments such as
KDE that GNU/Linux will appeal to a wider audience who want a choice other
than Mac or Windows?"

Staikos wrote, "Most definitely. This was the original goal of KDE,
and still remains one. However KDE does not have the resources to provide a
real end-user system. We only 'ship' source code, and it is up to the
distributors to set their preferred configuration defaults, customize the
menus, and determine which applications to add or remove. These things are
absolutely vital to creating a valuable end-user experience, and in fact
are different for each target market. I think Linux with KDE is already a
perfectly suitable solution for the average desktop system. The obstacles
in place are more monopolistic in nature. Users are accustomed to the way
MS Windows works. They learned this over many years, and expect that all
computers work this way, even if it's inefficient or poorly designed.
They're impatient to learn a new system, and balk at the idea of using
Linux. Furthermore, most commercial applications are designed only for MS
Windows. It's hard to justify using Linux when your favorite video game or
other software only runs on Windows. Hopefully we will change this over
time, as KDE becomes more popular and software developers can justify
porting to Linux.

"I think KDE is one of the driving factors that is pushing Linux into
the mainstream, and I do hope that it will one day be a serious competitor
in terms of user base to MS Windows. The world needs an alternative, and
Linux is here to provide it," wrote Staikos.

While GNU/Linux is "inextricably political," both Ford and Okopnik
admit that most users are less into the politics than the practical
applications of the system itself.

Ford wrote, "Linux has a pretty amazing advantage in that you get
something for free, or a very low cost, out of the movement. It's hard for
any movement based on ideas to compete with that -- it's not like you can
say, 'if you buy into Noam Chomsky's theory of foreign policy, we'll give
you a free Chomsky hat.' Whereas Linux can say, 'if you're willing to
believe that Open Sourced software works, we'll give you a free operating
system with all the trimmings and cranberry sauce.' So the two 'movements'
don't really compare."

While for Stallman and the FSF the difference between the "free"
and "open source" movements is crucial, for Ford they are pretty much the
same.

"I think they're usually interchangeable," Ford wrote. "And in truth,
I don't really care that much. If a license is similar to the GPL, I'll go
with it. For things like OCR or image editing, I don't mind buying
commercial tools. They tend to be in much better shape than their
open-sourced counterparts. They're very task-based -- I'm scanning a page,
or creating an image. If good replacement software comes along, I'll use
that. But in the meantime, the work is done--I've got my output....But for
any programming project, where people need to work together, and thousands
of hours go into developing code, I'm terrified of commercial software.
Lock-in is terrifying.... Basically, when I'm looking for a tool, I go
"shopping" for the open-sourced version first. Open-sourced software lets
me try out a huge number of solutions to find the best one--if I don't
like one package, I can see if there's a better one," wrote Ford.

Adam Engel's first book of poetry, Oil and Water, was
published by Maximum Capacity Press in 2001. His novel,
Topiary, will be published by Dandelion Books in the
Spring of 2005.

He has worked as a journalist, screenwriter, executive speechwriter,
systems administrator, and editorial consultant, and has taught writing at
New York University, Touro College and the Gotham Writer's Workshop in New
York City.

Compiling the Linux Kernel

This article will serve as a primer to people who are new to the
world of Linux hacking, and are attempting to compile the Linux kernel
from source. The various steps from downloading the kernel source to
booting from the new kernel image are explained. Also given are tips
on cleaning up the source code, doing verbose compilation etc.

1. Downloading the kernel source code

In order to compile a new kernel, we have to download the source code of
the Linux kernel from www.kernel.org, which hosts every version of the
Linux kernel source code. Let's take an example: suppose we want to compile
version 2.6.9 of the Linux kernel. We have to download the 2.6.9 source
code from kernel.org.

It's better to download the bzipped version, as it is compressed more
tightly than its gzipped counterpart and hence takes less time to
download. A wget from the command line will look like:

wget http://www.kernel.org/pub/linux/kernel/v2.6/linux-2.6.9.tar.bz2

Once we have downloaded the required kernel version's source, we need to
bunzip and untar it. We can do the following:

tar xvjf linux-2.6.9.tar.bz2

The 'x' option denotes extraction, 'v' asks for verbose output, 'j'
specifies that the file must be bunzipped before untarring, and 'f'
gives the name of the input file.

The file will untar into the directory linux-2.6.9. Once it's
untarred 'cd' to linux-2.6.9.

2. Configuring the kernel

We have to configure the kernel before we start compiling it.
During the configuration phase, we will select the components which
we want to be part of the kernel. For example: suppose we are using
the ext3 filesystem. Then we need to select the ext3 filesystem support
while configuring the kernel. Typically we have to run a

make menuconfig

This will bring up the ncurses interface for configuring the kernel.
There are other options such as 'make xconfig' and 'make config'.
The former will bring up the configuration menu in graphical mode and
the latter in text mode.

Once we select the different components we want for our kernel, we
can exit the configuration interface. We should select the option
to save the configuration from the configuration menu, before exiting.

After we have configured the kernel as mentioned above, we can find
a file named '.config' in the top level directory of the source.
This file is the configuration file. It contains various options and
their states (whether they are selected or not). For example, if we choose
to have the PCI support in our kernel we can find an entry of the form:

CONFIG_PCI=y

in the .config file. Similarly, options which are not selected will
appear as not set. Suppose we have not selected XFS filesystem support
in our kernel; then we will find the following in the .config:

# CONFIG_XFS_FS is not set

A great feature of 2.6 kernels is that if we are running
make menuconfig (or xconfig or config) for the first time, the
configuration menu we are presented with is based on our current kernel's
configuration. In my case, I have a Fedora Core 1 system whose running
kernel is '2.4.22-1.2115.nptl'; hence, when I run 'make menuconfig' for
the first time on the source, the configuration menu presented will
contain the options as given in '/boot/config-2.4.22-1.2115.nptl'.
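
On 2.6 kernels this seeding can also be done explicitly. A minimal sketch,
assuming the running kernel's configuration file exists under /boot (as it
does on Fedora):

```shell
# Copy the running kernel's configuration into the source tree,
# then let 'make oldconfig' ask only about options that are new
# in this source version.
cp /boot/config-$(uname -r) .config
make oldconfig
```

This way, the first interactive configuration run starts from a known-good
baseline instead of the defaults.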

3. Building Dependencies

This step is required only for kernels prior to the 2.6 series (here I am
referring only to the stable-series kernels). For example, if we are using
a 2.4 kernel, we have to build the dependencies explicitly by running:

make dep

This will build the dependencies. But for a 2.6 kernel we can skip
this step. The dependencies are automatically created when making the
final image with a 2.6 kernel.

4. Creating the final image

We can build various types of kernel binary images: a plain kernel image,
or a compressed version of it. The usual choice is the compressed
'bzImage', which we create by running:

make bzImage

In 2.6 kernels this step will also resolve the dependencies and proceed
to create a bzImage image.

After the compilation is over, we can find the kernel image at
arch/i386/boot/bzImage in the case of an image for a 386-family (x86)
processor (Pentium, AMD, etc.).

5. Compiling and Installing the modules

In the configuring section if we have selected some components to be
built as kernel modules then we need to compile those modules.
To compile the modules we should run the command:

make modules

This command will compile the components selected for module compilation
into modules. In a 2.4 kernel the result will be .o files for the
corresponding components, but in a 2.6 kernel the output will be .ko
modules. For example, if we chose to build the network driver for Realtek
cards as a module, then after running 'make modules' we can find in
'drivers/net/' a file named 8139too.o in the case of a 2.4 kernel, and
8139too.ko in the case of a 2.6 kernel.

After we have compiled the modules, it's time to install them.
To install the modules, run:

make modules_install

as root. This will install the modules and other necessary files into the
/lib/modules/2.6.9 directory.

6. Booting from the new kernel

Once we are done with the installation of modules, we
can go for an automatic installation procedure for the
kernel binary. We just have to run

make install

This will copy the kernel image into the /boot area, update the
configuration file of the bootloader (lilo.conf or grub.conf), and then
take the necessary actions to make the new kernel bootable.

After this we need to reboot the machine. The next time the machine boots,
the boot menu will present us with the option to boot from the new kernel
we built. Choose that option and voila: we boot into a kernel we built
all by ourselves!

7. Manual installation of the kernel

In case 'make install' does not work, or if we cannot perform an
automatic installation for some other reason, we can install the kernel
manually. For example, if we are using the GRUB boot loader, we have to
copy the bzImage into the boot partition and then edit '/etc/grub.conf'
to reflect the presence of the new image. If we are using the LILO boot
loader, we have to copy the bzImage to the boot location, modify
lilo.conf, and then run the 'lilo' command so that the next time we boot,
our new image appears as a choice to boot from. The following are the
steps we should perform as the root user if we are using LILO:

cp -a arch/i386/boot/bzImage /boot/bzImage-2.6.9

After this we add the following entry to /etc/lilo.conf

image=/boot/bzImage-2.6.9
label=2.6.9-kernel
root=your_root_disk

We should run lilo after this

lilo -v

We reboot the machine after this. When prompted at the lilo prompt,
enter '2.6.9-kernel' as the boot option, and we will boot into the new
custom-built kernel.

8. Verbose compilation

We find that the compilation of the kernel is very quiet: only a short
summary of what is being compiled is shown on the screen while the
compilation proceeds. If we want to see the full commands being executed,
we can ask for a verbose build by passing V=1 to make:

make V=1 bzImage

The same flag works with the other build targets, e.g. 'make V=1 modules'.

9. Cleaning the kernel source

After we have compiled the source once, if we want to clean out the
object files and other temporary files, we have to run the following:

make clean

This will remove most generated files but will keep the configuration
file.

If we need an absolute cleaning, i.e. if we want to return the source to
the state in which it was before we started the compilation, then do a

make mrproper

This command will delete all generated files, including the configuration
file as well as various backup files. In effect, it unwinds all the
changes we made to the source; after this step, the source will be as
good as it was just after the download and untar.

10. Conclusion

We have seen how to obtain the Linux kernel source, how to configure it,
how to build the kernel image and modules, how to boot from the newly
compiled kernel, and how to do a verbose compilation. We have also seen
how to clean up the temporary files and configuration files created
during the compilation. The next step for a budding kernel hacker would
be to modify the kernel source and experiment with it.

Krishnakumar loves to hack the Linux kernel. He works
for Hewlett-Packard and is a BTech from Govt. Engg. College
Thrissur.

Introduction to Shell Scripting - The Basics

Here's a hint. When you think your code to exec a shell function is
just not working, never, repeat NEVER send it "/etc/reboot" just to see
what happens.
-- Elliott Evans

Introduction

Shell scripting is a fascinating combination of art and science that
gives you access to the incredible flexibility and power of Linux with very
simple tools. Back in the early days of PCs, I was considered quite an
expert with DOS's "batch files", something I now realize was a weak and
gutless imitation of Unix's shell scripts. I'm not usually much given to
Microsoft-bashing - I believe that they have done some good things in their
time, although their attempts to create an operating system have been
pretty sad - but their BFL ("Batch File Language") was a joke by
comparison. It wasn't even particularly funny.

Since scripting is an inextricable part of understanding shell usage in
general, quite a bit of the material in here will deal with shell quirks,
methods, and specifics. Be patient; it's all a part of the knowledge that
is necessary for writing good scripts.

Philosophy of Scripting

Linux - Unix in general - is not a warm and fuzzy,
non-knowledgeable-user oriented system. Rather than specifying exact
motions and operations that you must perform (and thus limiting you
only to the operations described), it provides you with a myriad
of small tools which can be connected in a literally infinite number of
combinations to achieve almost any result (I find Perl's motto of
"TMTOWTDI" - There's More Than One Way To Do It - highly apropos for all of
Unix). That sort of power and flexibility, of course, carries a price -
increased complexity and a requirement for higher competence in the user.
Just as there is an enormous difference between operating, say, a bicycle
versus a super-sonic jet fighter, so is there an enormous difference
between blindly following the rigid dictates of a standardized GUI and
creating your own program, or shell script, that performs exactly the
functions you need in exactly the way you need them.

Shell scripting is programming - but it is programming made easy, with
little, if any, formal structure. It is an interpreted language, with its
own syntax - but it is only the syntax that you use when invoking programs
from your command line; something I refer to as "recyclable knowledge".
This, in fact, is what makes shell scripts so useful: in the process of
writing them, you continually learn more about the specifics of your shell
and the operation of your system - and this is knowledge that truly pays
for itself in the long run as well as the short.

Requirements

Since I have a strong preference for Bash, and it happens to be the
default shell in Linux, that's what these scripts are written for (although
I've tried to keep Bash-isms down to a minimum - most if not all of these
scripts should run under plain old "sh".) Even if you use something else,
that's still fine: as long as you have Bash installed, these scripts will
execute correctly. As you will see, scripts invoke the shell that they
need; it's part of what a well-written script does.

I'm going to assume that you're going to do all these exercises in your
home directory - you don't want these files scattered all over the place
where you can't find them later. I'm also going to assume that you know
enough to hit the "Enter" key after each line that you type in, and that,
before selecting a name for your shell script, you will check that you do
not have an executable with that same name in your path (type "which
bkup" to check for an executable called "bkup"). You also shouldn't
call your script "test"; that's a Unix FAQ ("why doesn't my shell
script/program do anything?") There's an executable in /usr/bin called
"test" that does nothing - nothing obvious, that is - when invoked...

It goes without saying that you have to know the basics of file operations
- copying, moving, etc. - as well as being familiar with the basic assumptions
of the file system, i.e., "." is the current directory, ".." is the parent
(the one above the current), "~" is your home directory, etc. You didn't
know that? You do now!

Whatever editor you use, whether 'vi', 'emacs', 'mcedit' (the DOS-like
editor in Midnight Commander), or
any other text editor is fine; just don't save this work in some
word-processing format - it must be plain text. If you're not sure, or keep
getting "line noise" when you try to run your script, you can check the raw
contents of the file you've created with "cat script_name" to be sure.

In order to avoid constant repetition of material, I'm going to number
the lines as we go through and discuss different parts of a script file.
The line numbers will not, of course, be there in the actual script.

Building a Script

Let's go over the basics of creating a script. Those of you who find
this obvious and simplistic are invited to follow along anyway; as we
progress, the material will become more complex - and a "refresher" never
hurts. The projected audience for this article is a Linux newbie, someone
who has never created a shell script before - but wishes to become a Script
Guru in 834,657 easy steps. :)

In its simplest form, a shell script is nothing more than a shortcut
- a list of commands that you would normally type in, one after another,
to be executed at your shell prompt - plus a bit of "magic" to notify the
shell that it is indeed a script.

The "magic" consists of two simple things:

A notation at the beginning of the script that specifies the program
that is used to execute it, and

A change in the permissions of the file containing the script in order
to make it executable.

As a practical example, let's create a script that will "back up" a
specified file to a selected directory; we'll go through the steps and the
logic that makes it all happen.

First, let's create the script. Start your editor with the filename you
want to create:

mcedit bkup

The first line in all of the script files we create will be this one (again,
remember to ignore the number and the colon at the start of the line):

1: #!/bin/bash

This line is referred to as the 'shebang'. The interesting thing about it
is that the pound character is actually a comment marker - everything
following a '#' on a line is supposed to be ignored by the shell - but the
'#!' construct is unique in that respect, and is interpreted as a prefix to
the name of the executable that will actually process the lines which
follow it.

The shebang must:

Be on the first line of the script, and

There cannot be any whitespace before the '#!'.

There's a subtle but important point to all of this, by the way: when a
script runs, it actually starts an additional bash process that runs under
the current one; that process executes the script and exits, dropping you
back in the original shell that spawned it. This is why a script that, for
example, changes directories as it executes will not leave you in that new
directory when it exits: the original shell has not been told to change
directories, and you're right where you were when you started - even though
the change is effective while the script runs.
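
You can see this for yourself with a throwaway script; the name 'cdtest'
here is hypothetical:

```shell
#!/bin/bash
# cdtest - changes directory inside the script; the calling shell's
# working directory is untouched once the script exits.
cd /tmp
pwd
```

Run "pwd; ./cdtest; pwd" and note that the two outer 'pwd' calls print the
same directory, even though the script itself prints /tmp.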

As I've mentioned, the '#' character is a comment marker. It's a good idea,
since you'll probably create a number of shell scripts in the future, to
insert some comments in each one to indicate what it does - or at some
point, you'll be scratching your head and trying to remember why you wrote
it. In later columns, we'll explore ways to make that reminder a bit more
automatic... but let's go on.

4: cp -i $1 ~/Backup

The "-i" syntax of the 'cp' command makes it interactive; that is,
if we run "bkup file.txt" and a file called "file.txt" already exists in
the ~/Backup directory, 'cp' will ask you if you want to overwrite
it - and will abort the operation if you hit anything but the 'y' key.

The "$1" is a "positional parameter" - it denotes the first thing that
you type after the script name. In fact, there's an entire list of
these variables:

$0 - The name of the script being executed - in this case, "bkup".
$1 - The first parameter - in this case, "file.txt"; any parameter may
be referred to by $<number> in this manner.
$@ - The entire list of parameters - "$1 $2 $3..."
$# - The number of parameters.

There are several other ways to address and manipulate positional parameters
(see the Bash man page) - but these will do us for now.
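
To see these variables in action, here is a throwaway script (the name
'params' is hypothetical) that simply echoes each of them:

```shell
#!/bin/bash
# params - print the positional parameters passed to this script.
echo "Script name:     $0"
echo "First parameter: $1"
echo "All parameters:  $@"
echo "Parameter count: $#"
```

Running "./params file.txt /tmp" reports "file.txt" as the first parameter
and "2" as the count.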

Making it Smarter

So far, our script doesn't do very much; hardly worth bothering, right?
All right; let's make it a bit more useful. What if you wanted to both keep
the file in the ~/Backup directory and save the new one - perhaps by
adding an extension to show the "version"? Let's try that; we'll just add a
line, and modify the last line as follows:

4: a=$(date +'%Y%m%d%H%M%S')
5: cp -i $1 ~/Backup/$1.$a

Here, we are beginning to see a little of the real power of shell scripts:
the ability to use the results of other Linux tools, called "command
substitution". The effect of the $(command) construct is to execute the
command inside the parentheses and replace the entire "$(command)" string
with the result. In this case, we have asked 'date' to print the current
date and time, down to the seconds, and pass the result to a variable
called 'a'; then we appended that variable to the filename to be saved in
~/Backup. Note that when we assign a value to a variable, we use its name
( a=xxx ), but when we want to use that value, we must prepend a '$' to that
name ($a). The names of variables may be almost anything except the
reserved words in the shell, i.e.

case do done elif else esac fi for function if in select then until while time

and may not contain unquoted metacharacters or reserved characters, i.e.

! { } | & * ; ( ) < > space tab

It also should not unintentionally be a standard system variable, such as

PATH PS1 PWD RANDOM SECONDS (see "man bash" for many others)

The effect of the last two lines of this script is to create a unique
filename - something like file.txt.20000117221714
- that should not conflict with anything else in ~/Backup. Note that I've
left in the "-i" switch as a "sanity" check: if, for some truly strange
reason, two file names do conflict, "cp" will give you a last-ditch chance
to abort. Otherwise, it won't make any difference - like dead yeast in
beer, it causes no harm even if it does nothing useful.

By the way, the older version of the $(command) construct - the
`command` (note that "back-ticks" are being used rather than single quotes)
- is more or less deprecated. $()s are easily nested - $(cat
$($2$(basename file1 txt))), for example; something that cannot be
done with back-ticks, since the second back-tick would "close" the first
one and the command would fail, or do something unexpected. You can still
use them, though - in single, non-nested substitutions (the most common
kind), or as the innermost or outermost pair of the nested set - but if you
use the new method exclusively, you'll always avoid that error.
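
Here is a small illustration of the nesting; 'basename' works purely on
the path string, so the file need not actually exist:

```shell
# Two levels of $() nesting: the inner substitution runs first,
# stripping the directory and the .txt suffix, and the outer 'echo'
# sees its result. With back-ticks, the second ` would have
# terminated the first substitution instead.
echo "$(echo "stem is $(basename /tmp/file1.txt .txt)")"
```

This prints "stem is file1".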

So, let's see what we have so far, with whitespace added for readability
and the line numbers removed (hey, an actual script!):
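
Assembled from the numbered lines above (the comment line is an
assumption, added per the earlier advice about commenting your scripts):

```shell
#!/bin/bash
# "bkup" - copies the specified file into ~/Backup, tagging it
# with a timestamp so that successive versions never collide.

a=$(date +'%Y%m%d%H%M%S')
cp -i $1 ~/Backup/$1.$a
```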

Yes, it's only a two-line script - but one that's starting to become useful.
The last thing we need to do to make it into an executable program -
although we can execute it already with "bash bkup" - is to
change its mode to executable:

chmod +x bkup

Oh yes, there is one last thing; another "Unix FAQ". Should you try to
execute your newly-created script by typing bkup at the
prompt, you'll get this familiar reproof:

bash: bkup: command not found

-- "HEY! Didn't we just sweat, and struggle, and labor... What happened?"

Unlike DOS, the execution of commands and scripts in the current directory
is disabled by default - as a security feature. Imagine what would happen
if someone created a script called "ls", containing "rm -rf *" ("erase
everything") in your home directory and you typed "ls"! If the current
directory (".") came before "/bin" in your PATH variable, you'd be in a
sorry state indeed...

Due to this, and a number of similar "exploits" that can be pulled off,
you have to specify the path to all executables that you wish to run there
- a wise restriction. You can also move your script into a directory that
is in your path, once you're done tinkering with it; "/usr/local/bin" is a
good candidate for this (Hint: type "echo $PATH" to see which directories
are listed).

Meanwhile, in order to execute it, simply type

./bkup file.txt

- the "./" just says that the file to be run is in the current directory.
Use "~/", instead, if you're calling it from anywhere else; the point here
is that you have to give a complete path to the executable, since it is not
in any of the directories listed in your PATH variable.

This assumes, of course, that you have a file in your current directory
called "file.txt", and that you have created a subdirectory
called "Backup" in your home directory. Otherwise, you'll get an error.
We'll continue playing with this script in the next issue.

Review

In this article, we've looked at some of the basics involved in creating
a shell script, as well as some specifics:

File creation

Permissions

Spawned subshells

Execution in a non-PATHed directory

The shebang

Comments

Positional parameters

Command substitution

Variables

Wrap-up

Well, that's a good bit of information for a start. Play with it,
experiment; shell scripting is a large part of the fun and power of Linux.
Next month, we'll talk about error checking - the things your script should
do if the person using it makes an error in syntax, for example
- as well as getting into loops and conditional execution, and maybe
dealing with a few of the "power tools" that are commonly used in shell
scripts.

Please feel free to send me suggestions for any corrections or
improvements, as well as your own favorite shell-scripting tips or any
really neat scripting tricks you've discovered; just like anyone whose ego
hasn't swamped their good sense, I consider myself a student, always ready
to learn something new. If I use any of your material, you will be
credited.

Until then -

Happy Linuxing!

REFERENCES

"man" pages for 'bash', 'cp', 'chmod'

I read the Bash man page each day like a Jehovah's Witness reads the
Bible. No wait, the Bash man page IS the bible. Excuse me...
-- More on confusing aliases, taken from comp.os.linux.misc

Ben is the Editor-in-Chief for Linux Gazette and a member of The Answer Gang.

Ben was born in Moscow, Russia in 1962. He became interested in electricity
at the tender age of six, promptly demonstrated it by sticking a fork into
a socket and starting a fire, and has been falling down technological
mineshafts ever since. He has been working with computers since the Elder
Days, when they had to be built by soldering parts onto printed circuit
boards and programs had to fit into 4k of memory. He would gladly pay good
money to any psychologist who can cure him of the recurrent nightmares.

His subsequent experiences include creating software in nearly a dozen
languages, network and database maintenance during the approach of a
hurricane, and writing articles for publications ranging from sailing
magazines to technological journals. After a seven-year Atlantic/Caribbean
cruise under sail and passages up and down the East coast of the US, he is
currently anchored in St. Augustine, Florida. He works as a technical
instructor for Sun Microsystems and a private Open Source consultant/Web
developer. His current set of hobbies includes flying, yoga, martial arts,
motorcycles, writing, and Roman history; his Palm Pilot is crammed full of
alarms, many of which contain exclamation points.

He has been working with Linux since 1997, and credits it with his complete
loss of interest in waging nuclear warfare on parts of the Pacific Northwest.

Songs in the Key of Tux: Recording with Audacity

I haven't been very good at keeping up this series, have I?
Apologies to those of you who have been waiting for more in this
series, and thank you to those who wrote asking for more.

So, what gives? Didn't I say in my
last article that I was "chomping at the bit to start
recording"? (Thanks to Michael Cox for pointing out that 'It's
"champing," actually. As in "the horse was champing at the bit."
Horses champ; they don't chomp.') Well... I have to admit that I
got so bogged down in the whys and wherefores that I never got
around to even trying to record. It was particularly disappointing
for me to find myself unable to write a single word on the subject
in time for December's issue, as Christmas Day marked the 10th
anniversary of my learning to play guitar. Oh well.

Getting Started

OK, I'll come clean: most of this article will be about getting
ready to record, rather than actually recording. As with my other
articles, there will be a heavy emphasis on recording guitar,
because I don't play any other instruments.

Every musician should record themselves playing, regularly. It
doesn't matter if you have no intentions of allowing anyone else to
hear the recordings, the act of recording in itself is helpful.
Musicians and audience alike get caught up in the performance as
well as the music, and many errors end up going unnoticed. A sound
recording allows you to hear the shortcomings in your playing, and
to hear what the audience hears: before I started this article, for
example, I was unaware that when fingerpicking, the note I picked
with my thumb was drowning out the rest of the sound.

The first thing to do in Audacity is to check your input. With
some software out there, the only thing you can do is press
'record', hit the strings (or whatever is appropriate for your
instrument), press 'play' and hope for the best. Audacity
conveniently provides a way to keep your eye on your input levels
at all times: there are two volume indicators, and we want the one
that's furthest to the right (it conveniently has a microphone
icon). Click on it, and you should see something like this:

That's the default for 'no input'. Hit the strings: if the
levels jump, you're ready to record.

Tuning up

One of the best habits a guitarist can get into when recording
is to tune the instrument before recording: not before a recording
session, but before each press of the 'record' button. Even if you
don't have perfect pitch, you will start to notice the difference
in pitch after a few overdubs.

Enter gtkguitune! (Or similar). Digital tuners are a godsend for
guitarists. Jack Endino (producer of Nirvana, Soundgarden, etc.)
has quite a long article
on the subject of tuning, but in short: tune the guitar in the way
you expect to play it. I have met guitarists who have had
'opinions' about digital tuners, preferring to tune by holding the
same note on the string below, or by using harmonics, but neither
of these methods is reliable, and you should do these people a
favor by beating them out of this opinion :).

See how the 'A' is tuned slightly flat in the picture? That's
intentional.

Are we ready yet?

So... your guitar is in tune, Audacity is hearing the guitar,
you're ready to record... and more than likely, it sounds bad.

First of all, PC sound equipment is terrible for recording. I
had to set my input levels to 0.4 to avoid an unacceptable level of
gain on the input. If at all possible, use an amplifier or
pre-amplifier, no matter what the instrument. As it is, I just DId
(direct injected: plugged the guitar straight into the mic
socket).

Fortunately, Audacity comes with plugins to help compensate for
things like this: the 'Amplify' plugin can add quite a lot of
volume without adding gain; and the 'Compression' plugin can limit
some of the gain.

There's no point in trying to explain sound: you have to hear
for yourself. I came up with a simple riff this afternoon, and
recorded it: "Things". The
whole recording is downsampled to 8KHz, and set at the lowest
bitrate oggenc could offer, but the recording is clear
enough to make out what I was playing (complete with mistakes).

things-raw.ogg: This
is a sample from the original recording, without re-sampling, to
use as the basis for comparison.

things-compressed.ogg: This
is things-amplify, run through the Compressor, with "Threshold" set
to -29 dB, "Ratio" at 6.5:1, and "Attack Time" set to 0.2

I need to go back to the drawing board a little, because there
is very little difference between the raw recording and the version
that has been both amplified and compressed (though, honestly,
that's to be expected). Next month, I'll continue the process, but
using an amplifier. Until then, take care.

Jimmy is a single father of one, who enjoys long walks... Oh, right.

Jimmy has been using computers from the tender age of seven, when his father
inherited an Amstrad PCW8256. After a few brief flirtations with an Atari ST
and numerous versions of DOS and Windows, Jimmy was introduced to Linux in 1998
and hasn't looked back.

In his spare time, Jimmy likes to play guitar and read: not at the same time,
but the picks make handy bookmarks.

Experimental Physics with Phoenix and Python

Many of us who had our schooling in the distant past
will be having fond memories of time spent in the
science laboratories learning fun and exciting things
about the world around us. I had a fascination for
Chemistry and still recollect doing mildly dangerous
experiments with thoroughly user-unfriendly stuff like
sulfuric acid behind the closed doors of my `private'
home laboratory. Physics was a bit too `intellectual' for
my taste, but I was nonetheless excited when I came
across the
Phoenix Project which aims to bring modern
techniques of `computer based' experimental physics within
the reach of students and the hobbyist in developing
countries. This article is written with the intention of
getting teachers and GNU/Linux enthusiasts involved with the
project thereby initiating a `community building' effort.

What is Phoenix?

The Phoenix project is the brainchild of B. P. Ajith Kumar,
a researcher working with the
Nuclear Science Centre of India. Ajith describes Phoenix as
Physics with Homemade Equipments and Innovative Experiments.
Modern experimental physics makes use of a vast array of complex
equipment interfaced with general-purpose computers. The data
gathered by this equipment is fed into the machine using
expensive `Data Acquisition Hardware' (mostly high speed Analog-to-Digital
converters) where it is analyzed by sophisticated mathematical tools.
The current practice of undergraduate Physics education (at least in the
part of the world I live in) pays only lip service to this
important aspect of the training of a Physicist by incorporating
a paper on `C programming' and/or `Microprocessors' in the
syllabus. The objective
of the Phoenix project is to change the situation for the better
by giving school/college students an opportunity to use the computer
for observing and analyzing real-world phenomena.

Let me present two simple examples. Measuring time is
something which is central to many experiments. Say you wish
to measure the time of flight of an object falling under gravity. You note
the exact time at which it is dropped; you also measure the time at which
it hits the ground. A small steel ball can be easily gripped by
an electromagnet which is activated by the PC parallel port. The
ball can be dropped by deactivating the coil (just a simple `out'
instruction to the parallel port). When the ball hits the `ground',
it can be made to close/open some kind of `contact' connected to
an input pin of the parallel port. A few lines of code which measures
the time at which a parallel port output pin deactivates the coil and
the time at which an input pin changes state should be sufficient to
verify an important physics principle
(Check this out).
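To make the principle concrete, here is a rough Python sketch of the
arithmetic involved - an illustration only, not part of the Phoenix code;
the two timestamps would come from the parallel port measurement just
described:

```python
import math

G = 9.81  # acceleration due to gravity, in m/s^2

def fall_time(height_m, g=G):
    """Time for an object to fall height_m metres from rest: t = sqrt(2h/g)."""
    return math.sqrt(2.0 * height_m / g)

def height_from_fall(t_release, t_impact, g=G):
    """Distance fallen between the two timestamps: s = (1/2) g t^2."""
    t = t_impact - t_release
    return 0.5 * g * t * t
```

A ball dropped from one metre, for instance, should take roughly 0.45
seconds to land; comparing the measured and computed times is exactly the
kind of verification the experiment aims at.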

What are the major stumbling blocks behind presenting such an
experiment?

The time/effort involved in building additional hardware
(in this case, buffer the parallel port pins to make it capable
of driving relay coils).

The effort involved in writing low-level code. Parallel port
I/O and time measurements are trivial to the experienced programmer,
but may overwhelm students who have had very little previous
exposure to both programming and electronics.

As another example, in basic circuit theory, we learn about the
way resistor-capacitor networks behave. A low-cost 8-bit analog
to digital converter connected to the parallel port and sampling the
voltage on the capacitor periodically will yield lots of numbers
which can be analyzed to get a better feel of the way RC networks
behave. Again, the major constraints involved here are setting up
the ADC circuit and writing measurement programs. There is a solution
- purchase commercially available data acquisition hardware and
software - but it may not be attractive because of the expense involved.
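For reference, the voltage on a discharging capacitor follows V(t) =
V0·exp(-t/RC), so with a 10K resistor and a 100uF capacitor the time
constant RC is exactly one second. A small Python sketch of the expected
curve (an illustration only; the component values are the ones used later
in this article):

```python
import math

def discharge_voltage(t, v0=5.0, r=10e3, c=100e-6):
    """Capacitor voltage t seconds after discharge begins: V0 * exp(-t/RC)."""
    return v0 * math.exp(-t / (r * c))

# The time constant tau = RC; after one tau, the voltage has fallen
# to about 36.8% of its starting value.
tau = 10e3 * 100e-6  # = 1.0 second for a 10K resistor and 100uF capacitor
```

Samples from the real ADC can then be compared against this ideal curve.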

The Phoenix Approach

For computer based physics experiments to be a reality in our
school laboratories, we need:

Inexpensive hardware built with components available in the
local market. The design should be freely available and should be
simple enough for students to understand in full, if they wish to.

A set of experiments which the students can try out
without building additional electronic circuits. Too much friction
in the beginning may discourage some of the target audience
(while some others might relish it as a challenge!).

A set of software tools which offers a smooth learning curve;
the source code must be freely available, and motivated students should be
encouraged to read and tinker with it. The platform of choice is undoubtedly
GNU/Linux, and the programming languages will be C and Python.

Phoenix Hardware

Here is a picture of the fully assembled kit:

The hardware consists of:

Eight digital output pins.

Four digital input pins.

Four stepper motor driver pins. They can also be used
for driving relay coils.

An 8 bit, 8 channel analog-to-digital converter.

A programmable voltage supply, capable of generating
voltages in the +5V to -5V range.

Besides these `programmable' elements, the device also
contains the following units, which make interfacing
simpler:

Amplifier blocks

A constant current supply

Low frequency function generator (sine, triangular, square)

A block diagram is available
here.
You can follow
this link to read a detailed description of the working of
each of the functional units. Full circuit schematics, with a
ready-to-use PCB layout, are also available from the project
home page.
The design is almost stable; a few boxes have been fabricated
and distributed for testing purposes.

Phoenix Software

All the programmable units of the Phoenix kit can be
manipulated by doing a few in/out instructions on the
parallel port. Two of the issues to be handled were:

Users shouldn't need any special privilege to access
the hardware.

It should be possible to do reasonably precise time
measurements at the sub-millisecond level.

Three approaches were identified:

Do everything in user-space, non real-time.

Do everything in user-space, but in real-time (say RTAI LXRT)

Put the code in the kernel, as a proper `driver'

The third approach was found to be the most workable.
Users won't need any special privileges, and kernel code
is guaranteed not to be disturbed by other processes (busy
timing loops in the kernel will freeze the system, but in
almost all cases, we need very short sub-second loops; so
this is not a big problem).

A sample driver program (still in the experimental
phase!) can be downloaded from
here.
You will observe that most of the code is one giant
`ioctl' which does things like setting the digital output
pins, reading the value on the input pins, setting the
voltage on the programmable voltage supply, reading the
ADC, measuring the time between a rising edge and a falling
edge on a digital I/O pin etc.

Phoenix and Python

One of the advantages of putting the timing-sensitive
driver code in the kernel is that it now becomes possible
for us to interact with the hardware through a language like
Python, which can't be normally used for real-time work.
Being a Python fan, one of the first things I did with the
Phoenix box was to try and write a simple Python library
for talking with the hardware. My intention was to be able to
do things like:

Such an interaction, which gives immediate feedback, might
be the beginning student's best introduction to the device.

As an aside, it's interesting to see how easily (and naturally)
Python can be used for doing sophisticated CS stuff (functional
programming, lambda calculus, closures...) and regular, down-to-earth
OS interaction, as shown in the snippet above. The source
code for the (experimental!) Python module can be obtained from
here. Readers
might be interested in the use of the `fcntl' module and the
`struct' and `array' modules for performing the `ioctl' operations
required to control the device.
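The mechanics are easy to sketch: the `struct' module packs the C
structure an ioctl expects, and `fcntl.ioctl' hands it to the driver. The
command number and device node below are hypothetical, not the real
Phoenix ones - this only illustrates the technique:

```python
import struct

def _IOC(direction, ioc_type, nr, size):
    """Build a Linux ioctl request number, following the _IOC macro
    from <asm/ioctl.h>: dir<<30 | size<<16 | type<<8 | nr."""
    return (direction << 30) | (size << 16) | (ord(ioc_type) << 8) | nr

_IOC_WRITE = 1

# Hypothetical command: "set the programmable voltage supply",
# taking a single 4-byte int argument (a value in millivolts).
SET_VOLTAGE = _IOC(_IOC_WRITE, 'p', 1, struct.calcsize('i'))

def set_voltage_arg(millivolts):
    """Pack the argument buffer the way the (hypothetical) driver expects."""
    return struct.pack('i', millivolts)

# With a real device node, the call would look like:
#   fd = open('/dev/phoenix', 'rb')
#   fcntl.ioctl(fd, SET_VOLTAGE, set_voltage_arg(2500))
```

The real module's command numbers and argument layouts live in its source,
which is the authoritative reference.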

Experiments!

The Phoenix hardware as well as software provides a general
framework for easy interfacing and experimentation. I
shall describe two very simple experiments I did with the
box. The project
home page describes some more - one of which is an interesting simple
pendulum experiment; check it out
here.
Designing more experiments is one area where a
community of teachers and developers can contribute a lot.

Time of flight

An electromagnet (a disassembled relay coil) is connected to
one of the stepper motor driver pins of the Phoenix box and is
used to hold a small iron ball tightly. The kernel driver code
contains a routine which de-energizes the relay coil, notes
down the time, and sits in a tight loop till an input pin goes
from low to high, noting down the time when this happens.
The two timestamps are transmitted back to userland where a
Python program captures them and returns the difference.

The iron ball drops down when the relay coil is de-energized
and hits the table with a bang. A speaker is placed face down
on the table; it picks up the vibrations and converts them to
electrical signals. The weak electrical signals are amplified
by the amplifier blocks on the Phoenix box and fed to a digital
input pin which registers a transition from low to high. The
kernel driver code can very easily compute the difference in
time between the ball being released and the instant it hits
the table, giving rise to an electrical transition on the input
pin. This time difference can be plugged into a simple equation
and the distance travelled can be computed.

Here is a picture of the setup which I used for conducting
this experiment:

Discharging capacitor

The Phoenix kernel driver contains code to take repeated
samples from the ADC (with an optional delay in between).
It is possible to start taking samples when a particular
trigger is received - the trigger is in the form of a
`falling edge' on digital input pin 0. This feature can
be used to plot the voltage across a discharging capacitor
(I used a 10K resistor and 100uF capacitor). Let's look
at the Python code segment:

The trig_read_block function collects 1000 samples from the
ADC with an in-between delay of 2000 microseconds. The
sample collection starts only when the trigger is received,
the trigger being a falling edge on digital input pin 0.
The value returned by the function is a list, each element of
which is a tuple of the form (timestamp, adval) where `timestamp'
is the instant of time when the sample was taken. The `for'
loop simply prints these numbers onto the screen; the output
can be redirected to a file and plotted using the powerful `gnuplot'
utility (a simple command at the gnuplot prompt of the
form plot "a.dat"). Here is the graph which I obtained:
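The post-processing can be sketched in a few lines - assuming, as
described above, that the samples arrive as a list of (timestamp, adval)
tuples; this writes a gnuplot-ready file and estimates the RC time
constant from any two points on the curve (both function names are mine,
for illustration):

```python
import math

def write_gnuplot_data(samples, filename='a.dat'):
    """Write one 'timestamp adval' pair per line, so that a simple
    plot "a.dat" at the gnuplot prompt draws the discharge curve."""
    with open(filename, 'w') as f:
        for timestamp, adval in samples:
            f.write('%f %f\n' % (timestamp, adval))

def estimate_tau(t1, v1, t2, v2):
    """For an exponential decay v = v0*exp(-t/tau), two samples give
    tau = (t2 - t1) / ln(v1 / v2)."""
    return (t2 - t1) / math.log(v1 / v2)
```

The estimated tau should come out close to R times C - about one second
for the 10K resistor and 100uF capacitor used here.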

Future Direction

The Phoenix project has just come out of the lab. To
become a success, like any Free Software/Hardware project, it
must be able to attract a community of enthusiasts -
science teachers and students, electronics hobbyists, Linux
geeks, hardware hackers... Plenty of software has to be
written and new experiments designed. I hope that this article
will go a small way toward bringing in community
involvement. Please feel free to get in touch with me
if you are interested in knowing more about the project or
wish to arrange a workshop/demonstration. Follow
this link to get in touch with the Phoenix developer and
learn more about getting the hardware/software.

As a student, I am constantly on the lookout for fun
and exciting things to do with my GNU/Linux machine. As
a teacher, I try to convey the joy of experimentation,
exploration, and discovery to my students. You can read about
my adventures with teaching and learning here.

Writing Your Own Shell

Introduction

This is not another programming language tutorial. Surprise! A few days
ago, I was trying to explain to one of my friends how the 'ls' command is
implemented, though I had never thought of going beyond the fact that
'ls' simply lists all files and directories. But my friend happened to
make me think about everything that happens from typing 'ls' to the point
when we see the output of the 'ls' command. As a result, I came up with the
idea of putting the stuff into some piece of code that would work similarly.
Finally, I ended up trying to write my own shell, which allows my program
to run in a way similar to the Linux shell.

Shells

On system boot up, one can see the login screen. We log in to the system using our
user name, followed by our password. The login name is looked up in the
system password file (usually /etc/passwd). If the login name is found, the password is
verified. The encrypted password for a user can be seen in the file /etc/shadow,
immediately preceded by the user name and a colon. Once the password is verified,
we are logged into the system.

Once we log in, we can see the command shell where we usually enter our commands to
execute. The shell, as described by Richard Stevens in his book
Advanced Programming in the Unix Environment, is a command-line interpreter
that reads user input and executes commands.

This was the entry point for me. One program (our shell) executing another program
(what the user types at the prompt). I knew that execve and its family of functions
could do this, but never thought about its practical use.

A note on execve()

Briefly, execve and its family of functions help to start new programs.
The family consists of the functions execl, execv, execle, execve,
execlp, and execvp.

int execve(const char *filename, char *const argv[], char *const envp[]);

is the prototype as given in the man page for execve. The filename
is the complete path of the executable; argv and envp are
arrays of strings containing the argument variables and environment
variables respectively.

In fact, the actual system call is sys_execve (for execve function) and other functions
in this family are just C wrapper functions around execve. Now, let us
write a small program using execve.
See listing below:

Compiling and running the a.out for the above program gives the output of the
/bin/ls command. Now try this: put a printf statement right after the execve
call and run the code.
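The point of that exercise: if execve succeeds, it never returns - the
calling program has been replaced - so the printf is never reached. The
original listing is in C, but the same behaviour can be sketched in
Python, where os.execve is a thin wrapper over the same system call:

```python
import os, sys

def run(path, argv):
    """Fork, let the child replace itself with 'path' via execve,
    and return the child's exit status to the parent."""
    pid = os.fork()
    if pid == 0:                      # child
        os.execve(path, argv, os.environ)
        print('never printed')        # reached only if execve fails
        os._exit(127)
    _, status = os.waitpid(pid, 0)    # parent waits for the child
    return os.WEXITSTATUS(status)
```

Calling run('/bin/ls', ['ls']) prints the directory listing; the line
after os.execve never executes.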

I will not go into the details of the wrappers of execve. There are good books,
one of which I have already mentioned (by Richard Stevens), which explain the
execve family in detail.

Some basics

Before we start writing our shell, we shall look at the sequence of
events that occur from the point when the user types something at the shell
to the point when he sees the output of the command he typed. One would
never have guessed that so much processing happens even for a simple
listing of files.

When the user hits the 'Enter' key after typing "/bin/ls", the program
which runs the command (the shell) forks a new process. This process
invokes the execve system call for running "/bin/ls". The complete
path, "/bin/ls" is passed as a parameter to execve along with the
command line argument (argv) and environment variables (envp). The system
call handler sys_execve checks for existence of the file. If the
file exists, then it checks whether it is in the executable file format.
Guess why? If the file is in executable file format, the execution context
of the current process is altered. Finally, when the system call
sys_execve terminates, "/bin/ls" is executed and the user sees the
directory listing. Ooh!

Let's Start

Had enough of theories? Let us start with some basic features of the command shell. The listing
below tries to interpret the 'Enter' key being pressed by the user at the command prompt.

This is simple - something like the mandatory "hello world" program that a
programmer writes while learning a new programming language. Whenever the user
hits the 'Enter' key, the command prompt appears again. On running this code,
if the user hits Ctrl+D, the program terminates. This is similar to your
default shell: when you hit Ctrl+D, you log out of the system.

Let us add another feature to interpret a Ctrl+C input also. It can be done simply by registering
the signal handler for SIGINT. And what should the signal handler do? Let us see the code in listing 3.

Run the program and hit Ctrl+C. What happens? You will see the command prompt
again - just what we see when we hit Ctrl+C in the shell we normally use.

Now try this: remove the statement fflush(stdout) and run the program. For those
who cannot predict the output, the hint is that fflush forces the execution of the
underlying write function for the standard output.
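The same handler-plus-flush pattern can be sketched in Python (listing 3
itself is C; the prompt string here is just a stand-in):

```python
import signal, sys

PROMPT = '[MY_SHELL ] '

def sigint_handler(signum, frame):
    # Reprint the prompt on Ctrl+C. The explicit flush matters: the
    # prompt has no trailing newline, so nothing would force the
    # buffered text out to the terminal otherwise.
    sys.stdout.write('\n' + PROMPT)
    sys.stdout.flush()

# Register the handler for SIGINT (what Ctrl+C delivers).
signal.signal(signal.SIGINT, sigint_handler)
```

With the handler installed, Ctrl+C no longer kills the program; it just
redraws the prompt, exactly as a real shell does.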

Command Execution

Let us expand the features of our shell to execute some basic commands.
Primarily, we will read user input, check whether such a command exists,
and execute it.

I am reading the user input using getchar(). Every character read is placed
in a temporary array, which will be parsed later to frame the complete
command along with its command-line options. Reading characters goes on
until the user hits the 'Enter' key.
This is shown in listing 4.
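A sketch of the same read loop (listing 4 is C and uses getchar; here a
file-like object stands in for standard input):

```python
def read_command(stream):
    """Accumulate characters one at a time until Enter (newline) or EOF,
    mimicking a getchar() loop filling a temporary array."""
    chars = []
    while True:
        c = stream.read(1)        # one character at a time, like getchar()
        if c == '' or c == '\n':  # EOF (Ctrl+D) or the Enter key
            break
        chars.append(c)
    return ''.join(chars)
```

Passing sys.stdin gives interactive behaviour; passing any file-like
object makes the loop easy to test.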

Now we have the string of characters that the user typed at our command
prompt, and we have to parse it to separate the command from its options.
To make it more clear, let us assume that the user types the command

gcc -o hello hello.c

We will then have the command line arguments as

argv[0] = "gcc"
argv[1] = "-o"
argv[2] = "hello"
argv[3] = "hello.c"

Instead of using argv, we will create our own data structure (array of strings) to store
command line arguments. The listing below defines the function fill_argv. It takes the user input
string as a parameter and parses it to fill my_argv data structure. We distinguish the command and
the command line options with intermediate blank spaces (' ').

The user input string is scanned one character at a time. Characters
between the blanks are copied into the my_argv data structure. I have limited
the number of arguments to 10, an arbitrary decision: we could allow more
than 10.

Finally we will have the whole user input string in my_argv[0] to my_argv[9]. The command will be
my_argv[0] and the command options (if any) will be from my_argv[1] to my_argv[k] where k<9.
What next?
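The parsing itself amounts to splitting on blanks, capped at ten entries.
A Python sketch of what fill_argv does (the real listing fills a C array
of strings instead):

```python
MAX_ARGS = 10  # the same arbitrary limit as the C version

def fill_argv(input_string):
    """Split the user's input on blanks into at most MAX_ARGS arguments;
    element 0 is the command, the rest are its options."""
    my_argv = []
    for word in input_string.split(' '):
        if word and len(my_argv) < MAX_ARGS:
            my_argv.append(word)
    return my_argv
```

So 'gcc -o hello hello.c' yields the four elements shown above, with
'gcc' in position 0.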

After parsing, we have to find out whether the command exists. Calls to
execve will fail if the command does not exist. Note that the
command passed should be the complete path. The environment variable
PATH stores the different paths where the binaries could be present.
The paths (one or more) are stored in PATH, separated by colons,
and these paths have to be searched for the command.

The search can be avoided by using execlp or execvp, which do this search
automatically, but I am purposely avoiding them here.
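The search just described is easy to sketch. The function below takes the
PATH string as a parameter (the C listing reads the environment variable
directly); the name attach_path is borrowed from the article's listing,
but this Python version is only an illustration:

```python
import os

def attach_path(cmd, path_string):
    """Search each colon-separated directory in path_string for an
    executable file named cmd; return its full path, or None."""
    for directory in path_string.split(':'):
        candidate = os.path.join(directory, cmd)
        if os.path.isfile(candidate) and os.access(candidate, os.X_OK):
            return candidate
    return None
```

For example, attach_path('ls', os.environ['PATH']) typically returns
'/bin/ls' or '/usr/bin/ls' on a Linux system.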

The listing below defines a function that checks for the existence of the command.

The attach_path function in listing 6 will be called if its parameter cmd
does not contain a '/' character. When the command has a '/', it means that
the user is specifying a path for the command. So, we have:

if(index(cmd, '/') == NULL) {
attach_path(cmd);
.....
}

The function attach_path uses an array of strings, which is initialized with the paths
defined by the environment variable PATH. This initialization is given in
the listing below:

The function uses strstr from the standard library to get a pointer to the
beginning of the complete string. This is used by the function
insert_path_str_to_search in listing 7 to parse the different paths and store
them in a variable which is used to determine the existence of paths. There
are other, more efficient methods for parsing, but for now this is all I
could think of.

After the function attach_path determines the command's existence, it invokes
execve for executing the command. Note that attach_path copies the complete
path with the command. For example, if the user inputs 'ls', then attach_path modifies
it to '/bin/ls'. This string is then passed while calling execve along with the command line
arguments (if any) and the environment variables. The listing below shows this:

Complete Code and Incompleteness

Compile and run the code to see [MY_SHELL ]. Try running some basic commands;
they should work. This should also support compiling and running small
programs. Do not be surprised if 'cd' does not work - this and several other
commands are built into the shell.

You can make this shell the default by editing /etc/passwd or
using the 'chsh' command. The next time you login, you will see
[MY_SHELL ] instead of your previous default shell.

Conclusion

The primary idea was to make readers familiar with what Linux does when
it executes a command. The code given here does not support all the
features that bash, csh and ksh do. Support for 'Tab',
'Page Up/Down' as seen in bash (but not in ksh) can be
implemented. Other features like support for shell programming, modifying
environment variables during runtime, etc. are essential. A thorough look
at the source code for bash is not an easy task because of the
various complexities involved, but would help you develop a full featured
command interpreter. Of course, reading the source code is not the complete
answer. I am also trying to find other ways, but lack of time does not
permit me. Have fun and enjoy......

I completed my B. Tech in Computer Science & Engineering in a small town
called Trichur, in Kerala, God's Own Country in India. Presently I am
working at Naturesoft Pvt. Ltd., Chennai, India, as a Programmer. I spend my
free time reading books on Linux and exploring the same. Also, I have a
good appetite for Physics. My motive in life is to go forward, onward and
upward.

Design Awareness

Making Snow Angels

There's nothing like a foot of snow on the ground (as I write this) to
make you think of winter, so this month we'll again work on the website for
our winter sports equipment company:

Last issue we created a colorway, a color logotype, and a logotype
variant for use on a color background (go here to refresh your memory), along
with the home page for the company.
We also created a first-level internal directory page.

Your browser does not support frames; in order to see the linked page,
you'll need to click here.

This month we'll extend the line to include a raw product line page.

Your browser does not support frames; in order to see the linked page,
you'll need to click here.

It doesn't look like much, and isn't very useful, without images of the
products. In addition, it will be problematic to put a lot of informative
text (product specs, etc.) on top of that pretty gradient.

To put some
flesh on the bones of the line, here's the revised version (I'd like to thank
the Grivel equipment company for the ice axes loosely described in these
pages; see their site for accurate
information.)

Your browser does not support frames; in order to see the linked page,
you'll need to click here.

Note that, in order for the images to 'read' against the gradient, we've
limited them to small, high contrast, black-and-white photographs. Adding
the space for the photos also allowed us to include some additional
information for each of the axes, allowing the viewer to choose more readily
which one fits their needs.

That leads us to a specific product page:

Your browser does not support frames; in order to see the linked page,
you'll need to click here.

Note that we've gone to an all-white background, rather than the gradient,
in order to free up our use of color images and text. The layout is heavily
gridded (which we talked about in the September 2004 column) to visually match the
previous, more general, pages.

These pages have also been sized to be non-scrolling and the photos have
been kept small for quick opening. The specification link on each product
page, if active, would take the viewer to a different page grid, allowing
for much more text or detailed photos. There are also links to industry
certification of the product, along with one to a QuickTime video of
the product being made in the notional AlpineGear factory. Note that we've
also allowed for non-US customers by providing a currency conversion page
that, using pop-ups (like those here, or you could avoid all
the programming and just let the customer use that site in a new window),
would automatically convert the price of the product into any of the 180 or
so major currencies (assuming your credit card company allows you to accept
payments in colons, leks, pulas, or takas). There are also direct links to
the shopping cart, allowing the customer to add or remove the product
without having to view the cart.

While this has been a look at a very simply styled website, and presents
only a few pages of what could be a several-hundred page site, I hope it
has helped you visualize how choices of color and style will shape the
presentation of information.

Next month we'll return to the basics and talk about building a logo or
a logotype. As ever, if there's something specific you're interested in,
let me know.

I started doing graphic design in junior high school, when it was still
the Dark Ages of technology. Bill Gates and Steve Jobs were both eleven
years old, and the state of the art was typing copy on Gestetner masters.
I've worked on every new technology since, but I still own an X-acto knife
and know how to use it.

I've been a freelancer, and worked in advertising agencies, printing
companies, publishing houses, and marketing organizations in major
corporations. I also did a dozen years [1985-1997] at Apple Computer; my
first Macintosh was a Lisa with an astounding 1MB of memory, and my current
one is a Cube with a flat screen.

I've had a website up since 1997, and created my latest one in 2004. I'm
still, painfully, learning how web design is different from, but not
necessarily better than, print.

HelpDex

Part computer programmer, part cartoonist, part Mars Bar. At night, he runs
around in a pair of colorful tights fighting criminals. During the day... well,
he just runs around. He eats when he's hungry and sleeps when he's sleepy.

Ecol

The Ecol comic strip is written for escomposlinux.org (ECOL), the web site that
supports es.comp.os.linux, the Spanish USENET newsgroup for Linux. The
strips are drawn in Spanish and then translated to English by the author.

These images are scaled down to minimize horizontal scrolling.
To see a panel in all its clarity, click on it.

These cartoons are copyright Javier Malonda. They may be copied,
linked or distributed by any means. However, you may not distribute
modifications. If you link to a cartoon, please notify Javier, who would appreciate
hearing from you.

Qubism

Jon is the creator of the Qubism cartoon strip and current
Editor-in-Chief of the
CORE News Site.
Somewhere along the early stages of
his life he picked up a pencil and started drawing on the wallpaper. Now
his cartoons appear 5 days a week on-line, go figure. He confesses to
owning a Mac but swears it is for "personal use".

The Backpage

The Ghoulzette Attacks Again: Guns of Cardboard, Balls of Wax

It seems that SSC - our old webhost, who contrary to all sense and
practice, decided to keep our name when we gave up that hosting arrangement - is up to
their old tricks again. During the past week, I was contacted by somebody
named Taran Rampersad who claimed to be "in charge at LinuxGazette.com",
and who bluffed, blustered, and threatened me, unless we gave up our name
as the Linux Gazette, with mysterious "[sic] others who have been
preparing handle this situation" while at the same time insisting that we
are "a part of [his] community". Was he confused, stoned, or crazy? I
didn't know, and - after the first exchange, in which he threatened court
action and insisted that "LinuxGazette is a trademark of SSC", despite the
fact that no trademark can exist without commercial
trade - didn't care. (In his second email, he became so
abusive and irrational that I simply killfiled him.) SSC had done this before, including having their
lawyer send us a threatening letter... the simple fact remains that LG is
a volunteer effort, always has been, and when there's no money at stake,
lawyers (and courts) tend to shrug and lose interest. Add to it the fact of
their latest attack chihuahua's complete ignorance of the law -
and of his employer's previous attempts to force
the issue by threats - and the picture adds up to the usual null rhetoric,
a complete waste of time.

So... if that's the case, why am I bothering to talk about it here?

The issue involves a strong moral principle. One of my earliest
self-imposed rules, one of the iron-hard lessons learned as a lone Jewish
kid in a Russian boarding school where the students felt safe in beating
the hell out of a "Christ-killing Jew" because they had the teachers' tacit
approval, was, and remains:

Never let a bully's intimidation attempt pass unchallenged.

There were times when this rule cost me: blood, and pain, and
throwing myself at the attacker no matter what his size, over and over
until I could not get up any longer. But the attacks stopped in very short
order... and being known as "that crazy Jew" kept me alive through the
years following that insane time.

Many years have passed, but the principle remains. Even in the small
things - especially in the small things.

The heroic hours of life do not announce their presence by drum and
trumpet, challenging us to be true to ourselves by appeals to the martial
spirit that keeps the blood at heat. Some little, unassuming, unobtrusive
choice presents itself before us slyly and craftily, glib and insinuating,
in the modest garb of innocence. To yield to its blandishments is so easy.
The wrong, it seems, is venial... Then it is that you will be summoned to
show the courage of adventurous youth.
-- Benjamin Cardozo

SSC, namely Phil Hughes, had tried these scare tactics before; he had
managed to intimidate several of our people, who faded away or minimized
their involvement with LG for fear of legal retribution. Let me underscore
this here and now - I do not blame any of them in any way, but feel
compassion for their pain, sorrow that they should have been exposed to
those strong-arm tactics because they tried to give their effort to the
Linux community, and anger toward those who have harmed them. ("Where lies
the danger?", cry many Linux pundits. "Is it Microsoft? Is it SCO?" Oh, if
it was only those external attacks we had to guard against...)

However, in the main, Hughes failed, and failed miserably: there was a
core of people here who refused to kowtow to his threats, whom attacks and
intimidation made only more stubborn and willing to resist and fight - and,
as is often the case with bullies, all his threats turned out to be nothing
more than hot air. More than that, when Phil's tactics were exposed to the
public, we received many emails supporting our position and excoriating
SSC, and several of their subscribers (they publish the Linux Journal,
among other things) contacted us to say that they were cancelling their
subscriptions in protest (something we advised them not to do, since that
action would harm SSC as a whole rather than just the one person
responsible.)

At the time, I believed in the above principle just as much as I always
have - but the balance of the opinions here at LG was that "we should be
the nice guys in this conflict", and thus the exposure of the above
bullying was minimal. Now, threatened by the tremendous popularity of LG -
the initial email cited a Google hit statistic that showed LG.net gaining
on LG.com at a tremendous rate, despite a large differential in length of
existence - they're cranking up the threat machine again.

The stakes in this conflict remain the same; our response to strong-arm
tactics will be the same disdain and disgust with which we treated it
previously. But this time, I will not be silent. From this point forward, I
will grant the bullies neither the comforting shield of obscurity nor the
undeserved respect of privacy for their actions. That time is over.

And, just as with those other cowards in the past, I believe that it
will end here and now, or at least in very short order. As a wise man said
once, echoing my belief in different and perhaps better words, every
one that doeth evil hateth the light, neither cometh to the light, lest his
deeds should be reproved.

Let there be light.

Ben is the Editor-in-Chief for Linux Gazette and a member of The Answer Gang.

Ben was born in Moscow, Russia in 1962. He became interested in electricity
at the tender age of six, promptly demonstrated it by sticking a fork into
a socket and starting a fire, and has been falling down technological
mineshafts ever since. He has been working with computers since the Elder
Days, when they had to be built by soldering parts onto printed circuit
boards and programs had to fit into 4k of memory. He would gladly pay good
money to any psychologist who can cure him of the recurrent nightmares.

His subsequent experiences include creating software in nearly a dozen
languages, network and database maintenance during the approach of a
hurricane, and writing articles for publications ranging from sailing
magazines to technological journals. After a seven-year Atlantic/Caribbean
cruise under sail and passages up and down the East coast of the US, he is
currently anchored in St. Augustine, Florida. He works as a technical
instructor for Sun Microsystems and a private Open Source consultant/Web
developer. His current set of hobbies includes flying, yoga, martial arts,
motorcycles, writing, and Roman history; his Palm Pilot is crammed full of
alarms, many of which contain exclamation points.

He has been working with Linux since 1997, and credits it with his complete
loss of interest in waging nuclear warfare on parts of the Pacific Northwest.