I have never been particularly fond of the concept of Daylight Saving Time (cutting a strip off one end of a blanket and sewing it onto the other end does not make a longer blanket). This time around, though, I ran into an issue involving the perfect combination of a monthly cron job, a server set to local time, and the switch from Daylight Saving to Standard Time on the first of the month.

At precisely 1:14 am on the first day of the month the cron job ran, as it does the first day of every month, and picked a raffle winner for one of our client’s monthly contests. At 2:00 am the time on the server rolled back to 1:00 am in accordance with the switch to Standard Time for the US. Fourteen minutes later the job ran again, and picked another winner.

Whoops. Now our system had awarded two people a single prize. Telling the second winner that they would not get a prize they didn’t really win would not score us any points with the client, as their customer would be upset. Likewise, charging the client for the second prize is a non-starter, as it is, in fact, our fault. When I inherited these systems I looked through all the cron jobs to get a feel for what the system is doing and when. What didn’t occur to me, however, was that jobs scheduled at the wrong time of day could fall victim to Daylight Saving/Standard Time change-overs.

Any daily job that runs between 1:00 am and 2:00 am will fail to run once a year (Standard -> Daylight Saving when clocks jump ahead an hour) and will run twice once a year (Daylight Saving -> Standard Time when clocks fall back from 2:00 am to 1:00 am). Weekly jobs that run between 1:00 am and 2:00 am on Sundays will likewise misbehave, while monthly jobs, regardless of day of the month, have a small chance of experiencing one of these issues. In this case, the job runs on the 1st, which happened to be the first Sunday in November, and bang: error.

Needless to say, we modified all the cron jobs to ensure that none of them start between 1:00 am and 2:00 am.
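The repeated hour is easy to see with GNU date: on a fall-back morning, two different UTC instants map to the same local wall-clock time. (The date and time zone below are illustrative assumptions: 2009-11-01 was a first-Sunday-in-November change-over, and America/New_York stands in for the server’s local zone.)

```shell
# On 2009-11-01 US clocks fell back from 2:00 am EDT to 1:00 am EST.
# Two distinct UTC instants both show 1:14 am on the local wall clock,
# which is exactly why a job scheduled at 1:14 am can fire twice.
TZ=America/New_York date -d '2009-11-01 05:14 UTC' '+%H:%M %Z'   # 01:14 EDT
TZ=America/New_York date -d '2009-11-01 06:14 UTC' '+%H:%M %Z'   # 01:14 EST
```

Scheduling outside the 1:00 am to 2:00 am window sidesteps both the repeated hour in the fall and the skipped hour in the spring.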

I had a conversation with a friend about Linux distros earlier today, and I was asked why I choose to run Gentoo on my web server. He told me that Gentoo was too hard to maintain on a server, and that when it came time to upgrade something (like Apache or PHP) due to security patches it took too long and too often failed. I was confused by this so I asked for clarification. What he described was the pain of updating anything on a “stale” Gentoo machine.

Unlike so many of the other popular distros, Gentoo does not, by default, use pre-compiled packages. So unlike rpm -i or apt-get install, running emerge on Gentoo requires that the package you are installing, and any missing dependencies, be pulled in as source code and compiled. For a small package like, say, Lynx, the process takes only a few minutes on a moderately decent machine. (Mine is a PII 966 and Lynx took about 4 minutes start to finish.) When you upgrade something like Apache, however, the time it takes depends not only on the speed of the machine, but on how many of its dependencies are out of date. In fact, if you fail to update regularly you can hit a point where not only are most of your packages out of date, but your system profile is out of date too, and you need to do some serious wrenching to get the whole thing working again. Of the two times this has happened to me, I managed to bring the system up-to-date once, and just gave up and reinstalled a newer version the other time. (These were both rarely used VMs, not production boxes.) Updating the profile on a “fresh” Gentoo, by contrast, is (in my experience) a painless procedure: rm /etc/make.profile && ln -sf /usr/portage/profiles/profile_name /etc/make.profile && emerge -uND world (uND: update, newuse, deep; that is, update, take into account new USE settings from the profile and make.conf, and include deep dependencies).

So how do I avoid the “stale” Gentoo syndrome? I take a three-step approach.

A daily cron job runs emerge -puvD world (puvD: pretend, update, verbose, deep; it just tells you, verbosely, what an update would emerge, including deep dependencies) and emails me the output. This lets me see each morning which packages have updates available.

Every day that I have time for it I log into the machine, run emerge -uD world, and follow it up with etc-update (if needed) and revdep-rebuild if any libraries were included in the updates. (I save building new kernels for Sundays, and that doesn’t happen all that often, but I do like to always run the latest.)

I check the messages from emerge to see if there are any special configuration changes that need to happen post-install that cannot be handled by etc-update. For instance, changing configurations in /etc/conf.d/packagename, new profiles or anything of that sort.
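The first step boils down to a single crontab line. This fragment is illustrative, not my actual setup (the schedule and mail recipient are assumptions), and it is deliberately placed well outside the 1:00 am to 2:00 am window:

```shell
# Illustrative crontab fragment: nightly "what would update" report.
# 3:30 am keeps it clear of the 1:00-2:00 am DST danger window.
30 3 * * *  emerge -puvD world 2>&1 | mail -s "pending portage updates" root
```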

Ok, so I like to keep my system on the latest and keep a shiny new everything on it. How does that compare with something like, say, Debian? In Debian (and Debian-based distros) you can update packages to a certain point, after which the packages for that version of Debian are no longer supported or updated. So you need to upgrade your release, and your kernel, which you do with apt-get dist-upgrade. Seems easy enough. And how does Gentoo handle version upgrades? It doesn’t need to. If you keep your system up-to-date in the way I described, your system will match whatever the latest Gentoo release has. In fact, I built my web server using Gentoo 2006.0 and have been keeping it up-to-date since then. (Gentoo seems to have stopped doing the biannual releases, btw – they are now releasing updated minimal install CDs nearly weekly for each architecture.)

While I have been a Gentoo fan for a while (Portage hooked me) I have been trying out Xubuntu Edgy Eft 6.10 with Beryl 0.2.0. Here’s my take:

I have always liked the XFCE environment almost as much as I like KDE, though for opposite reasons (my tastes are nothing if not eclectic). XFCE 4.4 is smooth, the response is as quick as always, and its lightweight nature makes it the perfect desktop layer for Beryl. Even with Beryl running, and Emerald themes, the desktop is still quicker on my laptop than KDE 3.5 alone, and much faster than KDE on Beryl.

Under Beryl 0.2.0 my ATI Mobility Radeon 9700 actually runs with AIGLX without any glitches. With Beryl 0.1.4 I was only able to run under Xgl. (I am not sure whether that was due to the drivers included with Sabayon 3.2 versus the Xubuntu drivers, or entirely due to the changes in Beryl.) Even with modified Emerald themes, transparencies, and all the animation bells and whistles, it is a very nimble, usable system.

As for the base desktop, I am actually quite pleased with the default application selection. While some may find the choice of programs rather limited, I prefer to have the basics and install the other things I want as I want them.

Installing was simple – in fact, this is the first distro that I have been able to install using my Atheros wireless card. (Thanks for including the ath_pci module, guys!) I did run into one glitch – immediately after installing and rebooting, my wireless card wasn’t found. I checked lsmod – no ath_pci. So I tried to modprobe ath_pci, but no luck. It turns out this is a known bug, and I found the fix at ubuntuforums: put the install CD in the tray, run sudo apt-get install linux-restricted-modules-`uname -r` and then sudo modprobe ath_pci. Once that was taken care of it was a complete breeze. I hate to admit it, but I think I am starting to really enjoy a Debian derivative. We’ll see over the next few weeks how system administration shakes out before I decide whether I will stick with this as my primary OS. Obligatory screenshots follow.

For Linux fans the ‘real’ big event yesterday wasn’t the Super Bowl, but the latest Linux kernel release (2.6.20). This release includes two different virtualization implementations, KVM and paravirtualization, as well as PS3 support.

Before downloading the actual new kernel, most avid kernel hackers have been involved in a 2-hour pre-kernel-compilation count-down, with some even spending the preceding week doing typing exercises and reciting PI to a thousand decimal places.

The half-time entertainment is provided by randomly inserted trivial syntax errors that nerds are expected to fix at home before completing the compile, but most people actually seem to mostly enjoy watching the compile warnings, sponsored by Anheuser-Busch, scroll past.

I have been looking all over for a way to format an external drive so that I can use it under Linux, Windows and OS X. The reason is simple: I currently use Windows and Linux all the time, and I am planning on upgrading my rig to a MacBook Pro just as soon as I can. Since I expect to be running OS X, Windows and Linux I needed to find a format for my 300GB external drive that would work with all of them.

While FAT32 is an option, it has some serious limitations. Like a maximum file size of 1 byte less than 4 GB. That and the way that FAT32 partitions over 32 GB (while supported under Windows) tend to get a little, shall we say, flaky.

Before today what I had found was as follows:

OS           File System    Read          Write
Windows XP   Ext2 / Ext3    application   no
             HFS+           application   no
             NTFS           native        native
Linux        Ext2 / Ext3    native        native
             HFS+           in kernel     in kernel
             NTFS           in kernel     no
OS X         Ext2 / Ext3    no            no
             HFS+           native        native
             NTFS           in kernel     no

Note: native = default or standard in a “vanilla” install | in kernel = modules available for kernel insertion, although not default.

Well, that was before I found these today: kernel modules for both OS X and Windows for full read and write support of Ext2 / Ext3 file systems. I have installed Ext2 IFS for Windows and pounded on it already. It works (so far) like a charm. I don’t yet have a Mac to test the Mac OS X Ext2 Filesystem but I will do so as soon as I can. Assuming they are building this as a loadable module for the Darwin kernel (does the OS X Darwin kernel allow insmodding?) then it should be a snap. What surprised me is that the Ext2 IFS for Windows is an actual NT Kernel module, not an app or service. It’s actually kind of cool to see my Linux partitions show up under XP as lettered drives!
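For anyone setting up a drive this way, creating the Ext2/Ext3 file system from Linux is straightforward. Here is a sketch that practices on a small loopback image file instead of a real device; the file name is made up, and on the actual external drive you would point mkfs at its partition (something like /dev/sdb1) instead:

```shell
# Practice on a scratch image file rather than a real disk.
dd if=/dev/zero of=scratch.img bs=1M count=16 2>/dev/null

# -F lets mkfs operate on a plain file; drop it for a real device.
# (mkfs.ext2 may live in /sbin if it is not already on your PATH.)
mkfs.ext2 -F -q scratch.img

# Sanity check: the ext2 superblock magic 0xEF53 (little-endian)
# sits at byte offset 1080 (superblock at 1024, s_magic at 56).
dd if=scratch.img bs=1 skip=1080 count=2 2>/dev/null | od -An -tx1
```

On a full-sized drive you would likely use mkfs.ext3 instead to get journaling; on an image this tiny, mke2fs would skip the journal anyway.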

While the debates carry on over what can be done to make Linux more feasible in the desktop market (in other words desirable enough that average users say “I want that!”) the one argument that seems to rise to the top is eye candy. Does it affect how an OS works? No. Does it change the way programs behave? Maybe superficially. Does it change the way users interact with the OS and the programs? You bet!

I had a chance to play with Kororaa, a Gentoo-based live CD with AIGLX/Xgl and a great install-to-disk tool. And while Xgl is not quite ready for prime time (I encountered a couple of crashes where xdm would completely exit and restart) it is getting close. And the eye candy features (adjustable transparency on windows, the rotating cube desktop, the “liquid-ish” movement of the windows) add a certain amount of “ooh factor.” But the things I found myself using most were three very handy tools: [Ctrl+Shift+Alt + left/right arrow] to rotate the desktop cube with the active window following, the hot-corner to display all the open windows as tiles, and [Ctrl+Alt+PgDown] to “flatten” the cube, allowing you to see all the sides at once and use the arrow keys to select one of the desktops to switch to. While many will consider this to still be nothing more than eye candy, I found it so utile that I am (a little too) eagerly awaiting the next Xgl implementation.

So who, besides me, thinks that these are as useful as they are eye candy-ish? Well, Apple, for one. They already have the hot-corner to display all the open windows, the ability to show all the open windows of one application, and (with Parallels at least – and rumor has it in the next OS X version) the cube concept of multiple desktops.

Anyone who has ever had need of bootable recovery tools knows what a pain it is to try to build a bootable CD containing all the needed tools. Why do it all the hard way? There is a very handy one already built and ready for download at Ultimate Boot CD. This is a Linux-based live CD with lots of Linux tools. There is a Windows-based version Ultimate Boot CD for Windows as well. While the Linux-based version comes with its own kernel, and allows for adding modules (available at SourceForge) the Windows version requires that you have your own WindowsXP CD with SP1 (and preferably 2) – although they also have a utility to help you slipstream the service packs if your disk doesn’t have them.

If, like me, you spend a lot of time on SourceForge and wish you could harness the SF functionality in your own development environment, then this is for you. SourceForge has released SourceForge Enterprise Edition 4.3.

This is a VMWare Virtual Appliance that allows for up to 15 free users. I haven’t put it to use yet, but I will be implementing a test of it (hopefully) sometime this summer at work to see how well it meets our development, project and bug-tracking needs.

If it is all it promises to be this may be one of the most useful tools for small-team distributed development ever.

An article at reallylinux.com points out the issue of Linux user elitism and snobbery – which seems to be putting some Windows users off of trying Linux.

I know it is something I’ve said before (although I’m not sure I have said it here) but it bears repeating: we all start out as n00bs, but somewhere along the way we each had some kind of assistance to get where we are. It is only if we are willing to share what we know that we can spread our knowledge for the “greater good.” Besides, if you don’t tell anyone what you know, you get no claim to being 1337.

Well, I tried the gtk+ based graphical installer on a VMWare virtual machine. I am sad to say it failed painfully – and did so after many hours of emerging and compiling. Part of the problem was that I had selected enlightenment, fluxbox and blackbox (to play around with some different wm’s I hadn’t messed with previously), and the installer chose to install those plus gnome and kde. Needless to say, it had many hours of work ahead of it. Thankfully (?) after about 4 hours the installer failed on some ebuild or other (I don’t recall which) and that was that.

I tried it again. With the exact same results. The definition of insanity: doing the same thing over and over and expecting different results. So, now I have a working VMWare install of Gentoo 2006.0 using the minimal install disk and am building enlightenment, fluxbox and blackbox the “older fashioned” Gentoo way – via a simple emerge call.