I remember reading a funny story about a group of university sysadmins who "lost" a system
on their network. At first no one noticed it was missing, but after a while they realized
that it had been up for an insanely long time. As it hummed away in some unknown location,
people continued to log into it and jobs continued to run, and while it became something of
a support dilemma for the group, it didn't seem to require any physical maintenance, so
they just left it in production :-). Turns out, the system had been accidentally sealed
behind drywall somewhere among the many forgotten corridors of their CS building. They
finally stumbled across it one day when someone noticed an Ethernet cable running out of a
wall.

In the grand scheme of things, uptimes aren't really that big a deal; these days anybody
can set up a Linux box, sit it in a corner, and do nothing with it except watch its uptime
creep upwards and upwards. But for some reason it's still exciting to see a vital box stay
up for a long time. It takes a combination of factors: hardware reliability, OS stability,
good OS security, local power reliability, and maintainer patience (i.e., not being the
Linux-kernel-of-the-day type). My primary server at home, sumatra, runs
good old, rock-solid FreeBSD. It provides SMTP (Postfix), IMAP (Courier), NFS, NIS,
Samba, DNS, syslog, and SSH access for me. In any case, 401 days is a personal record, so
here it is: