Today I’m updating my webserver, which hosts this blog as well as a couple of other sites, to Maverick Meerkat – the Ubuntu 10.10 release, released on the 10th of October. Here’s how the distributors describe it:

Some time ago a group of hyper-intelligent pan dimensional beings
decided to finally answer the great question of Life, The Universe and
Everything. To this end, a small band of these Debians built an
incredibly powerful distribution, Ubuntu. After this great computer
programme had run (a very quick 3 million minutes…or 6 years) the
answer was announced. The Ultimate answer to Life, the Universe and
Everything is…42, and in its purest form 101010. […]

On my desktop(s) I’ve been running the beta/release candidate for a couple of weeks with only minor troubles. Nothing like when the boot procedure was switched to Upstart (in 8.x or 9.x, something like that?).

In any case, the update finished while I was writing this post; the server has rebooted and no problems have arisen yet. Not even a repeat of the Apache and php-cgi trouble I had with my latest distribution update.

On September 28, 2010, members of the OpenOffice.org project formed a new group called The Document Foundation and made available a rebranded fork of OpenOffice.org provisionally named “LibreOffice”. The Foundation stated that it will coordinate and oversee the development of LibreOffice.

My fear in this ordeal is that Oracle sticks to its hardcore innovation-killing plans and does not support TDF. By keeping the trademark for OpenOffice.org it may keep a marketing advantage, and thus cause great confusion for users around the world who seek the license/patent-free desktop office suite.

A well-remembered fork of free software in recent times was the X.Org Foundation breaking free from XFree86. XFree86 and X.Org are both implementations of the X11 protocol, which is responsible for giving your Unix/Linux software a way to display graphical user interfaces. The ordinary end-user doesn’t care how that’s done, and thus the transition went smoothly; Wikipedia sums it up along those lines.

As mentioned, this was a fork that never came to affect the end-user in terms of choice, installation or distribution. People who used computers never had to consider which X11 implementation to use, and in any case just about every developer hopped on the X.Org train to ensure future compatibility and the possibility of continued development as a free software community.

With OpenOffice.org it’s different. Much different, in fact, because end-users have gotten used to the name. Users of OpenOffice.org believe that it’s the alternative to Microsoft Office and that anything new – despite sharing the same codebase – is written from scratch. What I mean is the old marketing trick of “we made this first”, despite the fact that TDF consists of OOo old-timers.

Oracle may also – and this is different from the XFree86 case – employ enough programmers to keep up with LibreOffice’s features, under a possibly (likely) proprietary future license. This would hurt LibreOffice’s ability to compete, given its lack of an established name with the end-users.

On the opposite side of the table, Oracle may very well stop distributing OpenOffice.org altogether. This would cause headlines which might scare people into using some proprietary “future-safe” office suite. It would make it easier for LibreOffice to establish itself, but it would be a big slap of FUD on the entire free software community.

For the reasons above, and for the sake of everything free software has uniquely contributed to our world, Oracle should definitely donate the OpenOffice.org trademark to The Document Foundation. Anything else will, at least temporarily, hurt the distribution and support of the most competent open source, free software office suite.

My reason to believe that TDF will succeed in moving users to LibreOffice, however, is simply its list of supporters, including Novell, Red Hat, Canonical and Google. With LibreOffice as the default office suite in the most widely used GNU/Linux distributions, at least free software users won’t be led astray.

Update 2010-10-08: I noticed that StarOffice, OOo’s proprietary soulsucker, has changed its name to – wait for it – Oracle Open Office. I think it’s quite clear now what will happen to OpenOffice.org. I didn’t know this for certain until now, but I guess the founders of The Document Foundation did…

So, not very impressive performance-wise, or storage-wise for that matter. And only the standard free & Free software components. Rather ordinary and pretty much out of the box.

What might be a bit out of the ordinary is mod_perl, which enables me to keep a MySQL backend for the vhost configurations: it dynamically adds configuration snippets when Apache loads its configuration. Using my knowledge in Perl, it’s quite easy to make advanced, unique configurations without very much administration and file handling. The only problem is that I have to rely on the MySQL database being up and running… but then again, most sites require a database anyhow, and a fail-safe system simply seems like overkill.
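
For the curious, here’s a minimal sketch of what such a setup can look like (with libapache2-mod-perl2 loaded). The database name, credentials, table and columns below are made up for illustration – the point is just that a mod_perl <Perl> section can build VirtualHost blocks from query results while Apache reads its configuration:

    # In the Apache configuration – mod_perl evaluates <Perl> sections
    # at configuration load time.
    <Perl>
        use DBI;

        # Hypothetical schema: a `vhosts` table with servername/docroot columns.
        my $dbh = DBI->connect('dbi:mysql:database=apache', 'apacheconf', 'secret')
            or die "vhost database unavailable: $DBI::errstr";

        my $sth = $dbh->prepare('SELECT servername, docroot FROM vhosts');
        $sth->execute;

        # Every hashref pushed here becomes a <VirtualHost *:80> block.
        while (my ($name, $root) = $sth->fetchrow_array) {
            push @{ $VirtualHost{'*:80'} }, {
                ServerName   => $name,
                DocumentRoot => $root,
            };
        }
        $dbh->disconnect;
    </Perl>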

Anyhow, I mentioned mpm-worker too, which leads me to the topic of this post – my webserver update checklist. I’m a rather strong opponent of dumbification and prefer that tasks seldom be automated. Certain irrelevant or unnecessary tasks may very well be automated (such as my mod_perl configuration). You see, assuming and guessing has never been a machine’s strong side. Brute force and tediousness for all it’s worth, but – for the love of Ada – not guessing!

Assumption, I guess (…hah…), led apt to reinstall libapache2-mod-php5 and, in turn, mpm-prefork. Without asking. This of course overrode the suEXEC + php-cgi setup, causing all my previously suEXEC’d websites to run as user ‘www-data’, effectively blocking write capabilities for vhosts (chown/chmod stuff) in their respective folders and opening up all vhosts to be readable by any PHP script running on any other vhost. Pöh.

After disabling libapache2-mod-php5 and regaining some sense of security, I was content for a while. With the minor amount of visitors that my sites and the other hosted sites get, I didn’t notice any further problems until the next morning, when sites were inaccessible or slow and the load average was pacing at about 50. Fortunately this was something I had learned to configure after the noticeable downgrade I had to do when my office was raided. At that point I could at least keep the same machine, but with only half the amount of RAM, and I switched back to Apache2 from lighttpd. That led to many hours of configuring and tweaking, and the following conclusion:

Apache2 has the default setting to use libapache2-mod-php5. Suck my balls. Sure, it’s fast, easy to configure and works for your average Joe user, but it’s a severe security hazard for anyone hosting for a third party. Also, libapache2-mod-php5 seems (based on the apt package rules) to be incompatible with mpm-worker – mod_php isn’t thread-safe, so it requires the non-threaded prefork MPM – causing apt to uninstall mpm-worker and replace it with mpm-prefork. And for those of you who haven’t tried the two under heavy load (or even just as a hobby), I can tell you the performance increase with mpm-worker is huge. Heck, it can even make you accept Apache2 over lighttpd without having to choose too harshly between configurability and speed.
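
For reference, the kind of per-vhost setup that the reinstall silently broke looks roughly like this. A sketch only – the site name, user and wrapper path are placeholders, not my actual configuration:

    <VirtualHost *:80>
        ServerName   example.org
        DocumentRoot /var/www/example.org/htdocs

        # Run the site's CGI/FastCGI processes as its owner, not www-data.
        SuexecUserGroup siteowner siteowner

        <Directory /var/www/example.org/htdocs>
            Options +ExecCGI
            AddHandler fcgid-script .php
            # The wrapper is a tiny script that exec's /usr/bin/php-cgi;
            # suEXEC requires it to live under the suexec docroot and be
            # owned by the vhost user.
            FcgidWrapper /var/www/example.org/php-wrapper .php
        </Directory>
    </VirtualHost>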

So my personal confusion lies in why Ubuntu’s (and Debian’s?) repositories force back a lousy, insecure configuration when doing apt dist-upgrade. I agree that it’s partly my fault for not inspecting the package lists better or, for that matter, double-checking.

In any case, my webserver update checklist now has “reinstall mpm-worker if removed” (which happily removes both mpm-prefork and libapache2-mod-php5) right next to “ignore any php.ini updates” and “MySQL is best the way it is, with my own my.cnf”.
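
Spelled out as commands, that first checklist item boils down to something like the following (package and module names as in Maverick’s repositories – adjust to taste):

    # Put the threaded MPM back; apt removes mpm-prefork and
    # libapache2-mod-php5 in the same transaction.
    sudo apt-get install apache2-mpm-worker

    # Make sure the suEXEC + php-cgi path is active again, then restart.
    sudo a2enmod fcgid suexec
    sudo apache2ctl configtest && sudo /etc/init.d/apache2 restart

    # Optionally pin mod_php away so a dist-upgrade can't sneak it back in,
    # via a snippet in /etc/apt/preferences (negative priority = never install):
    #   Package: libapache2-mod-php5
    #   Pin: release *
    #   Pin-Priority: -1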

PS. Unfortunately, mod_fcgid appears to have a new bug since the v2.2 that Karmic used (Lucid uses v2.3.4 as of writing). Any file larger than the value of FcgidMaxRequestInMem may be corrupted on upload (and has been, in all of my cases). It’s been a long time since I had this much trouble with an upgrade, other people’s Windows-related stuff aside. A workaround is to set the FcgidMaxRequestInMem value higher; the problem is properly fixed in mod_fcgid v2.3.5, and the fix has already been released in the libapache2-mod-fcgid package, version 1:2.3.4-2ubuntu0.1.
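
The workaround itself is a single directive in the Apache configuration. The figure below is just an example – set it above the largest upload you expect (the default is 64KB, which is why most uploads hit the bug):

    <IfModule mod_fcgid.c>
        # Buffer whole request bodies in memory instead of hitting the
        # spill-to-disk path, where the corruption apparently occurs.
        FcgidMaxRequestInMem 16777216
    </IfModule>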

Wouldn’t it be awesome if open source software were the de facto standard in state-funded organisations? Not only because of costs and easy licensing, but mostly for the general idea of an open and free infrastructure – something especially necessary within information technology, and principally even more so within government-related work. Transparency is a keyword for trust.

So I thought yesterday, while fiddling around with a school laptop: “wouldn’t it be neat to run Ubuntu on these?” The laptop I played around with was a Lenovo ThinkPad 7440-something, running the official Umeå school-configured Windows XP install with access to a heavily filtered wireless network and so on. Interestingly enough, the machine also had a Vista Basic license tag with a CD key underneath… (have they paid for Vista Basic licensing as well?)

To run Ubuntu you have to be able to boot from a USB key, or install it some other way, for example with Wubi. Booting is practically impossible since Lenovo has delivered the laptops with TPM chips, and thus you can’t select another boot device without the correct password. And unfortunately you can’t merely reset the CMOS… When installing with Wubi there was a random error I didn’t bother looking into more closely. Instead I figured it might be more fun to actually install it with a legitimate reason and official support from the schools…

There are some hardware difficulties which are easy enough to manage. Either your computer stops or reboots every once in a while – check the PSU or for faulty memory. Or maybe your harddrive is making the click of death. It might even be as obvious as a bulging, leaking capacitor on the motherboard.

But when you start noticing new ways for a computer to silently fail, you’re in for an interesting – though frustrating – night without sleep.

The other night the server for Asian DVD Club stopped responding. Sure enough, I hopped on my bike and got to the server, only to see that it still reacted to Num Lock switching but gave no VGA output.

Bah! Humbug.

Having gone through a memory check, a PSU change, a live USB boot and even a motherboard switch, I had narrowed it down to the harddrives. Granted, I had my suspicions to start with, because the disk activity LED stayed lit whenever the computer crashed or stopped responding.

But there were no DMA errors. S.M.A.R.T. didn’t notice anything peculiar. The drive was as fast as usual and wouldn’t immediately crash under high load. The server _seemed_ to be running OK (for my tests) when I unplugged the disks from my secondary IDE controller – but lo, I was fooled. I just hadn’t tried hard enough.

A couple of hours later I ran dd if=/dev/sda of=/dev/sdb to clone the old drive (fortunately only a 20GB system drive) onto a different one and popped it in. Smeg me sideways, it “didn’t work”. I booted from the new disk and everything, even fscked all the filesystems.
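
In shell terms the whole operation was roughly this – verification steps included for completeness, device names as in my setup (double-check yours before pointing dd at anything):

    # Clone the old 20GB system drive onto the replacement disk.
    # conv=noerror,sync keeps going past read errors on a suspect source.
    sudo dd if=/dev/sda of=/dev/sdb bs=4M conv=noerror,sync

    # Make the kernel re-read the copied partition table,
    # then force-check the root filesystem on the clone.
    sudo blockdev --rereadpt /dev/sdb
    sudo fsck -f /dev/sdb1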

So I was confused. I took my mind off things and came back after a while. What I didn’t think could be a problem apparently was: the old system drive was connected to the secondary IDE controller, the new system disk to the primary. Then why on Earth would Linux freeze up when accessing the old drive?

Sigh. By this time I had already reinstalled Ubuntu Server 9.04 (Jaunty) to make sure it wasn’t ReiserFS spooking about. Now everything is ext4 and cleanly installed – a.k.a. barely configured.

And it all seems to be working fine. The only problem I actually had was a harddrive that would lock up the IDE controller (both of them, even!) despite seeming perfectly healthy.