So, after some time away from IT, I’m mucking around in it again to see what seems interesting.

Got a pointer to Docker from a friend and have been up to my ears in it ever since. It’s in a similar vein to the stuff I got to play with on Nebula at NASA, in the sense that it’s a PaaS-enabling environment. It uses thinly provisioned servers (containers) backed by AUFS (with support for a few other storage backends), so every operation during the image build process lands in a new, thin, copy-on-write (COW) layer.

This gives great advantages for caching and speeding up the build process. Say you have 2 images that share the same common base build (e.g. an Apache server), but one application needs PHP and the other needs Ruby … build 2 different images from the same base, and instead of wasting 2x the disk space and build time, you use 1x plus the PHP difference in one image and the Ruby difference in the other, and the build time is severely reduced!
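To make that concrete, here’s a rough sketch (the geekmush/apache-base image and the package names are hypothetical, just for illustration). Both Dockerfiles start FROM the same base, so Docker builds and stores those common layers exactly once:

# two hypothetical Dockerfiles sharing one Apache base image
mkdir php ruby

cat > php/Dockerfile <<'EOF'
FROM geekmush/apache-base
RUN apt-get update && apt-get install -y libapache2-mod-php5
EOF

cat > ruby/Dockerfile <<'EOF'
FROM geekmush/apache-base
RUN apt-get update && apt-get install -y ruby
EOF

# the base layers are shared; each image only adds its own diff on top
docker build -t geekmush/apache-php php
docker build -t geekmush/apache-ruby ruby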

Need 5x Apache+PHP servers? Just run:

docker run -d -t geekmush/apache-php

5x, or as many as you want. There are lots of other options, and Docker is being developed actively … and they have some great staff and supporters on #docker@irc.freenode.net!
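And if you’d rather not paste that command 5 times, a trivial shell loop does the trick (just a sketch):

# start 5 detached containers from the same image
for i in 1 2 3 4 5; do
  docker run -d -t geekmush/apache-php
done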

It’s not worth trying to describe Docker here when they can do a much better job … go check out their website and the Googles!

I’m going to use my space for notes and tips and tricks that I’ve run into during my adventures this week.

Background

Normally, we run our servers on RAID1, adding a pair of disks at a time. Since we also use DRBD on top of our LVM LVs, we have 3 servers in play (3-way DRBD replication), which means adding new disks in groups of 6. That gets sort of spendy.

So, with our new servers, we are looking into switching from the RAID1s (grouped into a single VG) to a single RAID5 under the LVM PV (still a single VG). Linux RAID5s can be expanded on the fly, which lets us grow the data disk by adding only 1 disk at a time (3 across the 3 servers). The bulk of the application servers are not really I/O intensive, so we’re not too worried about the RAID5 performance hit.
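For reference, here’s a sketch of the grow operation we’re testing; the device names (/dev/md0 for the array, /dev/sde1 for the new disk’s partition) are illustrative, not our actual layout:

# add the new partition as a spare, then reshape from 3 to 4 active devices
mdadm --add /dev/md0 /dev/sde1
mdadm --grow /dev/md0 --raid-devices=4

# after the reshape completes, let LVM see the extra space
pvresize /dev/md0

(Depending on the mdadm version, the reshape may also want a --backup-file.)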

Here’s the setup on our test of this process:

Ubuntu 8.04 LTS Server

AMD 64 x2 5600+ with 8GB

3 x 1TB SATA2 drives, 2 x 750GB SATA2 drives

Xen’ified 2.6.18 kernel with Xen 3.3.1-rc4

Linux RAID, LVM2, etc, etc.

We configured the data disks with 2GB partitions, just so the sync doesn’t take *forever*.

In another test, growing 3 x 200GB partitions by 1 more 200GB partition took around 150 minutes on our system, so the reshaping process is not super speedy. Even though our test showed that you can still perform I/O against the back-end data store (the RAID5) while it is reshaping, it’s probably best to keep I/O to a minimum.

UPDATE: We repeated the test with 3 x 700GB partitions and added a 4th 700GB partition — reshaping time took about 8.5h with no external I/O performed to the LVM/RAID5 device.
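If you want to keep an eye on a reshape, /proc/mdstat shows progress and an ETA, and the md speed-limit sysctls can be nudged; the 50000 KB/s value below is just an example, not a recommendation:

# watch the reshape progress
watch cat /proc/mdstat

# optionally raise the per-device rebuild/reshape speed floor (in KB/s)
sysctl -w dev.raid.speed_limit_min=50000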

If I don’t start blogging some of the stuff I’m doing, I’ll never remember how I did it in the first place! So, time to come out from the darkness and see if I can keep up the discipline to post more.

Problem: One of our servers has a Linux md RAID1 device (md1) that kept falling out of sync, especially after a reboot. Nearing the end of the re-sync, it would pause, fail, and start over, with /var/log/messages filling up with I/O errors on sda.

Then, the RAID1 re-sync would start all over again … only to fail a few hours later … and again.

Since there appeared to be some bad spots near the end of sda, the solution seemed to be to shrink the partition a bit to steer clear of them.

Our md RAID1 (md1) consists of 2 x 694GB partitions, sda3 and sdb3. On top of md1 lives an LVM PV, VG, and lots of LVs. The PV is not 100% allocated, so we have some wiggle room to shrink everything.

Here’s the procedure we followed:

Shrink the LVM PV

“pvdisplay” showed the physical volume had 693.98 GB, so we just trimmed it down to an even 693GB.
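In command form, that step is roughly this (a sketch, assuming the PV sits directly on /dev/md1):

# check the current PV size, then shrink the PV itself to 693GB
pvdisplay /dev/md1
pvresize --setphysicalvolumesize 693G /dev/md1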

Shrink the md array

Per “mdadm --detail /dev/md1” … hey, what a coincidence … the Array Size is the same as the PV was! 🙂 Now, we need to calculate the new size. The new size of the PV is 1453325952 sectors, which is 726662976 1K blocks (even *you* can divide by 2!!).
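The array shrink itself is then one command; mdadm’s --size is in 1K blocks, which is why we divided the sector count by 2 (sketch):

# shrink md1's component size to 726662976 KiB (our even 693GB)
mdadm --grow /dev/md1 --size=726662976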

Shrink the partition

Now, technically, we should shrink the partition, too. Here’s where I ran into a bit of trouble (ok, I’m too lazy to do the math). “fdisk” on Linux doesn’t seem to want to let you specify a size in blocks or sectors, so you have to keep shrinking the ending cylinder number until you get in the range of the new block size. I’ll leave this as an exercise for the reader, and feel free to post a comment with the actual procedure. 🙂
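For what it’s worth, one possible route (an untested sketch): some fdisk builds accept sector units via -u, which sidesteps the cylinder math entirely:

# work in sectors instead of cylinders
fdisk -u /dev/sda
# inside fdisk: p (note sda3's starting sector), d then 3 (delete it),
# n (recreate with the same start and an end >= start + 1453325952 - 1), w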

After these steps were completed, we were able to perform the RAID1 re-sync successfully.

OK, I don’t know what’s going on with Firefox (currently at 2.0.0.4), but it has been sucking ass lately … and I’m a huge Mozilla fan.

Symptoms include:

Rapidly climbing to >1GB RAM utilization

Sucking up a full CPU … sometimes randomly

Generally hanging all the time

I’m hoping it’s something to do with the Web 2.0-type stuff that more and more sites are using (namely Google, Google Apps, etc) … that, combined with some good old-fashioned memory leaks in the browser and the add-ons I use.

A lot of the sites now auto-refresh, and the ones that don’t but need refreshing, I handle through Tab Mix Plus. I’m sure all of this page reloading is amplifying the memory leaks.

In conversation, aside from “Get a Mac”, someone commented that I should give Apple’s Safari Beta 3 for Windoze a shot … so I just grabbed and installed it.

So far, it appears to be a real Windoze app because now I must reboot. See you later!