In this part I will create four GlusterFS nodes that will stripe files across each server. GlusterFS defines minimum hardware requirements of 1 GB of RAM and 8 GB of disk space. Soooo… about those requirements: I'll add a new 8 GB disk, but not the memory. I like a challenge; 64 megs ought to be enough (for now).

So just copy the base system and rename it to 'glusternode1'. Add a new 8 GB (or larger) disk to it. Again IDE! Then boot the machine.

Remember the ip script? Time to put it to some use. The first thing I like to do is assign a static IP so I can use PuTTY to connect to the machine. After boot, just type:

/home/base/ip.sh glusternode1 192.168.1 221 1

Reboot the machine and voilà. For the gluster nodes I intend to use the following IPs:

I noticed that it seems impossible to obtain a PID for this executable. Luckily GlusterFS can create its own PID file with the '--pid-file' option. So before it can be used as a startup daemon, you need to pass this as an optional parameter. These parameters are found in daemons.conf.
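On SliTaz, extra options for startup daemons live in /etc/daemons.conf, so that is where the PID file option ends up. A minimal sketch of what this could look like (the variable name and the PID file path are my assumptions, not taken from the post):

```shell
# Fragment for /etc/daemons.conf -- variable name and path are assumed.
GLUSTERD_OPTIONS="--pid-file=/var/run/glusterd.pid"

# Starting glusterd by hand with the same option:
glusterd --pid-file=/var/run/glusterd.pid
```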

Also this first start generates the ‘/var/lib/glusterd’ which we need.

To identify each gluster node, every node has to have its own unique UUID. If we simply copy glusternode1 three times, each server will report the same UUID, causing gluster to think all four machines are localhost!
This UUID can be found in the file '/var/lib/glusterd/glusterd.info'. Let's assign a random one to 'glusternode1'.

echo UUID=`uuidgen -r` > /var/lib/glusterd/glusterd.info

Finished with glusternode1! Phew, three to go. Shut down glusternode1 (tip: halt) and copy it three times (glusternode2, 3 and 4).

Now let's start assigning the IPs and random UUIDs, beginning with the last node (to avoid IP conflicts).
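For each clone this boils down to two commands, run on the node itself. Here is glusternode4 as an example; the .224 host octet is an assumption on my part, based on glusternode1 getting 221:

```shell
# Run ON glusternode4 itself (repeat with adjusted values on nodes 3 and 2).
# The 224 host octet is assumed, following glusternode1's 221.
/home/base/ip.sh glusternode4 192.168.1 224 1

# Give the clone a fresh identity, so gluster won't see four copies
# of the same UUID:
echo UUID=`uuidgen -r` > /var/lib/glusterd/glusterd.info
```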

Of course, one of the drawbacks of using such a small amount of RAM (64 MB) is that gluster will choke on itself trying to load. By default glusterfs tries to use 64 MB of cache, which is too much for the machine to handle, so we'll need to tweak the cache-size. I tested a few values and 16 MB seems to work best. (4 MB and 8 MB are too small.)

gluster volume set slitaz-volume cache-size 16MB

So normally everything should be OK now, and a 'gluster volume info' command should succeed.
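For completeness, a sketch of how the striped volume itself could be put together from glusternode1. The brick path /export/brick and the .221-.224 addresses are assumptions; only the volume name slitaz-volume comes from the cache-size command above:

```shell
# Run on glusternode1. Addresses and the brick path are assumed.
gluster peer probe 192.168.1.222
gluster peer probe 192.168.1.223
gluster peer probe 192.168.1.224

# One brick per node, striped across all four servers:
gluster volume create slitaz-volume stripe 4 \
    192.168.1.221:/export/brick 192.168.1.222:/export/brick \
    192.168.1.223:/export/brick 192.168.1.224:/export/brick

gluster volume set slitaz-volume cache-size 16MB
gluster volume start slitaz-volume
gluster volume info slitaz-volume
```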