Tuesday, February 5, 2013

DRBD mirrored disk

So today I'm going to talk about how to set up replication between two servers. In my case I've created an LVM logical volume on top of a RAID10 array on both servers, and now I want to use DRBD to keep the block devices synchronized and therefore protect me against a server failure.

We start by installing the DRBD packages. I've found it easiest to use the elrepo repository: http://elrepo.org/ (follow the instructions there to set up the repository in yum).

You then install the utilities and the kernel module. If your kernel is recent enough, the kmod package may not be necessary; check the DRBD website.

yum install -y drbd84-utils kmod-drbd84

Next up we need to configure the DRBD resource that we want to mirror. In our case we want to keep /home mirrored, so we create a new resource file in /etc/drbd.d/ called home.res.
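A DRBD 8.4 resource file for this layout looks roughly like the sketch below. The hostnames (node1, node2), IP addresses and port here are placeholders, not the values from my setup; replace them with your own, and note that the names in the "on" blocks must match each server's actual hostname (uname -n):

```
resource home {
  device    /dev/drbd1;
  disk      /dev/vg0/home;
  meta-disk internal;

  on node1 {
    address 10.0.0.1:7789;
  }
  on node2 {
    address 10.0.0.2:7789;
  }
}
```

The same file goes on both servers; DRBD picks the right "on" block by matching the hostname.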

What it says is that DRBD will use the local device /dev/vg0/home and expose a new block device, /dev/drbd1, which is what you should use from then on. In addition we ask for internal metadata, which means the metadata is kept on the same device as the data. This, however, means that you will have to re-create the filesystem on the device. If you want to create a DRBD mirror of an already existing filesystem, you have to use external metadata; I suggest you consult Chapter 17, DRBD Internals, on the DRBD website, which discusses metadata usage. There are also downsides to keeping metadata on the same block device when that device is a single disk rather than a RAID volume, so take this into consideration.

So the above configuration describes a DRBD resource called home with two servers, each given its IP and the port on which to talk to the other host. A separate heartbeat channel is preferred if possible, but it works quite well over a single network interface that is also used for other things.

Now that we have described the resource, we have to enable it for the first time. Choose one of the nodes and ask DRBD to create the resource's metadata:

# drbdadm create-md home

As we use internal metadata, it will warn you that you are possibly destroying existing data; and indeed you are, if anything was previously stored on the device. In that case you had better consult the DRBD manual on setting up with external metadata.

Now that we have the metadata created, we can bring the resource up. For that we first load the drbd kernel module and then ask for the resource to be brought up:
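The usual sequence with the drbdadm tool is sketched below; the resource name "home" matches the file above, and the primary promotion is what kicks off the initial sync:

```
# on both nodes: load the kernel module and bring the resource up
modprobe drbd
drbdadm up home

# on ONE node only: promote it to primary, forcing the initial sync
drbdadm primary --force home
```

Run the first two commands on both servers before promoting either one, otherwise the resource has no peer to sync with.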

You can now already create a filesystem on the new device, mount it and start using it, but remember that the initial sync is still running, so performance may be degraded until it completes.
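For example, assuming ext4 and that this node holds the primary role (the filesystem and mount point are your choice; on DRBD 8.4 the sync progress is visible in /proc/drbd):

```
mkfs.ext4 /dev/drbd1
mount /dev/drbd1 /home
cat /proc/drbd        # shows connection state, roles and sync progress
```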

What I have not covered here are the default settings and what they mean. I recommend reading up on the DRBD website, but in short: the default protocol that DRBD uses makes writes synchronous. While the devices are connected, any write to the primary device also initiates a write on the secondary, and the OK is returned only once both have completed. Writes can be made asynchronous if needed, but that requires changing the protocol. One can also set up an active-active (i.e. primary/primary) configuration allowing both devices to be used at the same time, but caution should be observed in such setups to avoid split-brain situations. In our case we want an active-passive configuration with NFS server failover, but that we will discuss in a separate post.
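As an illustrative sketch (not from my actual configuration), both of these knobs live in the net section of the resource file in DRBD 8.4; the defaults are usually what you want:

```
resource home {
  net {
    protocol A;            # asynchronous replication; the default is C (fully synchronous)
    # allow-two-primaries; # needed for active-active; beware of split brain
  }
  # device, disk and host sections as before
}
```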