my digital notebook


So you have a failed disk in a ZFS pool and you want to fix it? Routine disk failures are really a non-event with ZFS because the volume management makes replacing them so dang easy. In many cases, unlike hardware RAID or older volume management solutions, the replacement disk doesn’t even need to be exactly the same as the original. So let’s get started replacing our failed disk. These instructions will be for a Solaris 10 system, so a few of the particulars related to unconfiguring the disk and device paths will vary with different flavors of UNIX.

First, take a look at the zpools to see if there are any errors. The -x flag will only display status for pools that are exhibiting errors or are otherwise unavailable.
Note: If the disk is actively failing (a process that can take a while as the OS offlines it), any commands that make storage-related system calls will hang for a long time before returning. These include “zpool” and “format”, so be patient; they will eventually return.
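The check and an illustrative slice of its output might look like the following (the pool name and messages here are made up for illustration; yours will differ):

# zpool status -x
  pool: mypool
 state: DEGRADED
status: One or more devices could not be opened.
action: Attach the missing device and online it using 'zpool online'.

Without -x, “zpool status” prints the state of every pool, healthy or not.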

Get your hands on a replacement disk that is as similar as possible to a SEAGATE-ST914602SSUN146G-0603-136.73GB. I was only able to dig up a HITACHI-H103014SCSUN146G-A2A8-136.73GB, so I’ll be using that instead of a direct replacement.

Next, use “cfgadm” to look at the disks you have and their configuration status:
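For example (the controller and Ap_Id values below are illustrative):

# cfgadm -al
Ap_Id                          Type         Receptacle   Occupant     Condition
c1                             scsi-bus     connected    configured   unknown
c1::dsk/c1t5d0                 disk         connected    configured   unknown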

We want to replace t5, so we prepare it for removal by unconfiguring it:

# cfgadm -c unconfigure c1::dsk/c1t5d0

The “safe to remove” LED should turn on, and you can pull the disk, remembering to allow it several seconds to spin down. Replace it with the new disk and take a look at “cfgadm -al” output again to ensure that it has been automatically configured. If it has not, you can configure it manually like below:

# cfgadm -c configure c1::dsk/c1t5d0

Now, it’s a simple matter of a quick “zpool replace” to get things rebuilding:
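Assuming the pool is named “mypool” (the original pool name isn’t shown here), the command is simply:

# zpool replace mypool c1t5d0

If the replacement sits at a different device path, pass it as a second argument, e.g. “zpool replace mypool c1t5d0 c2t0d0”. You can watch the resilver progress with “zpool status”.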

I wrote about how to replace a failed disk in a zpool, but never got around to writing up the process for a boot disk. In this case, I’m just creating a bootable mirror, but the process is much the same when replacing a disk: just treat /dev/rdsk/c3t4d0s0 as the replacement. You can follow the directions in http://spiralbound.net/blog/2012/01/09/how-to-replace-a-failed-drive-in-a-zfs-pool/ for the commands to replace the physical disk itself.

Copy the partition table from the working disk to the new disk:

# prtvtoc /dev/rdsk/c3t1d0s0 | fmthard -s - /dev/rdsk/c3t4d0s0

Attach the new disk to the root pool:

# zpool attach rpool c3t1d0s0 c3t4d0s0

Install the boot blocks on the new disk:

# installgrub /boot/grub/stage1 /boot/grub/stage2 /dev/rdsk/c3t4d0s0
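Finally, it’s worth verifying that the mirror is healthy and the resilver has completed:

# zpool status rpool

Once the resilver finishes, both c3t1d0s0 and c3t4d0s0 should show ONLINE under the mirror vdev.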

The most basic network troubleshooting trick in the book is a simple test to make sure that a daemon is listening on its respective port. This is easy with TCP because you can simply set up the daemon on the destination and telnet to the port. It’s harder for UDP because the protocol is connectionless: there is no ACK to tell you the port is open.
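One common approach, sketched below, is to use netcat on both ends (flag syntax varies between netcat implementations, and the port number here is just an example):

On the destination host, listen on a UDP port:

# nc -u -l 514

On the source host, send a test datagram:

# echo "test" | nc -u desthost 514

If the text appears on the listener, datagrams are getting through. Silence usually points to a firewall or routing problem, though with UDP you never get a positive refusal the way a TCP RST gives you.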

A while back Casey started complaining that his Drobo storage robot was no longer being awesome. This got me thinking about how easy it would be to build a nice ZFS storage appliance that provides massive storage, constant data protection, and self-healing against bit rot. I have wanted to build something like this for some time, but just never had the storage needs at home to justify it. Well, data needs grow, and my discussion with Casey while stumbling around Fry’s was all it took to get me moving.

What is ZFS? Well, put simply, ZFS is Jeff Bonwick and The Bonwick Youth’s answer to every filesystem annoyance the world has ever known. It is the pinnacle of human achievement in filesystem development, and quite honestly, the only commonly available storage option that will truly protect your data. Now, before you start leaving angry comments explaining how [insert RAID solution here] does a perfectly good job of protecting data, I’m not talking about RAID. I’m talking about leveraging copy-on-write transactions and checksums at the block level to ensure data integrity, and an implemented strategy for self-healing against bit rot, current spikes, bugs in disk firmware, ghost writes, etc. I’m also talking about a dead-simple, logical volume management layer and a wealth of features too numerous to list here.

Anyhow, I cobbled together the items listed below, installed OpenIndiana (an Illumos distribution) on it, and configured Netatalk. It works wonderfully, and I can’t say enough about how pleased I am with Illumos, and how happy I am to have an industrial-strength, feature-rich UNIX in the open source community.

PROTIP: If you want to use the Subversion features in BBEdit, and you also like using v1.7+ of svn, you have to point BBEdit away from the default svn location. Obviously this assumes that you have MacPorts installed and have used it to build and install the Subversion port.
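A sketch of the change, assuming MacPorts’ default prefix of /opt/local (the defaults key name is an assumption; verify it against Bare Bones’ expert-preferences documentation for your BBEdit version):

$ defaults write com.barebones.bbedit SVNToolPathOverride /opt/local/bin/svn

Restart BBEdit afterward so it picks up the new tool path.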

If you’ve spent any time at all around Solaris 10, you know that Sun invested a fair amount of effort developing a pretty snazzy Service Management Facility (SMF). It is extremely flexible and feature-rich, but it’s not quite as straightforward as the old legacy /etc/init.d scripts. If you’re running the OpenCSW Apache package, it installs a service manifest into the SMF, so you’ll have to edit this to run Apache with SSL… Here’s how:
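A hedged sketch of the SMF change follows. The service name and the httpd/ssl property mirror the pattern used by Sun’s bundled Apache manifest; the exact FMRI and property names for the OpenCSW package may differ, so confirm them first with “svcs -a | grep -i apache” and “svcprop”:

# svccfg -s cswapache2 setprop httpd/ssl = boolean: true
# svcadm refresh cswapache2
# svcadm restart cswapache2

The refresh pushes the edited property into the running snapshot; the restart makes Apache pick it up.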