I don't often ask for OpenBSD administrative advice, I usually provide it...whether or not it is correct advice.

One of my servers uses RAIDframe in a root-on-RAID configuration. I've had bad sectors on one of the drives multiple times, and with RAIDframe have had no unscheduled downtime. I've taken drives out of service, "repaired" bad sectors by having the drive replace them from spare, and placed them back into service, without rebooting. No sector failures in the last 18 months, though.

I use Softraid on my netbook, with the CRYPTO discipline, to encrypt /home at rest. I mount it by DUID so that it mounts regardless of sd # assignment. I like Softraid; it is easier to provision and manage than RAIDframe. But it is not quite ready for "production RAID" yet, as root-on-RAID is not yet possible. Altroot provides a method to recover from a failed root drive, but the root drive remains a single point of failure.
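For anyone curious, provisioning a CRYPTO volume is short. This is a sketch from memory, so check bioctl(8) and fstab(5) before use; the device names and the DUID below are only examples.

```shell
# Create a CRYPTO volume on partition sd1a (discipline "C");
# bioctl prompts for a passphrase and attaches a new sd device.
bioctl -c C -l /dev/sd1a softraid0

# newfs the attached pseudo-device (say it came up as sd2), then
# mount it by DUID in /etc/fstab so the sd number no longer matters:
#   3eb7f9da875cb9ee.a  /home  ffs  rw,nodev,nosuid  1  2
```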

That said, root-on-RAID is coming to -current. Soon. I am thinking about becoming an early adopter. As such, I want to know if I'll be in the minority, or among the majority.

An informal poll is attached to this thread, with questions for the community regarding your administration of software RAID implementations. If you administer any OpenBSD system, even if it's just a single workstation for personal use, please respond to the poll, whether or not you use any RAID implementations at all.

I'm also seeking advice from anyone who has gone down the path of softraid for RAID management (as opposed to CRYPTO) already, before root-on-RAID is available.

Thanks!

Edit: You may select multiple choices, if you manage multiple RAID environments.
Edit: please consider storage that is an externally managed RAID subsystem equivalent to BIOS-managed, when responding to the poll.

I would have replied, but I didn't know how. I used to use softraid, and in a couple of weeks I will be again (RAID 5). I found it good. I also wouldn't call myself an OpenBSD admin; I only administer my desktop and server. But to make your numbers look higher, I will put down softraid, because that's my norm when I need it.

Bumping to put an end to this thread. There were 10 voters who responded.

3 do not use RAID. They might use CARP instead, of course, but I am focused on storage redundancy, rather than system redundancy.

1 uses BIOS-managed RAID.

3 use bioctl(8) to manage RAID controllers.

1 used to use RAIDframe, and has already switched to Softraid without root-on-RAID capability.

3 people are using Softraid without having come from RAIDframe, and don't require root-on-RAID capability.

Two users responded that they were both softraid users and bioctl hardware RAID controller users, neither requiring root-on-RAID. I believe my poll was misunderstood, since bioctl(8) is used to manage both softraid disciplines and hardware RAID controllers. I assume the count of hardware RAID users in this poll is invalid.

I'm the only respondent to the poll who requires root-on-RAID. Perhaps, since no one but me has used it (for software RAID), I should explain its value:

Without root-on-RAID, the root partition must be located on a standard PATA, SATA, or SCSI device. In the event of a failure of that drive, the OS will come down with a panic or a hang. An /altroot partition, if established, can be used on system reboot, but that boot and usage require an on-site admin.
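For reference, the altroot mechanism is driven by daily(8). Roughly, from my recollection (verify the details against daily(8) and fstab(5); the device name is an example):

```shell
# /etc/fstab: an altroot partition on a second drive, with the
# special "xx" option so it is never mounted automatically:
#   /dev/sd1a  /altroot  ffs  xx  0  0

# /etc/daily.local: tell the nightly daily(8) run to copy the
# root filesystem onto the /altroot partition:
#   ROOTBACKUP=1
```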

Reviving this old thread to say that as of 19 September, root-on-RAID is now possible with softraid(4) on -current, and will be in 5.1-release next spring. I have tested, and will be converting my one RAIDframe server to softraid(4) tomorrow.
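If it helps anyone planning the same conversion: creating the mirror itself is a one-liner. The device names here are examples, and the partitions must be given fstype RAID in their disklabels first; check bioctl(8) for your snapshot.

```shell
# Mark sd0a and sd1a as fstype "RAID" in each disklabel, then
# assemble them into a softraid RAID 1 volume:
bioctl -c 1 -l /dev/sd0a,/dev/sd1a softraid0
# The volume attaches as a new sd device; disklabel and install
# (or restore) onto that device as usual.
```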

It operates in similar fashion to RAIDframe, requiring the kernel to boot from a non-RAID partition. In this implementation, if that partition is on a drive containing one of the RAID chunks, the kernel will use the volume's "a" partition as the root partition. This avoids one of the reasons RAIDframe required a custom kernel.

The bioctl(8) tool is certainly simpler to use than raidctl(8); there is no need for complex configuration files. RAIDframe was designed for testing complex RAID array designs rather than for production use.
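To illustrate the difference: RAIDframe needs a configuration file per array, while softraid needs none. A rough sketch follows; I am writing the raidctl(8) file format from memory, so treat the numbers and device names as illustrative only.

```shell
# RAIDframe: /etc/raid0.conf describing a two-disk RAID 1 array ...
#   START array
#   1 2 0
#   START disks
#   /dev/wd0e
#   /dev/wd1e
#   START layout
#   128 1 1 1
#   START queue
#   fifo 100
# ... then configure the array from that file:
raidctl -C /etc/raid0.conf raid0

# softraid: no configuration file at all:
bioctl -c 1 -l /dev/sd0a,/dev/sd1a softraid0
```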

Restoration after a chunk failure is simple. RAIDframe has many more options, including the ability to manage mirrors for data replication and instant recovery -- such as intentionally breaking mirrors and later rejoining them, possibly doing an instant recovery from a "saved" mirror -- but I no longer need those advanced features.
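For the simple case, rebuilding onto a replacement chunk is a single command. Again a sketch from memory against bioctl(8), with example device names:

```shell
# Check volume status (it will show as degraded after a chunk failure):
bioctl sd2

# Rebuild volume sd2 onto the replacement partition sd3a:
bioctl -R /dev/sd3a sd2
```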

For those who would like to try this out on their own hardware, note that this uses source code committed in mid-September. It is available now to -current users installing/updating from recent snapshots. For followers of -release and -stable, it will not be available until OpenBSD 5.1, which is tentatively scheduled for 1 May 2012.

Congratulations, jggimi! You are a true asset to the OpenBSD community!