SS4200 NAS Installation

...also applicable to any other x86 multi-drive SATA box that you want to use as a NAS running Ubuntu Server. This is an old article and the machines themselves are long since decommissioned, but the instructions are still relevant for other boxen. Check out the small HP Microservers for a more modern equivalent that doesn't require the dancing on the keyboard to get it started. I've always found Simon Webb at Servers Plus to give really good service - no connection, just a happy customer.

These instructions are to install a highly resilient RAID configuration on an Intel SS-4200 storage system. These things are cheap but highly capable NAS carcasses - £155 from eBuyer at the time of writing. If you can find one, get the SS-4200EHW variant: the only difference between the SS-4200E and SS-4200EHW is that the E comes with an IDE module containing the EDS software. That is pretty limited, IMHO, and the EHW is only £131. Running Ubuntu 9.10 Server on the hardware makes this thing a complete steal - a 4TB NAS with very sophisticated features for £370 is good value in anyone's book!

These are not step-by-step instructions - if you need that level of detail then you probably shouldn't be building a highly available RAID system; please go buy one of the many devices available on the open market.

I had a punt at booting from the IDE with a CF card adaptor (working with Gorgone at http://ss4200.homelinux.com), but the problem is that you've still got a single point of failure – not to mention that Linux doesn't recognise the IDE interface as DMA-capable, so performance is pretty poor at the moment.

These instructions will give you a system which can boot from any one of the four hard drives in the system (i.e. with 3 of 4 drives failed), although your data is long dead at that point. It will retain data with any one drive failed (and, in some cases, with two drives failed, since they're in RAID10 configuration). I used Samsung HD103SJ drives, which are nice and quiet and seem to run quite cool (average 35°C, max 45°C so far).

Booting into Installer

This is the only bit that's SS4200 specific. For most 'normal'
machines you won't have to bother with this serial boot stuff, so skip
directly to the next section.

The CD-ROM will now start booting. If it doesn't, switch off and restart. Note that the BIOS sometimes re-orders the boot sequence on reboot if you've not come all the way out of the BIOS to program control, so you might have to go and set it again.

The serial interface will now appear for setup; interact with the installer using the keyboard over the serial console.
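Any terminal emulator at 115200 8N1 will do for driving it; a minimal sketch, assuming a USB serial adaptor showing up as /dev/ttyUSB0 on the machine you're connecting from:

screen /dev/ttyUSB0 115200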

Configuring Disks for Reliability

For a 'normal' machine this is where you start reading....

My arrangement is four 1TB drives. Each drive is configured as follows (the numbers are the partition numbers - logical partitions start at 5):

1. 200MB Boot (RAID1, 200MB in total)

2. Extended partition containing:

5. 20GB Root (RAID10 f4, 20GB in total)

6. 730GB Storage (RAID10 f2, 1.56TB in total)

7. 195GB Scratch (not RAIDed, 780GB in total)

Come out of the installer at the 'Partition Disks' step by starting a shell, and fdisk the drives manually. Once the drives are fdisk'ed, they look like this:

   Device Boot      Start         End      Blocks   Id  System
/dev/sda1               1          26      208813   83  Linux
/dev/sda2              27      121601   976551187    5  Extended
/dev/sda5              27        2638    20980858   fd  Linux raid autodetect
/dev/sda6            2639       97934   765465088   fd  Linux raid autodetect
/dev/sda7           97935      121601   190105146   fd  Linux raid autodetect
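If you don't fancy typing the same fdisk dialogue four times, you can partition /dev/sda interactively and clone its table onto the other drives; a sketch, assuming sfdisk is available in the installer shell and all four drives are identical:

# dump sda's partition table and replay it onto sdb, sdc and sdd
for d in sdb sdc sdd; do
  sfdisk -d /dev/sda | sfdisk /dev/$d
done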

Create the /boot RAID1:

mdadm --create /dev/md0 -c 256 -n4 -l 1 /dev/sd[abcd]1

Create the system RAID10 volume:

mdadm --create /dev/md1 -c 256 -n4 -l 10 -p f4 /dev/sd[abcd]5

(This is created with -p f4 to create 4
far copies – 3 drives may fail and this system will still boot!)

Create the storage RAID10 volume:

mdadm --create /dev/md2 -c 256 -n4 -l 10 -p f2 /dev/sd[abcd]6

Now cat /proc/mdstat and wait for all the drives to be fully synced, or carry on regardless.
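If you'd rather sit and watch the resync progress than re-run cat by hand:

watch -n 10 cat /proc/mdstat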

Configure Disk Arrays for Manageability

Create the physical volumes:

pvcreate /dev/md1
pvcreate /dev/md2
pvcreate /dev/sd[abcd]7

Note that /dev/md0 is just 'clean' RAID1 and doesn't have an LVM configuration.

Now create the Volume Groups:

vgcreate vg_system /dev/md1
vgcreate vg_storage /dev/md2
vgcreate vg_scratch /dev/sd[abcd]7

...and then the Logical Volumes:

lvcreate -L 2048 -n lv_swap vg_system
lvcreate -L 15000 -n lv_system vg_system

(This leaves vg_system with 3.36GB free for snapshots etc.)

lvcreate -L1.2T vg_storage -n lv_storage

(This leaves 231GB free for snapshots etc.)

Note that the scratch space is left unallocated for the present.
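When you do want some of it later, carving out scratch space is a one-liner; a sketch with a hypothetical volume name and size:

lvcreate -L 100G -n lv_scratch vg_scratch
mkfs.ext4 /dev/vg_scratch/lv_scratch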

Install the System

Go back into the installer and re-detect the disks. Then go into the LVM configuration so that it is all saved. The overall configuration looks like:

Volume groups:

* vg_scratch (778664MB)
  - Uses physical volume: /dev/sda7 (194666MB)
  - Uses physical volume: /dev/sdb7 (194666MB)
  - Uses physical volume: /dev/sdc7 (194666MB)
  - Uses physical volume: /dev/sdd7 (194666MB)

* vg_storage (1567667MB)
  - Uses physical volume: /dev/md2 (1567667MB)
  - Provides logical volume: lv_storage (1319414MB)

* vg_system (21483MB)
  - Uses physical volume: /dev/md1 (21483MB)
  - Provides logical volume: lv_swap (2147MB)
  - Provides logical volume: lv_system (15728MB)

...and perform the installation. Put /boot on /dev/md0, / on vg_system-lv_system, swap on vg_system-lv_swap, and mount vg_storage-lv_storage at /storage. Create a default user called adminuser or similar, since we'll make sure that home directory stays on the highly available partition so we can log in even when the system is badly degraded.

Configure the System

Once you've done that, don't forget to add bootdegraded=true to the kernel options line in /etc/default/grub – even though it claims to be set by the installer it isn't, and you'll only get as far as a console if you don't do it.

Also worth adding 'console=ttyS0,115200n8' to the kernel options in /etc/default/grub and deleting the quiet option there – that way you can boot via a serial link under almost all circumstances.
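Taken together, the kernel options line in /etc/default/grub ends up looking something like this (a sketch - and remember to run update-grub afterwards, or the change never makes it into the real grub.cfg):

GRUB_CMDLINE_LINUX_DEFAULT="bootdegraded=true console=ttyS0,115200n8"

update-grub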

In /etc/mdadm/mdadm.conf, add a sensible address for MAILADDR to indicate who the mail is to go to.
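For example (the address is hypothetical):

MAILADDR storage-alerts@example.com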

Install smartmontools and add a line something like the following to /etc/smartd.conf so that you hear about it when drives misbehave.
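A sketch of a typical entry (the nightly schedule and the address here are assumptions to adapt, not requirements):

# monitor all SMART attributes on every drive found, run a short
# self-test nightly at 2am, and mail warnings to the given address
DEVICESCAN -a -s (S/../.././02) -m admin@example.com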

Move the home directories for new users under /storage, so they get access to the significant amounts of space – don't forget to edit /etc/passwd to reflect the change.
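Alternatively, usermod moves the directory and updates /etc/passwd in one go; a sketch for a hypothetical user fred:

usermod -d /storage/home/fred -m fred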

In Operation

You'll get email messages when disks fail. When drives recover they are not automatically added back into their arrays; you have to add them with mdadm --add /dev/mdX /dev/sdX. If you intend to remove a drive from an array deliberately, don't forget to --fail and --remove it first!
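A sketch of the manage-mode commands, with hypothetical array and partition names:

# re-add a recovered partition to its array
mdadm /dev/md1 --add /dev/sdb5

# to pull a drive from an array deliberately, fail it then remove it
mdadm /dev/md1 --fail /dev/sdb5
mdadm /dev/md1 --remove /dev/sdb5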

If a drive needs to be replaced, the replacement needs to support at least the same partition sizes as the existing array members (one of the reasons for leaving some empty space at the end of the disk dedicated to non-array LVM).