Meta

Someone asked on WHTAU about SAN stuff and I figured, well, I’ll post it here too.

Ok, talking about SANs, this is an area I don’t know much about… If you grab yourself a fibre channel SAN and hook it up to a few of your servers which have the required cards, then are the servers running completely off the SAN and not the internal drives? If that’s the case, what sort of options are available to back up the entire first SAN to a second SAN for failover if the primary SAN dies?

Hmm, SAN discussion.. Yum.

Fibre channel SANs typically ship with 2 fibre channel uplink ports (you can get some which have more). There are a few ways of going about integrating your new-found space & speed freedom.

In the situation where you don’t buy a set of switches but you do get a set of redundant SANs, you generally look at getting 2 servers in failover, direct-attaching the storage to them, then sharing the space as required from those host nodes. There are a number of distributions built specifically for this, but for performance I’ve always leaned towards Solaris. Nexenta do a Solaris derivative specifically targeted at this market, and it lets you do NFS/SMB/iSCSI from there. When it comes to cPanel I guess you’d go the iSCSI option here and either:

a) Get an iSCSI initiator card in there and initiate to your storage pair.
b) Put a bootstrap startup on the hardware itself which then does an iSCSI initiation and starts your real OS.
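To give a feel for the host-node side of this, here’s a rough sketch of carving out and exporting space from a Solaris/Nexenta box of that era. Pool, volume, and device names are all hypothetical, and this assumes the older built-in iSCSI target (`shareiscsi`) rather than COMSTAR:

```shell
# On the Solaris/Nexenta host node (names are made up for illustration):
zpool create tank mirror c0t1d0 c0t2d0   # mirrored ZFS pool on the direct-attached disks
zfs create -V 100G tank/cpanel01         # carve a 100 GB zvol for one cPanel box
zfs set shareiscsi=on tank/cpanel01      # export the zvol as an iSCSI target
iscsitadm list target                    # confirm the target is being advertised
```

Each cPanel box then just initiates to that target and sees it as a plain disk.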

Guess I should also state that iSCSI support from Solaris & Linux has shown to be far more stable than iSCSI support direct from the storage kit itself. That’s mainly because the storage kit usually has a crappy CPU, and the ‘i’ part of iSCSI makes it relatively costly.

Now, the other method (the ‘Enterprise’ way) is if you DO get a pair of fibre channel switches. These cost around 10K each which, relative to your SAN purchase, isn’t THAT much. You set up your SANs so that their connections are redundantly split across your two FC switches (each one forming its own ‘fabric’, as the storage people like to say). Now you’ve got 2 SANs connected via diverse paths to two switches.

From there you take your actual hardware and install a dual-port FC card (~$700 each) in each host. You set that up to terminate one port to each switch, giving you diverse paths to both SANs via both fabrics.
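On a Linux host that usually means device-mapper multipath collapses the multiple paths into one device. A minimal sketch of checking that (the WWID and alias below are invented for illustration):

```shell
# Load the multipath layer and start the daemon
modprobe dm_multipath
multipathd start

# List the multipathed SAN volumes; each LUN should show one path
# per fabric, e.g. two or four paths depending on your cabling
multipath -ll
```

You can then give the LUN a friendly name in `/etc/multipath.conf` (an `alias` against its WWID) so your boot and fstab entries don’t depend on which path came up first.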

For both the iSCSI and FC options, basically any decent card will offer a BIOS booting option. That’s where your on-box hard drives become irrelevant (except maybe for swap) and the BIOS treats the SAN/FC volumes like a normal hard drive.

Replication-wise, as usual, you have the costly and the cheap option. The costly option is to get SANs with replication capabilities out of the box. They take care of each other and you never have to think about it again (hopefully). Vendors used to charge a lot for this, but the new series stuff (Hitachi & EqualLogic for instance) has this shipping as standard. If you’re going down this path you want to look for a SAN which has the capability to make ‘virtual fabrics’, seamlessly migrating between the two on failure. Otherwise your boxes are going to need a reboot or reconfiguration if your backend fails.

The cheaper option is to export two independent LUNs (1 from each SAN) then use whatever replication you want on-box. On Linux this is software RAID or mirrored LVM; on Solaris this is ZFS (and it’s awesome :)). Then if you have a SAN failure, the idea is that all your boxes will register a ‘dead disk’, and recovering involves getting the SAN back to normal and resyncing from the good disk.
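For the Linux software RAID flavour of this, the setup is just a standard mdadm mirror across the two LUNs. Device names here are hypothetical (assume `/dev/sdb` is the LUN from SAN A and `/dev/sdc` the LUN from SAN B):

```shell
# Mirror the two SAN LUNs so either SAN can die without data loss
mdadm --create /dev/md0 --level=1 --raid-devices=2 /dev/sdb /dev/sdc
mkfs.ext3 /dev/md0          # filesystem goes on top of the mirror
mdadm --detail /dev/md0     # both legs should show as 'active sync'

# After the failed SAN comes back, re-add its LUN and let md resync
mdadm /dev/md0 --re-add /dev/sdc
```

The ZFS equivalent is even shorter (`zpool create ... mirror lunA lunB`, then `zpool replace`/`zpool online` after a failure), which is part of why I rate it so highly.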

Hopefully that answers a few of your questions. There are some more tidbits in a Cluster 101 doc I did.
Stu