I was wondering if anyone has any advice or thoughts on the idea of building your own SAN. We definitely could use a SAN in our environment for lots of reasons, but the price of most decent SANs is a no-go with management. The ridiculous cost of hard drives for most SANs is crazy too. I understand the benefits of a highly reliable SAS drive etc., but we just don't have the need for that. We need something that is reliable and redundant with lots of storage, but not necessarily a "high traffic" device. We need enterprise features but don't have an enterprise budget. Any thoughts?

A SAN is a solution, not a product. The issue with these 'big' purchases is that the business keeps thinking it is buying a product, when in truth this is really not the case. You are buying not only the product, but also the installation and setup, initial support, the tools to manage and back up the SAN, and then after-sales support and warranty if something goes wrong.

The other major point is that you need the management features on this sort of thing in order to run and maintain it. While you could just go out and buy one of those massive 10U cases that fits 60 SATA hard disks, then add an 8-port SAS RAID card with expanders etc., you are still looking at a lot of money. After that you get the issue of what to run on it... Solaris with ZFS and split everything into two pools? Then you run into driver issues, HDD reliability issues (consumer drives are an order of magnitude more likely to fail than an enterprise-rated drive), and finally no real management and reporting.

A SAN is fast, a SAN is reliable AND redundant, a SAN is expensive. I will go out on a limb here and say that if you cannot afford a SAN, then you really do not need one.

You say you need "lots of storage" that is reliable but "not necessarily high traffic". How much is lots of storage? How much reliability do you need? How much do you think you have to spend?

I think a number of enterprise-level NAS devices may just do the trick. Thecus now has an 8-bay rackmount device; coupled with the Seagate 2TB enterprise SATA drives due in about 6 months, this would give you approx 15TB unformatted. I would only run RAID6 at 8 drives, which would give you around 11TB formatted. I wish I had a need for 11TB!!
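For what it's worth, those capacity figures check out. Here is a minimal Python sketch of the arithmetic; the decimal-TB-to-binary-TiB conversion is my assumption about where the "missing" space goes:

```python
# Quick sanity check of the capacity figures above. The TB-to-TiB
# conversion is an assumption about where the "missing" space goes.

MARKETED_TB = 2.0              # a "2TB" drive as sold (decimal terabytes)
TB_TO_TIB = 1000**4 / 1024**4  # decimal TB -> binary TiB (~0.909)

def usable_tib(drives: int, drive_tb: float, parity_drives: int) -> float:
    """Usable capacity of a RAID set after parity, in TiB."""
    return (drives - parity_drives) * drive_tb * TB_TO_TIB

print(usable_tib(8, MARKETED_TB, parity_drives=0))  # ~14.6 - the "approx 15TB unformatted"
print(usable_tib(8, MARKETED_TB, parity_drives=2))  # ~10.9 - the "around 11TB" after RAID6
```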

The best thing about these is that you can cascade them 4 high IIRC, but I would suggest one should be pretty good for most people. Then run a second in a mirror config in a more secure place - possibly next door, or a very secure spot in the building (you may need to run fibre to it). Or if you have a second office, run it there and just rsync across your external connectivity or VPN. In case of failure, you only need to grab the other unit and swap it in for the old one.
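If you go the rsync route, the moving parts are small. Here is a minimal sketch, with made-up host names and paths; note that --delete makes deletions propagate too, so this is a mirror, not a backup:

```python
# Minimal mirror pass from the primary NAS to the second unit over SSH/VPN.
# SOURCE and MIRROR are hypothetical; point them at your own exports.
import subprocess

SOURCE = "/mnt/primary-nas/"        # assumed local mount of the primary unit
MIRROR = "backup-nas:/mnt/mirror/"  # hypothetical second unit, reachable via SSH

def mirror_once() -> None:
    """One rsync pass; schedule it from cron (e.g. nightly) for a rolling mirror."""
    subprocess.run(
        ["rsync",
         "-a",         # archive mode: preserve permissions, times, symlinks
         "--delete",   # keep the mirror exact - deletions propagate too!
         "--partial",  # resume large CAD files if the VPN link drops
         SOURCE, MIRROR],
        check=True,
    )

if __name__ == "__main__":
    mirror_once()
```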

In any case, if you give us some more info about what you are trying to achieve and the situation you are in, we can maybe give you some better suggestions.

Well, the biggest push for a SAN solution is that we want to move towards virtualization. In order to do the type of virtualization with VMotion etc., we would need centralized storage, somewhere between 4-8 TB. We are an aerospace company that deals with a LOT of large CAD/CAM drawings with many revisions/versions etc. (we make somewhere around 200 different parts) that take up a great deal of space. This is why, despite being a company of 150 or so, we need a large amount of space.

The book is geared towards tech consultants building SANs on a budget for small businesses, but there's no reason you can't apply the information to building the SAN yourself. It has a lot of good information on choosing the hardware and software, as well as scaling the SAN to the amount of storage that you actually need.

We just did what you're describing. We have a Dell EqualLogic storage array with 16 HDDs and two Dell PowerEdge 2950s, each with two quad-core processors and 32GB RAM. We are running 13 production VMs on two LUNs created on the EqualLogic, and we have full VMotion (either of the PowerEdges can run all the VMs without any problem). After a ton of research, I found this was the most cost-effective way of moving to a VM solution.

Next we are looking at one more EqualLogic and one more PowerEdge to replicate to at another site for Disaster Recovery.

I was looking into (before I got sidetracked by 50 other projects) using this as a SAN target and setting up a RAID of the SAN itself for redundancy across a fiber/gigabit IP network.

It seems to me that since you can purchase Openfiler support, it wouldn't be 'too' risky, although I do understand the hesitancy of throwing so much data onto what seems to be an ethereal combination of so many disks.
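For anyone curious what the "RAID of the SAN itself" idea looks like in practice, here is a rough sketch: log in to one iSCSI LUN on each storage box and mirror them with Linux md RAID1. The target names, portal addresses, and device paths below are all hypothetical:

```python
# Mirror two iSCSI LUNs (e.g. one per Openfiler box) with md RAID1, so the
# array survives either box failing. All names below are made up.
import subprocess

def sh(*args: str) -> None:
    subprocess.run(args, check=True)

# Log in to one LUN on each storage box
sh("iscsiadm", "-m", "node", "-T", "iqn.2009-01.example:store1",
   "-p", "10.0.0.11", "--login")
sh("iscsiadm", "-m", "node", "-T", "iqn.2009-01.example:store2",
   "-p", "10.0.0.12", "--login")

# Mirror the two LUNs; /dev/disk/by-path names are stable, unlike /dev/sdX
sh("mdadm", "--create", "/dev/md0", "--level=1", "--raid-devices=2",
   "/dev/disk/by-path/ip-10.0.0.11:3260-iscsi-iqn.2009-01.example:store1-lun-0",
   "/dev/disk/by-path/ip-10.0.0.12:3260-iscsi-iqn.2009-01.example:store2-lun-0")
```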

But if you think about it, isn't data on one single disk almost as ethereal? Of course, it's much simpler to access a mirrored disk after a crash than a RAIDed SAN.

From what you're describing, I agree completely with Mitsimonsta. Don't muck around with it - spend the money and buy the absolute best SAN you can afford. This one device holds the entirety of your company's assets, IP, and records.

While making your own SAN may be an appropriate option if you're only using it for Tier 3 offline storage for, say, a DLT backup system, or for your home system, it is inappropriate for a mission-critical device.

If you're planning on switching to a virtualised system, do not even think about going cheap.

I have 28 virtual servers on 2 networks running on 4 physical servers and a SAN. All of this is brand-new equipment from IBM - expensive, but worth it. It has fully redundant everything, and I've already had multiple problems with the SAN. Just normal stuff - controllers going offline, out-of-bounds errors on the array, switch failures (old switches), and a problem with the UPS. Because this was a good SAN, all of it resulted in zero downtime.

The equipment was installed and fully tested by IBM techs, and every time there was a problem I couldn't fix, I picked up the phone and the problem was solved within 4 hours - even if that meant getting a tech onsite.

You get what you pay for - I cannot stress this enough. There is nothing, and I mean nothing, worse than having an array fail on you on mission-critical systems, and in a virtualised environment, if your SAN breaks, you lose *everything* until it is fixed.

In my previous life running my own specialist IT consultancy, I was called many times in the middle of the night by panicked CIOs and flown from one side of the country to the other to fix SAN problems. In every case the problems were caused by home-built systems that were inadequately designed, or branded systems that an outgoing IT tech had knobbled on the way out.

I will go much further than Mick and state the following (and even in bold):

THERE IS NO REASON TO EVER, EVER BUILD YOUR OWN SAN.

Okay, now that I have said it, I will explain.

What most of the guys above me have been advocating is really a server box filled with disks in some sort of RAID configuration, connected to the network by one, possibly two GigE ports. What they have suggested is that NAS-style storage is the way to go.

The only problem with this is that the bandwidth is severely limited, and you are basing all your company's files on something you built yourself. A warranty on parts is great, but it doesn't get you up and running today: when you have to send the defective part away for replacement, you will not get it back today, or even tomorrow. Can the business survive multiple days of no work because an HBA or FC switch failed?
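To put some rough numbers on that: the part is usually not the problem, the repair time is. A back-of-the-envelope availability calculation (the MTBF and repair-time figures here are assumptions, purely for illustration):

```python
# Expected downtime per year as a function of repair time. Same hardware
# failure rate in both cases; only the support arrangement differs.
HOURS_PER_YEAR = 24 * 365

def annual_downtime_hours(mtbf_h: float, mttr_h: float) -> float:
    """Downtime per year given mean time between failures and mean time to repair."""
    availability = mtbf_h / (mtbf_h + mttr_h)
    return (1 - availability) * HOURS_PER_YEAR

# Home-built box: RMA turnaround measured in days (72h assumed)
print(annual_downtime_hours(mtbf_h=50_000, mttr_h=72))  # ~12.6 h/year down
# Vendor SAN with a 4-hour on-site contract
print(annual_downtime_hours(mtbf_h=50_000, mttr_h=4))   # ~0.7 h/year down
```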

SANs are a totally different animal to NAS systems. They are basically disks connected directly to the network fabric, not via a GigE port, then a motherboard, then some sort of hardware or software RAID, and then to the disk itself.

With SANs, most of the interconnects are Fibre Channel, and you buy enterprise-level hard disks with Fibre Channel interfaces. There are all sorts of Fibre Channel switches, and then you usually connect everything to the network (these days) by 10GigE uplinks, which carry multiple Fibre Channel over Ethernet (FCoE) channels for multiple SANs.

If you come to this level of storage, you don't just buy one SAN either. You buy at least two. If your data is that important (and it sounds like it is), then you mirror the two SANs for redundancy and high availability. The third level would be a SAN at a nearby datacentre for the next layer of reliability, plus some seriously fat connectivity (local fibre loop??).

I am not even going to try to tell you what you need to do for backups. If you are putting in a system like this and you have no idea how you can get a full backup of this amount of data in under 48 hours (a weekend), then you need to get a consultant on site and discuss the entire storage ecosystem with them. It WILL save you money, and possibly a lot of heartache and downtime if it all breaks. Forget anything like SCSI tape drives; you are going to need a tape library with at least two drives in it, plus Fibre Channel interfaces.
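That 48-hour window translates directly into a minimum sustained throughput, which is worth working out before you talk to anyone. A quick sketch, using the 4-8 TB figure from earlier in the thread:

```python
# Sustained throughput needed to back up a dataset within a fixed window.
def required_mb_per_s(dataset_tb: float, window_hours: float) -> float:
    return dataset_tb * 1_000_000 / (window_hours * 3600)  # decimal TB -> MB

print(required_mb_per_s(4, 48))  # ~23 MB/s sustained
print(required_mb_per_s(8, 48))  # ~46 MB/s sustained - fine on paper for a
                                 # single tape drive, but only if it never
                                 # stalls; hence the two-drive library
```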

There is, however, some merit in a NAS-style device for backing up this amount of data - not as a primary backup, but as a 'nearline' and emergency datastore if you do only run the one SAN. In the event of the SAN going down, you would be able to bring this online and keep things moving while the failed SAN is repaired. They are also good to plug in once a week to snapshot the main datastore and take offsite, plus one that is always online holding just your VM images, so you can bring up more machines if the SAN is not functioning.

It sounds like there are a lot of dollars involved in this project and in the business as a whole... I really suggest you work with a few vendors to come up with some options. Engage a good storage and backup consultant too.

-----

EDIT: Michael made a good point in this thread, kind of what I was trying to say but he does it much more elegantly.

Great thread with very astute contributions. I agree with virtually all that has been said. Here is my 2 cents' worth:

If you need a SAN, then get a professional, vendor-supplied system from any of the big boys in town, e.g. IBM, HP, Dell Corporate, etc. Make sure you have redundancy, support, and backup all sorted. It will cost, but the alternative is potentially limitless exposure.

If you really, really can't do that, then a NAS solution is next best. I recommend 2x Netgear ReadyNAS PRO boxes with 6 TB drives on a good gigabit network. I have been using one of these since Christmas and highly recommend it. Key features include: