We have money in this year's budget to upgrade the shared storage we use with XenServer.

We are currently using a Dell PowerEdge 2950 with an MD1000. It runs CentOS 5.7 and uses IET as its iSCSI implementation. It shares out approximately 6TB over iSCSI to a pool of three XenServers (running 6.0.2 at the moment). This has worked really well for us for several years--the only problems we've had with it have been disk failures in the MD1000. We purchased the 2950 in late 2006/early 2007 and the MD1000 in 2008. The hardware is getting older, and since the MD1000 storage is configured as RAID-5 (I know, RAID-5 isn't cool anymore), the thought of multiple disk failures is never far from my mind.
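For anyone who hasn't used IET, the whole export boils down to a few lines in /etc/ietd.conf. This is just an illustrative sketch -- the IQN, backing device, and tuning values here are made up, not our actual config:

```conf
# /etc/ietd.conf -- illustrative IET target definition (hypothetical names)
Target iqn.2007-01.com.example:md1000.xensr1
    # blockio bypasses the page cache, generally better for VM storage
    Lun 0 Path=/dev/sdb1,Type=blockio
    MaxConnections 1
    InitialR2T No
    ImmediateData Yes
```

Restart ietd (or use ietadm for live changes) and the XenServer hosts can then attach the target as an lvmoiscsi SR.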

Since we also have money this year to buy the Enterprise licenses for XenServer (we're currently using the free licenses), I think I'd really like something that will work with StorageLink. (I've never actually used StorageLink--I read somewhere that it changed a lot with XenServer 6, so I'm not sure if StorageLink compatibility is as big of a deal as it once was. Anyone have any opinions on this?)

Our budget for the new storage system is $25,000.

I've thought about the Dell MD3200, but that doesn't appear to be StorageLink compatible. Our Dell rep is pushing EqualLogic, which is compatible with StorageLink, but I'm not sure how much of an EqualLogic box $25,000 will buy. I've also been thinking about some sort of system running NexentaStor (which I think is StorageLink compatible if you buy the appropriate plugin).

I'm really not sure which direction I want to go with this. Does anyone have any opinions or advice?

I think a NexentaStor system could fall into the SAM-SD category. Actually what we're running now could be considered a SAM-SD. (Maybe a "proto" SAM-SD?) I'm glad that this type of storage system has finally been given a name and some credibility.

The systems I list below are supported hardware, but check, check, and check again for StorageLink. In fact, CALL and EMAIL your rep at Citrix to get confirmation. For $25k, you can get an excellent entry-level system - the EqualLogic line of products from Dell is wickedly versatile. By the same token, you can also get a Cybernetics system. While I've had my ins and outs with Cybernetics over the years, they've matured a great deal.

From Dell, the 4100 series (link) is a great entry point if you need Gb interfacing iSCSI.

From Cybernetics, the Mi-SAN D series (link) is similar. Not on the StorageLink site though.

There are lots of pros and cons to each, but notably, the Cybernetics will give you more features for the dollar. Bear in mind that they are not as refined as EqualLogic's products and will have more issues than Dell's line (from my own experience, not any statistical analysis). That said, I use them, and these days I love them.

HP, EMC, Nexenta, and a whole host of others are in the game as well.

FWIW - I don't recommend SAM-SD for mission-critical infrastructure (and I believe SAM will echo that, IIRC from his posts). I ran my own SAM-SD a long time ago (before it was even called SAM-SD), much like you do, and had a horrible time getting support for it from VMware. Maybe Citrix is different, but... wow. It hurt a lot. Finally I had to break down each software component behind OpenFiler for them and prove that IET was supported. They relented, but it was a nightmare. Also, IET had some known issues with VMware (or vice versa) in earlier versions that the techs kept referring back to in an attempt to close the case as "unsupported config", even though we weren't running that version, and what we were running was long since supported.


Call me a grumpy SAN admin, but how hard is it to carve a LUN, mask it to a host, and create a volume/datastore/CSV? Honestly, if you need a wizard crutch to do this, I feel like you're likely going to be doing something wrong in the network/fabric and end up doing MPIO through the same switch or something crazy. Then again, I get called into too many broken environments, so I'm likely jaded on this matter.

I feel like if you need tools to do this for you, it's far safer to just have a SAN guy on retainer that you call and say "make the LUN so". Generally with these small SANs, most people buy all their storage up front for 2-3 years, carve out a few big LUNs on RAID 10, and call it a day. Maybe have a second tier, but you're looking at a day for setup and then years of ignoring it outside of part swaps.
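To put some weight behind the "how hard is it" point: once the LUN is carved and masked on the array, attaching it to a XenServer pool is a couple of xe commands. This is a sketch only -- the target IP, IQN, and SCSIid below are made up, so substitute your own:

```shell
# Probe the target to discover the LUN's SCSIid
# (illustrative values throughout)
xe sr-probe type=lvmoiscsi \
    device-config:target=10.0.0.50 \
    device-config:targetIQN=iqn.2001-05.com.example:array1

# Create the shared SR against that LUN
xe sr-create name-label="SAN LUN 1" shared=true type=lvmoiscsi \
    device-config:target=10.0.0.50 \
    device-config:targetIQN=iqn.2001-05.com.example:array1 \
    device-config:SCSIid=36090a028e093fc906099140639aa2c7d
```

That's it -- no wizard required. The sr-probe call will actually error out with an XML listing of available LUNs, which is where you copy the SCSIid from.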

Call me a grumpy SAN admin, but how hard is it to carve a LUN, mask it to a host, and create a volume/datastore/CSV?

That's not what I'm wanting from StorageLink. Like I said earlier, I've been under the impression that it improved iSCSI throughput with XenServer. (Maybe it's just my setup, but I've always felt there was plenty of room for improvement in my guest VMs' disk performance. Moving a virtual disk from one SR to another is always slow in my environment too.)


How much SAN management do you do that you would actually need storagelink?

I was under the impression that there were noteworthy performance improvements to be had by using StorageLink. If that turns out not to be the case, then I guess I probably don't need it.

It does improve performance, but the more salient question would be: "How much performance do you need?" The correct answer is, of course, "as much as I can get," and so you should get StorageLink if you can. I wouldn't make or break a decision on SL if it isn't mission-critical, however.

In your case, you can absolutely afford it. Dell/Equallogic or Nexenta really sound like your best choices at this point, given your budget and needs. That said, there's lots of options if you want to spend the time looking.


Thanks for the info. It's good to know that (theoretically) I can afford an EqualLogic system. My Dell rep is supposed to bring a demo unit later this week, so we'll see how that goes.

In the meantime I'll take a closer look at the Nexenta stuff. I may even give Cybernetics a call. (I bought one of their systems back in 2003/2004--I think it was a miSAN v8--for a disk-based backup system. It worked well enough, but I eventually got tired of the virtual tape stuff. I replaced the boot drive and set it up like the PowerEdge 2950 I described in my first post. I still use it as backup storage for PHD Virtual Backup.)


Why not profile your workloads, monitor them over time, and size hardware based on needs?

It improves performance on cloning operations, but how often do you do that? A few extra spindles on an array that doesn't support it might be a lot more useful (like you said, a nice to have).

Honestly I like VMware's approach. Let vendors write their own damn plugins and not try to kludge SMI-S into the hypervisor.

How much SAN management do you do that you would actually need storagelink?

As of XenServer 6.0, Citrix dropped StorageLink Gateway and opted to rewrite it as integrated StorageLink (iSL). The storage devices on the HCL are supported for regular SR types, but to use StorageLink in XenServer 6.0 you need a NetApp or EqualLogic box.

The StorageLink HCL for XenServer 5.6 is larger but still doesn't support the MD3200, and you'll also need a Windows box to run the gateway software/service.

@lunchingfriar - Is there a particular reason you want to use StorageLink? If you're going to be creating one large LUN to hold multiple VDIs, there isn't so much of a benefit; if you want one VDI per LUN, it does make management easier for a small team.

Or since you're not paying anything for a hypervisor right now, migrate to vSphere 5.1 with VSA included for under $6K.

I've had shared storage for a while now, and live migration is one of the big reasons I switched from free VMware to free XenServer. (The other two big reasons were having the ability to manage multiple hosts with XenCenter and being able to use commercial backup software.) It comes in handy. It's not something I use every day (or even every week), but it's great when you need to apply hypervisor patches or do hardware maintenance. It's nice to be able to move machines around to evenly distribute the workload, too. I do understand and appreciate what you're saying about introducing a single point of failure, but that is not the only single point of failure in our environment. (The one that has caused us the most trouble is power, by way of the old, inadequate wiring in the building. This has bitten us several times over the years despite having multiple battery backup systems and a generator.)

I didn't know DRBD was available for XenServer. Once upon a time I looked into using DRBD to replicate my shared storage repositories, but DRBD would make the kernel panic on my target replication machine, so I stopped working on that.
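For anyone else considering the DRBD route, a replicated SR resource is only a short config file. This is an illustrative DRBD 8.x-style sketch -- the resource name, hostnames, device paths, and addresses are all hypothetical:

```conf
# /etc/drbd.d/xensr.res -- illustrative resource (hypothetical names/addresses)
resource xensr {
    protocol C;                 # synchronous replication
    device    /dev/drbd0;
    disk      /dev/vg0/xensr;   # backing LV on each node
    meta-disk internal;

    on san1 {
        address 10.0.1.1:7789;
    }
    on san2 {
        address 10.0.1.2:7789;
    }
}
```

Both nodes need the same file; you'd then export /dev/drbd0 over iSCSI from whichever node is primary.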

It's funny you mentioned moving to vSphere, because that's been on my mind lately--mostly because we're about to spend $7500 on XenServer licenses.

@lunchingfriar - Is there a particular reason you want to use StorageLink? If you're going to be creating one large LUN to hold multiple VDIs, there isn't so much of a benefit; if you want one VDI per LUN, it does make management easier for a small team.

I've never been satisfied with my XenServer iSCSI throughput. (It's not horrible, but it's not that great either. It seemed to be faster with VMware on the same hardware.) I've been under the impression that StorageLink, when used with a compatible SAN, made it faster. That's why I wanted it.
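Before spending for performance, it may be worth putting a rough number on the current throughput. One quick sanity check from dom0 is a direct-I/O sequential read against the iSCSI block device (the device name below is a placeholder -- confirm it before running anything, and never write to it):

```shell
# Rough sequential-read check against the iSCSI device from dom0
# (/dev/sdb is hypothetical; iflag=direct bypasses the page cache)
dd if=/dev/sdb of=/dev/null bs=1M count=1024 iflag=direct
```

Inside a guest, something like bonnie++ or iozone gives a fuller picture, and comparing the two numbers helps show whether the bottleneck is the wire or the hypervisor's storage path.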

Obviously VMware has changed its pricing scheme since I switched to XenServer. The last time I remember looking, it was going to cost in excess of $10,000 to license VMware on two hosts (we only had two at the time), and that price didn't include all of the stuff you get with Essentials Plus. Looks like I need to revisit VMware. I'm still planning on buying a SAN, though--two of my three XenServer hosts don't have much direct-attached storage.

I would not recommend a SAN for virtualization. I have tried to think of a benefit to going SAN in a virtual environment, but I have failed miserably; NFS makes a lot more sense (don't get me started on Hyper-V, yuck!).

For $25k you can build a SAM-SD with all the toppings. If your business doesn't like the idea of "home made" storage devices, I would go for a FAS2240.

Stay away from StorageLink, it gives more troubles than it solves, just take a look at the Citrix forums.

Great points as always, John. Since this is a conversation involving Citrix, I felt the need to contribute. We work with an organization that is one of the only gold-level Citrix partners in the country. They have been deploying the Dell C2100s or C6100s for Citrix XenServer and XenApp installations all over the nation. These servers are a kind of server and SAN all in one. No need to go SAN-attached when one can have 12 x 3.5" drives or 24 x 2.5" drives on board the system. The C6100 can even be configured with two or four micro-blade nodes within it. Check them out on our site and tool around with different configurations a bit.