We are strongly considering getting an EqualLogic PS6100XV. Can anyone tell me their experience with this model or similar models of EqualLogic? It will be at our primary site with two VMware 4.1 hosts hooked up to it and approximately 14 VMs between the two. We will also add another EqualLogic in the future at our DR site to replicate to.

24 Replies

I've loved them for years, although after Dell bought them, the tech support went to hell. I haven't tried the latest firmware yet (which gives you NAS functionality), but between the 3 PS100s, the Sumo I tested (yeah, that's right, I got to beta test the Sumo... AHAHAHAHA), and the 2 not-quite-to-market boxes a sales guy recently brought over (both running SSDs, which was fast... OMG fast), I've found their products dead simple to use, with a uniform interface across models that made them simple to work with.

Grouping SANs together into what amounts to one massive storage device with massive network pipes is nice as well; however, getting VMware to play nice was a hassle until 3.5.

Replication across sites works fairly well, but slowly; then again, anything less than 1Gb is going to seem slow these days.

All in all, find an EqualLogic rep, tell them you are interested, and see if they have a box you can use for a bit! Getting to play with the hardware first hand is the best way I have to decide whether I want something or not, and EqualLogic has never been shy about their toys!

I deployed a PS6000e in July. I love it. It is easy to set up and manage. I only had to make a couple of calls to support during the setup process, and those calls went very smoothly.

I too am using VMware 4.1. I have a total of 22 VMs running on a single host right now with no IOPS issues at all. Looking at the SAN stats, I see that I am running the thing at about 30%-40% of its total IOPS capacity. Mine has sixteen 1TB SATA drives in it.

I do have some recommendations for you (and you may already be doing all this):

1. Work directly with a Dell storage rep. (you can still buy through your reseller)

2. Run the IOPs analyzer software from Dell on your current VMware deployment. It gives you and them the right feedback to size your SAN correctly for IOPS. The Dell rep can get it to you.

3. Get the Dell guy to get you the white paper on setting up multi-path iSCSI with VMware ESXi 4.1. It is smoking fast and easy to setup using their white paper.

4. Get a Dell switch to run your iSCSI network on and let the Dell storage guy tell you which model is sized right for you. I'm not a huge fan of Dell switches but I went this way because if I have iSCSI network issues I want to make sure that Dell can't blame it on the switch.
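The IOPS sizing in point 2 can be sanity-checked with back-of-the-envelope spindle math. A rough sketch follows; the per-drive IOPS figures and RAID write penalties are generic rules of thumb, not Dell's numbers, so use the output of Dell's analyzer for any real sizing exercise.

```python
# Rule-of-thumb spindle math for sanity-checking SAN sizing.
# Per-drive figures and RAID penalties are common industry rules of
# thumb, NOT Dell's numbers -- illustrative only.

PER_DRIVE_IOPS = {"7.2k_sata": 80, "10k_sas": 130, "15k_sas": 180}
RAID_WRITE_PENALTY = {5: 4, 6: 6, 10: 2}  # backend I/Os per host write

def usable_iops(drives, drive_type, raid_level, read_fraction):
    """Host-visible IOPS a spindle set can sustain at a given read mix."""
    backend = drives * PER_DRIVE_IOPS[drive_type]
    write_fraction = 1.0 - read_fraction
    # Each host write costs RAID_WRITE_PENALTY backend I/Os.
    return backend / (read_fraction +
                      write_fraction * RAID_WRITE_PENALTY[raid_level])

# e.g. sixteen 7.2k SATA drives (as in the PS6000e above), RAID 5, 70% reads:
print(round(usable_iops(16, "7.2k_sata", 5, 0.70)))  # ~674 host IOPS
```

Numbers like these explain why a 16-spindle SATA box comfortably carries a couple dozen light VMs but struggles with heavy transactional workloads.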

We have a 6100XV on evaluation basis right now, and have at least one of pretty much every 6000/6500 series array they offer.

It should work well for your vSphere 4.1 environment. Beware firmware 5.1.1 versions, as they're known to have some annoying bugs.

Good to know. I'm doing my damndest to hold off on the PS100s, and I figure if the company buys some new ones, I'll worry about it then. But then, EqualLogic's tech and sales people have been telling me the PS100 is EOL anyway, so... *shrug*

We have 7 Sumos and 4 SAS units across 4 sites; personally, I look after 4 Sumos and 4 SAS units.

My experience with Equallogics is that generally the hardware is ok but the software and firmware of late have left a lot to be desired.

The current firmware seems to be stable, but all the version 5 stuff up to now has had bugs that caused some big issues. I am not a fan of the HIT kit; it's slow and flaky and may stop working if you upgrade it, which is a blow if you rely on it to drive your replication. Also, you have to ensure you are on the right hard drive firmware, as some versions of that are buggy as well.

Replication works quite well, best if you send multiple streams up your pipe; with jumbo frames, we get almost a full 1Gb to our replication site.
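A simplified look at why jumbo frames help push replication toward line rate: per-frame header overhead shrinks as the MTU grows. The model below counts only Ethernet/IP/TCP framing (it ignores iSCSI PDU headers and TCP options), so treat it as illustrative rather than a precise prediction; in practice, the bigger win from jumbo frames is often the reduced per-packet CPU and interrupt load.

```python
# Per-frame TCP payload efficiency on Ethernet at two MTUs.
# Simplified: IPv4 + TCP with no options; ignores iSCSI PDU headers.

IP_TCP_HEADERS = 20 + 20       # IPv4 header + TCP header
ETH_EXTRA = 14 + 4 + 8 + 12    # Ethernet header + FCS + preamble + inter-frame gap

def payload_efficiency(mtu):
    payload = mtu - IP_TCP_HEADERS   # TCP payload bytes per frame
    wire_bytes = mtu + ETH_EXTRA     # total wire time consumed per frame
    return payload / wire_bytes

print(round(payload_efficiency(1500), 3))  # ~0.949 at standard frames
print(round(payload_efficiency(9000), 3))  # ~0.991 with jumbo frames
```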

Support in the UK is patchy, but that is true of Dell in general. Their phone system doesn't recognise their own express service codes, they take ages to respond to phone-home alerts, and we can't send diagnostics directly to them; they have to be emailed. Call-out engineers are not that knowledgeable, as they work for Unisys and deal with a bunch of stuff other than Dell. When you manage to speak to someone in support, they generally know their stuff; it's just all more difficult than it should be to get there.

We had the SATA card die in one of our Sumos, which was OK until we tried to replace it and all the drives went AWOL. Fortunately, the engineering department in the US managed to fix it without losing any data.

I have had more drives fail in the last year than I ever had with an HP EVA in the previous six years. It's a different class of beast, of course, but the SAS drives should be the same sort of quality, yet they haven't lasted as well.

For a two node vSphere application, iSCSI shared storage probably isn't what you want to buy. NFS is the way to go, not block storage.

Here's my take:

I think the Dell EqualLogic line is neat for what it is, expandable block storage with a unified management interface. But there are some weaknesses.

iSCSI
The EqualLogic line is a block storage line; iSCSI is what you're getting unless you purchase a separate NAS head to install and run CIFS or NFS. When looking for shared storage, iSCSI is usually the last type organizations should be looking for. Of the leading virtualization platforms out there, Xen and vSphere prefer NFS storage. Hyper-V is the only one which requires iSCSI.

We've debated this at great length in other threads, and we agree on many points, but I still feel like dismissing iSCSI because people do stupid things with it due to their own technical inadequacies is bad advice. At the same time, an NFS share is a more mature protocol but requires a different skill set to manage well. But the bottom line is, you don't get much easier than EL, and there's no need for an additional software layer to install/manage the storage.

Also, you mentioned a couple standalone servers with 12 drive bays. You do realize that the PS6100 comes with 24 drives, right?

As free as Nexenta and others like it are to procure, 18TB isn't much storage nowadays. Go over that capacity, necessitating the purchase of the Enterprise Silver edition, add the plug-ins, and all of a sudden your cost looks a lot like a unified, purpose-built piece of equipment.

I still feel like dismissing iSCSI because people do stupid things with it due to their own technical inadequacies is bad advice. At the same time, an NFS share is a more mature protocol but requires a different skill set to manage well. But the bottom line is, you don't get much easier than EL, and there's no need for an additional software layer to install/manage the storage.

That's a fair position, I think. I'd just suggest that paying $20-30K for a box that only does iSCSI, when NFS is the preferred protocol wouldn't be my first choice.

Not to pick nits too severely, as I still haven't tested it yet, but the latest firmware for the EQL boxes DOES (supposedly) do CIFS/NFS/something else I can't remember because I'm too tired.

Also, you mentioned a couple standalone servers with 12 drive bays. You do realize that the PS6100 comes with 24 drives, right?

Then I suppose the proper comparison would be to the HP DL180 G6 which comes with 25 SFF drives.

It's a fair comparison for # spindles, but those boxes are some of the biggest POS I've ever laid hands on. You'd be better off with a Supermicro machine at that point. But that's just our experience with the pile of them we have running in our datacenters. :)

Again, as we all know you only get block storage with EL without the FS7500 we've been talking about over in the other thread. And I really think the "preferred protocol" discussion is rather dated. That may have been very true in the ESX 3.x days, and early on in the 4.0 days, but a lot of things have changed since. And let's be realistic, a poorly configured, half-assed NFS solution is not a good alternative to a solid, easy to set up iSCSI solution. For free, there are very few NFS or iSCSI solutions worthwhile. By solution, I mean the whole deal end-to-end and not bits and pieces.

But again, I'm not disagreeing that NFS can be awesome and better than iSCSI in the right situations. I'm just not a big believer in the roll-your-own filer solutions I keep hearing about. Been there, done that and it often creates more issues than it solves.

Though not all iSCSI solutions do it, the EqualLogic does come with replication built in which can be a killer app for some if you have the luxury of a DR site.

Also the EL scales up well and expanding them is a doddle. We will be going over 100TB at our main site shortly.

Storage should not be just about capacity and ultimate performance; cost, in-house skill set, and features that actually fit your requirements should all play a big part in your selection.

Replication is awesome, but I'll only "pay" for the feature if it doesn't use a massive 15MB page size that can result in replication traffic being up to 50x the actual delta of my data. Bandwidth is expensive. This is part of the reason WAN acceleration really helps these things. Every client I have that uses their replication has at least 100Mbps of bandwidth to their DR site so the replication will work.
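To illustrate the page-size complaint: if replication ships a whole fixed-size page whenever anything inside it changes, scattered small writes can inflate wire traffic far beyond the true delta. A quick sketch; the 15MB page size comes from the post above, while the change pattern and page-granularity model are hypothetical numbers for illustration, not EqualLogic's documented replication algorithm.

```python
# Worst-case replication amplification under a fixed page-size model.
# The 15 MB page size is from the post; the workload is hypothetical.

PAGE_MB = 15

def worst_case_shipped_mb(changed_regions_kb, page_mb=PAGE_MB):
    """Worst case: every changed region falls in a different page,
    so a full page is replicated for each region."""
    return len(changed_regions_kb) * page_mb

regions_kb = [300] * 10                        # ten scattered 300 KB changes
delta_mb = sum(regions_kb) / 1024              # true delta: ~2.9 MB
shipped_mb = worst_case_shipped_mb(regions_kb) # 150 MB on the wire
print(shipped_mb, round(shipped_mb / delta_mb))  # amplification ~51x
```

That roughly 50x worst case is what makes WAN acceleration or a fat pipe to the DR site a practical prerequisite.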

These days, Veeam, Replay 4, and Unitrends are solid products, and so efficient I'd argue backup and replication are best done outside the array. If you're going to do a bunch of voodoo in the array, you need to do it better than it can be done outside; otherwise, give me fast, reliable storage that's easy and stable. (Maybe I've spent too much time listening to Jon Toigo rant.)

Disclaimer: I got woken up to "fix" an EL today and currently want to defenestrate it, thanks to a particularly nasty bug that means MPIO may get turned off. Anytime VMware says "rumored to fix it" in a KB, you know stuff's weird...

Also has anyone been able to get HIT to work with VMware? I've never seen it actually working....

That sounds like a pretty nasty bug, John. I've read the VMware KB, but they don't specify which EL array or firmware versions this applies to, which is weird within itself. There were some known issues in recent previous firmware versions that exhibited this exact behavior if I recall. Fortunately, none of my 1000+ VMs riding EL ever had this issue.

As far as I know, the HIT kit for ESX only consists of a cheesy UI to integrate snapshots and other stuff of questionable value. It only supports a single vCenter/storage group, so it's absolutely useless in my shop. The MEM module, however, works extremely well for us, but again, it requires array firmware 5.0 and above.

We use vReplicator for a handful of "special" VMs rather than rely on the storage to do the replication for us. We don't have the need to replicate much else between sites, so I've only used native array replication on these to migrate between storage groups within the same datacenter, and quite honestly I probably could have done the same thing via other means.

Just curious: without HIT, how do you make sure the VMs on VMFS are quiescent during an EL snapshot? The client freaked out and tried restoring to a snapshot, and since it wasn't quiescent, VMFS was corrupt and throwing errors. (Can you tell I had a fun morning?) With HDS, I know I can set up disk groups in the HORCM config so they will all have VSS triggered when I perform a SAN snapshot.

The client has PS6500Es (a pair of them clustered). Firmware is 5.1.1 (R189834) H2, supposedly one behind the newest (rushing it into a testing array).

Reading through the KB, the logs, and a Wireshark capture, the issue is that to load balance, the array doesn't seem to use the PSP but instead keeps sending iSCSI logout commands until the initiator connects to the port it wants it on. Combined with high load on the SAN (well, 6K IOPS, high for this frame), this causes it to take so long to talk to the disk that Windows goes crazy and blue screens, or Linux mounts the file system read-only.

This KB is only like 2 days old, but I'll post a follow-up once we hear back from Dell. They've been fighting issues with their ELs for a while, and this is bringing a massive company to its knees at the moment. (However, with a billion ERP transactions a month, 96 SATA drives isn't going to cut it, VMware issues or not.)

The core of the issue is that the EL load balancing doesn't use the Path Selection Policy but instead sends logout commands repeatedly to VMware until the initiator ends up on the port the array wants it to talk to.

I don't quiesce the VMFS volumes, nor do I have the need to often perform snapshots on VMFS. Anything that needs to be backed up regularly generally has a backup agent running on it, or its data (at the application layer) is replicated elsewhere. It's so quick and easy to stand up a VM with a new OS that we don't bother backing up OS bits on a lot of stuff.

The client needs to get off of 5.1.1 H2 and either roll back to a previous version, go with 5.0.8 (which is actually newer), make the decision to jump to 5.1.2, or just deal with having problems. All versions of 5.1.1 are garbage and should be avoided, IMO. The client should also find a better solution for their workload; 6K IOPS is very well doable, but at their own peril. Most of EL's problems are exposed when the arrays are under high load, pushing the upper limits of their capabilities. That client would probably have been better off with an HDS for that purpose. For Pete's sake, at LEAST something with some SAS drives would be a better choice.

Regarding your DR plans - are you completely Virtualized, or do you have some physical clients in your environment? Do you plan to have a backup software in play?

Assuming this was asking me...

Today, we're around 55% virtual, and everything is spread across multiple hot sites. We have the same data in every site, which negates a lot of the traditional backup needs. However, we do send around 80TB per day to tape for (relatively) quick recovery and compliance purposes.

Unitrends can easily back up multiple locations, as long as they are connected by LAN/WAN, from a single appliance or group of appliances, all under the same management GUI. Since you have multiple sites with the same data, our adaptive deduplication and incremental-forever features will be valuable for reducing backup time and maximizing data storage space. Of that 80TB, how much do you think is duplicated data?

Thanks, everyone, for the input. We decided on the 6100XV for our primary site and a 4100 for our DR site. We do not have them set up yet, but I will leave an update after they are up and running. We will probably not utilize the built-in replication from the EqualLogics and instead use Veeam Backup and Replication 6, which we have been using for a little while now with a lot of success.
