VNX – Storage pools spanning DAE 0_0

Anyone working in IT knows that there is usually an enormous amount of whitepapers available to help you install, configure and run a new system or software suite. The fun more than doubles when the whitepapers start contradicting each other. But even when they’re crystal clear, sometimes you run into a different problem: budget! With all planning and designing done, sometimes the budget or the purchased equipment does not allow you to follow ALL best practices to the letter, or at least makes it a bit more challenging. In this example there’s the need to span a storage pool across DAE 0_0.

What’s up with DAE 0_0?!

DAE 0_0 is special: it’s the only DAE (Disk Array Enclosure) that is fed by the SPS (Standby Power Supply). This means that in case of a power outage, DAE 0_0 will stay powered and online a couple of minutes longer than the rest of the drives. This is done so that any data in the write cache can be de-staged to the vault. The storage processors also receive their power from the SPS, so they can de-stage that data and safely shut down once the write cache is empty.

Picture the following: a RAID 5 group (4+1) that has four drives in DAE 0_0 and one drive in another DAE. What happens in case of a power outage? The lone drive goes down, but the other four are still humming away for a couple of minutes, accepting the writes de-staged from cache. The lone drive misses those writes, so a rebuild is required after the system is powered up again. This in itself will cause a performance impact on your system, but there’s also an increased risk of data loss: if another drive fails during the rebuild, you’re in trouble.

So the VNX bible called “EMC VNX Unified Best Practices for Performance – Applied Best Practices Guide” states the following as general guidelines related to DAE 0_0:

AVOID placing drives in the DPE or DAE-OS enclosure (0_0) that will be mirrored with other drives in another enclosure. For example, AVOID mirroring a disk in 0_0 with a disk in 1_0.

AVOID placing drives in the DPE or DAE-OS enclosure (0_0) if they will be in a parity RAID Group where one disk is placed outside of that enclosure.

For FAST Cache, the following is added:

Place all Flash drives in enclosure 0_0, up to 8 drives. If over 8 drives:

Spread Flash drives across all available buses.

Mirror drives within an enclosure, to AVOID mirroring across 0_0.
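To make that last FAST Cache rule concrete, here is a small sketch (the function and drive layout are my own illustration, not from the guide) that pairs Flash drives into RAID 1 mirrors without ever crossing an enclosure boundary:

```python
# Hypothetical helper: pair FAST Cache drives into RAID 1 mirrors so that
# both drives of every pair sit in the same enclosure, per the guideline.
# Drive locations are (bus, enclosure, slot) tuples.

from collections import defaultdict

def pair_fast_cache_drives(drives):
    """Return RAID 1 mirror pairs, never pairing across enclosures.

    Raises ValueError if any enclosure holds an odd number of drives,
    since that would force a cross-enclosure mirror.
    """
    by_enclosure = defaultdict(list)
    for bus, enc, slot in drives:
        by_enclosure[(bus, enc)].append((bus, enc, slot))

    pairs = []
    for loc, group in sorted(by_enclosure.items()):
        if len(group) % 2:
            raise ValueError(f"odd drive count in enclosure {loc[0]}_{loc[1]}: "
                             "a mirror would have to span enclosures")
        group.sort()
        pairs.extend(zip(group[0::2], group[1::2]))
    return pairs

# Ten Flash drives spread across two buses: eight in DAE 0_0, two in 1_0.
drives = [(0, 0, s) for s in range(4, 12)] + [(1, 0, 0), (1, 0, 1)]
pairs = pair_fast_cache_drives(drives)
```

With this layout you get five mirror pairs, each contained in a single enclosure, so no FAST Cache mirror spans 0_0.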

Drive layout spanning DAE 0_0

This was usually no problem in the traditional RAID group architecture. Just create a RAID group with the DAE 0_0 drives and create the rest of the RAID groups in or across the remaining DAEs. But the drive counts are a lot larger with storage pools: it’s not uncommon to have only one or two pools in an entire system. Your customer does not like to hear “yeah, we can’t really use all those empty slots in DAE 0_0, just buy another DAE”. So let’s find a way to stick to the whitepaper guidelines and still use DAE 0_0 effectively.

I needed to create a pool for a 3-DAE system. The pool contains 16 3TB NL-SAS drives and 15 300GB SAS 15k drives. There’s no FAST Cache (yet). And there are two hot spares to top it all off. A bit of shuffling later I ended up with the following layout:

A single private RAID group of five drives in DAE 0_0 and DAE 0_0 alone, which meets the guideline.

The rest of the drives in other DAEs.

The storage pool will still span DAE 0_0, but if you match the number of drives in DAE 0_0 to the number of drives in a private RAID group for that tier (4+1, so five drives), you can avoid the rebuilds and meet the guideline.
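The rule boils down to simple arithmetic, sketched below (the function name and RAID widths are my own illustration): the drives a pool uses in DAE 0_0 must add up to whole private RAID groups, so no group is forced to mix 0_0 and non-0_0 drives.

```python
# Rough sanity check for the layout rule: the DAE 0_0 drive count per tier
# should fill complete private RAID groups.

def whole_groups_in_0_0(drives_in_0_0, group_width):
    """True if the 0_0 drive count fills complete private RAID groups."""
    return drives_in_0_0 % group_width == 0

# Performance tier: RAID 5 (4+1) -> width 5. Five SAS drives in DAE 0_0
# form exactly one private RAID group, so the guideline is met.
assert whole_groups_in_0_0(5, 5)
# Capacity tier: RAID 6 (6+2) -> width 8. Putting, say, four NL-SAS drives
# in 0_0 would split a private RAID group across enclosures.
assert not whole_groups_in_0_0(4, 8)
```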

Creating the actual storage pool

I created the storage pool in two steps, to force the VNX OE software to keep the private RAID group inside DAE 0_0 instead of spanning it across buses/enclosures. Maybe it does this automatically, but better safe than sorry! First of all, create the initial pool with all the drives except the ones in DAE 0_0.

Use manual disk selection to select all drives except the DAE 0_0 drives. Double-check that you have the correct drives: it’s easiest to create the hot spares before you create the pool. Also check that you are creating your pool with the recommended number of drives in each tier: in this case a multiple of 5 for the performance tier and a multiple of 8 for the capacity tier.
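If you prefer the CLI over Unisphere, the same first step can be scripted with naviseccli. This is only a sketch: the hostname, pool name and disk IDs are placeholders, and the exact options can differ per VNX OE release.

```shell
# Step 1: create the pool from drives OUTSIDE DAE 0_0 only.
# Disk IDs are bus_enclosure_slot; the list below is shortened for
# readability -- pass every non-0_0 drive of the pool in your real command.
naviseccli -h spa.example.com storagepool -create -name "Pool 0" \
  -rtype r_5 \
  -disks 0_1_0 0_1_1 0_1_2 0_1_3 0_1_4 0_1_5 0_1_6 0_1_7 0_1_8 0_1_9
```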

Next up, expand the pool and add the drives from DAE 0_0.

Since you have selected only the drives in DAE 0_0 instead of all the drives in one go, you’ve ensured the VNX cannot build a private RAID group that mixes drives from DAE 0_0 with drives from other DAEs.
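The expansion step can also be done from the CLI; again a hedged sketch, with the pool ID, hostname and slot numbers as placeholders.

```shell
# Step 2: expand the pool with only the five DAE 0_0 drives. Because no
# other drives are offered in this step, the new private RAID group is
# built from DAE 0_0 drives alone.
naviseccli -h spa.example.com storagepool -expand -id 0 \
  -disks 0_0_5 0_0_6 0_0_7 0_0_8 0_0_9
```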