The new EMC VNX: introducing MCx

Earlier this month EMC announced the new VNX series, which promises more performance and capacity at a lower cost per GB and in a smaller footprint. The hashtag for the event was #Speed2Lead, which was trending on Twitter during the official event and in the weeks leading up to the Mega Launch in Milan, Italy. With performance being key in the new systems, the announcement was built around the Monza race track, where the Formula 1 circus was in town. Guess what the logo for the launch was?

I myself was on summer holidays during the big event (ending up only a hundred miles away from Milan, albeit a week late ;)), so I couldn’t do much more than refresh Twitter and watch my timeline get blasted to bits. So consider this a catch-up post!

Performance -> MCx!

When I started implementing EMC CLARiiON CX3 and CX4 systems, most (small) customers ended up with a system that had the following performance metrics: storage processor utilization <20%, disks at 70%+ utilization. The rotating disks (10k/15k rpm) were the bottleneck in the system, with the storage processors picking their electronic noses out of boredom.

With the introduction and widespread adoption of flash in the current arrays, a peculiar thing happened: disks started outperforming the controllers. If you’ve ever introduced FAST VP and FAST Cache on a heavily loaded CX4 system, you will know what I’m talking about. I can still remember the first time a DBA started a restore of a large database and got super excited about the 300 MB/s sustained restore speed he was getting. I got slightly less excited: while the FAST VP pool was still at <50% utilization, the storage processor servicing the LUN topped out at 100% load. The DBA was getting super fast restore speeds, but the response time on that SP was completely absurd (100 ms+) because the processor was completely overloaded, and the rest of the IT environment went up in flames as a result.

The VNX systems improved on this with faster processors, more cores, and so on. And indeed, the problem was alleviated! But even then, with a flash-heavy system you’d end up with a storage processor utilization that’s higher than you might like.

The reason is that the VNX and its predecessors were not designed for the multi-core processors that are now common. Each task performed by the storage processors was assigned to a fixed processor core, with no way to load balance across other cores when needed. A single feature could max out its core while the other cores sat relatively idle. In short: a waste of resources.

Enter MCx! A dynamic, multicore-optimized architecture that fully utilizes all available cores. Use a lot of RAID6 and thus need a lot of CPU capacity for parity calculations? No problem! Heavy on the FAST Cache hits? No problem! All available resources will be used. And if Intel introduces a new, bigger/better/faster processor, MCx can utilize the additional cores.
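To make the contrast concrete, here’s a minimal scheduling sketch. This is my own illustration, not EMC’s actual scheduler, and the task costs and core count are made up: in the static model each feature is pinned to one core, so a single hot feature determines the total completion time, while in a dynamic model any core can pick up any task.

```python
import heapq

def static_makespan(tasks_by_type):
    # Pre-MCx model: each task type is pinned to its own core, so the
    # busiest type's core determines total completion time.
    return max(sum(costs) for costs in tasks_by_type)

def dynamic_makespan(tasks_by_type, cores=4):
    # MCx-style idea (greatly simplified): any core can run any task.
    # Greedy longest-processing-time scheduling onto the least-loaded core.
    loads = [0] * cores
    heapq.heapify(loads)
    for cost in sorted((c for costs in tasks_by_type for c in costs), reverse=True):
        heapq.heappush(loads, heapq.heappop(loads) + cost)
    return max(loads)

# One hot feature (say, parity calculations) with eight 5-unit tasks,
# and three quiet features with two 1-unit tasks each:
work = [[5] * 8, [1] * 2, [1] * 2, [1] * 2]
print(static_makespan(work))      # 40 -- one pinned core does all the heavy work
print(dynamic_makespan(work, 4))  # 12 -- the same load spread across 4 cores
```

The same total amount of work finishes much sooner when no core is left idle, which is the whole point of the dynamic architecture.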

So what does this translate to for the people using the new arrays? Simple!

The new VNX with MCx scales linearly all the way up to ONE MILLION IOPS, with response times on those IOPS below 1 ms! From a business perspective, the end result is simple: you can run MORE virtual machines on a single VNX system: up to 8,000 virtual machines on the biggest model (with thin provisioning turned off). With more and more customers taking a “100% virtual” approach, this is very welcome!

More firepower -> More/better features!

If you have more CPU cycles to spare you’d better use them intelligently. The MCx VNX does this with features like:

More efficient FAST VP. The increased CPU power allows for smaller FAST VP slices. This results in more efficient use of the high-performance tiers and thus a better ROI, as well as better performance: data that needs to be on a high-performance tier now has a better chance of actually being there.
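A hedged back-of-the-envelope shows why slice size matters: FAST VP relocates whole slices, so a small hot region drags its entire slice onto flash. The 1 GB and 256 MB slice sizes below are illustrative figures for the old and new behaviour, not values taken from this announcement.

```python
import math

def flash_consumed_mb(hot_mb, slice_mb):
    # FAST VP promotes whole slices, so round the hot region up
    # to a whole number of slices.
    return math.ceil(hot_mb / slice_mb) * slice_mb

# 300 MB of genuinely hot data:
print(flash_consumed_mb(300, 1024))  # 1024 -- a large slice ties up extra flash
print(flash_consumed_mb(300, 256))   # 512  -- smaller slices track the hot data better
```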

Block-level deduplication. It will not require a license and will run twice a day to save you some space in the high-performance flash tier.

Active/Active storage processors. Wait, what?! That’s Symmetrix territory! Well, no longer: the MCx VNX will offer an Active/Active configuration for RAID Group LUNs (for now!). Pool LUNs will still be A/P, but no doubt that will change in the future…

And to reduce the system footprint: 2.5″ 15k drives (previously only 10k drives were available in the 2.5″ form factor).
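The block-level deduplication mentioned above boils down to a simple idea: chunk the data into fixed-size blocks, hash each block, and store every unique block only once. A minimal sketch follows; this is my own illustration, and the block size and hashing scheme are assumptions, not the array’s actual implementation.

```python
import hashlib

BLOCK_SIZE = 8 * 1024  # 8 KiB blocks -- an assumed size for illustration

def dedupe(data: bytes):
    """Split data into fixed-size blocks and keep each unique block once.

    Returns (store, recipe): store maps block hash -> block contents,
    recipe lists the block hashes in their original order.
    """
    store, recipe = {}, []
    for i in range(0, len(data), BLOCK_SIZE):
        block = data[i:i + BLOCK_SIZE]
        digest = hashlib.sha256(block).hexdigest()
        store.setdefault(digest, block)  # store each unique block only once
        recipe.append(digest)
    return store, recipe

def rehydrate(store, recipe):
    # Reassemble the original data from the recipe.
    return b"".join(store[d] for d in recipe)

# Four blocks of data, three of them identical: only two blocks are stored.
data = b"A" * BLOCK_SIZE * 3 + b"B" * BLOCK_SIZE
store, recipe = dedupe(data)
print(len(recipe), len(store))  # 4 logical blocks, 2 stored blocks
```

On repetitive data sets (think cloned VMs), this is where the space savings in the flash tier come from.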

Want to know more?

My colleague and fellow EMC Elect member Rob Koper attended the launch in Milan and has written a not-so-short blog post over here. Dave Henry also had a front row seat in Milan and has written an extensive blog post. And finally you can watch the recorded launch over here.