Days in the life of a professional packet shepherd.

Dell Aims for the Clouds with Z9500 Spine

While at Networking Field Day 7, we got a small preview of a switch Dell Networking has just announced, the Z9500. I’ll have another post at some point covering more of Dell’s presentation at NFD7, but I wanted to briefly talk about this new product and what it brings to the table for Dell.

To be frank, Dell’s acquisition of Force10 Networks originally felt to me like a “me too” play, a way for Dell to compete with Cisco’s UCS and HP in the “full data center stack” market by combining compute, storage, and networking in a single SKU or a playbook of blessed configurations. I wasn’t really expecting Dell to innovate all that much in this space. But based on the information I have at this point, that skepticism looks unfounded.

Here’s a picture of the new beast from a demo rack Dell showed us at NFD:

Dell Z9500

To summarize the hardware: it’s a hell of a lot of density. The Z9500 platform presents 132 line-rate 40G ports in a 3RU chassis. So, if one were inclined, one could potentially cram 14 such chassis into a standard 42RU cabinet to concentrate 1,848 40G ports into a footprint of a few square feet on the datacenter floor.

Still, I don’t think the Z9500 is really intended as a 40G access switch. The role it could fill nicely is that of a spine switch in a Clos fabric. As I’ve always understood leaf/spine design, one of your scaling factors is the density of each spine. After all, each spine connects to each leaf, so you can only have as many leaves as you have ports in each spine. Access port oversubscription depends on the number of spines (really, the number of uplinks from each leaf), but the scale-out capability of the fabric depends on how dense each spine is. Hence the advantage of a 132-port spine switch: massive scale.

For example, with Dell’s existing Z9000 platform, a 32-port 40G switch, you could hang up to 32 Dell S4810 48-port 10G top-of-rack switches off each Z9000 spine, for a maximum of 32 x 48 = 1,536 10G ports in the fabric. With the Z9500 in the spine layer, however, one could build out the network with 132 S4810s, totaling 6,336 10G access ports. Big difference. You could even take this a step further and use Dell’s S6000 32-port 40G switch at the leaf layer with 10G breakout cables (I hate the idea of hanging hundreds of ports off QSFP+ breakout cables, but let’s go with it for the sake of example) and get up to 96 10G ports per S6000, still at 3:1 oversubscription, for a fabric-wide total of 12,672 10G ports in a single-stage leaf/spine design. The S6000 approach would actually let you increase oversubscription to get even more access ports, but let’s be honest: if you’re building out a network with over 12,000 10G ports, you’re probably not using them for light-duty servers that occasionally burst up to their 10G port speed…
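The arithmetic above can be sketched as a quick back-of-the-envelope calculation. This is just my own illustration, not anything Dell presented; the S6000 split of 24 QSFP+ ports broken out for access and 8 left as 40G uplinks is my assumption to arrive at 96 access ports at 3:1.

```python
def fabric_size(spine_ports, leaf_access_ports, leaf_uplinks):
    """Single-stage leaf/spine sizing.

    Each leaf connects one uplink to each spine, so the leaf count is
    capped by the 40G port count of a single spine switch.
    Returns (total 10G access ports, oversubscription ratio).
    """
    max_leaves = spine_ports
    total_access = max_leaves * leaf_access_ports
    # 10G access ports vs. 40G uplinks per leaf
    oversub = (leaf_access_ports * 10) / (leaf_uplinks * 40)
    return total_access, oversub

# Z9000 spine (32 x 40G) with S4810 leaves (48 x 10G, 4 x 40G uplinks)
print(fabric_size(32, 48, 4))    # (1536, 3.0)

# Z9500 spine (132 x 40G) with the same S4810 leaves
print(fabric_size(132, 48, 4))   # (6336, 3.0)

# Z9500 spine with S6000 leaves: assumed 24 ports broken out to
# 96 x 10G access, 8 x 40G uplinks
print(fabric_size(132, 96, 8))   # (12672, 3.0)
```

The takeaway is that the leaf model sets the oversubscription ratio, while the spine’s port density alone sets the ceiling on fabric size.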

Dell’s staff gave us some additional details on the switch, and it’s good stuff all around: line-rate L2/L3 forwarding, full routing capability, ultra-low-latency switching, etc. Assuming it supports the same features as the Z9000, it will also have OpenFlow built in and support VLT, Dell’s MLAG technology similar to Cisco’s vPC. Of course, I say all this having just about no real-world experience with Dell’s Force10 products, so before abandoning an incumbent vendor for something new, I’d recommend doing your own research. Dell does appear to have something worth researching, though, if you need this sort of scale.

I think it is very interesting that the Z9500 is actually a fixed hardware configuration, and ports are activated through licensing. The unit will be orderable with 36, 84, or 132 ports activated, and the others are just a license upgrade away. That really says something about the relative cost of the QSFP+ sockets and the fabric components. The value is in that ability to scale up to a huge number of high-speed ports in a compact space. Software activation on ports should also minimize the pain in ramping up the scale of a spine layer to accommodate more leaves as the data center is built out.

Dell gave us some example pricing which seemed very attractive, but I’m not going to post it here as I’m not entirely sure that bit of info was intended to be made public at this time. Suffice it to say that on a per-port basis, a maxed-out Z9500 had a good price per 40G port. It was a bit higher than the numbers being slung around for some other recent dense-40G solutions from other vendors, but the port density of the Z9500 is higher than just about anything else I’ve heard of so far.

Learn more about the Dell Z9500 switch here, and start thinking about how to build that next cloud!

Dell was a sponsor of Networking Field Day 7. In addition to a presentation, Dell provided me a USB storage drive and a small toy helicopter. At no time did they ask for, nor were they promised, any kind of consideration in the writing of this review. The opinions and analysis provided within are my own, and any errors or omissions are mine and mine alone.