Right now is a particularly interesting time in the world of IT. Historically, IT has swung back and forth between centralization and decentralization, closed and open, tightly controlled and loosely controlled. Lately, though, a third option has cropped up: centralized control with decentralized workloads. In my opinion it’s a function of speed, implemented through bandwidth and processing capacity. We now have enough bandwidth between our devices to start treating the device in the next rack column like a slightly-less-local version of ourselves. We also have enough bandwidth that we’ve outstripped our need for separate storage and data networks, and can converge them onto a single wire, running a single set of protocols (most notably TCP and IP). On the processing side, each node is basically a datacenter unto itself: 16, 32, or 64 cores per server, terabytes of RAM. The advent of SSDs and PCIe flash rounds out the package, lessening the need for large monolithic collections of spindles (aka “traditional storage arrays”). The problem then becomes one of control. How do we take advantage of the performance and cost benefits that local processing brings, yet keep the redundancy and management advantages we had with a monolithic solution, without letting complexity run away from us? And while we usually talk about doing this at great scale, can we do it on a small scale, too?

Dell unveiled the Dell PowerEdge VRTX last week. Pronounced “vertex,” the name is meant to evoke the idea of the convergence of storage, processing, and networking. “A corner or point where lines meet,” says the Math Open Reference. While I run the risk of seeming like I’m sucking up to Dell, I think it’s one of the most thoughtful naming choices a vendor has made in a while. The device itself seems fairly innocuous. It takes four M520 or M620 blades, just like its cousin the M1000e does. It also has built-in networking options like the M1000e, either pass-through or a full switch fabric. Looking at the VRTX to this point, and hearing Dell say that it’s targeted at the smaller parts of the market, people start thinking that it’s just a small blade chassis. Big deal, right? Who cares? If this is what passes for innovation at Dell they’re doomed as a company, right? Everybody is moving to the public cloud anyhow!

Um, no, on all counts. Despite the hype, the public cloud remains relatively inaccessible to much of the world. It’s getting better, but bandwidth isn’t growing as fast as we need it to, especially in and to smaller communities. Prices remain high and availability scarce, often less from market forces than as a result of monopolistic and anti-competitive political and business maneuvering. The public cloud might be the savior for people sitting in downtown San Jose, but for many businesses, schools, and governmental agencies in less urban places (think Bayfield, WI) it just isn’t an option until they can get redundant, fast, inexpensive network connections.

So we look again at the VRTX, and what really starts to bend our brains is the 12 or 25 hard disk bays it has, attached to a shared PERC8 (LSI Logic) controller. That PERC8 can pull some tricks through its use of PCIe single root I/O virtualization, or SR-IOV, so that it can present storage to all of the blades at once. That’s a trick normally reserved for more complicated setups, like iSCSI storage arrays. What if you didn’t need a complicated storage array setup to build a small virtualization cluster? Sure, those disks are attached via SAS expanders, so those two 6 Gbps links to the PERC8 are going to be pretty busy at times. That’s just a matter of sizing, and it’s nice to have all that raw storage potential.
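To put that sizing question in perspective, here’s a quick back-of-the-envelope sketch. The link count and encoding overhead are my own assumptions based on how 6 Gbps SAS generally works, not Dell-published figures:

```python
# Back-of-the-envelope sizing for the VRTX's shared storage (assumed numbers).
# Assumptions: two 6 Gbps SAS links feed the shared PERC8, and 6 Gbps SAS
# uses 8b/10b encoding (8 payload bits per 10 line bits).
SAS_LINKS = 2
LINK_GBPS = 6
ENCODING_EFFICIENCY = 0.8   # 8b/10b overhead
DRIVE_BAYS = 25             # the larger, 2.5-inch bay configuration

usable_gbps = SAS_LINKS * LINK_GBPS * ENCODING_EFFICIENCY  # ~9.6 Gbps payload
usable_mb_s = usable_gbps * 1000 / 8                       # ~1200 MB/s aggregate
per_drive_mb_s = usable_mb_s / DRIVE_BAYS                  # ~48 MB/s per busy drive

print(f"~{usable_mb_s:.0f} MB/s aggregate, ~{per_drive_mb_s:.0f} MB/s per drive")
```

Plenty for a small virtualization cluster’s mixed I/O, but it shows why “pretty busy at times” is fair if all 25 spindles try to stream sequentially at once.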

SR-IOV is a really interesting prospect, too, especially with eight assignable PCIe slots (a maximum of four per blade). Need a fax card? Need a GPU? Need a VDI offload card like a Teradici APEX 2800? You can’t stick any of those in an M1000e. There’s further potential if Dell can enable more SR-IOV capabilities. For example, the VRTX doesn’t ship with 10 Gbps Ethernet right now, but the Intel X520 card can do SR-IOV. Wouldn’t it be cool if those four M620s could share a single X520, or a single Fusion-io flash card? Dell’s use of the PCIe fabric also means that mezzanine cards aren’t needed in the VRTX. That helps Dell list this device at $9,999 with two modest M620s and 5 TB of storage.

Dell has done a lot of work on the VRTX in other ways, too. The chassis management controllers have a reworked UI that aims to be easy to use. Some think this is pandering to a less capable SMB IT audience, but I think Dell is starting to understand certain pressures in IT in ways other vendors don’t. Easy to use means faster configuration and deployment, fewer mistakes, less human error, and less downtime for everybody from SMBs to large enterprises. They’ve also reworked OpenManage to add features like a map of where your systems are deployed, which isn’t something usually seen in free products. I hope Dell continues down this path of thoughtful UI and feature design; it will separate them from everybody else.

On a completely visceral note, the VRTX is damn quiet. Dell has done a ton of work via its Fresh Air initiative to raise servers’ tolerance for warmer environments. They usually sell this as a reduction in data center OpEx, not only because you can raise the temperature in the room but also because thousands of fans won’t spin up as a result. Faster fans mean more power draw, but also more noise. The fans in the VRTX are squirrel-cage blowers, which are inherently quiet and highly reliable, especially when they don’t need to spin fast. Aurally speaking, I’d be happy to have a VRTX in my office, which isn’t something I can say about any other Dell server, or even some of their desktops.

I really like the Dell PowerEdge VRTX. I think it shows that Dell has been listening closely to small and medium businesses, as well as to industries like retail where there’s a real need for local computing to serve point-of-sale systems. They’re also thinking critically about what features really need to live in hardware, especially in the face of software-defined computing. As a result I think Dell will be pleasantly surprised by the uptake of the VRTX across the board, because of the price point, the local storage, the flexibility that PCIe cards allow for things like VDI and data center corner cases, and the potential for growth as new blades and switch modules are developed.


Bob Plankers is an IT generalist with direct knowledge in many areas, such as storage, networking, virtualization, security, system administration, and data center operations. He has 17 years of experience in IT, is a three-time VMware vExpert and blogger, and serves as the virtualization & cloud architect for a major Midwestern university.



4 comments for “The Potential of the Dell PowerEdge VRTX”

Jamie

June 17, 2013 at 10:33 AM

While this is priced for SMBs, Nutanix provides a similar solution at larger scale. I am currently looking at them to move my VDI environment to. Looks like things are moving toward a SAN-less datacenter.

Just a few quick points regarding SR-IOV and the sharable PCIe devices on the VRTX:

Common SR-IOV models known pre-VRTX: an SR-IOV-enabled device can share a single resource, via dedicated VF drivers, with an SR-IOV-capable hypervisor to improve network performance and reduce CPU load. This is accomplished by placing a dedicated FPGA onboard the PCIe card to handle transactions and facilitate the shared-resource communications directly on the card instead of at the hypervisor and ultimately the CPU. I don’t want to go too far down that road, as it’s too far off topic.

Sharing Fusion-io, as an example, within the VRTX: there are only a few devices on the market today capable of performing true SR-IOV (and even fewer MR-IOV) functions. For SR-IOV functionality to be used, all components of the model must be SR-IOV capable, which really just means supported by the PCIe card manufacturer and/or the software manufacturer. Cards from manufacturers such as Fusion-io therefore do not (currently, anyway) have the ability to support the any-to-any SR-IOV to MR-IOV conversion model. Fusion-io currently supports only a physical function driver, does not offer a virtual function driver at all, and has no onboard FPGA-based tables to track the physically attached servers assigned via SR-IOV, or their pecking order when a VF driver passes requests back down to the physical function driver.

So while the VRTX technically supports SR-IOV and sharing to multiple blades, what is actually happening on the VRTX is a hardware-virtualized SR-IOV process. Additional MAC tables and processing happen and are stored inside the VRTX, which also opens the door to adding multiple 10 Gig cards that then report their tables to either a top-of-rack or core switch.

That being said, I don’t believe the VRTX currently has the ability to perform SR-IOV to MR-IOV conversions beyond what the onboard LSI PERC controller is doing. Hopefully that will come in a future release with Intel SR-IOV NICs and Emulex HBAs. It would be cool to have that functionality in just a 5U chassis!

SR-IOV any-to-any devices: not all PCIe cards are created equal, and manufacturers must spend extra dollars to make devices SR-IOV capable, e.g. the Intel X520 or X540. Both of these devices are SR-IOV capable and can maintain their own MAC table, allowing communication to traverse between the shared virtual function drivers assigned and installed directly on all of the physical blades. No external switching of any kind is needed when sharing an X520 in the VRTX among multiple blades for cross-blade communication (although you will have to have an RJ45 connector plugged in or it won’t work). This I/O abstraction layer becomes a shared resource, much like what VMware did for CPU and memory. A single shared 10 Gig port can only provide 10 Gig of shared bandwidth, but when two devices share that 10 Gig at the same time it is automatically balanced (all else being equal) and 5 Gig is available to each physical host or endpoint. The real cool factor is the latency reduction between hosts, and the performance found by aggregating multiple 10 Gig ports when servers are coasting and not using the bandwidth. It provides a very flexible, modular solution that no other server vendor has yet offered the world.
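That bandwidth-sharing arithmetic can be sketched in a couple of lines. This is a deliberate simplification: it assumes a perfectly fair split among only the endpoints that are actually pushing traffic, which real hardware only approximates:

```python
def fair_share_gbps(link_gbps: float, busy_endpoints: int) -> float:
    """Even split of a shared SR-IOV port among endpoints pushing traffic;
    idle endpoints don't consume a share of the bandwidth."""
    if busy_endpoints <= 0:
        return link_gbps  # no contention: the full pipe is available
    return link_gbps / busy_endpoints

fair_share_gbps(10, 2)  # two busy blades -> 5 Gbps each
fair_share_gbps(10, 1)  # a lone busy blade coasts at the full 10 Gbps
```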

I commend Bob Plankers for sharing his knowledge and foresight with the world, and congrats also to Dell on the VRTX; it’s likely going to be a real game changer!

DR Bernstein

February 3, 2014 at 10:05 AM

When my office needed to get rid of a dozen old servers, I contacted a few companies in the New York area that sell servers. Network Doctor, based in Englewood Cliffs, New Jersey, provided a solution that saved my practice over $20,000 just on the hardware. The space that this little box takes up is a fraction of what the older equipment used, and in Manhattan that saves thousands of dollars in rent. Network Doctor set up everything and continues to provide support. We just give them a call when we need changes made and they handle it. Everything is covered for a low monthly price. I highly recommend them for computer services.