Ethernet: The New Storage Area Network

Widespread adoption of 10 Gigabit Ethernet, and the rapid arrival of even faster speeds, is pushing Fibre Channel out of the datacenter.

Ten Gigabit Ethernet finally hit its stride in 2013, and waves of innovation are rapidly advancing the technology. Quad-lane 40 Gigabit Ethernet stripes data across four 10 Gigabit lanes, supporting demanding applications such as big data and mobile broadband.
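
The lane-striping arithmetic is worth making concrete. This is a back-of-envelope sketch, not a measurement: the lane count comes from the article, but the overhead figure is an assumption chosen purely for illustration.

```python
# Back-of-envelope model of a 40 GbE link built from four 10 Gb/s lanes.
# Line rates are nominal; real throughput is lower after encoding and
# protocol overhead (assumed here at ~10%, purely for illustration).

LANE_RATE_GBPS = 10          # nominal rate of each lane
LANES = 4                    # four lanes striped into one logical link
OVERHEAD = 0.10              # assumed combined encoding/protocol overhead

def effective_gbps(lanes: int = LANES, lane_rate: float = LANE_RATE_GBPS,
                   overhead: float = OVERHEAD) -> float:
    """Aggregate usable bandwidth across striped lanes."""
    return lanes * lane_rate * (1 - overhead)

def transfer_seconds(gigabytes: float, gbps: float) -> float:
    """Seconds to move a payload at the given link rate (8 bits per byte)."""
    return gigabytes * 8 / gbps

if __name__ == "__main__":
    agg = effective_gbps()
    print(f"usable bandwidth: {agg:.0f} Gb/s")
    print(f"1 TB transfer: {transfer_seconds(1000, agg):.0f} s on 40 GbE "
          f"vs {transfer_seconds(1000, effective_gbps(lanes=1)):.0f} s on one lane")
```

Under these assumptions, striping cuts a 1 TB bulk transfer from roughly 15 minutes on a single 10 Gb/s lane to under 4 minutes, which is the kind of gain that matters to big-data workloads.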

These products are well beyond the research stage and apply across the entire infrastructure. Forty Gigabit Ethernet is available in copper and low-cost short-haul fiber versions, making it ideal for inter-switch backbones and for connecting fast storage appliances such as all-flash arrays.

The new Ethernet spectrum includes RDMA technology in the 40 Gigabit Ethernet families. This means Ethernet can now match much of the performance edge InfiniBand has enjoyed for years. Admittedly, IB still has some advantages over Ethernet, and it may be the optimal choice for users pushing the limits. Mellanox has capitalized on this with a software-definable NIC/switch combination that can run either protocol, allowing users to compare operation in both environments.
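
Much of RDMA's edge comes from bypassing the kernel and avoiding intermediate copies, which shows up as lower per-operation overhead rather than a faster wire. The sketch below is a purely illustrative latency model; the overhead figures are assumptions, not benchmarks.

```python
# Illustrative latency model only -- the overhead numbers are assumptions,
# not measurements. RDMA's advantage is modeled as lower fixed per-operation
# software overhead on the same 40 Gb/s wire.

def op_latency_us(payload_bytes: int, wire_gbps: float, overhead_us: float) -> float:
    """Per-operation latency: fixed software overhead plus serialization time."""
    serialize_us = payload_bytes * 8 / (wire_gbps * 1000)  # Gb/s -> bits per us
    return overhead_us + serialize_us

# Assumed overheads: ~10 us for a trip through a kernel TCP stack,
# ~1 us for a kernel-bypass RDMA operation.
for size in (512, 4096, 65536):
    tcp = op_latency_us(size, 40, overhead_us=10.0)
    rdma = op_latency_us(size, 40, overhead_us=1.0)
    print(f"{size:>6} B: tcp={tcp:6.1f} us  rdma={rdma:6.1f} us")
```

The model makes the qualitative point: for small messages the fixed overhead dominates, so kernel bypass wins big; for large transfers both converge on the serialization time, and raw bandwidth matters more.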

Protocol-wise, Ethernet already hosts a Fibre Channel derivative, FCoE, as well as iSCSI. These provide an alternative to Fibre Channel itself, and iSCSI already runs on 40 Gigabit Ethernet. Other storage approaches run natively on Ethernet as well, including NAS protocols such as NFS, along with object storage.

Fibre Channel, on the other hand, has been losing ground to Ethernet-compatible alternatives. Fibre Channel drives are no longer available in the mainstream. A protocol that once went all the way through an array to the drive now ends at the inlet to the array box.

In the connectivity stakes, the Fibre Channel committee chose a doubling strategy, which at the time allowed it to leapfrog Ethernet in performance. We had 4 Gigabit Fibre Channel when Ethernet could only muster 1 Gigabit. But the rules of the game have changed. Fibre Channel and Ethernet are now tied to the same physical-layer structures, simply because of the huge cost of physical-layer development and the resources needed to create and test it.

Here, the doubling idea works against Fibre Channel. Eight Gigabit Fibre Channel hit the market a year after 10 Gigabit Ethernet, and 16 Gigabit Fibre Channel is just getting going. Ethernet, on the other hand, has leapt forward to 40 Gigabit, which has already gone mainstream.
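
The roadmap math above can be put in simple terms: once both camps share the same physical layer, a doubling cadence needs many more generations to cover the same ground as Ethernet's 10x/4x jumps. A quick sketch (generation counts only; actual ship dates come from the article's timeline):

```python
# Why doubling loses to big jumps: count the generations each side needs
# to climb from 1 Gb/s to 100 Gb/s. Each generation takes years to
# standardize and ship, so extra generations become a multi-year lag.

def generations_to(target: float, start: float, factor: float) -> int:
    """Number of speed generations needed to reach `target` from `start`."""
    gens, speed = 0, start
    while speed < target:
        speed *= factor
        gens += 1
    return gens

fc_gens = generations_to(100, 1, 2)   # 1 -> 2 -> 4 -> 8 -> 16 -> 32 -> 64 -> 128
eth_steps = [1, 10, 40, 100]          # Ethernet's actual step ladder
print(f"FC needs {fc_gens} doublings; Ethernet took {len(eth_steps) - 1} steps")
```

Seven doublings against three steps: even with a faster cadence per generation, Fibre Channel cannot close that gap once both protocols draw on the same physical-layer building blocks.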

Fibre Channel's answer is a 32 Gigabit link speed. QLogic says it will release adapters in 2015, while switches will have to wait until 2016. That amounts to general availability of working product in late 2016, giving 40 Gigabit Ethernet a four-year lead.

Ethernet isn't standing still. Single-link native 40 Gigabit Ethernet and 100 Gigabit products are already in testing, and a multi-lane 100 Gigabit product is available. The near-term landscape for 100 Gigabit is still a bit confused, however, with a number of incompatible alternatives available. This suggests that the market needs time to stabilize. Serious volumes won't ship until 2015.

To add to the complexity of comparison, a four-lane 128 Gigabit Fibre Channel is mooted for the 2016 timeframe, although it may be pushed into 2017.

All of this puts Ethernet in the catbird seat. A lead of four years at the 40 Gigabit level, and at least two years at 100 Gigabit speeds, clearly favors industry convergence on Ethernet. InfiniBand and RDMA-capable Ethernet are picking off the highest-performance use cases. The cloud, unified storage, and the move away from block IO also favor Ethernet.

Ethernet wins on schedule, cost, and performance. It's also far easier to manage than a traditional SAN, which required specially trained SAN technicians. Fibre Channel is going to be on the defensive in a very tough fight for the market over the next few years. It's not down, and certainly not out, but Ethernet is giving Fibre Channel a heck of a pummeling, with no end in sight.

Solid state alone can't solve your volume and performance problem. Think scale-out, virtualization, and cloud. Find out more about the 2014 State of Enterprise Storage Survey results in the new issue of InformationWeek Tech Digest.

Jim O'Reilly was Vice President of Engineering at Germane Systems, where he created ruggedized servers and storage for the US submarine fleet. He has also held senior management positions at SGI/Rackable and Verari; was CEO at startups Scalant and CDS; and headed operations at PC ...

Our InformationWeek 2014 State of Storage data shows FC holding its own, frankly more than we expected -- 47% use Fibre Channel SANs, down just four points from 51% last year. (You can read the report here, and download the full data set here.)

Companies have invested a lot in FC gear and expertise, and storage admins benefit.

The fact that the installed base of FC is shrinking says it all: replacement gear is going to Ethernet or InfiniBand more often than to FC. With the overall storage market expanding, that puts FC in decline.

I suspect it will be 20 years before the last of today's FC gear is switched off, and possibly longer in government. Companies still use floppy disks, even though they were end-of-lifed years ago!

The handwriting is on the wall for specialized, high-performance technologies that don't reach a mass market. Cloud computing represents a ruthless sorting out and simplification, favoring those technologies that are both most extensible and cheapest to produce. Ethernet meets that standard on several counts.

I have never understood this notion of some kind of competition between InfiniBand and Ethernet, unless of course the comparison is based mainly on link-layer bandwidths. After all, Ethernet is a layer 2 technology, whereas IB goes clear up the stack to layer 7.

Your article points out that RDMA is available to run over Ethernet, but interestingly you don't mention that the RoCE specification (RDMA over Converged Ethernet) was created by the InfiniBand Trade Association which continues to develop and enhance the technology. From an IBTA perspective, our interest is less about winning the bandwidth race (which, incidentally, IB has done for more than a decade now) and more about providing the RDMA value proposition over whatever wire is appropriate for the end user, whether that is an IB wire or an Ethernet wire.

There are encouraging signs that the focus is shifting away from raw bandwidth at the wire level and toward a broader discussion of application level performance and efficiency. And after all, application effectiveness is, or should be, the focus of any high performance network technology.

Perhaps the notion that Ethernet and IB are just variations of the same ISO stack lends credence to the convergence idea. After all, if the same gear can do either job, IB just falls into the same class as HTTP, FTP, and the rest: yet another protocol carried over Ethernet. Convergence achieved?