FCoE and standards: this is what really matters

In recent weeks, J. Michael Metz from Cisco entered a ****ing contest with my friend Greg and decided to prove that the FCoE standards are done. I'm a bit worried about the nervously aggressive tone of his writing, but the bigger problem is that he's completely missing the point: it usually takes around a year to get from a stable technical solution to a shipping product (more if custom silicon is involved).

The questions we should be asking Cisco and other FCoE vendors are thus:

Is your currently shipping hardware truly compatible with the standards? We all know that the generic ideas behind a specific standard are known well in advance, but last-minute changes can represent either a minor annoyance (if they are in the control plane, which is implemented in software on a general-purpose CPU) or a major headache (if they require hardware changes). Dear vendors: are you willing to certify that we will not have to upgrade your hardware to make your implementation conform to the DCB standards? Are you willing to offer free upgrades if your hardware turns out to be incompatible?

When will you ship the standards-compliant version of software and hardware? The question of whether the 802.1Qbb and 802.1Qaz standards are "done" or not is becoming a purely semantic exercise. It's more important to know when we'll see the standards implemented in shipping products.

What are you doing to ensure cross-vendor interoperability with pre-standard products? Before the (almost done) DCB standards are implemented in shipping products, we need to know what works and what's officially supported. Cisco, NetApp and VMware apparently did an end-to-end test. However, I was unable to find the test reports or exact configurations on their joint web site. There are at least three components in every FCoE installation: the storage, the switch and the CNA. While the first two were specified in the press release, the third one (the CNAs they tested) was not.
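To illustrate why untested combinations matter, here's a minimal Python sketch of the interoperability matrix that three component types create. All vendor/product names and the "tested" set are hypothetical placeholders, not actual products or published test results:

```python
from itertools import product

# Hypothetical component lists -- placeholders, not real products
cnas = ["CNA-A", "CNA-B", "CNA-C"]
switches = ["Switch-X", "Switch-Y"]
arrays = ["Array-1", "Array-2"]

# Hypothetical set of combinations a vendor claims to have tested
tested = {("CNA-A", "Switch-X", "Array-1")}

# Every possible (CNA, switch, array) deployment combination
combos = list(product(cnas, switches, arrays))
untested = [c for c in combos if c not in tested]

print(f"{len(combos)} possible combinations, {len(untested)} untested")
```

Even with just three CNAs, two switches and two arrays, a single published end-to-end test leaves eleven of the twelve possible combinations unverified, which is exactly why the press release should have named the CNAs.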

There's also a non-technical question I'd like to ask:

When will you align your FCoE stories? NetApp claiming FCoE needs TRILL and Cisco claiming it doesn't does not help your joint customers. Those of us who bothered to read FC-BB-5 before writing about FCoE know the answer … but if you guys can't get a coherent, synchronized story together, how can an average engineer hope to implement this technology in the Data Center network?

Comments

I hear your cry for completed standards but I also hear Cisco’s cry that enough has been done to start using this stuff.

Here is what I can tell you from experience with the tech in production environments: it works as a replacement for local FC storage. Would I use it for heavy Oracle DBs? Maybe, maybe not. Would I use it for a bunch of VMs that need to boot from SAN? Definitely. Does it require TRILL? Nope, didn't touch it.

I have seen several docs on Cisco/EMC's site about FCoE, Oracle, and EMC storage arrays, with performance data and configurations.

The problem is that this is a critical service, but it is being released very much like ISL or PoE was: some pre-standard pieces, some standardized pieces, and a mix of both. The customer is left to sort it out.

The vendors could and should do a better job of covering it all and disclosing all of this, but that is NEVER the case anymore.

There’s an important difference between ISL, PoE, early MPLS … and FCoE: in all the other cases, we had to deal with boxes from a single vendor. We knew there was a lock-in factor, but we also knew there was a single vendor to contact if the proprietary technology didn’t work as expected. Also, all of those technologies were covered by the same team in the IT organization.

FCoE is different: you need hardware from two or three vendors (CNA, switch, storage) unless you use a Cisco-only FCoE solution (UCS with Cisco mezzanine cards) connected to a legacy FC network and storage. If you buy the hardware components separately (which is commonly the case, unless you buy from Acadia), you become your own systems integration and interoperability lab, while at the same time dealing with three IT teams (network, server and storage) … and at least in some organizations, one of them will be only too happy to say “I told you so” after you experience FCoE-related glitches.
