Mmmm, TOE jam: Trev shoves Intel's NICs in his bonkers test lab

Review  Tech offerings are rarely "good" to me, merely "less awful than the others". In this case, however, I'll be looking at the complete package of Intel's networking efforts - the hardware, the software and the ongoing support - and might even apply that word to them. This isn't to say that everything in the Intel networking universe is awesome - I've certainly encountered some weird stuff over the years with its cards. In the majority of cases, however, the answers were to be found on Intel's own support website rather than in the esoteric knowledge of the sysadmin mafiosi.

Intel's networking folks serve as a stark contrast to my experiences with Nvidia's networking components. There is a special hell reserved for telemarketers and whoever designed the nForce NIC/forcedeth driver combo. I consider it a professional failing as a writer that I cannot craft a phrase vitriolic enough to encompass the depth of my discontent with that product. I simply lack the words.

But when it comes to Intel, not only were workarounds or potential fixes posted, but fairly upfront explanations of why the bad thing was happening generally appeared alongside them.

Intel in VMware

If the fix needed to come from Intel's end, you stood a good chance of getting its networking guys to issue one. At the very least, the company seemed willing to work with the open source community to make sure that – if we wanted to – we could build a fix on our own.

The end result was hardware that would "just work". When it came to some of the older Gig-E NICs, I would be lying if I said that Intel's offering delivered the highest speed or the lowest latency. Other vendors offered cards with more features, more offloading, and ultimately better performance... if you had everything set up just right.

But those setups were fragile, and they certainly weren't something I was prepared to use in deployments where I knew that the company would end up repurposing that server for some other task down the line. Many a Windows-based database server has ended up a VMware host in the past few years. For me, network card compatibility has proved the number one issue with those conversions.

The old and the new

While I've got a lab full of Intel 1Gbit cards that seem to work in everything, the truth of the matter is that most of this stuff is old. Half my inventory consists of Pro/1000 cards that have been in service for over a decade. Until very recently, my only 10GbE Intel cards were a pair of XF SR Dual Port Server Adapters, and those date from 2007.

As luck would have it, Intel was convinced to part with some newer cards for testing. I have a pair of X520-DA2s, two I350-T4s and a pair of X540-T2s. In essence, I have managed to assemble "the evolution of Intel Ethernet networking" for the past decade or so.

Convergence

The newer units are Converged Network Adapters (CNAs), and the features they offer make a real difference. I'll be honest and say that I never really grokked the concept behind CNAs. What exactly does one of these little blighters do, and why should I care?

Intel fibre properties

To decide whether these cards were worthy of the respect earned by their predecessors, I first had to figure out what makes CNAs special.

It isn't the ability to carry storage protocols. iSCSI doesn't need a "special card" to work, and neither does FCoE. So long as you have the right software, you should be able to make any network card speak storage protocols; those ancient Intel Pro/1000s work just fine with iSCSI. For a long time I looked at this "converged networking" stuff as marketing gimmickry and stuck to caring about practical things, like "will the OS talk to the NIC?".
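To put that in concrete terms, here's a minimal sketch of lighting up software iSCSI on a bog-standard NIC using the built-in Microsoft initiator (Server 2012 or later); the portal address is a placeholder, not anything from my lab:

    # Software iSCSI needs no special silicon - the Microsoft initiator
    # service plus any working NIC will do (minimal sketch, Server 2012+)
    Start-Service MSiSCSI
    Set-Service MSiSCSI -StartupType Automatic

    # Point the initiator at a target portal (placeholder address)
    New-IscsiTargetPortal -TargetPortalAddress "192.168.0.50"

    # Discover the targets the portal advertises and log in
    Get-IscsiTarget | Connect-IscsiTarget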

The world looks a lot different when you start pushing the cards into the red all day long. My new test lab affords me the opportunity to test storage at speeds that can saturate a 10GbE link. Start slamming a non-converged NIC with that kind of traffic while adding in traditional traffic (SMB, NFS, SIP and so on) and you can watch your latency climb. Eventually, your I/O requests may start timing out altogether. Not a good look.

Intel Data Centre Bridging settings

What the CNAs bring to the table is a limited form of layer-3 awareness and traffic-manipulation capability, coupled with expanded offloads that neither impinge upon the host CPU nor depend on the operating system.

FCoE or iSCSI traffic can be tagged for QoS throughout your network (assuming your switching infrastructure plays ball). Intel's CNAs can also guarantee that a minimum amount of bandwidth is made available to a specific traffic class. Do you need iSCSI to always have 40 per cent of the available bandwidth? Just poke the settings and you're good.
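For the Windows-inclined, "poking the settings" can look something like this using Microsoft's built-in DCB cmdlets (Server 2012 and later) rather than Intel's own utilities - a sketch only, with a placeholder adapter name and the 40 per cent figure from the example above:

    # Install Windows' DCB support, then classify and reserve bandwidth
    Install-WindowsFeature Data-Center-Bridging

    # Tag iSCSI traffic (TCP 3260) with 802.1p priority 4
    New-NetQosPolicy -Name "iSCSI" -iSCSI -PriorityValue8021Action 4

    # Guarantee priority-4 traffic at least 40 per cent of the link via ETS
    New-NetQosTrafficClass -Name "iSCSI" -Priority 4 -BandwidthPercentage 40 -Algorithm ETS

    # Switch DCB on for the adapter itself (placeholder name)
    Enable-NetAdapterQos -Name "10GbE Port 1"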

Want to control the NICs from afar, or are you using an OS that doesn't make configuring these aspects of the driver easy? Not a problem: the Intel NICs default to prioritising packets based upon the settings provided by the switch. This is the whole Data Center Bridging concept, and it works far better than I would have believed.
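That "defer to the switch" behaviour is what the DCBX spec calls being "willing". If you'd rather flip it from the OS side, the Windows equivalent is a one-liner (again a sketch using the built-in cmdlets, not Intel's own tools):

    # Accept the switch's DCBX advertisements instead of local settings
    Set-NetQosDcbxSetting -Willing $true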

Better than good enough

Back in the bad old days of Windows Server 2003, one of the very first things a systems administrator did when they got a new server was disable any TCP Offload Engine (TOE) settings in the network cards. Early TOE configs caused problems at the OS level, at the driver level and – with some vendors – at the NIC level.
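For the youngsters: that ritual was usually a one-liner. From memory, and assuming Server 2003 SP2's netsh syntax (where TOE surfaced as TCP Chimney Offload), it looked like this:

    # Disable TCP Chimney Offload - Server 2003 SP2 era, quoted from memory
    netsh int ip set chimney disabled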

It would be a lie to say that I never had TOE issues with Intel cards, but they have always given me far less trouble than other vendors' kit. These new NICs seem quite capable of carrying the torch: they have worked in every system I've tried slotting them into. They provide not simply bare-bones functionality, but a wide range of features that seem to "just work".

I can sustain 1,200 MiB/sec out of these cards using iSCSI. That's within the error bars of the fastest storage I can build, and the cards do it without any noticeable impingement upon the host CPU. They tag and guarantee traffic, and they do all of this without tinkering, Googling, getting inventive with language or even deploying blunt objects.

Intel's networking team hasn't let me down with its latest cards. ARK is one of the better product information websites our industry has built, and Intel's networking folks seem set to continue working with nerds the world over to ensure that these cards work wherever they need to. Here's hoping they keep it up for another 10 years. ®