I recalled a comment Greg Ferro made on a Packet Pushers episode (and subsequent blog post) about copper not being reliable enough for storage, the specific issue being the bit error rate (BER): how many errors the standard (FC, Ethernet, etc.) will allow over a physical medium. As we've talked about before, networking people tend to be a little more devil-may-care about their bits, whereas storage folks get all anal-retentive chef about their bits.

For Gigabit Ethernet over copper (802.3ab/1000Base-T), the standard calls for a target BER of less than 10⁻¹⁰, or one wrong bit in every 10,000,000,000 bits. That, incidentally, works out to one error every second at line rate on 10 Gigabit Ethernet. For Gigabit, that's one error every 10 seconds, or 6 per minute.

Fibre Channel has a BER target of less than 10⁻¹², or one error in every 1,000,000,000,000 bits. That would be about one error every 100 seconds at 10 Gigabit line rate. That's also 100 times less error-prone than Gigabit Ethernet's target, which if you think about it, is a lot.
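That arithmetic is easy to sanity-check. Here's a minimal sketch (the helper name is mine, purely illustrative):

```python
def seconds_per_error(ber: float, line_rate_bps: float) -> float:
    """Expected seconds between bit errors at a given BER and line rate."""
    return (1.0 / ber) / line_rate_bps  # bits between errors / bits per second

# BER 10^-10 at 10 Gbps: about one error per second
print(round(seconds_per_error(1e-10, 10e9)))   # 1
# BER 10^-10 at 1 Gbps: about one error every 10 seconds
print(round(seconds_per_error(1e-10, 1e9)))    # 10
# BER 10^-12 at 10 Gbps: about one error every 100 seconds
print(round(seconds_per_error(1e-12, 10e9)))   # 100
```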

To give a little scale, that's like comparing Barney Fife's badassery on The Andy Griffith Show to Jason Statham's character in... well, any movie he's ever been in.

Holy shit, is he fighting… truancy?

Barney Fife, the 10⁻¹⁰ error rate of law enforcement. Wait… Wow, did I really just say that?

So given how fastidious storage folks can be about their storage networks, it's understandable that storage administrators wouldn't want their precious SCSI commands running over a network that's 100 times less reliable than Fibre Channel.

However, while the Gigabit Ethernet standard has a BER target of less than 10⁻¹⁰, the 802.3an standard for 10 Gigabit Ethernet over copper (10GBase-T) has a BER target of less than 10⁻¹², which is in line with Fibre Channel's. So is 10 Gigabit Ethernet over Cat 6A good enough for storage (specifically FCoE)? Sounds like it.

But the discussion also got me thinking: how close do we get to 10⁻¹⁰ as an error rate in Gigabit Ethernet? I just checked all the physical interfaces in my data center (laundry room), and every error counter is zero (presumably most errors would show up as CRC errors). And all it takes to hit 10¹⁰ bits is 1.25 gigabytes of data transfer, which I do every time I download a movie off of iTunes. So I know I've put dozens of gigs through my desktop since it was last rebooted, and nary an error. And my cabling isn't exactly data center grade; one cable I use came with a cheap wireless access point I got a while ago. It makes me curious as to what the actual BER is in reality with decent cables that don't come close to 100 meters.
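For the 1.25 gigabyte figure, a quick back-of-the-envelope check:

```python
# How much data transfer does it take to push 10^10 bits?
bits = 10**10                 # the Gigabit Ethernet BER denominator
gigabytes = bits / 8 / 1e9    # 8 bits per byte, decimal gigabytes
print(gigabytes)              # 1.25
```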

Of course, there are still the power consumption issues and other drawbacks that Greg mentioned when compared to fiber (or coax). However, it'll be good to have another option. There are some shops that won't likely ever have fiber optics deployed.



12 Responses to A High Fibre Diet: Twisted Pair Strikes Back

I’m really way more concerned about power consumption. People have been predicting that copper won’t be good enough for faster Ethernet since Fast Ethernet. I remember hearing the engineers talking about 100VG-AnyLAN way back in the day…

But we’ve always been able to make copper work. And 10GBase-T works.

It’s just the power draw right now that’s the problem. And thus, the heat and density of 10 Gb copper. But we’re getting that done, too…

Unfortunately, we’re basically in the same situation today. The economics of 10GBASE-T will win out so long as switch vendors think charging US$1000-2000 per link (more than 10x markup) for short-reach optics on each end is a good idea.

Sure, 10GBase-T uses 5 W/port more power, but that's not much to pay to save $100 on the cable, or $1000 versus fiber. At $0.18/kWh and a PUE of 3 (lots of AC), that $100 would be the equivalent of about four years of the power bill.
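That break-even claim checks out using the commenter's own figures (5 W extra per port, PUE of 3, $0.18/kWh, $100 cable savings; all values are the comment's assumptions, not measurements):

```python
# Payback period for the copper cable savings vs. the extra port power.
extra_watts = 5          # extra draw per 10GBase-T port (commenter's figure)
pue = 3.0                # power usage effectiveness ("lots of AC")
price_per_kwh = 0.18     # USD per kWh
cable_savings = 100.0    # USD saved on the cable vs. optics

facility_watts = extra_watts * pue                 # 15 W at the meter
kwh_per_year = facility_watts * 24 * 365 / 1000    # ~131.4 kWh per year
cost_per_year = kwh_per_year * price_per_kwh       # ~$23.65 per year
print(round(cable_savings / cost_per_year, 1))     # ~4.2 years to break even
```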

More expensive now perhaps, but chances are it will come down in price after a while. As for power and latency, it certainly won't be appropriate for every situation, but smaller shops especially will likely welcome it.

With Emulex's latest CNAs, the 10GBase-T version is actually less expensive.

Truth is, the next gen (next year) of 10GBase-T PHY chips will use smaller geometries and draw 50-70% of the power of the current generation, which is when it makes the most sense and when 10G volume starts to skyrocket.

Another area that is often ignored is that Cat 6a cable is not physically robust. The signalling rate on the copper is at the limit of the cable's spectral performance and leaves little room for signal degradation.

That is, any kink or damage to the cable sheath, any stress on the connector that puts the copper under physical duress, or even stretching due to improper mounting can cause a copper cable to develop errors over time. Compared to today's fibre cable, copper is less reliable.

Locating cabling faults is very hard, typically done as a last resort after multiple failures, and very expensive to fix.

Finally, it seems certain that there will never be 40G over copper, so any Cat 6a installation is dead money.

I wonder if you could do the same type of bonding for 40/100 GigE links that fibre does. Instead of 4 twisted pairs, a cable with 16 twisted pairs to do 40GBase16T. We wouldn't need that for a long time (servers won't need 40 GigE for at least 5 years, I would guess).

When 10GE ASICs get smaller and more energy efficient, server motherboard vendors will use them. Once you can get 10GE interfaces onboard a server, 10GE use will skyrocket.
(It's on the server motherboard, so people will want to use it.)

Ethernet over copper cable will stay around (10GE and faster) for a while, but not at the distances we are used to (100 meters).

POTS cables were made just to carry analogue voice (300 Hz – 2400 Hz).
Using a modem we could get 56 kbps of data over them.
Next was ADSL (8 Mbps), ADSL2 (up to 24 Mbps), and VDSL2 (200 Mbps).
But when you are 4 km from the nearest PBX, it doesn't matter if you use VDSL2 or ADSL2. (Have a look at: http://nl.wikipedia.org/wiki/Bestand:VDSL2_Snelheid.gif )

The signal degradation is just too high. Coax cable performs much better, but coax was built to transport data (e.g. high-frequency television signals).
If you want faster internet access, you need FttH (fiber optics).

CAT6 cables do 10GE up to 37 to 55 meters, CAT6a does the full 100 meters.

I think 40GE over copper is possible, but certainly not the full 100 meters.
But then again, you don’t need 100 meter cable lengths in most data centers.
( Just from the server to the nearest switch. From there it’s all optic fiber. )

Although the standards traditionally don't require a great BER in the Ethernet space compared to FC, the reality is that the BER is actually very good, which is why it's very easy to support FCoE on both active and passive DAC, and why short-range (and maybe even long-range) 10GBase-T will likely work as well, especially with the newer ASICs next year. Power is of course a concern, and for me even more important is whether this is delaying the inevitable, as 40G copper and 100G copper will be even harder… There is a reason FC went optical after 1G (anyone remember those copper FC deployments, as I do?).