I notice you got the SFP+ optics...does your compute environment not support DAC?

What's the power and cost comparison of DAC to copper 10GBASE-T? Both fixed and SFP+ copper.

I'm curious to know what the power/cooling/performance differences would be between optics vs twinax vs 10GBASE-T - would it be significant or significant only in large deployments? The cost differential would be significant between optics and 10GBASE-T, but perhaps not everything else?

10GBASE-T is around 1W on each end of the connection versus 0.1W for SFP+. At $0.10/kWh that's a difference of about $2 per year for the two ends; even if your infrastructure costs are 5x per watt what your power costs, that's only around $50 over 5 years, much less than even direct-attach cables, let alone optics. Unless the switch PSU is going to push you over some large capex threshold, it makes zero sense to favor SFP+ over 10GBASE-T based on power requirements. If you need the lower latency of SFP+, or you can't accommodate the physical requirements of a Cat6a cable plant, then fine, go that direction, but in general I think power is a red herring.
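The arithmetic above can be sketched quickly. This is a back-of-envelope in Python; the wattages, the $0.10/kWh rate, and the 5x infrastructure multiplier are the assumed figures from this post, not measurements:

```python
HOURS_PER_YEAR = 24 * 365  # 8760

def yearly_power_cost_delta(watts_a, watts_b, ends=2, price_per_kwh=0.10):
    """Extra electricity cost per year, per link, for option A over option B."""
    delta_watts = (watts_a - watts_b) * ends
    return delta_watts * HOURS_PER_YEAR / 1000 * price_per_kwh

# ~1 W per 10GBASE-T PHY vs ~0.1 W per SFP+ end, as assumed above
per_year = yearly_power_cost_delta(1.0, 0.1)   # roughly $1.60/year per link
# power over 5 years, plus infrastructure at 5x the power cost
five_year = per_year * 5 * (1 + 5)             # on the order of $50
```

Even rounding everything against 10GBASE-T, the per-link delta stays in lunch-money territory, which is the point being made.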

Cost

Optics will always be the most expensive path at list price, and even with significant discounting.

10GBaseT comes in second, as installing a proper, certified cable plant is expensive, just not as expensive as optics.

DAC is by far the cheapest, as twinax runs are shorter and cheaper than a certified Cat6a plant, and require no adapter. They also have no specific installation/certification requirements. Just buy them to length, and connect. Of course, if you exceed 15m distance requirements, you're right back to options 1 and 2.

TCO

The numbers I can find (but can't recite directly here) tell me that SFP+ and DAC are close enough in power consumption (and cooling requirements) as to make it unworthy of consideration. 10GBaseT, having longer distance to push than DAC as well as some additional electronics, comes in a bit more, but a BoE calculation makes me think it would take decades or longer for this ever to come into play as a cost consideration.
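To put a number on the "decades or longer" intuition above, here is a hedged payback sketch in Python. The per-port price delta and extra wattage are illustrative assumptions, not figures from any vendor:

```python
def payback_years(port_price_delta, extra_watts, price_per_kwh=0.10):
    """Years until the power-hungrier (but cheaper-to-buy) option costs more in total."""
    extra_dollars_per_year = extra_watts * 24 * 365 / 1000 * price_per_kwh
    return port_price_delta / extra_dollars_per_year

# e.g. if the hungrier ports were $100/port cheaper but drew ~2 W more per link:
years = payback_years(100, 2.0)  # roughly 57 years
```

Any realistic per-port price gap dwarfs the electricity delta, so power consumption never becomes the deciding cost factor within a switch's service life.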

For very large installations, it might make sense to do a bake-off run between port types to see how they perform under actual load...if you were planning on forklifting your infrastructure. Otherwise, buy what best suits your existing use-case. (distance, existing server port types, etc.)

edit: My memory may be off on the distance limitation of DAC...could be as short as 7m now that I think about it.

Passive up to 5m, active at 7 and 10 for the usual suspects. Unaware of a vendor that has anything odd, but not impossible.

IIRC 10GBase-T also has the equivalent of short and long haul modes, which changes max distance and power consumption. Power draw is significantly different between early '08-ish PHYs and current ones, so there isn't a universal quick and easy answer.

5m for unpowered cables, 10m for active cables. My biggest problem with SFP+ DAC is the stupid vendors that insist on using their branded cables; this causes potential interoperability issues (I'm connecting a Cisco switch to a Brocade - whose branded cable do I use?) and jacks up the cost per drop by several times, to the point where it's pretty much a tie between a certified plant and DAC. The nice thing with a certified plant and 10GBaseT ports is you can do a gradual migration: buy a 48 port switch and you can connect 1Gb servers alongside 10Gb. If you do that with SFP+ you end up throwing away the 1Gb SFP+ adapter, which is a waste of capital, plus you had to pull two different cables.

1. Under 10m: Twinax SFP+ is cheaper than optical, but CAT6 is even cheaper than Twinax, and it connects from port-to-port (RJ-45) directly (no adapter anywhere) - 10GBASE-T wins.

2. Over 10m: Certified CAT6a cabling is dirt cheap and the installation is a lot cheaper than fiber optic; it's not expensive at all (unless you are completely clueless and have no idea what you're ordering, from which company, etc.) 10GBASE-T wins again.

Passive up to 5m, active at 7 and 10 for the usual suspects. Unaware of a vendor that has anything odd, but not impossible.

At one point I ran custom-made, certified Twinax up to ~12m, it worked fine but that's the max as far as I know.

Quote:

IIRC 10GBase-T also has the equivalent of short and long haul modes, which changes max distance and power consumption. Power draw is significantly different between early '08-ish PHYs and current ones, so there isn't a universal quick and easy answer.

As far as I know current ones are 1W; afidel is correct. CAT6a max is supposed to be 100m but I've never heard of anyone running 10Gb at that distance; mine are running around ~30m and with iperf I've measured 9.7-9.8Gb (both workstations were directly connected to one of the v2 10GBASE-T modules of our E8212zl.)
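For reference, the iperf figure above is essentially line rate. A quick sanity check of theoretical TCP goodput on 10GbE, assuming standard Ethernet framing (preamble, header, FCS, and inter-frame gap overheads); measurements above ~9.49 Gb/s at MTU 1500 usually point to jumbo frames or to iperf counting things differently:

```python
def max_tcp_goodput_gbps(link_gbps=10.0, mtu=1500):
    """Upper bound on TCP payload rate over Ethernet, ignoring ACK traffic."""
    on_wire = 8 + 14 + mtu + 4 + 12   # preamble + Ethernet header + payload + FCS + IFG
    payload = mtu - 20 - 20           # minus IPv4 and TCP headers (no options)
    return link_gbps * payload / on_wire

standard = max_tcp_goodput_gbps()         # ~9.49 Gb/s at MTU 1500
jumbo = max_tcp_goodput_gbps(mtu=9000)    # ~9.91 Gb/s with jumbo frames
```

Either way, 9.7-9.8 Gb/s over a 30m Cat6a run shows the cable plant isn't the bottleneck.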

Twinax SFP+ is cheaper than optical but CAT6 is even cheaper than Twinax, it connects from port-to-port (RJ-45) directly (no adapter anywhere) - 10GBASE-T wins.

The very first time you call your vendor to troubleshoot an issue with your 10G, and you mention Cat6, regardless of length, they're going to tell you to upgrade your cable plant before they'll even talk to you. TwinAx is cheap, and requires no adapters.

Quote:

2. Over 10m: Certified CAT6a cabling is dirt cheap and the installation is a lot cheaper than fiber optic, it's not expensive at all (unless you are completely clueless and have no idea what you're ordering, from which company etc.) 10GBASE-T wins again.

Which is exactly what I said, with the exception of it being "dirt cheap." A certified Cat6a install isn't "dirt cheap" when you're dealing with a large cable plant.

CAT6a max supposed to be 100m but I've never heard of anyone running 10Gb at that distance;

Spec says Cat6a to 100m and Cat6 to 37m.

10GBASE-T power usage is apparently variable based on length and conditions; I imagine this is a big factor in why it took so long to get a copper spec and products in the chain.
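Collecting the reach limits discussed in this thread in one place. These are the figures quoted by posters plus common vendor specs (e.g. 300m for 10GBASE-SR over OM3), offered as a convenience table rather than a standards survey:

```python
# Approximate maximum reach per medium, in meters (figures from this thread)
REACH_METERS = {
    "10GBASE-SR over OM3 fiber": 300,
    "SFP+ DAC, passive twinax": 5,
    "SFP+ DAC, active twinax": 10,
    "10GBASE-T over Cat6a": 100,
    "10GBASE-T over Cat6 (TSB-155 guidance, not 802.3an)": 37,
}

def usable_media(run_length_m):
    """Media options from the table that can cover a given run length."""
    return sorted(m for m, limit in REACH_METERS.items() if limit >= run_length_m)
```

For example, a 30m run rules out DAC entirely, which is why the in-rack vs between-rack distinction keeps coming up in this thread.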

afidel wrote:

My biggest problem with SFP+ DAC is the stupid vendors that insist on using their branded cables; this causes potential interoperability issues (I'm connecting a Cisco switch to a Brocade - whose branded cable do I use?)

Networking is one of the last areas where dominant vendors can still pretend you're running vendor-homogeneous without being laughed out of the room. Rest assured that the vendors haven't overlooked the confusion, fear and doubt that comes with deliberate transceiver incompatibility.

Quote:

The nice thing with a certified plant and 10GBaseT ports is you can do a gradual migration: buy a 48 port switch and you can connect 1Gb servers alongside 10Gb. If you do that with SFP+ you end up throwing away the 1Gb SFP+ adapter, which is a waste of capital, plus you had to pull two different cables.

When you say SFP+ here you mean twinax-based DAC, right? We should say DAC when we mean direct-attach. Otherwise I'd say get 10GBASE-T SFP+ whenever you have a copper SFP+ slot to fill and you'll obsolete less hardware.

Reminds me that a long time ago I assumed 100BASE-FX was downward compatible with 10BASE-FL in the same way that copper is. It's just different frame spacing, right? Lesson learned.

The very first time you call your vendor to troubleshoot an issue with your 10G, and you mention Cat6, regardless of length, they're going to tell you to upgrade your cable plant before they'll even talk to you.

Speak up -- I didn't quite hear the name of those vendors I need to avoid.

Quote:

TwinAx is cheap, and requires no adapters.

Cost isn't terribly relevant when you're talking about plant cable, unless you're volunteering to come run it yourself. Patches, overhead-rack and underfloor usually have a lot more flexibility.

All this talk of twinax makes me a bit wistful for all the twinax I cut out to prevent use after deprecation.

Cost isn't terribly relevant when you're talking about plant cable, unless you're volunteering to come run it yourself. Patches, overhead-rack and underfloor usually have a lot more flexibility

There are two cases here:

short runs: Optical is by far the most expensive. The difference between the other two (TwinAx vs. Cat6a) is negligible, and total cost largely depends on what existing ports you have in your servers. As I stated earlier.

long runs: Optical has the most distance, and is still the most expensive. 10GBaseT makes sense here, but it isn't dirt cheap as szlevi posited. In either case, your decision should still be informed by the compute environment you already have. TwinAx is a non-starter.

Quote:

Speak up -- I didn't quite hear the name of those vendors I need to avoid.

All of them. You're attempting to run 10G over an under-spec'd cable system. Why would a vendor waste cycles on trying to fix a problem that isn't related to their own equipment?

I really want to know that store where Twinax is cheaper than CAT6a... care to share your secret supplier with us?

Quote:

Quote:

Twinax SFP+ is cheaper than optical but CAT6 is even cheaper than Twinax, it connects from port-to-port (RJ-45) directly (no adapter anywhere) - 10GBASE-T wins.

The very first time you call your vendor to troubleshoot an issue with your 10G, and you mention Cat6, regardless of length, they're going to tell you to upgrade your cable plant before they'll even talk to you.

Huh? That only shows that neither caller nor the vendor knows what they are talking about, not to mention this claim doesn't even make the slightest sense: do you think a vendor selling you 10GBASE-T gear (HP, Dell, Juniper etc) will tell you to stop using CAT6a and switch to Twinax?

Quote:

TwinAx is cheap, and requires no adapters.

Right. Twinax is copper and uses SFP+ but 10GBASE-T is also copper (CAT6/a) and even cheaper and uses RJ-45, requiring no adapter either, I'm not sure I'm following you...

Quote:

Quote:

2. Over 10m: Certified CAT6a cabling is dirt cheap and the installation is a lot cheaper than fiber optic, it's not expensive at all (unless you are completely clueless and have no idea what you're ordering, from which company etc.) 10GBASE-T wins again.

Which is exactly what I said, with the exception of it being "dirt cheap." A certified Cat6a install isn't "dirt cheap" when you're dealing with a large cable plant.

We're not talking about throwing some Cat6a patch cables into a rack and calling it a Cat6a cable plant; we're talking about a professionally installed and certified 6a cable plant that meets all specs. That cost is going to be close to the same per-run as buying inflated-margin twinax cables. My point is that you do the 6a plant once and you can plug any gigabit or 10GBaseT device in and it will work: no need to run new cables to go from gigabit to ten gigabit, no need to switch out SFP+ modules or, worse yet, buy 10GBaseT SFP+ modules (watch that cost per port shoot through the roof!). Cisco's launching the 10GBaseT Nexus 5k and Nexus 2k fabric extenders soon, so even if you're a Cisco shop it will soon be a realistic solution.

No. There is an EIA/TIA annex that makes some recommendations for additional installation specs that *can* allow it to run up to 37m, but it's not part of the 10GBaseT spec.

Errr, yes, there is, even by TIA: search for 568-B.2-10, it's been out since 2008, doubling the spec to 500 MHz in augmented CAT6, similarly to ISO/IEC's augmented E-class, standardized in amendment 2 of 11801.

I really want to know that store where Twinax is cheaper than CAT6a... care to share your secret supplier with us?

It's not the cost of the cable...the cable is a negligible cost in short runs...but which you choose should be informed by what you have in your compute environment...which is exactly what I stated earlier.

Quote:

Huh? That only shows that neither caller nor the vendor knows what they are talking about, not to mention this claim doesn't even make the slightest sense: do you think a vendor selling you 10GBASE-T gear (HP, Dell, Juniper etc) will tell you to stop using CAT6a and switch to Twinax?

You really need to work on your reading comprehension. I stated that if you have problems with 10GBaseT and call your vendor, and they find you are using Cat6 (note the lack of an 'a'), then they'll tell you to upgrade your cable...to Cat6a. I said nothing about TwinAx in that scenario other than that it is perfectly viable for short runs and far cheaper than optics. It *can* be cheaper than Cat6a connections, depending on your compute environment.

As an example, we have any number of HP chassis that support DAC natively, and we plug them into our non-fixed switches without requiring adapters. This means we don't need to add NICs on the compute side, nor anything else on the switching side. This is cheaper than adding fixed ports if we already have non-fixed ports, and cheaper than adding 10GBaseT NICs if we don't already have them (and why would we if the native gear supports DAC)

Quote:

Right. Twinax is copper and uses SFP+ but 10GBASE-T is also copper (CAT6/a) and even cheaper and uses RJ-45, requiring no adapter either, I'm not sure I'm following you...

See above.

Quote:

It is dirt cheap, compared to anything else you can opt for.

No it isn't. DAC is perfectly cheap, again, depending on your environment.

Errr, yes, there is, even by TIA: search for 568-B.2-10, it's out since 2008, doubling the spec to 500Mhz in augmented CAT6, similarly to ISO/IEC's augmented E-class, standardized in amendment 2 of 11801.

Jeebus...again with the reading comprehension:

I explicitly stated there was a TIA annex...I'm fully aware of it. Guess what it *isn't*? It *isn't* part of the 802.3an-2006 spec...you know...the one called 10GBaseT. 802.3an recognizes that Cat6 may support 10GBaseT to limited distances, but it makes no guarantees. TSB155 should never have been published.

edit: for posterity, the guidance portion of TSB155 requires the following, which you also ignore in your cost calculations:

We're not talking about throwing some cat6a patch cables into a rack and calling it a cat6a cable plant, we're talking about a professionally installed and certified 6a cable plant that meets all specs.

Me too because within 5-10 meters it's a negligible cost anyway and you already have existing tunnels, tracks, etc. However CAT6/CAT6a is still cheaper than Twinax.

Quote:

That cost is going to be close to the same per-run as buying inflated margin twinax cables.

There is no Twinax beyond 10m, so it's a moot point. I have 10-12x 30m CAT6a installed and validated drops with outlets at the end, with certified patch panels etc; I know it's not that expensive at all unless you are clueless and you order from the first quote you get. FYI, the quotes I got back then (late 2010?) had a 10x spread despite all offering a similar validation process etc. Unionized workforce, different suppliers and, of course, salespeople selling snake oil because they assume by default that if you are ahead of the curve you must be making a lot of money, so they should try to rip you off too...

That's cute. I have somewhere in the neighborhood of 32000 runs across two data centers and a few more labs.

...and yet you still cannot tell the difference between CAT6 and CAT6a - security guard or fireman-on-duty?

As an example, we have any number of HP chassis that support DAC natively, and we plug them into our non-fixed switches without requiring adapters. This means we don't need to add NICs on the compute side, nor anything else on the switching side. This is cheaper than adding fixed ports if we already have non-fixed ports, and cheaper than adding 10GBaseT NICs if we don't already have them (and why would we if the native gear supports DAC)

I'm confused about what you mean by 'DAC natively' here. Do you mean an SFP+ port, SFP+ with some optional feature, a twinax port or what? Twinax DAC interconnects work on anything with SFP+, don't they?

Frennzy wrote:

edit: for posterity, the guidance portion of TSB155 requires the following, which you also ignore in your cost calculations:

I'm confused about what you mean by 'DAC natively' here. Do you mean an SFP+ port, SFP+ with some optional feature, a twinax port or what? Twinax DAC interconnects work on anything with SFP+, don't they?

Ah, sorry. Yes, I'm referring to a native SFP+ port, without any additional adapters or drivers. As for the latter question, the answer is "well, they should, but not always."

to expand: In our environment, we have a lot of FC, so it makes a lot of sense for us to buy switching gear and compute gear that has native SFP+ ports, but not necessarily 10GBaseT. It provides a lot of flexibility we wouldn't otherwise have. Given that we will have those ports available anyway, it makes sense to use them via DAC for uplinks from Server chassis into our standard ToR/MoR gear.

...and yet you still cannot tell the difference between CAT6 and CAT6a - security guard or fireman-on-duty?

Do you really want to compare e-peens on what our jobs are with respect to networking?

There *is* a difference between the two, and I've stated that, categorically.

Except you kept writing about CAT6 which clearly has no place in any modern DC since 2008...

Guess why, dipshit? Because YOU brought it up!

szlevi wrote:

Twinax SFP+ is cheaper than optical but CAT6 is even cheaper than Twinax, it connects from port-to-port (RJ-45) directly (no adapter anywhere) - 10GBASE-T wins

Jesus, you can't even maintain your own argument coherently.

It's funny that you are the slow one, yet you are the one who starts cursing...

Let me help you out: that was for short-length, where it does not matter, based on the fact that you were comparing to Twinax which implies you are talking about <10m, obviously.

That being said, under 10m it's still cheaper to buy 10GBASE-T switches and use CAT6 cabling instead of Twinax.

Look, I understand that you work for Brocade and you guys were late to the many-port 10GBASE-T party, so you probably have plenty of Twinax etc. installed + you are arrogant despite often being miles off with your quick-shot opinions, but hey, I don't have anything to do with any of these, so don't curse at me...

See above, this is what I was suspecting: you guys are a huge FC company (vendor), you have an entirely different existing environment which makes it less likely that you will start adding 10GBASE-T.

However, your claim about CAT6 being refused by a support person was confusing me, hence my initial response that it does not make sense - under 100 feet CAT6 is perfectly fine, no vendor will refuse to support it; beyond that you will go with CAT6a, but since prices are really low you will probably use CAT6a everywhere anyway.

Then why were you completely ignoring my statements that cost depends on your environment?

Quote:

However your claim about CAT6 being refused by a support person was confusing me hence my initial response that it does not make sense - under 100 feet CAT6 is perfectly fine, no vendor will refuse to support it

Most will...because it simply isn't allowed within the 802.3an-2006 spec. The TSB155 which you obliquely mentioned specifically excludes using Cat6 UTP as a patch cord. That right there would be enough to refuse to support you until you use properly spec'd interconnects.

Well... I guess it didn't come across that way...

Quote:

Quote:

However your claim about CAT6 being refused by a support person was confusing me hence my initial response that it does not make sense - under 100 feet CAT6 is perfectly fine, no vendor will refuse to support it

Most will...because it simply isn't allowed within the 802.3an-2006 spec. The TSB155 which you obliquely mentioned specifically excludes using Cat6 UTP as a patch cord. That right there would be enough to refuse to support you until you use properly spec'd interconnects.

Never met any vendor refusing to support 2-3m CAT6 patch cables, and all our early ones were older CAT6 (we replaced them when I ordered a load of CAT6a ones.)

Quote:

Quote:

CAT8a

What? why?

Typo, already corrected but thanks.

PS: my 8024Fs are all using Twinax, of course but that was part of my Dell deal back then, I got ~50 Twinax cables thrown in for free with my storage gear so I have no love lost for Twinax.

Anyone who pays Dell their list price is fucking insane. They're always willing to discount, and they dropped bid pricing requirements down from 20k to around 13-14k.

+++

For some reason, Dell networking list prices are completely out of whack. We get PowerConnect switches from our local importer for something like 1/3 of what I see on their US site. This is specific to networking - servers, desktops, laptops, storage, etc, have sane list prices.