
10Gbps Cheap & Without Risk In Even The Smallest Environments

Over the last 18 months, cheaper, commodity, small-port-count but high-quality 10Gbps switches have become available; NetGear is a prime example. This puts 10Gbps networking within reach of even the smallest deployments.

Size is an often-used measure for technological needs like storage, networking and compute, but in many cases it’s far too blunt a tool. Many smaller environments in specialized niches need more capable storage and networking than their size would lead you to believe. The “enterprise level” cost associated with the earlier SFP+ based switches was an obstacle, especially since the minimum port count lies around 24 ports, so switch redundancy already means 2 × 24 ports. Then there’s the cost of vendor-branded SFP+ modules. That could be offset with copper Twinax Direct Attach cabling (which has its sweet spots for use) or by finding functional, cheaper non-branded SFP+ modules. But all of that isn’t an issue anymore. Today, 10GBase-T cards and switches are readily available and ready for prime time, and the issues with power consumption and heat have been dealt with.

While vendors like DELL have done some amazing work to bring affordable 10Gbps switches to the market, the cost remained an obstacle for many small environments. Now, with the cheaper copper-based, low-port-count switches, it has become a lot easier to introduce 10Gbps while taking away the biggest operational pains.

You can start with a lower number of 10Gbps ports (8-12) instead of a minimum of 24.

No need for expensive vendor-branded SFP+ modules.

Copper cabling (CAT6A) is relatively cheap for use in a rack or between two racks, and for this kind of environment using patch lead cables isn’t an issue.

The power consumption and heat challenges of copper 10Gbps have been addressed.

So even the smallest setups, where people would love to get 10Gbps for live migrations, hypervisor host backups and/or the virtual network, can have it now. If you introduce these switches for just the CSV, live migration, storage or backup networks, you can even avoid having to integrate them into the data network. That makes the change easier and non-disruptive, and the isolation helps put minds at ease about the potential impact of extra traffic and misconfigurations. Still, you take away the heavy loads that might be disrupting your 1Gbps network, making things well again without needing further investments.
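To see why live migration is the poster child here, a back-of-the-envelope calculation helps. The numbers below are illustrative only (real migrations skip zero pages and may compress, and the 90% efficiency figure is an assumption), but they show the order-of-magnitude difference between 1Gbps and 10Gbps:

```python
# Rough time to move a VM's memory over the wire at 1 vs 10 Gbps.
# The 0.9 efficiency factor is an assumed allowance for protocol overhead.

def transfer_seconds(gigabytes: float, link_gbps: float, efficiency: float = 0.9) -> float:
    """Seconds to move `gigabytes` of data over a `link_gbps` link,
    assuming `efficiency` of the raw rate is usable throughput."""
    bits = gigabytes * 8 * 10**9          # GB -> bits (decimal units)
    return bits / (link_gbps * 10**9 * efficiency)

for ram_gb in (64, 128):
    t1, t10 = transfer_seconds(ram_gb, 1), transfer_seconds(ram_gb, 10)
    print(f"{ram_gb} GB of VM memory: {t1/60:.1f} min at 1 Gbps, {t10:.0f} s at 10 Gbps")
```

Roughly ten minutes versus one minute for a 64 GB host: that is the difference between dreading host maintenance and not thinking about it.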

So go ahead, take the step and enjoy the benefits that 10Gbps brings to your (virtual) environment. Even medium-sized shops can use this as a showcase while they prepare for a 10Gbps upgrade of the server room or data center in the years to come.

12 thoughts on “10Gbps Cheap & Without Risk In Even The Smallest Environments”

Great post. I’ve used these and think they’re great in specific applications. I used 4x of them, with 2x for HV traffic and 2x for Storage/LM/Cluster/Mgmt traffic. The only problem I can’t get around is that there isn’t a great way to use them as a ToR switch in an office and connect down-level switches redundantly; lack of MLAG. Any thoughts on this?

Dell N2000 and N3000 series support MLAG. Great when they get MLAG and VLT to co-operate. I wonder if they will go with one of them for Dell Networking OS 10 (which will unify OS 6 and 9) or if both will be available.

How timely! I plan to do some testing with one of these guys and our Compellent (for a test and sandbox cluster). To their credit, while it’s not officially “blessed” by Dell they were pretty reasonable when planning how well it might work. Can’t beat the price.

Infiniband is another hidden gem when you look at raw perf per port. Why go 10G when you can go 56? 😉

Infiniband is indeed very cost effective in $/Gbps. It’s, however, more difficult to introduce in some environments, as it is often associated with high-cost specialty niche computing in a world where convergence has become a goal instead of a means to an end.

Are there any benefits of SFP+ based 10Gbps over 10GBase-T connectivity within the reach of an SFP+ to SFP+ 10GbE copper Twinax Direct Attach cable (7 meters)?

I must admit that these 8-12 port 10Gbps switches are tempting. The non-stacking nature of these switches makes it difficult to implement them in an HA design with converged networking. It is possible to build a NIC team with active and passive members to avoid switch-to-switch communication. In the end it is all about weighing the value of such solutions.

No benefit really from SFP+ over 10GBaseT. I think that if you’re going beyond 4x of the Netgear 12-port switches, you should really be looking at the Dell N4032 (tremendous bang for the buck there after you negotiate with Dell on their ludicrous markup). Using these Netgear switches you need to design carefully to work around the limitations, but it’s doable under the right circumstances.

Exactly. It depends on your needs. Do realize that for CSV/LM traffic you don’t need stacking/teaming, and for the vSwitch you can use Windows switch-independent NIC teaming, which is active/active with great DVMQ/vRSS characteristics. So for most (NOT all) scenarios this is OK. It all depends, and I know of situations where just getting the live migration traffic offloaded to 2 switches (for redundancy, if you want/need it) is a life saver and doesn’t require anything more.

Regarding the markup, yes … negotiate; really, do not ever pay list prices. The good thing with DELL is that even when, percentage-wise, they might give smaller discounts than some other vendors, you still get lower prices and great value. Ah, markups and list prices. Reminds me of an 89% discount on a SATA drive backup appliance that, per TB, was still more expensive than a SAN with SLC SSD / SAS / NL-SAS at a lower discount.

SFP+ DAC is about 0.1W per port and SFP+ optical is below 1W per port, while 10GBASE-T is currently at best 1W per port, and, for example, the Intel X540-T2 NIC is at almost 5W per port. For perspective, the Intel 1GBASE-T i350-T2 is at roughly 2W per port.
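Taking the per-port figures quoted above at face value, a quick calculation shows what they amount to over a year for a fully populated 12-port setup (figures are the commenter's estimates, not vendor-measured values):

```python
# Annual energy use per technology, using the per-port wattages quoted
# in the comment above (illustrative estimates, not datasheet values).

HOURS_PER_YEAR = 24 * 365  # 8760

def annual_kwh(watts_per_port: float, ports: int) -> float:
    """kWh per year for `ports` ports drawing `watts_per_port` each, 24/7."""
    return watts_per_port * ports * HOURS_PER_YEAR / 1000

for label, watts in [("SFP+ DAC", 0.1),
                     ("10GBASE-T switch port", 1.0),
                     ("X540-T2 NIC port", 5.0)]:
    print(f"{label}: {annual_kwh(watts, 12):.0f} kWh/year for 12 ports")
```

Even at the NIC end of the scale the absolute numbers are modest for a small environment, which supports the point that 10GBASE-T power draw is no longer a blocker.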

Latency for 10GBASE-T is 2-4 microseconds; for SFP+ it's 0.3 microseconds. Again for perspective, 1GBASE-T is 1-12 microseconds. I would suggest that this is negligible for all but the most latency-sensitive applications (HPC). If using jumbo frames (which you probably should), the difference levels out even more.
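The "levels out" point is easy to verify with arithmetic: the time a frame spends being serialized onto a 10Gbps wire already dwarfs the 0.3 vs 2-4 microsecond PHY difference quoted above, and jumbo frames make it larger still:

```python
# Serialization (wire) time of a single Ethernet frame, to compare
# against the PHY latency figures in the comment above.

def serialization_us(frame_bytes: int, link_gbps: float = 10) -> float:
    """Microseconds to clock `frame_bytes` onto a `link_gbps` link."""
    return frame_bytes * 8 / (link_gbps * 1000)

print(f"1500-byte frame: {serialization_us(1500):.2f} us on the wire")
print(f"9000-byte jumbo: {serialization_us(9000):.2f} us on the wire")
```

A 9000-byte jumbo frame takes 7.2 microseconds on the wire, so a few microseconds of extra PHY latency mostly disappears into the noise for bulk traffic.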

There has been some talk about difference in BER between 10GBASE-T and SFP+DAC and SFP+Optical. I haven't found any hard evidence of this so I'll leave it at that.

Personally I'm going to continue with SFP+ in the server room and obviously BASE-T for client switch ports.

The BER issue with quality cabling is a non-issue today, I think; perhaps it was in the past, but I don’t worry about it. Wattage is under control, and latency, as you state, matters only in certain scenarios where Base-T won’t even play. I also prefer SFP+, but there is choice, and “it depends” on the environment and needs.

If I were to start fresh with 10GbE today it is not unlikely that I would go with 10GBASE-T instead of SFP+. One of the main reasons that I will continue with SFP+ is that I already invested in SFP+ equipment before 10GBASE-T was a viable option.
