Selecting a Managed Switch for Home OpenStack

I'm building out a small OpenStack cluster at home, and I'm having a tough time trying to pick out a switch.

The entire goal of this deployment is extreme thrift.

The current v1 compute nodes are Mini-ITX boards with Celeron processors and a single gigabit interface. I have already built the first two nodes, and I expect them to be the only nodes for three months. I will likely add two additional nodes for storage in the next three months, followed by two additional compute nodes. These storage nodes will have two or three gigabit interfaces. Currently, I am providing storage from my desktop (via GlusterFS), which will likely use two gigabit interfaces once I get a proper switch. (I am currently on a Linksys 4-port Fast Ethernet hub that I found in my spare hardware drawer.)

I am open to purchasing used equipment.

I'm no networking expert, but here are, to the best of my knowledge, my complete requirements. These are absolute requirements.

Gigabit

802.11Q VLANs

802.3ad LACP

16 ports (or more)

I would prefer the following features, but they are not required.

Syslog

SNMP

CLI configuration

I had originally looked at the Cisco SG200-08 (non-PoE), but I fear that my desire to do LACP on my storage nodes will constrain my long-term plans to expand the cluster. This switch is available on Newegg for US$100. I have heard that it is bad practice to link multiple switches together, and I am also unsure of the feasibility of using LACP on an uplink/crossover. This contradicts the horizontal scaling mentality of the rest of the project, but I understand the practicalities.

The v1 compute nodes have a Realtek RTL8111/8168 gigabit NIC. My research indicates that jumbo frames are limited to 7200B rather than 9000B. I imagine that any switch meeting my requirements would already include jumbo frame support, but if not, should I spend extra?

Within the year, I expect to have 4 v1 compute nodes (one interface each), two storage nodes (two or three interfaces each), and two interfaces connected to my workstation for NAT uplink to my home network (and Internet).

I'm really trying to keep things as cheap as possible. To be honest, I was rather excited about the SG200-08 at $100 and am not thrilled to spend much more.

I don't harbor any negative feelings towards any budget brands, but I will defer to others with experience.

802.11Q VLANs: do what now? (I assume you mean 802.1Q, also easy enough)

802.3ad LACP: Are you sure your clients support this?

16 ports (or more): easy enough

Quote:

I have heard that it is bad practice to link multiple switches together

The internet would be a very boring place if this were true. Linking switches together is how we provide connectivity to all those PCs and whatnot in the office.

Quote:

and I am also unsure of the feasibility of using LACP on an uplink/crossover

Any modern equipment won't need/use a crossover, and LACP is perfectly hunky-dory on inter-switch links.

Quote:

My research indicates that jumbo frames are limited to 7200B rather than 9000B. I imagine that any switch meeting my requirements would already include jumbo frame support, but if not, should I spend extra?

ALERT: ASSUMPTION DETECTED. Why do you think you need jumbo frames? What benefit do you expect? Have you done any testing to determine if it nets you any gain? (Real-world experience tells most of us that, at best, you may gain a 2-3% performance increase...and that is far from guaranteed.) Do not let jumbo frames (or the lack of support thereof) influence your decision in any way.
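If you do want to test it, the measurement is cheap. A rough sketch (the interface name and 10.1.1.x address are assumptions based on your setup, and 7200 reflects the Realtek limit you mentioned):

    # On both endpoints (the MTU must match end-to-end, switch included):
    ip link set eth0 mtu 7200
    # On one node, run an iperf server:
    iperf -s
    # On the other, measure throughput, then repeat at the default 1500 MTU:
    iperf -c 10.1.1.2 -t 30

If the two runs are within a few percent of each other, you have your answer.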

Quote:

Within the year, I expect to have 4 v1 compute nodes (one interface each), two storage nodes (two or three interfaces each), and two interfaces connected to my workstation for NAT uplink to my home network (and Internet).

I don't understand this...can you explain what you mean by "NAT uplink to my home network (and Internet)?"

Quote:

I'm really trying to keep things as cheap as possible. To be honest, I was rather excited about the SG200-08 at $100 and am not thrilled to spend much more

So you have champagne taste and a Kool-aid budget. eBay is probably your friend here...but there are *tons* of switches that will meet all of your requirements (as I understand them) for under $100 out there, used/refurbed.

There is an SG300-20 model now, I think, with 18 copper ports and 2 SFP (maybe SFP+?) ports that would be an ideal fit, at around $300. Still out of the budget, but $100 is not a reasonable price for a managed gigabit switch at all.

You could do a 100 megabit managed switch for your controller/message bus network and non-managed gigabit switches for your storage and VM networks.

Depending on how fast you need storage and inter-VM traffic to be, you could go all 100 megabit without too much trouble if money really is a serious issue.

Of course, if money is that tight, you probably shouldn't be tossing it at a project like OpenStack.

I don't know that I'd buy one for production at work, but I use a 2808 and 2816 in my home lab. Just picked up the 2816 for ~$120 refurb on eBay with two sets of ears. The 2700 series is discontinued and even cheaper used. I couldn't find any significant difference between the two lines in their spec sheets, so I don't know what you'd be losing, if anything, by choosing the 2700 series.

802.11Q VLANs: do what now? (I assume you mean 802.1Q, also easy enough)

Typo on my part.

Quote:

802.3ad LACP: Are you sure your clients support this?

I was under the impression that I only needed support on the switch for this. If that's not true, I'll need to be careful when selecting the dual-port cards for the storage nodes.

Do you mean something else by "client" than hardware? If so, could you be more specific?

On another note, I'm far more interested in using (and developing) multi-path TCP.

Quote:

The internet would be a very boring place if this were true. Linking switches together is how we provide connectivity to all those PCs and whatnot in the office.

Of course I know that switches are connected together. It was more of a comment about bandwidth limitations on inter-switch connections. However, from what you say, I shouldn't have problems using LACP if needed.

Quote:

ALERT: ASSUMPTION DETECTED. Why do you think you need jumbo frames? What benefit do you expect? Have you done any testing to determine if it nets you any gain? (Real-world experience tells most of us that, at best, you may gain a 2-3% performance increase...and that is far from guaranteed.) Do not let jumbo frames (or the lack of support thereof) influence your decision in any way.

Thanks! I was not aware that jumbo frames provided so little performance increase.

Quote:

Quote:

Within the year, I expect to have 4 v1 compute nodes (one interface each), two storage nodes (two or three interfaces each), and two interfaces connected to my workstation for NAT uplink to my home network (and Internet).

I don't understand this...can you explain what you mean by "NAT uplink to my home network (and Internet)?"

Sorry for the confusion. Here's my current network topology, which is drastically limited by the physical restrictions in my rented house (moving in two months...).

I have a 2WIRE wifi AP in my bedroom (which cannot be connected anywhere else in the house) that provides a 192.168.1.0/24 network to the Internet (through NAT). I have a desktop system in another room, which actually connects via Ethernet to an AP running DD-WRT in "client bridge" mode to act as a client on the 2WIRE network, due to signal strength issues with the cheap USB and PCI wifi adapters that I have.

There is an "openstack" network (currently provided by that 100Mb hub, to be replaced by whatever I purchase) that connects to each compute node and the desktop. This network is currently 10.1.1.0/24, and the desktop is assigned 10.1.1.1 (nodes at .2 and .3). The desktop has iptables NAT rules set up to forward to the 2WIRE network. (Yo dog, I heard you like NAT...) The compute nodes (and the VMs on them) use 10.1.1.1 as their gateway.
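For reference, the forwarding on the desktop amounts to roughly this (a sketch; the interface names are assumptions, with eth0 facing the 2WIRE side through the client bridge and eth1 on the 10.1.1.0/24 openstack network):

    # Enable routing, then masquerade the openstack subnet out the 2WIRE side.
    sysctl -w net.ipv4.ip_forward=1
    iptables -t nat -A POSTROUTING -s 10.1.1.0/24 -o eth0 -j MASQUERADE
    iptables -A FORWARD -i eth1 -o eth0 -j ACCEPT
    iptables -A FORWARD -i eth0 -o eth1 -m state --state ESTABLISHED,RELATED -j ACCEPT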

While my ISP connection is far too slow to saturate even 802.11g, once I move, I want a high-performance home network. In all likelihood, way, way overkill, but I'd like to have the spare ports in case I'd like to aggregate links in the future.

Quote:

I'm really trying to keep things as cheap as possible. To be honest, I was rather excited about the SG200-08 at $100 and am not thrilled to spend much more

So you have champagne taste and a Kool-aid budget. eBay is probably your friend here...but there are *tons* of switches that will meet all of your requirements (as I understand them) for under $100 out there, used/refurbed.

I'm not saying that $100 is any kind of budget. I've traditionally thought that networking is about as interesting as electricity. Networking is obviously quite complex, but I still don't want to spend more than necessary.

I was under the impression that I only needed support on the switch for this. If that's not true, I'll need to be careful when selecting the dual-port cards for the storage nodes.

Nope. Both ends of any LAG must support the same protocol(s).
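On Linux, the host side is the bonding driver in 802.3ad mode. Roughly this sketch (interface names and the address are placeholders, and the two switch ports must be configured as a matching LACP group):

    # Create an LACP bond and enslave both NICs (links must be down first).
    modprobe bonding
    ip link add bond0 type bond mode 802.3ad miimon 100
    ip link set eth0 down; ip link set eth0 master bond0
    ip link set eth1 down; ip link set eth1 master bond0
    ip link set bond0 up
    ip addr add 10.1.1.10/24 dev bond0
    # Check that LACP actually negotiated with the switch:
    cat /proc/net/bonding/bond0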

Quote:

It was more of a comment about bandwidth limitations on inter-switch connections. However, from what you say, I shouldn't have problems using LACP if needed.

I currently have numerous 80Gbps inter-switch LAGs (8x10G) in production. It's mostly for show, though...since we don't come *close* to using that much bandwidth.

Quote:

The desktop has iptables NAT rules set up to forward to the 2WIRE network

Ah...gotcha. Shouldn't be a problem...though I'm curious why you chose this design...is there any compelling reason you have built this openstack network on a separate subnet, hidden behind iptables? I'm genuinely curious. There are occasions where a double-NAT setup can break your communications.

My "kool aid budget" crack wasn't meant to be snarky...just that for $100 or less, you'll need to look up refurbs/used boxes on E-bay, (try the Agora here as well).

There is an SG300-20 model now, I think, with 18 copper ports and 2 SFP (maybe SFP+?) ports that would be an ideal fit, at around $300. Still out of the budget, but $100 is not a reasonable price for a managed gigabit switch at all.

Could you explain the SFP connectors more? I know that we use them at work on our optical 10GbE connections, and they are quite pricey (everything 10GbE is). I don't see what the advantages would be. And why only two ports?

I also see that the SG300-20 is fanless, which is a definite plus. I just wish it was only half the width...

Could you explain the SFP connectors more? I know that we use them at work on our optical 10GbE connections, and they are quite pricey (everything 10GbE is). I don't see what the advantages would be. And why only two ports?

SFP can have the optics switched out for longer distances than you can cover with copper...the "only two ports" thing typically means you are using them for uplinks to core switching gear.

The desktop has iptables NAT rules set up to forward to the 2WIRE network

Ah...gotcha. Shouldn't be a problem...though I'm curious why you chose this design...is there any compelling reason you have built this openstack network on a separate subnet, hidden behind iptables? I'm genuinely curious. There are occasions where a double-NAT setup can break your communications.

Entirely due to physical constraints in my house. There's no reasonable way to run an Ethernet cable to the 2WIRE AP; the only method is via Wifi. It's not a good design -- just the only thing that I could think of last night to proceed with software updates on the compute nodes. For now, the OpenStack controllers (read: web interface and API) will run on my desktop and have access to both the 192.168.1.0/24 and 10.1.1.0/24 networks. As you say, I may run into problems with stuff running in OpenStack.

In all likelihood, I'll probably put my desktop in 2WIRE's "DMZ" mode, which will give it a public IP and eliminate one level of NAT.

Quote:

My "kool aid budget" crack wasn't meant to be snarky...just that for $100 or less, you'll need to look up refurbs/used boxes on E-bay, (try the Agora here as well).

I don't know that I'd buy one for production at work, but I use a 2808 and 2816 in my home lab. Just picked up the 2816 for ~$120 refurb on eBay with two sets of ears. The 2700 series is discontinued and even cheaper used. I couldn't find any significant difference between the two lines in their spec sheets, so I don't know what you'd be losing, if anything, by choosing the 2700 series.

I think that this is exactly what I want. I'm very tempted to go with a refurbished 2716...

The only thing I don't see in the PowerConnect 2716 datasheet is the fan configuration. However, I have to assume that it is fanless like the 2816.

I've been buying Zyxel switches at work because my budget is pretty tight (local government). The price is great, and I haven't had any trouble with the nine or so I've used to slowly replace our unmanaged gear.

This is a design for wherever I move. The 2WIRE AP is replaced by a DD-WRT AP (with 4x 100MbE), which will be located in my office.

Hopefully that makes sense. Note that this is a logical diagram. For example, the compute nodes only have one physical interface, so the four connections are trunked.

The Wifi clients and my workstation's Internet connection continue to work like they always have: DHCP assigned address on 192.168.1.0/24 behind NAT. (The workstation is connected via 100MbE.)

The workstation has a second NIC, on VLAN 2, that provides GlusterFS storage. (I will likely get a dual-port NIC and do LACP here.)

Each compute node has a single NIC with VLANs 1, 2, and 100-200 trunked. VLAN 1 will be used as a management network and for patches (hence the Internet access). (DD-WRT will be configured to only hand out DHCP addresses from the first /25, and these nodes will get static addresses in the second /25.) VLAN 2 is for storage. Finally, VLANs 100-200 will be used as private networks between OpenStack VMs (the third octet in 172.16.0.0/16 will match the VLAN number).
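On the node side, those trunked VLANs are just 802.1Q subinterfaces on the single NIC. A sketch (interface names and addresses are illustrative, following the third-octet convention above):

    # Load the 802.1Q module, then create tagged subinterfaces on eth0.
    modprobe 8021q
    # Storage network (VLAN 2):
    ip link add link eth0 name eth0.2 type vlan id 2
    ip link set eth0.2 up
    # A tenant network, e.g. VLAN 150 -> 172.16.150.0/24:
    ip link add link eth0 name eth0.150 type vlan id 150
    ip addr add 172.16.150.2/24 dev eth0.150
    ip link set eth0.150 up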

In the future, I will have dedicated storage nodes, which are not shown. They will have 2x LACP-enabled NICs trunked on VLANs 1 and 2.

I don't know that I'd buy one for production at work, but I use a 2808 and 2816 in my home lab. Just picked up the 2816 for ~$120 refurb on eBay with two sets of ears. The 2700 series is discontinued and even cheaper used. I couldn't find any significant difference between the two lines in their spec sheets, so I don't know what you'd be losing, if anything, by choosing the 2700 series.

Thanks so much for the Dell recommendation. I received a refurbed 2816 last night and did the initial setup. The web interface is nice to work with, everything has worked well so far, and it draws an impressive (but likely expected by veteran network admins) 9W.

I did have to learn the hard way that the RS-232-to-RJ-45 serial cable would not work between my laptop's Ethernet port and the switch's serial port. But all I had to do was get the switch rebooted into unmanaged mode, and I did everything through the web UI.

I think you must have got that cable backwards. It is usually for an RJ45 console port on the switch and the serial (DB9) port connects to your computer (often requiring a USB-Serial adapter). Then you use a terminal emulator program (like putty, SecureCRT, hyperterm, etc.) on your computer to talk to it. This allows you to configure the switch when the web interface is unavailable or do things like recover the password or reset the configuration or recover from a failed boot up on the switch.
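Once the DB9 end is on your computer (usually via a USB-serial adapter), the session itself is one command. A sketch, assuming the adapter shows up as /dev/ttyUSB0 and the common 9600 8N1 settings (check the switch's manual for the actual baud rate):

    screen /dev/ttyUSB0 9600
    # or: minicom -D /dev/ttyUSB0 -b 9600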

I think you must have got that cable backwards. It is usually for an RJ45 console port on the switch and the serial (DB9) port connects to your computer (often requiring a USB-Serial adapter). Then you use a terminal emulator program (like putty, SecureCRT, hyperterm, etc.) on your computer to talk to it. This allows you to configure the switch when the web interface is unavailable or do things like recover the password or reset the configuration or recover from a failed boot up on the switch.

That's exactly what happened.

The switch has the DB9 port and cannot do console over one of its gigabit ports. The reseller included a cable meant for a switch that does console over RJ-45, with DB9 on the client end.

I didn't realize that the cable only went one way and spent several hours trying to get it to work on my laptop's Ethernet port.