multiple subnets to blades?

I have a networking guy, who usually seems to know what he's doing, who's apparently having some issues assigning multiple vlans to a single blade.

To further muddy the waters, this person isn't the same person as the one who's managing the virtual connect for the blades.

And I, the primary SA for the blades, am neither one of these fine folk. Organizational structure designed by a committee...

The core requirement is that I have multiple subnets assigned to the blades so I can get a cluster going. My networking knowledge predates vlans, so I'm not really clear on the concept; however, it sounds like it's a one-to-one relationship between vlans and subnets.

Is anyone familiar with this issue and, if so, where I could direct my cohorts for the answers to their issues?

Re: multiple subnets to blades?

I'm in a somewhat similar situation, but I managed to implement multiple VLANs to a single blade successfully.

It might have been helpful that our VC configuration guy chose not to try anything fancy with the VC and instead went for a simple & stupid approach: anything that comes in through the VC trunk is allowed to go to all the blades, and they will then sort it out themselves.

These blades are all virtualization hosts (VMware for x86 and HPVM for Itanium), so most stuff happens in the VSwitch layer of the respective virtualization platforms anyway...

> My networking knowledge predates vlans, so I'm not really clear on the concept; however, it sounds like it's a one-to-one relationship between vlans and subnets.

One subnet per VLAN is usually correct. It does not have to be so, but doing otherwise tends to make things more complicated than necessary.

A single VLAN behaves mostly like a single physical network segment. However, on multi-VLAN trunk connections, seeing no traffic for the VLAN you're interested in does not guarantee that the physical media isn't busy with other VLANs' traffic. This sharing may be desirable, as it enables more efficient use of network bandwidth, or it may require implementing per-VLAN bandwidth limiting and/or QoS.

The most important thing would be that you, the VC guy and the networking guy should all be able to "speak the same language", so that you can all describe your requirements and any problems you may have found in a way that is understandable and unambiguous to all three.

In Cisco terminology, to enable multiple VLANs, the connection from the switch to the VC must be a trunk port, not an access port.

In multi-VLAN connections (trunk ports) there can be at most one VLAN whose traffic is transmitted as usual (untagged). This may be called the "default VLAN" (or "native VLAN") for the port. All other VLANs must be "tagged", i.e. have the VLAN ID added to the Ethernet frame.

It is also possible to have all traffic in tagged form: in this case there is no default VLAN, and any non-tagged packets will be dropped. The default VLAN configuration must be the same in both endpoints of a trunk link, otherwise it won't work.
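From the blade OS side, here's a rough sketch of how the tagged/untagged split looks on a Linux host (the interface name eth0, the VLAN ID 20, and the addresses are all placeholder assumptions; requires root):

```shell
# Untagged (default/native VLAN) traffic simply arrives on the physical NIC:
ip addr add 10.0.10.5/24 dev eth0

# Tagged traffic needs an explicit 802.1Q subinterface per VLAN ID.
ip link add link eth0 name eth0.20 type vlan id 20
ip addr add 10.0.20.5/24 dev eth0.20
ip link set eth0.20 up
```

The host then treats eth0.20 as an ordinary interface, but every frame it sends or receives carries VLAN tag 20 on the wire.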

Adding the VLAN ID makes the Ethernet-level packet protocol unrecognizable to non-VLAN-aware hosts. (By the way, this is not a strong protection: a full-featured network sniffer on a trunk link can parse the traffic of all the VLANs it can hear with no trouble, even if the host it's running on is not VLAN-aware as such.)
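For instance, a VLAN-aware capture tool on a trunk-facing NIC can show the tagged frames of every VLAN on the link (interface name is a placeholder; requires root):

```shell
# Show 802.1Q-tagged frames on a trunk-facing NIC.
# -e prints the link-level header, so the VLAN ID of each frame is visible.
tcpdump -e -n -i eth0 vlan
```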

There are two ways to deal with this:

a) make the VC untag the traffic and pass it to the appropriate interfaces on the blades.

In this solution, the number of hardware-level interfaces in your blades will limit the number of VLANs you can access. (Although the limit is not necessarily one-to-one: each Flex-10 10GbE interface will split into multiple logical FlexNIC interfaces when used with a Virtual Connect module.)

b) make the VC pass the traffic in tagged form, and configure the blade OS to accept VLAN-tagged traffic.

This allows you to aggregate all the blade's physical NICs into one big fault-tolerant link (usually called "NIC teaming" on Windows, "bonding" on Linux, "APA" on HP-UX, and "EtherChannel" in Cisco switch terminology), then implement VLAN support on top of that.
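On Linux, for example, option b) might look roughly like this (interface names, the bond mode, VLAN IDs, and addresses are all assumptions; HP-UX would use APA plus its own VLAN tooling instead):

```shell
# Aggregate two physical NICs into one fault-tolerant bond.
ip link add bond0 type bond mode active-backup
ip link set eth0 down && ip link set eth0 master bond0
ip link set eth1 down && ip link set eth1 master bond0
ip link set bond0 up

# Then stack one 802.1Q subinterface per VLAN on top of the bond.
ip link add link bond0 name bond0.10 type vlan id 10
ip link add link bond0 name bond0.20 type vlan id 20
ip addr add 10.0.10.5/24 dev bond0.10
ip addr add 10.0.20.5/24 dev bond0.20
ip link set bond0.10 up
ip link set bond0.20 up
```

Each VLAN subinterface then survives the failure of either physical NIC, since the bond handles failover underneath.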

Re: multiple subnets to blades?

One *can* run multiple IP subnets over the same "broadcast domain" - the caveat is there isn't any traffic isolation at layer 2. You simply create multiple "logical" interfaces on top of the same physical interface - eg lan0:1, lan0:2, etc. The HP-UX transport only knows about the one "physical" (from the transport's perspective) interface.
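On HP-UX that looks something like this (the addresses are placeholders):

```shell
# One physical NIC, two IP subnets, no VLANs involved:
ifconfig lan0 192.168.1.10 netmask 255.255.255.0 up
ifconfig lan0:1 192.168.2.10 netmask 255.255.255.0 up

# Both logical interfaces ride the same wire and the same broadcast domain.
netstat -in
```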

What vlans allow one to do is provide traffic isolation at layer2 - to pretend that the one set of physical infrastructure is really multiple, separate logical infrastructure.

Now, if you have a single physical port in your server, and you want multiple virtual lans (aka vlans), those have to be "tagged" vlans - there is an additional header sent to/received by the systems that includes a vlan id, and that is how the mux/demux of the traffic over a single port/wire takes place. You define multiple vlans on top of your lan0 hardware and each of those appears as a separate "physical" interface to the transport.
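On HP-UX 11i, for instance, tagged vlans are created with lanadmin (the VLAN ID, PPA, resulting interface name, and address below are illustrative assumptions; check lanadmin(1M) for the exact syntax on your release):

```shell
# Create a tagged VLAN with ID 100 on top of physical PPA 0 (lan0).
# HP-UX assigns the new virtual interface a PPA of its own (e.g. lan5000).
lanadmin -V create vlanid 100 0

# List configured VLANs to find the new interface's PPA.
lanadmin -V scan

# Then configure it like any other interface.
ifconfig lan5000 192.168.100.10 netmask 255.255.255.0 up
```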

Re: multiple subnets to blades?

1. You have an Integrity blade (since this is the HP-UX forum).
2. You have 4 built-in LAN interfaces on each blade, plus you might have a mezzanine card that provides additional interfaces.

Let's just say you have the 4 basic LAN cards. So HP-UX will see lan0, lan1, lan2, lan3. You assign them different IPs on different subnets.
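To make those assignments persistent across reboots, the corresponding entries in /etc/rc.config.d/netconf would look something like this (the addresses are placeholders):

```shell
# /etc/rc.config.d/netconf fragment: each built-in NIC on its own subnet.
INTERFACE_NAME[0]="lan0"
IP_ADDRESS[0]="10.1.1.10"
SUBNET_MASK[0]="255.255.255.0"

INTERFACE_NAME[1]="lan1"
IP_ADDRESS[1]="10.1.2.10"
SUBNET_MASK[1]="255.255.255.0"
```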

Now the VC guy and the network guy have to figure out which networks the interconnect module connects to on which outbound ports, and then, using the Virtual Connect software, he virtually connects the correct LAN card on the blade to the correct uplink on the interconnect module.

Let's say you have 4 uplinks to 4 different networks on the VC module. You specify which ports each network is connected to, and then you specify which LAN card on the blade is connected to which network. It really is that simple.

There is an online book, "Virtual Connect for Dummies" - it's a real book, not a criticism. It explains this, and there are whitepapers that should be able to get you through.