The Lab Goes SDN: Part 1 – NSX Install at Last!

I’ve covered overlay networks and their importance a few times in these pages over the years, but I have to admit that until this week I was never “walking the walk” at the ComplaintsHQ lab. To set a baseline, NSX comes in two flavors:

The first is NSX multi-hypervisor (NSX-MH), descended from the Nicira Open vSwitch work, which can be integrated with both vSphere and competing hypervisors (KVM, Xen). The catch is that this really is a vSphere play and not so much a vCenter play: the Open vSwitch integration replaces the vDS and so must integrate directly with a host's vSS. If you already have a vDS infrastructure in place, this requires some significant rearchitecting.

The second is NSX-V, the native VMware flavor of NSX, which is quickly evolving to be the de facto network architecture for VMware and is core to its SDN strategy. As an example, in upcoming versions of VCNS (vCloud Networking and Security), the NSX virtual firewall/router edge device is replacing the old vShield Edge. With NSX-V, the NSX SDN capabilities integrate directly with the vDS.

In my OpenStack entry I touched on the plans I had for introducing OpenStack into the lab. Unfortunately, the realities of NSX integration complicate things and have delayed those plans. Before we move forward I think it is worthwhile to call them out:

A mixed hypervisor (vSphere + other) OpenStack environment will require NSX-MH if you want to take advantage of advanced OpenStack SDN constructs (Neutron)

If you do not go that path, you need to fall back to static nova-network models. These map pretty closely to vCloud Director "port group assignment" org networks: you have to configure a bunch of VLANs up front and map them to port groups, which are then utilized by the OpenStack controller at the compute deployment layer (Nova).
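As a sketch, pre-creating one of those static VLAN-backed networks with the legacy nova-network VlanManager might look like the following. The label, VLAN ID, CIDR, and interface name are all hypothetical lab values, and the exact nova-manage flags vary by OpenStack release, so check `nova-manage network create --help` on yours:

```shell
# Hypothetical example: create a static VLAN-backed network for nova-network
# (VlanManager). All values below are made up for illustration; the flag
# names have shifted between OpenStack releases, so verify locally.
nova-manage network create \
  --label=tenant-net-100 \
  --fixed_range_v4=10.0.100.0/24 \
  --vlan=100 \
  --bridge=br100 \
  --bridge_interface=vmnic1
```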

VXLAN requires the vDS, NSX-MH can't integrate with the vDS, but Open vSwitch can integrate with VXLAN. Confused? Don't feel bad; overlay networking can get confusing fast. The net out here is that in a vCenter environment, to take advantage of both Neutron and VXLAN, you need essentially parallel networking setups: NSX-MH will be speaking VXLAN, but doing its own thing and not participating in an existing vDS VXLAN.

For lots of reasons I don't want to break down my HA/DRS clusters. I could potentially have played with OpenStack and NSX-MH exclusively in my entirely nested vCenter 2 environment, but the purpose of that one is really SRM, so it would complicate things. I still may go ahead and create a third nested vCenter environment and play with OpenStack and NSX-MH there, but that will have to wait. For now I decided to move forward with NSX-V and shelve the OpenStack testing.

So back to the implementation detail… NSX is a fairly complex technology with dependencies that never quite fit my old white box lab setup. For example, you'll need a vDS, which means you'll need a cluster and multiple NICs in each host. That means either a pretty complex white box build or a really good nested setup. I never quite had the former, since I was focused on building to a rock bottom budget, but these days I am running the latter, so the time was right.

NSX has a few core components to be aware of:

NSX Manager: The NSX management plane is built by the NSX Manager, which provides the single point of configuration and the REST API entry point for NSX in a vSphere environment.
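Since the Manager is the REST API entry point, you can poke at it directly once it is up. A minimal sketch, assuming placeholder credentials and appliance IP, and an NSX 6.x-era endpoint that is worth verifying against the API guide for your version:

```shell
# Query the NSX Manager system summary over its REST API.
# IP and credentials are placeholders for my lab; -k skips certificate
# validation, which is fine for a lab but not for production.
curl -k -u admin:default \
  https://192.168.1.50/api/1.0/appliance-management/summary/system
```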

NSX Controller: The NSX control plane runs in the NSX Controller. In a vSphere environment with the vDS, the controller enables multicast-free VXLAN and control-plane programming of elements such as the distributed logical router (VDR). In a multi-hypervisor environment, the controller nodes program the vSwitch forwarding plane. In all cases the controller is purely a control plane component; no data plane traffic passes through it. Controller nodes are deployed in a cluster with an odd number of members to enable high availability and scale, so the failure of a controller node does not impact data plane traffic.
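Once controllers are deployed (a later step in the setup), the Manager API can report on the cluster. A sketch with placeholder address and credentials, assuming the NSX 6.x controller endpoint:

```shell
# List deployed NSX controller nodes via the Manager REST API.
# Placeholder IP/credentials; verify the endpoint against the NSX API
# guide for your version.
curl -k -u admin:default https://192.168.1.50/api/2.0/vdn/controller
```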

Hypervisor Integration: The NSX data plane consists of the NSX vSwitch. The vSwitch in NSX for vSphere is based on the vSphere Distributed Switch (VDS) (or Open vSwitch for non-ESXi hypervisors), with additional components to enable rich services. The add-on NSX components include kernel modules (VIBs) which run within the hypervisor kernel, providing services such as distributed routing, a distributed firewall, and VXLAN bridging.
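After host preparation (covered in a later entry), you can verify that those kernel modules actually landed by listing installed VIBs on an ESXi host. A minimal check, assuming SSH access to the host; the exact VIB names differ between NSX versions, so the filter below is a guess to adjust:

```shell
# On an ESXi host, list installed VIBs and filter for NSX components.
# The "esx-v" prefix is what I'd expect for NSX-V VIBs (e.g. esx-vxlan,
# esx-vsip), but verify the names on your build.
esxcli software vib list | grep -i esx-v
```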

As you might imagine from the above, the first step in getting started with implementation is to deploy the NSX Manager. Luckily, as is frequently the case lately, VMware has packaged this as a click through OVA. Download the OVA and start the OVF Template deployment wizard from the web client as always:

The NSX Manager OVF package detail…

Agree if you’re ready to do this:

Select a deployment location for the VM:

Select a storage destination for the VM:

Connect the NSX Manager to a network (admin network generally since this is a management plane component):

Provide configuration for the appliance – passwords, hostname and IP info for the appliance:

Finish off the configuration and the NSX Manager VM will deploy:
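If you prefer the CLI to the wizard, the steps above can also be scripted with ovftool. A sketch with placeholder inventory names, networks, and OVF property keys (the `--prop:` keys vary between NSX Manager builds, so probe the OVA with ovftool first to see what it actually exposes):

```shell
# Scripted equivalent of the deployment wizard, using ovftool.
# Datastore, network, hostnames, and the vi:// inventory path are all
# placeholders for my lab; the vsm_hostname property key is an assumption
# based on the NSX Manager OVA and should be confirmed by inspecting it.
ovftool --acceptAllEulas \
  --name=nsx-manager \
  --datastore=lab-ds01 \
  --network="Management Network" \
  --prop:vsm_hostname=nsx-manager.lab.local \
  VMware-NSX-Manager.ova \
  'vi://administrator@vsphere.local@vcenter.lab.local/Lab-DC/host/Lab-Cluster'
```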

A simple HTTPS connection to the appliance IP brings you to the VAMI login:

Simple and clean UI. From here you can grab the tech support logs, view the configuration summary, manage and update the network configuration, upgrade the appliance and, most importantly at this stage, integrate the appliance with vCenter and back it up:

vCenter registration is very straightforward: enter the vCenter address and login info, as well as the Lookup Service details. The completed vCenter registration is shown below for reference:
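The same registration can be driven through the API. A hedged sketch from memory of the NSX 6.x vcconfig endpoint, so verify both the path and the XML schema against the API guide for your version; all addresses and credentials are placeholders:

```shell
# Register NSX Manager with vCenter via the REST API.
# Endpoint and payload schema are from memory of the NSX 6.x API and
# should be double-checked; every value below is a lab placeholder.
curl -k -u admin:default -X PUT \
  -H 'Content-Type: application/xml' \
  -d '<vcInfo>
        <ipAddress>vcenter.lab.local</ipAddress>
        <userName>administrator@vsphere.local</userName>
        <password>VMware1!</password>
      </vcInfo>' \
  https://192.168.1.50/api/2.0/services/vcconfig
```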

With this part complete, the NSX Manager appliance is configured, so go ahead and back it up just to be safe. After that we can head into the web client, where we will now see the NSX management solution – Networking & Security. Clicking on that icon brings us to the next stage of the configuration, but more on that next entry!
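For the backup, the UI works fine, but the Manager also exposes backup/restore endpoints. A sketch from memory of the NSX 6.x appliance-management API (paths and XML schema worth double-checking before use; the FTP server details are placeholders):

```shell
# Configure an FTP backup destination, then trigger an on-demand backup.
# Endpoint paths and schema are from memory of the NSX 6.x API; verify
# against the API guide. All server details below are lab placeholders.
curl -k -u admin:default -X PUT \
  -H 'Content-Type: application/xml' \
  -d '<backupRestoreSettings>
        <ftpSettings>
          <transferProtocol>FTP</transferProtocol>
          <hostNameIPAddress>ftp.lab.local</hostNameIPAddress>
          <port>21</port>
          <userName>backupuser</userName>
          <password>secret</password>
          <backupDirectory>/nsx</backupDirectory>
          <filenamePrefix>nsxmgr</filenamePrefix>
          <passPhrase>BackupPassphrase1!</passPhrase>
        </ftpSettings>
      </backupRestoreSettings>' \
  https://192.168.1.50/api/1.0/appliance-management/backuprestore/backupsettings

# Kick off an on-demand backup once the destination is configured.
curl -k -u admin:default -X POST \
  https://192.168.1.50/api/1.0/appliance-management/backuprestore/backup
```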