Building the Right Network for your VMware NSX Deployment

** I have made, and will continue to make, some edits to this post based on feedback. I’ll leave the original text with strikethrough and make modifications in blue. First, I’m copying the disclaimer to the top. **

** Disclaimer: I work for Cisco as a Principal Engineer focused on the Cisco ACI and Nexus 9000 product lines, which means you’re welcome to write all of this off as biased drivel. That being said, I intend for this post to be technically accurate and to support people looking to deploy these solutions. If I’ve made a mistake, let me know. I’ll correct it. **

VMware NSX is a hot topic in the SDN conversation, and many customers are looking to deploy it for test or proof-of-concept purposes, although it’s probably not quite ready for production prime time, as indicated by the only +/- 250 paying customers VMware has claimed at this point. Those numbers, for a product that is 7+ years old from the Nicira perspective and several years old as NSX after Nicira’s acquisition, suggest there is still work to be done.

That being said, VMware has built a message around NSX which originally focused on networking, then morphed into security due to lack of interest and traction. The VMware NSX security message is focused on a very important story: micro-segmentation. Micro-segmentation is a different way of looking at the concept of security within the data center. Traditionally we’ve focused on what’s called perimeter security: securing traffic as it enters or exits the data center to the campus or the WAN. Within this architecture very little security is provided between servers, or server groups, within the data center. The graphic below shows an example of this.

The problem with this model is that little to no security is available for server-to-server communications. This poses an issue, as modern network exploits are constantly advancing. Many exploits are designed to find a single vulnerable server within the data center, exploit that server, then use it to attack other servers from within the ‘trusted zone.’ This is only one example of the need for modern security designs such as micro-segmentation; other examples include compliance zones, multi-tenancy designs, etc. The graphic below depicts the general concept of micro-segmentation.

As shown in the diagram above, micro-segmentation applies additional security at the edge to filter and protect server-to-server traffic. This provides the ability to secure traffic between physical/virtual servers or groups of servers. Edge security is typically layered in as an addition to perimeter security and can be done with virtual appliances, physical appliances, or both. This typically means implementing stateless or stateful inspection devices capable of securing traffic between server groups.
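To make the contrast concrete, here is a minimal sketch (not any vendor’s API; the group names and default-deny behavior are my own illustration) of how micro-segmentation expresses policy as allow-rules between server groups rather than a single perimeter rule set:

```python
# Illustrative model: perimeter-only security has one enforcement point at the
# data center edge, while micro-segmentation defines rules between internal
# groups and denies anything not explicitly allowed.

PERIMETER_RULES = {("wan", "web"): "allow"}  # classic edge-only security

MICROSEG_RULES = {
    ("wan", "web"): "allow",
    ("web", "app"): "allow",   # web tier may reach the app tier
    ("app", "db"):  "allow",   # app tier may reach the database tier
    # anything not listed (e.g. web -> db) is denied by default
}

def check(rules, src_group, dst_group):
    """Return the verdict for traffic between two groups; default deny."""
    return rules.get((src_group, dst_group), "deny")

# A compromised web server trying to attack the database directly:
print(check(MICROSEG_RULES, "web", "app"))  # allow
print(check(MICROSEG_RULES, "web", "db"))   # deny
```

In the perimeter-only model there is simply no enforcement point between the internal groups, which is exactly the gap the exploit scenario above abuses.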

VMware NSX for micro-segmentation:

The key benefit of VMware NSX is its distributed firewall within the hypervisor. It is one of several products that can provide this capability. This capability allows traffic between VMs to be inspected and enforced locally within a hypervisor. This can provide security benefits for VM traffic, as well as potential performance benefits, in the form of reduced latency for traffic that is switched locally within a single hypervisor, although that is not a common scenario. The same level of security can, of course, be provided in other ways (this is IT, it always depends.) One of those ways would be enforcing traffic switching and policy enforcement consistently at the network edge (leaf or access layer.)

When choosing whether or not to use VMware NSX you’ll want to consider a few important factors:

Performance – There will be significant CPU overhead in each hypervisor which is directly proportional to the amount of traffic inspection/enforcement being done on VM traffic. You’ll want to weigh this overhead against the real purpose of the hypervisor CPU: running your VMs.

Hypervisor(s) used – VMware NSX is actually two products:

NSX for vSphere, which only works with the VMware hypervisor and management tools

NSX Multi-Hypervisor (NSX-MH), which works with a select few additional hypervisors but also adds dependencies on a VMware version of Open vSwitch (OVS) that is not part of the OVS community code train.

Each of these has a disparate feature set, so ensure you’re assessing the specific product for the features you require.

It has been brought to my attention that VMware NSX-MH is being discontinued and will no longer be sold. Some of the features/support it added above NSX for VMware vSphere will be incorporated into NSX over time.

Assuming now that you’ve chosen to go down the path of NSX and have picked the correct product for your needs, let’s discuss how to implement NSX in a holistic network architecture. For brevity, we’ll focus on NSX for vSphere and assume you’re using only VMware hypervisors, as that product requires.

NSX is now providing the ability to inspect and secure your VM traffic. In addition, it can provide routing functionality for the VM data. It’s important to remember that NSX is only able to secure, route, or manage traffic that is within a hypervisor, and in this case a VMware hypervisor. This means that the rest of your physical servers, appliances, mainframe ports, etc. will need another security model and management domain, or the traffic will need to be artificially forced into the hypervisor. The second option causes bandwidth constraints and non-optimal traffic patterns, which result in increased latency and degraded user experience.

The other important thing to remember is that NSX is primarily designed for management and security of the VM-to-VM communication. This means that even non-VM-data ports on the hypervisor require special consideration. The following diagram depicts these separate ports.

As shown in the diagram above, a single VMware host uses several physical port types for network connectivity. VMware NSX focuses on routing and services for VM data, which is a subset of the host’s networking requirements. Security, services, and connectivity will still need to be applied to the host management, vMotion, and IP storage ports. Additionally, physical network connectivity, port configuration, and security will need to be implemented for the VM data ports.

With that in mind, it’s important to assess your end-to-end network infrastructure prior to deploying NSX. Especially when working with network security, you don’t want to start down a path laid out with a myopic, virtual-only view. One way to close the gaps left by the VMware NSX Network Functions Virtualization (NFV) tool is to use Cisco Application Centric Infrastructure (ACI) to provide robust, automated, secure network transport. As NSX is simply an application running in the virtual environment, ACI is able to handle it as it would any other application. Additionally, ACI is built with end-to-end micro-segmentation as a native feature of the platform.

Using VMware NSX with Cisco ACI for a holistic security solution:

This section will assume preexisting knowledge of the basic ACI concepts. If you’re not familiar, the following short videos will bring you up to speed:

Cisco ACI takes an application-requirements approach to automating network deployment. Put simply, ACI automates network provisioning based on an application’s requirements. For example, rather than translating compliance requirements for an application into VLANs, subnets, firewall rules, etc., you simply place PCI-compliant endpoints (servers, ports, etc.) in a group that dictates the access rules for that group.

ACI groups are called End-Point Groups (EPGs) and can be used to group any objects that require similar connectivity and policy, as described in the video above, ‘Application Centric Infrastructure | End-Point Groups.’ This grouping can be used to enhance NSX security and automation by providing policy for physical links, management/vMotion/IP storage ports, etc. The diagram below shows an example of this configuration.

As shown in the diagram above, ACI provides segmentation beyond what NSX provides for VM traffic. This allows the NSX VM data ports to be placed into groups as required, while also grouping and segmenting the ports not managed by NSX, as well as physical server and appliance ports. This allows for a complete security and automation stance beyond the limitations of VMware NSX.
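The grouping idea can be sketched as a toy model. Everything here (the `Fabric` class, `permit`, the endpoint names) is illustrative and not the ACI API; the point is that endpoints of any type, VM vNICs, vMotion ports, IP storage ports, or physical ports, land in groups, and policy is expressed between groups:

```python
# Toy EPG model: endpoints are assigned to named groups, and traffic is only
# allowed when an explicit (consumer, provider) relationship exists.
from dataclasses import dataclass, field

@dataclass
class EndpointGroup:
    name: str
    endpoints: set = field(default_factory=set)  # vNICs, vmk ports, physical ports

@dataclass
class Fabric:
    groups: dict = field(default_factory=dict)
    contracts: set = field(default_factory=set)  # permitted (src_group, dst_group) pairs

    def add_group(self, name):
        self.groups[name] = EndpointGroup(name)

    def permit(self, src_group, dst_group):
        self.contracts.add((src_group, dst_group))

    def allowed(self, src_ep, dst_ep):
        src = next(g.name for g in self.groups.values() if src_ep in g.endpoints)
        dst = next(g.name for g in self.groups.values() if dst_ep in g.endpoints)
        return (src, dst) in self.contracts

fabric = Fabric()
for name in ("nsx-vm-data", "vmotion", "ip-storage"):
    fabric.add_group(name)
fabric.groups["nsx-vm-data"].endpoints.add("vnic-web-01")
fabric.groups["vmotion"].endpoints.add("vmk1-host-a")
fabric.permit("nsx-vm-data", "nsx-vm-data")  # VM data may talk to VM data

# vMotion ports have no permitted path to the VM-data group:
print(fabric.allowed("vmk1-host-a", "vnic-web-01"))  # False
```

The practical takeaway matches the text: the non-VM-data ports that NSX does not manage still get segmentation, because membership and policy live in the fabric rather than in the hypervisor.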

ACI additionally provides line-rate gateway functionality from any port to any port. NSX relies on encapsulation techniques such as VxLAN to traverse the underlying IP network, so for this encapsulated VM traffic to communicate with other devices, such as the 30+ percent of workloads that aren’t virtualized using traditional VM techniques, a gateway is required. ACI performs this gateway function in the fabric, which reduces the x86 requirements of VMware NSX and removes the performance bottlenecks caused by NSX gateway servers. The following diagram depicts a 3-tier web application utilizing ACI to provide traffic normalization for both virtualized and non-virtualized servers.

The diagram shows the standard ports used by the VMware hosts, as well as two EPGs used to group the NSX Web- and App-tier VMs. The database tier is shown running on physical servers, creating a complete web application from both virtual and physical resources. Typically, with this type of design, NSX gateway servers would be required for encapsulation/de-encapsulation between the virtual and physical environments.

Within ACI, overlay encapsulation is normalized between untagged, 802.1Q, and VxLAN, with NVGRE support coming shortly. This means that gateway devices are not needed, because the ACI ports handle translation at line rate. The following diagram shows the encapsulation and traffic flow for this example 3-tier web application.

The diagram above depicts the traffic encapsulation translation that is done natively by Cisco ACI. User traffic will be 802.1Q VLAN-tagged or untagged and needs to be sent to the Web tier via the correct VxLAN tunnel. From there, the VxLAN-to-VxLAN forwarding between the virtualized Web and App tiers can be handled by NSX or ACI. Lastly, the VxLAN encapsulation is translated by ACI back to VLAN-tagged or untagged traffic and sent to the Database tier. This is all handled bi-directionally, significantly reducing latency and overhead.
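The normalization step above boils down to a fixed mapping between an 802.1Q VLAN ID on the physical side and a VxLAN VNI on the fabric side. Here is a rough sketch of that translation; the VLAN/VNI numbers and dict-based frame representation are made up for illustration, not drawn from any ACI configuration:

```python
# Minimal VLAN <-> VNI translation table, applied in both directions.
VLAN_TO_VNI = {100: 10100, 200: 10200}           # physical-side VLANs -> fabric VNIs
VNI_TO_VLAN = {v: k for k, v in VLAN_TO_VNI.items()}

def to_fabric(frame):
    """Physical/802.1Q side -> VxLAN side (encapsulate)."""
    return {"vni": VLAN_TO_VNI[frame["vlan"]], "payload": frame["payload"]}

def to_physical(packet):
    """VxLAN side -> 802.1Q side (de-encapsulate)."""
    return {"vlan": VNI_TO_VLAN[packet["vni"]], "payload": packet["payload"]}

# Round trip: App-tier VxLAN traffic translated to the Database tier's VLAN
pkt = to_fabric({"vlan": 100, "payload": b"sql query"})
assert pkt["vni"] == 10100
frame = to_physical(pkt)
assert frame["vlan"] == 100 and frame["payload"] == b"sql query"
```

Because the mapping is symmetric and stateless, it can be applied per-port at line rate, which is why no separate gateway server is needed in this design.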

Summary:

When NSX is chosen for virtual network automation or security, there are several factors that must be considered. The most significant of these is handling traffic that NSX does not support, such as physical servers/appliances, unsupported hypervisors, non-hypervisor-based containers, etc. For end-to-end security and automation, the physical transport network is the best enforcement point, as it handles all traffic within the data center. Cisco ACI provides a robust, automated, and secure network that greatly enhances the NFV functionality provided by NSX for virtual machine traffic.


Post Author:
Joe Onisick

Joe has over 13 years of experience in various disciplines within technology and the data center. His current focus is cloud computing infrastructures, I/O consolidation, and next-generation data center architectures.

7 Replies to “Building the Right Network for your VMware NSX Deployment”

Hi Joe – i guess the issue of running NSX with ACI comes down to cost as well as having multiple management points yada yada……but Cisco already have a micro-segmentation distributed firewall within Vmware (and hyper-v?) via the 1kv and the VSG. Integrate that with ACI and this seems like the best of both worlds?