NSX-T

Back in November 2018, VMware Cloud on AWS (VMC) SDDC 1.5 Patch 1 was released, and it was one of the most highly anticipated releases by our customers. Although this was a "patch" release, it included a ton of new features and also brought the full power of the NSX-T platform to VMC as a generally available feature!

With NSX-T, customers also now have access to the highly requested Distributed Firewall (DFW) capability which enables granular control over East-West traffic between application workloads. In addition to enabling micro-segmentation in VMC, customers can now easily manage DFW rules using a number of grouping constructs (Tags, Virtual Machines & Conditional Statements) to create dynamic policies which follow their workloads.

Customers can configure DFW (as well as Edge Firewall) rules using the VMC Console UI, but many of you have been asking for an automated method, especially if you need to create a large number of policies for more than a couple of workloads. After returning from the holiday, I spent the last couple of days updating my NSX-T Policy PowerShell Module, which now includes basic support for managing DFW. For those of you who are new to using the NSX-T Policy API and PowerCLI, be sure to give these two articles a read here and here before proceeding further.
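To give a feel for what the module is doing under the covers, below is a minimal sketch that creates a tag-based Group directly against the NSX-T Policy API from PowerShell. The manager address, credentials, group ID and tag value are made-up placeholders, and the basic authentication is only there to keep the sketch self-contained; against a VMC SDDC the module takes care of retrieving the proper endpoint and auth token for you, and the "cgw" domain used in the URL is an assumption based on how VMC organizes compute gateway policies.

# Minimal sketch: create a Policy API Group whose membership is driven by a VM tag
# (hypothetical manager address, credentials, group ID and tag value)
$nsxManager = "nsx-manager.example.com"
$cred = Get-Credential
$pair = "$($cred.UserName):$($cred.GetNetworkCredential().Password)"
$headers = @{ Authorization = "Basic " + [Convert]::ToBase64String([Text.Encoding]::ASCII.GetBytes($pair)) }

# Group with a conditional membership expression: any VM tagged "web" becomes a member
$groupSpec = @{
    display_name = "Web-VMs"
    expression   = @(
        @{
            resource_type = "Condition"
            member_type   = "VirtualMachine"
            key           = "Tag"
            operator      = "EQUALS"
            value         = "web"
        }
    )
} | ConvertTo-Json -Depth 5

# "cgw" is assumed here as the domain for compute workloads; adjust for your environment
Invoke-RestMethod -Uri "https://$nsxManager/policy/api/v1/infra/domains/cgw/groups/Web-VMs" `
    -Method PATCH -Headers $headers -Body $groupSpec -ContentType "application/json"

A DFW rule would then reference this group by its policy path (/infra/domains/cgw/groups/Web-VMs) as a source or destination, which is exactly the kind of dynamic, tag-driven policy described above.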

Today, when you deploy a new SDDC on VMware Cloud on AWS (VMC), NSX-T is the default networking stack and NSX-V is no longer used for net new deployments. In fact, we are about to start migrating existing VMC customers from NSX-V SDDCs to NSX-T SDDCs. Humair Ahmed, who works in our Networking & Security Business Unit, has an excellent blog post here that goes into more detail on what NSX-T brings to VMC.

At first glance, you might think that this is the exact same version of NSX-T that we have been shipping to our on-premises customers, but it is actually a newer and improved version. Similar to vSphere (vCenter and ESXi) and VSAN, VMC is always running a newer version of our software than our on-prem customers. One immediate difference to be aware of when using NSX-T in VMC is that the current NSX-T API is not available; instead, a new NSX-T Policy API has been introduced to help simplify the consumption of NSX-T. All functionality in the current on-prem NSX-T API can be consumed using the new Policy API.

At VMworld, I spoke to a number of current and prospective customers with NSX-T based SDDCs who were really interested in using the new NSX-T Policy API, and as the title of this blog post suggests, this will be a quick primer on how to do that. Before we get started, confirm that you have an NSX-T based SDDC deployed. If you are not sure, there are a few ways to determine this using either the VMC Console UI or API; instructions can be found here and here.
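To make that concrete, here is a minimal sketch that simply reads the root /infra object, which is the tree that all Policy API configuration (segments, groups, gateway and DFW policies) hangs off of. The manager address and basic-auth credentials are placeholders to keep the example self-contained; against a VMC SDDC you would instead use the NSX Manager URL and authentication token exposed through the VMC Console/API.

# Minimal sketch: read the Policy API root object (hypothetical manager address and credentials)
$nsxManager = "nsx-manager.example.com"
$cred = Get-Credential
$pair = "$($cred.UserName):$($cred.GetNetworkCredential().Password)"
$headers = @{ Authorization = "Basic " + [Convert]::ToBase64String([Text.Encoding]::ASCII.GetBytes($pair)) }

# Everything in the Policy API lives under /policy/api/v1/infra
$infra = Invoke-RestMethod -Uri "https://$nsxManager/policy/api/v1/infra" -Method GET -Headers $headers
$infra | ConvertTo-Json -Depth 4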

A question came up last week asking for a programmatic method to identify whether NSX-V or NSX-T is deployed in your environment. With NSX-V, vCenter Server is a requirement, but for NSX-T it is not, given NSX-T's multi-hypervisor support. In this post, I will assume that for NSX-T deployments you have configured a vCenter Server Compute Manager.

Both NSX-V and NSX-T use the ExtensionManager API to register themselves with vCenter Server, and we can leverage this interface to easily tell if either solution is installed. NSX-V uses the com.vmware.vShieldManager string to identify itself and NSX-T uses the com.vmware.nsx.management.nsxt string.

Here is a quick PowerCLI snippet that demonstrates the use of the vSphere API to check whether NSX-V or NSX-T is installed and provides the version shown in the registration:


# Query the vCenter ExtensionManager for the NSX-V and NSX-T extension registrations
$extensionManager = Get-View ExtensionManager

foreach ($extension in $extensionManager.ExtensionList) {
    if ($extension.Key -eq "com.vmware.vShieldManager") {
        Write-Host "NSX-V is installed with version" $extension.Version
    } elseif ($extension.Key -eq "com.vmware.nsx.management.nsxt") {
        Write-Host "NSX-T is installed with version" $extension.Version
    }
}

Here is a screenshot from my environment which has both NSX-V (6.4) and NSX-T (2.1) installed:

Note: Due to some current testing, I have not upgraded my NSX-T deployment to the latest 2.2 release, so I do not know if the version gets bumped to match the actual released version.

While working on my Getting started with VMware Pivotal Container Service (PKS) blog series a while back, one of the things I was also working on was some automation to help build out the required infrastructure: NSX-T (Manager, Controller & Edge), Nested ESXi hosts configured with VSAN for the Compute vSphere Cluster, and Pivotal Ops Manager. This was not only useful for my own learning purposes, but it also meant I could easily rebuild my lab if I messed something up, and it allowed me to focus more on the PKS solution rather than standing up the infrastructure itself.

To be honest, I had about 95% of the script done, but I was not able to figure out one of the NSX-T APIs, got busy, and left the script on the back burner. This past weekend, while cleaning out some of my PKS research documents, I came across the script and, funny enough, in about 30 minutes I was able to solve the problem I had been stuck on for weeks. I just finished putting the final touches on the script along with adding some documentation. Similar to my other vGhetto Lab Automation scripts, I have created a Github repo: vGhetto Automated PKS Lab Deployment.

UPDATE (06/19/18) - I have just updated the script to also include the deployment and configuration of the PKS components (Ops Manager, BOSH Director, Harbor & Stemcell). The script by default will now configure everything end-2-end and you will have a fully functional PKS environment that you can start playing around with. For complete details, please see the Github repo which has the updated requirements and documentation. Below is a screenshot of the PKS deployment and configuration which requires the use of the Ops Manager CLI (OM).

The script will deploy the following components which will be placed inside of a vApp as shown in the screenshot below:

NSX-T Manager

NSX-T Controller x 3 (though you technically only need one for lab/poc purposes)

NSX-T Edge

Nested ESXi VMs x 3 (VSAN will be configured)

Ops Manager

The script follows my PKS blog series and automates Part 3 (NSX-T) and the start of Part 4 (Ops Manager deploy); please refer to those individual blog posts for more information. The goal of the script is to enable folks to jump right into the PKS configuration workflows and not have to worry about setting up the actual infrastructure that is needed for PKS. Once the script has finished, you can jump right into Ops Manager and start your PKS journey.

Here is a sample execution of the script which took ~29 minutes to complete.

The full requirements for using the script can be found on the Github repo, and below are the software versions that I used to deploy and configure PKS:

In this article, we are now going to start configuring NSX-T so that it will be ready for us to install PKS and consume the networking and security services provided by NSX-T. The result is that PKS can deliver on-demand provisioning of all NSX-T components (Container Network Interface (CNI), NSX-T Container Plugin (NCP) POD, NSX Node Agent POD, etc.) automatically when a new Kubernetes (K8S) Cluster is requested, all done with a single CLI or API call. In addition, PKS provides a unique capability through its integration with NSX-T: network micro-segmentation at the K8S namespace level, which allows Cloud/Platform Operators to manage access between applications and/or tenant users at a much finer-grained level than was possible before, which is really powerful!

As mentioned in the previous blog post, I will not be walking through a step-by-step NSX-T installation and I will assume that you already have a basic NSX-T environment deployed, which includes a few ESXi hosts prepped as Transport Nodes and at least one NSX-T Controller and one NSX-T Edge. If you would like a detailed step-by-step walkthrough, you can refer to the NSX-T documentation here, or you can even leverage my Automated NSX-T Lab Deployment script to set up the base environment and modify it based on the steps in this article. In fact, this is the same script I have used to deploy my own NSX-T environment for PKS, with some minor modifications which I will be sharing at a later date.

If you missed any of the previous articles, you can find the complete list here:

In today's data centers, it is not uncommon to find servers with only 2 x 10GbE network interfaces; this is especially true with the rise of Hyper-Converged Infrastructure over the last several years. For customers looking to deploy NSX-T with ESXi, there is an important physical network constraint to be aware of, which is briefly mentioned in the NSX-T documentation here.

For example, your hypervisor host has two physical links that are up: vmnic0 and vmnic1. Suppose vmnic0 is used for management and storage networks, while vmnic1 is unused. This would mean that vmnic1 can be used as an NSX-T uplink, but vmnic0 cannot. To do link teaming, you must have two unused physical links available, such as vmnic1 and vmnic2.

As shown in the diagram below, an ESXi host with only two physical NICs cannot provide complete network redundancy, as each pNIC can only be associated with a single switch (VSS/VDS or the new N-VDS); pNICs cannot be shared across switches.

For customers, this means that you need to allocate a minimum of 4 pNICs to provide redundancy for both overlay traffic and non-overlay VMkernel traffic such as Management, vMotion, VSAN, etc. This is much easier said than done, as not all hardware platforms can easily be expanded, and even if they can, there is still a significant cost to expanding the physical network footprint (switch ports, cabling, etc.).
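As a quick sanity check before prepping a host for NSX-T, the snippet below uses PowerCLI to report which pNICs on a host are not yet claimed by a standard vSwitch, distributed switch or existing N-VDS and are therefore available as uplinks. The host name is a made-up placeholder for illustration.

# Minimal sketch: report which pNICs on a host are still unclaimed (hypothetical host name)
$vmhost = Get-VMHost -Name "esxi-01.example.com"
$netInfo = $vmhost.ExtensionData.Config.Network

# pNIC keys already claimed by standard vSwitches, distributed (proxy) switches or an N-VDS (opaque switch)
$claimedKeys = @($netInfo.Vswitch.Pnic) + @($netInfo.ProxySwitch.Pnic) + @($netInfo.OpaqueSwitch.Pnic)

# Any physical NIC whose key is not in the claimed list is a candidate uplink
$freePnics = $netInfo.Pnic | Where-Object { $claimedKeys -notcontains $_.Key }
Write-Host "Unclaimed pNICs on $($vmhost.Name):" (($freePnics.Device) -join ", ")

On a host with only two pNICs that are both in use, this comes back empty, which is exactly the constraint described above.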

UPDATE (06/12/18) - As of NSX-T 2.2, which was recently released, there is now a UI in NSX-T Manager for managing the migration of VMkernel interfaces to the N-VDS. For automation purposes, you may still find this article useful, but you now have the option of using the UI.
