VirtualizeStuff – Virtualize Everything
https://www.virtualizestuff.com

A Plethora of Underlay Network Options with EVE-NG and NSX-T
https://www.virtualizestuff.com/2019/08/23/a-plethora-of-underlay-network-options-with-eve-ng-and-nsx-t/
Sat, 24 Aug 2019

Intro

Hey, what’s up engineers! Lately, I’ve wanted the ability to quickly create network topologies that I can use as an underlay network for NSX-T, allowing me to validate use cases or perform customer-specific demos. For example, if the customer is a Cumulus shop, then the demo must utilize a Cumulus underlay network. Now, I understand that NSX-T doesn’t care what the underlay is, but providing customer-specific demos like this goes a long way in building rapport with the networking team when pitching NSX-T.

Purchasing physical switches was out of the question, as they tend to be expensive, loud, and power-hungry. So I wanted to leverage a network emulator and decided on EVE-NG PRO because it supports multiple network vendors, has a nice HTML5 interface, and is easy to use.

Topology

EVE-NG Topology

So, before we jump into EVE-NG, let’s quickly review the topology we are going to build. As you can see, we have a total of 5 Cumulus switches (2 spines and 3 leaves) connected to the OOB Mgmt network. This network provides IP addressing for the switches via a DHCP server and is also where the Ansible server lives, allowing us to execute playbooks against the switches. Leaf-01 and Leaf-02 connect to the NSX-T edge nodes, while Leaf-03 provides access to external networks. In this lab, we will use /30 subnets to establish IP connectivity between the spine and leaf switches, and BGP will handle all routing decisions. The spine switches will share AS 65100, while each leaf switch will have its own AS number. Now let’s jump into EVE-NG and begin building!
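To make the design concrete, here’s a minimal sketch of what a leaf’s BGP stanza could look like in the FRR configuration that Cumulus Linux uses; the leaf AS number, router ID, and /30 peer addresses are illustrative assumptions, not values from the lab:

```text
! /etc/frr/frr.conf on Leaf-01 (sketch only; 65101, the router-id,
! and the /30 peer addresses are assumed for illustration)
router bgp 65101
 bgp router-id 10.0.0.11
 neighbor 10.1.1.1 remote-as 65100   ! Spine-01 via swp1
 neighbor 10.1.2.1 remote-as 65100   ! Spine-02 via swp2
 address-family ipv4 unicast
  network 10.0.0.11/32
 exit-address-family
```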

EVE-NG

A quick note: I won’t be discussing how EVE-NG is deployed or configured in my environment. If this is something that interests you, let me know in the comments down below, and I can certainly make a follow-up video!

Alright, we have a blank canvas, so let’s add some nodes and networks. We’ll start with nodes by clicking Add an object in the upper left-hand corner and selecting Node. As you can see, we’re presented with a list of templates. Notice a bunch are grayed out because we haven’t added images for those specific templates. In the search box, type Cumulus and select it. We’ll begin by increasing the number of nodes from 1 to 5. Then we’ll choose the image we want to use, in this case 3.7.7. Leave the default name for now, as we will change this later. Finally, let’s increase the CPU count to 2, RAM to 512 MB, and Ethernet ports to 5, and click Save.

Woohoo! We have five switches on the canvas; now let’s add the four networks by heading back up to Add an object, this time selecting Network.

The 1st network we’ll create is the OOB Mgmt for our switches. Let’s name it OOB Mgmt; below that we have the Type dropdown. Depending on how you deployed EVE-NG, your selection might be different. In my environment, I will select Cloud1 and click Save. Now we can rinse and repeat these steps a few times: one for External connecting to Cloud2, one for NSXT-01 connecting to Cloud3, and finally one for NSXT-02 connecting to Cloud4. For the sake of time, I’ll speed up the creation of these three new networks. With everything on the canvas, let’s organize the objects to replicate what was shown in the diagram earlier and connect the switches:

We’ll start by connecting eth0 for all the switches to the OOB Mgmt network

Spine-01, swp1 to Leaf-01, swp1

Spine-01, swp2 to Leaf-02, swp1

Spine-01, swp3 to Leaf-03, swp1

Spine-02, swp1 to Leaf-01, swp2

Spine-02, swp2 to Leaf-02, swp2

Spine-02, swp3 to Leaf-03, swp2

Leaf-01, swp3 to Cloud NSXT-01

Leaf-02, swp3 to Cloud NSXT-02

Leaf-03, swp3 to Cloud External

Awesome, now that all our switches are connected, we can power them on by selecting More actions and clicking Start all nodes. I’ll open the console for Spine-01 and log in as cumulus with the default password of CumulusLinux!. To confirm it has received an IP address, I’ll run “net show interface” and, as you can see, we have an IP address. I’ll quickly repeat this process for the other switches.

That’s it! That’s all we need to do from an EVE-NG perspective; let’s look at the Ansible playbook responsible for configuring the switches.

Ansible

Let’s switch to VSCode and review the directory structure for the playbook. As you can see, it’s a basic Ansible folder structure. Let’s start from the bottom and work our way up:

The readme.md provides a high-level overview of the topology.

The lab01.yml is our playbook that runs a single play called Underlay Network Configuration, which executes two roles: routing and common.
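A hedged sketch of what lab01.yml could look like, given only what the post states (the play name, the network group, and the two roles):

```yaml
# lab01.yml – sketch based on the structure described in the post
- name: Underlay Network Configuration
  hosts: network
  roles:
    - common
    - routing
```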

The hosts file is our Ansible inventory file, broken into two groups, leaf and spine. These are then added as children of the network group. Note, the playbook targets the network group, as this ensures Ansible runs against all switches.
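The post doesn’t show the file itself; a hedged sketch of what such an inventory could look like (the hostnames simply mirror the topology):

```ini
[spine]
Spine-01
Spine-02

[leaf]
Leaf-01
Leaf-02
Leaf-03

[network:children]
spine
leaf
```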

The ansible.cfg sets default settings for the Ansible environment.

The roles folder contains two subfolders, common and routing. These folders essentially contain tasks that run against the switches. For example, if we look at the common/tasks folder, we can see three tasks that will run: the first sets the hostname, the second copies over the SSH keys, and the last one changes the default password. In the routing/files directory, we can see each switch has a folder containing the configuration files for the daemons, BGP, and interfaces.
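As an illustration only, the first two tasks in common/tasks might look like the following; the module choices and key path are my assumptions, not the post’s actual code:

```yaml
# roles/common/tasks/main.yml – a minimal sketch
- name: Set the switch hostname
  hostname:
    name: "{{ inventory_hostname }}"

- name: Copy over the SSH public key
  authorized_key:
    user: cumulus
    key: "{{ lookup('file', '~/.ssh/id_rsa.pub') }}"
```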

The group_vars folder contains a file called all, which is the global variables file. The vault folder contains our encrypted password variable, my_password, used to change the switches’ default password.

Finally, we have the docs folder that contains any relevant documentation related to the playbook. In this case, we have a topology diagram of our lab.

Now for the moment of truth; let’s kick off this playbook by running the following command: ansible-playbook lab01.yml -i hosts --ask-vault-pass -k. We’ll be prompted for the SSH and Vault passwords, and once I hit Enter, the playbook will execute against all the switches and you will see a Play Recap at the end. We can then switch back to EVE-NG and perform some tests.

EVE-NG

Let’s open the console for Leaf-01 and run the same command we ran earlier, “net show interface”. Before, we only had an IP address for eth0; now you can see we have multiple interfaces with IP addresses. Next, we’ll look at the BGP status and the route table by running the following commands:

“net show bgp summary”

“net show route ipv4 | less”

And as you can see, we have three established neighbors. We can see Leaf-01 has learned the default route from Leaf-03, and that subnet 172.18.48.0/24 was learned from the NSX-T environment. We can confirm external connectivity by pinging 1.1.1.1, which succeeds. Next, we test connectivity to the NSX-T environment by pinging 172.18.48.100, a VM attached to a T1 logical router in the NSX-T domain, and voila, we have communication!

Wrap it up already!

Remember in the beginning when I mentioned customer-specific demos? Well, in EVE-NG you can create multiple labs. If I power down all the nodes and close out of the current lab, I can navigate to a Cisco-specific environment that we can just as easily use with NSX-T. That’s going to wrap up this video. As you’ve seen, utilizing EVE-NG gives us the ability to quickly swap out the underlay networking to meet our needs. If you have any questions, drop them in the comments below. If you enjoyed this video, make sure to hit that like button and consider subscribing to my channel. See you in the next video!

A Practical Look At VMC & AWS Networking
https://www.virtualizestuff.com/2019/07/18/a-practical-look-at-vmc-aws-networking/
Fri, 19 Jul 2019

I’ve recently received questions around VMC on AWS from customers and colleagues, specifically around how VMC workloads communicate with AWS workloads for both the connected and non-connected VPCs. So I decided to put together a video that provides a practical look at establishing connectivity. The video can be found below and covers the following:

The NSX-T 2.3 Bridge Firewall bug that drove me crazy!
https://www.virtualizestuff.com/2019/02/02/the-nsx-t-2-3-bridge-firewall-bug-that-drove-me-crazy/
Sun, 03 Feb 2019

The above video demonstrates the bug and provides a workaround. 😉

FYI, the bug mentioned in this post should be resolved in the next release of NSX-T, which I believe is 2.4.

The objective of this post is to provide additional details around NSX-T Edge Bridge Profiles and, hopefully, prevent others from banging their heads against the wall when the Bridge Firewall doesn’t work. NSX-T 2.3 offers a couple of L2 bridging options:

Option 1: ESXi Bridge Cluster, which leverages two ESXi hosts in a cluster to perform the L2 bridging.

Option 2: Edge Bridge Profile, which leverages NSX-T edge nodes to perform the L2 bridging.

Edge Bridge Profiles

I’ll focus on Option 2, involving Edge Bridge Profiles. VMware provides instructions on how to implement it here, as well as a great demo video by Francois Tallet on L2 bridging. The documentation does a decent job of explaining the components as well as some of the requirements, as seen below:

One requirement missing from the above documentation is that Forged Transmits must be enabled on the port group used for bridging; otherwise, traffic will not flow. I’d also like to see use cases added around choosing ESXi Bridge Cluster vs. Edge Bridge Profile.

Note: If you’re running vSphere 6.7, you could leverage the native MAC learning capability, which William Lam covers over at virtuallyGhetto. I leveraged his MacLearn functions and confirmed they work with the Edge Bridge.

Disclaimer: I don’t believe the use of MAC learning is officially supported by VMware, therefore I don’t recommend using it in a production environment.

Even though it might not be supported officially, I’m curious to see if there is a performance gain over Promiscuous Mode, so stay tuned for a future post.

Bridge Firewall Bug

In order to leverage the NSX-T Bridge Firewall, you must be utilizing Edge Bridge Profiles. That being said, I attempted to replicate Francois’ setup in my environment but was struggling to get the Bridge Firewall to work. I created a bridge firewall rule that should prevent the virtual machine from pinging the physical server (192.168.11.20 – VLAN 11). However, communication continued to work, which was odd.

I decided to create a separate Edge Bridge Profile for VLAN 100 on a separate logical switch. In this test, I was going to bridge a physical workload that had its gateway pointing to a Tier-1 Distributed Router (DR) within the NSX-T domain. When I created the Bridge Firewall to block traffic it worked as expected.

Workaround for Scenario 1

I decided to contact VMware to see if they were aware of this situation. They informed me that this was indeed a bug and will be fixed in the upcoming 2.4 release. As a workaround, they mentioned a DR needs to be attached to the overlay LS where you want to leverage the bridge firewall. In scenario 1 above, our LS does not have a Tier-0 or Tier-1 DR. The moment I added a Test-T1 DR with a fake GW address of 1.2.3.4/24, our bridge firewall rule functioned correctly.

I hope you found this post helpful. Stay tuned for a post where I’ll discuss my NSX-T homelab. Until then, if you have any questions, don’t hesitate to reach out.

VMware’s HCX: A Quick Overview
https://www.virtualizestuff.com/2018/11/05/vmwares-hcx-a-quick-overview/
05 Nov 2018

As more organizations leverage the capabilities of VMware Cloud on AWS, it’s essential to understand the connectivity options: VPN, Direct Connect, and Hybrid Cloud Extension (HCX). I recently had the privilege of deploying HCX in our Technical Solutions Center (TSC). Today’s discussion aims to provide a high-level overview of HCX and the associated components. Before doing so, it’s important to highlight some of the migration challenges experienced by organizations today:

Dispersed versions of vSphere along with a mixture of legacy/new hardware across sites

HCX Introduction

To address these challenges, HCX provides an abstraction layer allowing vSphere on-premises and cloud resources to be presented to the application as a single resource, regardless of vSphere version (vSphere 5.5+). VMware refers to this as “infrastructure hybridity,” which allows application mobility across multiple clouds without the need to reconfigure virtual machines or infrastructure. HCX also packs a capable disaster recovery solution that’s easy to set up and manage and allows organizations to scale their DR capabilities. Organizations that currently leverage VMware Cloud providers like IBM and OVH can also utilize HCX; however, for the purposes of this post we’ll focus on the VMC on AWS implementation of HCX.

HCX Cloud vs HCX Enterprise

Before we jump into the components, it’s best to clarify HCX Cloud vs. HCX Enterprise:

If you’re a VMC on AWS customer, then you already have access to HCX at no additional cost. To automatically provision the HCX Cloud VM into your SDDC instance, simply press the “Deploy” button from the VMC console. Once deployed, you can log into the HCX Cloud web console, where you can download the HCX Enterprise OVA for use in the on-premises data center.

Pro Tip #1: Deployment of HCX services into the on-prem site automatically initiates deployment of their “peer” counterparts into the SDDC instance, as shown in step 4 of the above diagram.

Infrastructure Hybridity Components

The additional HCX service appliances mentioned above provide the “infrastructure hybridity” so let’s explore each of the components.

HCX WAN Interconnect – Handles the migration and cross-cloud vMotion capabilities over the internet or private lines to the target site. The WAN Interconnect also provides strong encryption, traffic engineering, and virtual machine mobility.

Pro Tip #2: The WAN Interconnect appliance also shows up as a fictitious ESXi host in vCenter at both sites, acting as a secure proxy for cross-cloud vMotions.

HCX Network Extension – Extends L2 networks from on-premises to the cloud without the need to change the virtual machines’ IP or MAC addresses or the on-premises infrastructure.

Pro Tip #3: Extension of NSX universal wires is not currently supported but is on the roadmap.

A great feature on the horizon for VMC customers is Proximity Routing (HCX-PR), which allows for optimized routing that eliminates the need for hairpinning between sites. There are a couple of caveats:

HCX-PR requires dynamic routing between both sites.

HCX-PR isn’t supported yet for VMC customers but is on the roadmap

Those currently using VMware Cloud providers like IBM and OVH can take full advantage of HCX-PR.

Pro Tip #4: The configuration/connectivity of the IPsec VPN is automatic between the source and target sites for their respective services (HCX WAN Interconnect and HCX Network Extension). For a visual reference, see step 5 in the above diagram.

That’s going to wrap up this HCX overview; I hope it was helpful.

vRA: Developing a NSX-T Blueprint – Part2
https://www.virtualizestuff.com/2018/06/20/vra-developing-a-nsx-t-blueprint-part2/
Wed, 20 Jun 2018

In the previous post, I discussed how to set up a development environment, the configuration settings made to the virtual machines, and converting the virtual machines to templates. This post assumes vRA is operational, as I’ll be discussing the following topics as they relate to creating/publishing the NSX-T blueprint that will be used to deploy PKS:

Network Profiles & Reservation

Network profiles contain pre-defined IP information like gateway, subnet, DNS, and IP address ranges. When a virtual machine is provisioned, vRA will assign an IP address based on the information in the network profile. There are three types of network profiles, as depicted in Figure1. I have opted for the NAT network profile (one-to-many) in order to support multiple deployments. Figure2 shows the network profiles I created for the blueprint. Next, we have to tie the external network profile to our existing port group, as shown in Figure3.

Software Component

A custom specification will destroy any user profile (tsc\administrator) customizations we make, like MTPuTTY, WinSCP, Chrome/Firefox bookmarks, desktop image, etc. So, in order to access the ControlCenter, we need a way to assign the IP address of the external interface.

During a Livefire training event for Hybrid Cloud in Boston a couple of weeks ago, I struck up a conversation with classmate Chris Smith, who focuses on cloud automation. I explained the situation above to him and wanted to get his perspective. He mentioned the easiest way is to use Software Components with bindings. After completing our session for the day, I went back to the hotel and began playing with Software Components and bindings, and to my surprise, it just worked! So a huge thank you to Chris for sharing the knowledge. Please check out his site here, as he has excellent content around vRA.

SetExternalIP

The first software component I created was SetExternalIP, with a property called “cc_ip”. We will use this property later when we configure the binding. I then proceeded to Actions, where I created a basic script that sets the IP address of the External interface. I take the IP address from the cc_ip property and assign it to [string]$IP. Line 1 of the code below ensures the variable is a string, then passes it to the -IPAddress parameter of the New-NetIPAddress cmdlet. The -InterfaceAlias parameter references the network adapter “External” that was renamed in the previous post.
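The script itself didn’t survive in this copy of the post; a minimal sketch of what it could look like follows. Only the cc_ip binding and the “External” alias come from the post; the prefix length is an illustrative assumption:

```powershell
# Sketch only – $cc_ip is injected by the vRA software component binding
[string]$IP = $cc_ip   # Line 1: force the bound value to a string

# Assign the bound IP to the adapter renamed "External" in Part 1
# (the /24 prefix length is an assumption for illustration)
New-NetIPAddress -InterfaceAlias "External" -IPAddress $IP -PrefixLength 24
```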

SetStaticRoute

The SetStaticRoute software component is simpler, having only an action that again references the External interface. I increased the metric to ensure traffic goes through the internal interface. During the blueprint design phase, we will attach both software components to the nsxt-controlcenter machine.

Blueprint Design

With the networking and software component portions complete, we can now start working on the blueprint design. To ensure our vSphere templates are imported into vRA, let’s perform an inventory sync, as shown in Figure7. Now click the “Design” tab > Blueprints > New and give the blueprint a name, as shown in Figure8. Select the transport zone and reservation policy (Figure9).

Figure7 – Sync Inventory

Figure8 – New Blueprint

Figure9 – Select Transport Zone and Reservation

We’ll start with a blank canvas, shown in Figure10.

Figure10 – Blank canvas

The first thing to do is drag some network constructs onto the canvas (Figure11):

1 x Existing Network

2 x On-Demand NAT Network

Figure11 – Add Networks

Next up are the virtual machines. We will focus on the vPodRouter and the ControlCenter VMs (Figure12 – Figure16), as all other VMs will be attached to the vSphereTransitNAT network. We drag over a vSphere (vCenter) Machine, then configure the machine to use the vPodRouter template we created in the previous post and attach it to the required networks. We now have two virtual machines attached to their corresponding networks (Figure17). Next, we attach the software components we created earlier to the nsxt-controlcenter machine (Figure18) and then bind the property “cc_ip” to “_resource~nsxt-controlcenter~ip_address” (Figure19). This will pass the IP address from the nsxt-controlcenter machine to the software component at deployment time.

vRO PowerShell Host / Event Subscription

Since this is a nested environment, we will need to enable the following security policies (Promiscuous Mode, Forged Transmits, and MAC Address Changes) in order for the virtual machines to communicate. To accomplish this, we will leverage an Event Subscription to initiate a PowerCLI script via a PowerShell host in vRO, due to my lack of JavaScript experience (but that’s something I’m working on ;)).

Add PowerShell Host

Invoke an External Script
In our case, it’s just a simple .ps1 file with Write-Output “Testing ;)” to validate things work. You can see in the screenshot above that Invoke an external script ran successfully. Important: In order to get the scripts to execute successfully, I had to update the PowerShell plugin from 1.0.11 to 1.0.13; after doing so, the scripts ran without issue.

Figure22 – Simple script to test functionality

Next, we create a vRO workflow that processes the vRA payload, looking for the two on-demand networks by finding their port group IDs, then passing that information to the PowerCLI script that will enable Promiscuous Mode, Forged Transmits, and MAC Address Changes.
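The workflow and script aren’t shown in the post; a hedged PowerCLI sketch of the security-policy piece might look like the following, where $portGroupId is an assumed hand-off from the vRO workflow:

```powershell
# Sketch only – $portGroupId is assumed to come from the vRO workflow payload
$pg = Get-VDPortgroup | Where-Object { $_.Key -eq $portGroupId }

# Enable the three policies the nested environment needs
$pg | Get-VDSecurityPolicy |
    Set-VDSecurityPolicy -AllowPromiscuous $true `
                         -ForgedTransmits $true `
                         -MacChanges $true
```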

Publishing Time

Our blueprint is ready to be published; simply click Finish and then Publish (Figure21).

In order for it to be selectable from the catalog, we need to associate the blueprint with a service, assuming Services and Entitlements have already been set up. Navigate to Administration > Catalog Items > select NSX-T Environment > Configure > Service. In my case, the service is called “VMware [Nested]”.

Note: You can also add an icon here for your blueprint.

If we go to our catalog > VMware [Nested], we should see our shiny new blueprint (Figure22). In our environment, this blueprint takes an hour to deploy (Figure23).

FYI, the blueprint named “[NAT] NSX-T Environment” is the original; the “NSX-T Environment” blueprint was created for this post, and as you can see, it’s a popular one (Figure24).

To access the environment, I simply grab the IP address for the nsxt-controlcenter from the Items tab (Figure25).

Results

Below are screenshots of the various components: vSphere, NSX-T, and PKS (Figure37 – Figure41). This has been an awesome learning experience, and I hope you enjoyed this two-post series on developing an NSX-T blueprint in vRealize Automation.

If you’re interested in seeing a video demonstration of the vRA/NSX-T environment, just leave a comment below and I’d be happy to put something together.

vRA: Developing a NSX-T Blueprint – Part1
https://www.virtualizestuff.com/2018/05/21/vra-developing-a-nsx-t-blueprint/
Tue, 22 May 2018

In today’s post, I’ll be talking about developing an NSX-T blueprint for vRealize Automation. Having developed live lab exams in the past using vCloud Director, I figured it would be interesting to do the same thing but with vRealize Automation and NSX-V. I initially started simple, creating a 3-node vSAN cluster blueprint to get my feet wet and work out any “gotchas”. After successfully deploying said blueprint, I decided to take it further, this time deploying a full NSX-T environment. NSX-T has been gaining traction in the enterprise space, so it makes sense and would allow our organization to leverage it for training and demonstrations.

Requirements:

Multiple deployments with repeatable results

Dynamic routing (BGP)

Allow for installation of solutions like PCF / PKS

Demonstrate NSX-T capabilities

3 tier app deployed and operational

Approval policy in place (due to the size of the blueprint)

Lease duration

NSX-T Blueprint Design

Figure1 – NSX-T Diagram

Creating the Dev Environment

The diagram above (Figure1) shows the end state of the NSX-T blueprint, but before we can build the vRA blueprint, we need to create a development environment leveraging NSX-V components. I created an NSX Edge called “NSXT” that has 3 interfaces connected to their respective port groups / logical switches (Figure2), with NAT applied on the uplink interface (Figure3). This allows us to deploy our NSX-T environment in isolation. All virtual machines will be assigned to the logical switch “NSXT Dummy Network”, as shown in Figure4. The external interface of the vPodRouter will be connected to “NSXT vPodRouter Network”, and the ControlCenter external interface will be connected to the vDS port group “dVS-Prod-Mgmt_VLAN12”, as illustrated in Figure1. We’ll have 16 total virtual machines making up the vRA NSX-T blueprint once everything is said and done.

Important:

Make sure Promiscuous mode, MAC address changes, and Forged transmits have been set to “Accept”; otherwise our nested virtual machines and the NSX-T overlay won’t work properly.

Set the MTU size on the vDS that’s hosting the nested environment to 9000. In the dev environment, I couldn’t get jumbo frames (9000) to work when testing with vmkping, so I dropped the MTU down to 8000 for the nested vDS and VMkernel interfaces. I believe this has to do with VXLAN overhead, but I need to capture some packets to confirm.
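For reference, MTU testing from an ESXi host can be done with vmkping using the don’t-fragment flag; the VMkernel interface name and target IP below are placeholders:

```sh
# Run from the ESXi shell – vmk1 and 10.0.0.2 are placeholders
# 8972 = 9000 bytes minus 20 (IP header) and 8 (ICMP header) bytes
vmkping -I vmk1 -d -s 8972 10.0.0.2
```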

Figure2 – NSX Edge Interfaces

Figure3 – NSX Edge NAT

Figure4 – NSXT Dummy Network

Virtual Machine Configurations

To keep this post manageable, I won’t be providing step-by-step details, as most folks are familiar with deploying vSphere components and the NSX-T documentation is pretty good too. I will, however, provide screenshots of specific settings made to the NSX-T environment:

DOMAIN CONTROLLER:

Update/Patch OS

Install the following roles:

AD DS

DHCP

DNS

File and Storage Services (NFS)

CONTROLCENTER:

Update/Patch OS

Customize Windows Profile – This is why we don’t want to use a custom spec in vRA

SSH, WinSCP, RDCMan, Chrome, Firefox, and Map a Network Share

Additional configurations:

vRA configuration (to be discussed in the follow-up post):

Install vRA guest agent

Change default route metric [Software Component]

Assign static IP Address [Software Component]

Bind IP Address to ControlCenter [Bindings]

Rename Network Adapters (Figure5)

Add a persistent route to the network you will be RDPing from (Figure6).

Once you have gone through the configuration process for all your virtual machines, it should look similar to Figure17.

Figure17 – Results

THE NESTED ENVIRONMENT

Below are a couple screenshots of the NSX-T Dashboard (Figure18) and vCenter (Figure19). This environment should allow individuals the ability to deploy PKS with the management components deployed to the Mgmt cluster and the K8s clusters to the Compute-PKS cluster.

Figure18 – NSX-T Dashboard

Figure19 – vCenter Hosts / Clusters

Figure20 – vCenter Datastores

Figure21 – vCenter Networks

As you can see from Figure19, we have a basic 3-tier application deployed in order to test the NSX-T load balancer. The 3-tier application was created using Doug Baer’s three-part series called “HOL Three-Tier Application“. Thank you, Doug, for the excellent post! Once the VMs were deployed, I attached them to their respective logical switches. I confirmed both web tier VMs (Figure22 & Figure23) were accessible, as well as the NSX-T LB VIP “webapp.tsc.local” (Figure24):

Figure22 – Web-01a

Figure23 – Web-02a

Figure24 – WebApp VIP

CONVERT TO TEMPLATES

Create a temp vDS port group to put your virtual machines on before converting them to templates. When I deployed the blueprint, I realized some of my ESXi hosts were still attached to the original dev logical switch, as shown in Figure25.

Figure25 – Network Adapter 5 still attached to Dev LS

With testing complete and all virtual machine network adapters on a temp port group, it’s now time to shut down the NSX-T dev environment and convert the virtual machines to templates. Save your wrist and use PowerCLI to accomplish this! That’s going to wrap up this post; stay tuned for the next one, where we will go through the process of creating our NSX-T vRA blueprint!
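As a hedged example of what that PowerCLI step could look like (the VM name pattern is an assumption, not from the post):

```powershell
# Sketch only – "nsxt-*" is a placeholder pattern for the lab VMs
Get-VM -Name "nsxt-*" |
    Where-Object { $_.PowerState -eq "PoweredOff" } |
    Set-VM -ToTemplate -Confirm:$false
```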

NSX-T – How to Attach KVM VM to Logical Switch
https://www.virtualizestuff.com/2018/04/15/nsx-t-how-to-attach-kvm-vm-to-logical-switch/
Sun, 15 Apr 2018

Introduction

Having recently deployed NSX-T in our environment, I can say the deployment and configuration were straightforward using the installation docs. When it came time to attach a VM hosted on a KVM host, it was a bit unclear how to accomplish this. VMware’s documentation mentions the following command:

virsh dumpxml <your vm> | grep interfaceid

As you can see from the screenshot above, I have no interfaceid. What gives? Admittedly, when it comes to KVM and Open vSwitch, I’m a bit of a novice. The purpose of this post is to provide additional details around VM configuration on a KVM host for those in a similar situation.

After reviewing VMware’s documentation, I decided to jump into VMware’s NSX-T Hands-on Labs to see how the VMs on the KVM hosts were configured:

Reconfigure web-05 Virtual Machine

Excellent. Now, the first thing to do is dump the XML configuration of web-05 to a file with the following command:

virsh dumpxml web-05 > web-05.xml

After editing the XML to match that of VMware’s Hands-on Labs, we can proceed with shutting down web-05.
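For reference, the interfaceid that the virsh command greps for comes from libvirt’s Open vSwitch virtual port definition on the VM’s interface; a sketch of such a stanza is below, where the bridge name, MAC address, and UUID are placeholders, not values from the lab:

```xml
<!-- Sketch only: bridge name, MAC address, and UUID are placeholders -->
<interface type='bridge'>
  <mac address='52:54:00:00:00:01'/>
  <source bridge='nsx-managed'/>
  <virtualport type='openvswitch'>
    <!-- NSX-T identifies the VIF by this UUID -->
    <parameters interfaceid='00000000-0000-0000-0000-000000000001'/>
  </virtualport>
  <model type='virtio'/>
</interface>
```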

How to fix “Failed to Deploy Edge Appliance” in vRA
https://www.virtualizestuff.com/2017/12/30/how-to-fix-failed-deploy-edge-appliance-vra/
Sat, 30 Dec 2017

When attempting to deploy an On-Demand NAT / LB within vRA, I came across this lovely error message:

After scratching my head and reviewing the DEM logs, nothing was jumping out at me. The error message seemed generic enough, so I googled “failed to deploy edge appliance vra”. Sure enough, the first hit I get is a VMware KB article, which mentions the following:

“When trying to create a multimachine blueprint in VMware vRealize Automation (formerly known as VMware vCloud Automation Center) with NSX 6.0.1 using a single datastore cluster, the Edge deployment fails.”

Note: I didn’t see a similar error message in the DEM logs as described in the article.

In our environment, we are running vRA 7.3 and NSX 6.3.3, so I began to check the Reservations and noticed the storage was indeed pointing to a single datastore cluster, as shown below:

After selecting the individual datastores within the cluster I was able to successfully deploy an On-Demand NSX blueprint within vRA!

How to Remove NSX Security Policies with Rules using PowerNSX
https://www.virtualizestuff.com/2017/08/24/how-to-remove-nsx-security-policies-with-rules-using-powernsx/
Thu, 24 Aug 2017

In the previous post, we discussed how to edit an existing security policy using PowerNSX. We will round out this series by talking about how to remove NSX security policies with rules using PowerNSX.

The video above demonstrates the cmdlets discussed in this post.

Disclaimer: The code shown in this post is not included in the PowerNSX module. There is still work to be done, as I need to write Pester tests for these cmdlets to ensure everything works as expected and doesn’t break anything else. That said, all the code has been used in a production environment without issue.

Cmdlets:

Remove-NsxSecurityPolicyFwRule:

Removes firewall rules from an existing Security Policy.

As you can see, manipulating security policies via PowerNSX allows for an automated and streamlined approach to managing NSX objects. The days of having to deal with vCenter Web Client reload error messages are over! That’s going to wrap up this series; I encourage NSX administrators to have a look at PowerNSX, as it can simplify management.

In the previous post found here, we discussed how to create security policies with PowerNSX. In this post, I’ll demonstrate how to edit existing security policy firewall rules and then apply security policies to security groups with PowerNSX.

The video above demonstrates the cmdlets discussed in this post.

Disclaimer: The code shown in the video is not included in the PowerNSX module. There is still work to be done, as I need to write Pester tests for these cmdlets to ensure everything works as expected and doesn’t break anything else. That said, all the code has been used in a production environment without issue.