William Lam

I recently had a need for a basic Kubernetes (k8s) Cluster which I also needed to have running locally in my vSphere Home Lab for testing purposes. I know there are a number of great blog articles out there that show you how to set up your own k8s from scratch, including a recent blog series from Myles Gray. However, I was looking for something quick that I could consume without requiring any setup. To be honest, installing your own k8s from scratch is so 2017 😉

If you ask most people, they simply want to consume k8s as an integrated solution that just works, without having to worry about installing and managing the underlying components that make up k8s. VMware PKS and VMware Cloud PKS are two great examples of this, where Pivotal and VMware provide a comprehensive solution (including Software Defined Networking) for managing the complete lifecycle (Day 0 to Day N) of running Enterprise K8s, whether that is within your own datacenter or running as a public cloud service. For my exploratory use case, PKS was overkill and I also did not have the required infrastructure setup in this particular environment, so I had to rule that out for now.

While searching online, I accidentally stumbled onto a recent VMware Open Source project called sk8s, short for Simple Kubernetes (k8s), which looked really interesting. At first glance, a few things stood out to me immediately. This project was created by none other than Andrew Kutz. For those not familiar with Andrew's work, he famously created the Storage vMotion UI plugin for the vSphere C# Client before VMware had a native UI for the feature. He was also the creator of the first vCenter Simulator back in the day, called simDK, which was widely used by a number of customers including myself. I knew Andrew had joined our Cloud Native Business Unit (CNABU), but I was not sure what he was up to these days. I guess I now know 🙂 he is helping both VMware and the OSS community with k8s development.

As the adoption of VMware Cloud on AWS (VMC) continues to accelerate, one of the very first interfaces that customers must interact with is the NSX-T UI, for enabling basic connectivity. By default, the Edge Gateway has a Deny All Firewall Rule, so you will need to come to this screen to set up connectivity from your on-premises environment, whether over a Direct Connect (DX) or a Route/Policy-Based VPN. For customers who have familiarized themselves with the NSX-T UI and its capabilities, the next order of business is usually figuring out how to automate these various aspects, from Day 0 setup all the way to Day N, where they are migrating in or creating additional workloads.

A very common question that I have been getting lately is: which API do I need to look at to do X that I see in the NSX-T UI in VMC?

Having spent some time with the NSX-T Policy API, I figured it would be useful to share the categories of the NSX-T Policy API that map back to what you see in the NSX-T UI in VMC. The list below is not exhaustive, but it should point you in the right direction when you need to automate a particular operation.
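
To make the automation path a bit more concrete, below is a minimal Python sketch (using the requests library) of how you can authenticate with a CSP refresh token and call the NSX-T Policy API through the VMC reverse proxy. The refresh token, Org ID and SDDC ID are placeholders, and the Compute Gateway firewall rule path is just one example; check the Policy API documentation for the exact path your operation needs.

```python
# Minimal sketch, assuming a CSP refresh token plus the Org and SDDC IDs
# (all placeholders). It exchanges the token, looks up the SDDC's NSX-T
# reverse proxy URL and then calls one Policy API path as an example.
import requests

REFRESH_TOKEN = "xxxx"   # CSP API token from console.cloud.vmware.com (placeholder)
ORG_ID = "xxxx"          # placeholder
SDDC_ID = "xxxx"         # placeholder

# 1. Exchange the CSP refresh token for an access token
csp_token = requests.post(
    "https://console.cloud.vmware.com/csp/gateway/am/api/auth/api-tokens/authorize",
    params={"refresh_token": REFRESH_TOKEN}).json()["access_token"]
headers = {"csp-auth-token": csp_token}

# 2. Look up the SDDC to find its NSX-T reverse proxy URL
sddc = requests.get(
    f"https://vmc.vmware.com/vmc/api/orgs/{ORG_ID}/sddcs/{SDDC_ID}",
    headers=headers).json()
proxy_url = sddc["resource_config"]["nsx_api_public_endpoint_url"]

# 3. Call a Policy API path through the reverse proxy, e.g. the Compute
#    Gateway firewall rules (path chosen for illustration)
rules = requests.get(
    f"{proxy_url}/policy/api/v1/infra/domains/cgw/gateway-policies/default/rules",
    headers=headers).json()
for rule in rules.get("results", []):
    print(rule.get("display_name"), rule.get("action"))
```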

In the previous article, we reviewed the concepts and basic approach to building your own VMware Virtual Appliance (OVF/OVA). In Part 2, we are now going to take a look at a reference implementation for building a Linux VA using VMware PhotonOS. Although I am using PhotonOS as the guest, you can apply these same techniques to any other Linux distribution of your choice.

Step 1 - Create a new VM in vCenter Server and then install PhotonOS using the ISO. Once you have completed the OS installation, you may want to apply any patches or install any packages that you want included as part of your VA. Once that is done, go ahead and shut down the VM.

Step 2 - Select the VM in the vSphere Inventory, click on Configure->vApp and then check the Enable vApp Options box. Once enabled, select OVF environment for the IP allocation scheme. In the OVF Details tab, select VMware Tools for the OVF environment transport. (Optionally) You can specify some additional metadata, including the appliance name and URLs, to help others who may be consuming your VA once it has been exported to an OVF/OVA.

Step 3 - Next, add the following 6 OVF properties, which will be used as input to configure networking within PhotonOS. Click Add and provide a Label, Key and an optional Category for each property. (If you would rather script this step, a rough sketch follows the table below.)

| Label      | Key                  | Category   |
|------------|----------------------|------------|
| Hostname   | guestinfo.hostname   | Networking |
| IP Address | guestinfo.ipaddress  | Networking |
| Netmask    | guestinfo.netmask    | Networking |
| Gateway    | guestinfo.gateway    | Networking |
| DNS Server | guestinfo.dns        | Networking |
| DNS Domain | guestinfo.domain     | Networking |
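
As referenced above, here is a rough pyVmomi sketch that covers Steps 2 and 3 programmatically: it enables the VMware Tools OVF environment transport and adds the six guestinfo.* properties. The vCenter connection details and the VM name "photon-va" are placeholders for your own environment.

```python
# Rough pyVmomi sketch: enable the VMware Tools OVF environment transport and
# add the six guestinfo.* OVF properties from the table above. vCenter
# details and the VM name "photon-va" are placeholders.
import ssl
from pyVim.connect import SmartConnect
from pyVmomi import vim

si = SmartConnect(host="vcenter.example.com", user="administrator@vsphere.local",
                  pwd="VMware1!", sslContext=ssl._create_unverified_context())
content = si.RetrieveContent()

# Find the PhotonOS VM by name using a container view
view = content.viewManager.CreateContainerView(content.rootFolder,
                                               [vim.VirtualMachine], True)
vm = next(v for v in view.view if v.name == "photon-va")

props = [("Hostname", "guestinfo.hostname"),
         ("IP Address", "guestinfo.ipaddress"),
         ("Netmask", "guestinfo.netmask"),
         ("Gateway", "guestinfo.gateway"),
         ("DNS Server", "guestinfo.dns"),
         ("DNS Domain", "guestinfo.domain")]

vapp_spec = vim.vApp.VmConfigSpec()
vapp_spec.ovfEnvironmentTransport = ["com.vmware.guestInfo"]  # VMware Tools transport
vapp_spec.property = [
    vim.vApp.PropertySpec(
        operation="add",
        info=vim.vApp.PropertyInfo(key=i, id=key, label=label,
                                   category="Networking", type="string",
                                   userConfigurable=True))
    for i, (label, key) in enumerate(props)]

vm.ReconfigVM_Task(vim.vm.ConfigSpec(vAppConfig=vapp_spec))
```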

Step 4 - Power the VM back on and, once it is available on the network (assuming DHCP), download and copy the sample first boot script rc.local to /etc/rc.d/rc.local. This script is where all the magic happens: it processes the OVF property input and then configures the network settings. Right now it treats these fields as optional, meaning that if they are left blank, it will default the system to DHCP. If you provide all of the input properties, then it will go ahead and configure a static network address.
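
The snippet below is not the actual rc.local from the post, just a small Python illustration of the underlying idea: with the VMware Tools OVF transport enabled, the guest can read the injected OVF environment and decide between DHCP and a static configuration.

```python
# Illustration only (the real first boot logic lives in the rc.local script):
# read the OVF environment injected via VMware Tools and pick DHCP vs. static.
import subprocess
import xml.etree.ElementTree as ET

# With the VMware Tools transport, the OVF environment XML is exposed here
ovf_env = subprocess.check_output(
    ["vmtoolsd", "--cmd", "info-get guestinfo.ovfEnv"]).decode()

OE = "{http://schemas.dmtf.org/ovf/environment/1}"
props = {p.attrib[OE + "key"]: p.attrib[OE + "value"]
         for p in ET.fromstring(ovf_env).iter(OE + "Property")}

required = ["guestinfo.ipaddress", "guestinfo.netmask", "guestinfo.gateway"]
if all(props.get(k) for k in required):
    print("Configuring static IP", props["guestinfo.ipaddress"])
else:
    print("One or more properties left blank, defaulting to DHCP")
```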

Today, I am very excited to announce a new Fling that I have been working on, a Native Driver for ESXi that will enable support for three of the most popular USB network adapter chipsets found in the market today: the ASIX USB 2.0 gigabit ASIX88178a, the ASIX USB 3.0 gigabit ASIX88179 and the Realtek USB 3.0 gigabit RTL8153. This effort initially started back in 2016 as a side project with Songtao, a VMware Engineer who works on our USB stack for ESXi. Based on the enormous amount of feedback from the community as well as customer Production use cases, this side project evolved into the development of a full-fledged Native Driver for ESXi.

This Fling is about more than just adding additional network interfaces for vSphere Home Labs, which is definitely a use case; it is also about enabling new and future computing platforms that may not always have the traditional network connectivity we have come to expect. Today, ESXi supports a number of high-end network controllers (10G/40G/100G) designed for Enterprise Data Centers that include advanced networking & low latency features. As more & more workloads appear at the Edge, such as IoT, point-of-sale & remote office use cases, the traditional networking solutions may no longer meet the needs of these new infrastructures.

For Edge computing environments, reducing cost & power consumption is definitely one of the driving factors. However, with some of these platforms, their form factors can make it difficult or impossible to support traditional high-end network controllers. Luckily, there are a number of options for network adapters in the market, but it can also be difficult to support them all.

USB has become one of the most widely adopted connection types in the world, & USB network adapters are also popular amongst Edge computing platforms. In some platforms, there are either limited or no PCI/PCIe slots for I/O expansion, & in some cases an Ethernet port is not even available. This Fling will hopefully help enable some of these Edge use cases today, and with the help of the community and its feedback, we can see how this can be enhanced or evolved over time, including whether it could even become part of the ESXi distribution.

Another use case for USB-based network adapters, as mentioned earlier, is vSphere Home Labs: platforms like the Intel NUC or Apple Mac Mini have a limited number of built-in Ethernet ports, but plenty of USB & USB-C ports, which can provide these platforms with additional networking capabilities. These systems could also be potential Edge platform candidates given the right connectivity.

The vCenter Server Events sub-system is an incredibly rich and powerful interface that enables customers to monitor, alert and even trigger additional actions based on a particular event. One such example that I have written about before is to key off of a VM provisioned event and automatically apply security hardening settings when the VM is created or cloned. This can be useful if customers are not taking advantage of VM Templates, or if a VI Admin manually creates a VM from scratch; you can still ensure you have a compliant VM deployment through the use of Automation. You can either poll for the VM created event and then execute a script, as shown in this example, or you can automatically trigger a remote action by generating an SNMP trap when the event actually occurs.
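
As a rough illustration of the polling approach (not the exact script from that example), the pyVmomi sketch below queries the Event sub-system for recent VM created, cloned and deployed events; the vCenter connection details are placeholders.

```python
# Rough pyVmomi sketch of polling for VM created/cloned/deployed events.
# vCenter connection details are placeholders.
import ssl
from datetime import timedelta
from pyVim.connect import SmartConnect
from pyVmomi import vim

si = SmartConnect(host="vcenter.example.com", user="administrator@vsphere.local",
                  pwd="VMware1!", sslContext=ssl._create_unverified_context())

# Only look at events from the last hour
filter_spec = vim.event.EventFilterSpec(
    eventTypeId=["VmCreatedEvent", "VmClonedEvent", "VmDeployedEvent"],
    time=vim.event.EventFilterSpec.ByTime(
        beginTime=si.CurrentTime() - timedelta(hours=1)))

for event in si.content.eventManager.QueryEvents(filter_spec):
    print(event.createdTime, event.vm.name, event.fullFormattedMessage)
    # event.vm.vm is the VM reference where hardening settings would be applied
```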

The possibilities are truly endless on what you can do with vCenter Events, and for the complete list of all Event types, you can refer to the vSphere API documentation here. One thing to be aware of is that not every operation within vCenter Server generates an Event; one example of this is when a Folder object is created or deleted. You can use the vCenter Server Tasks sub-system to query for this info, but there is no respective vCenter Event that you can key off of to generate an Alarm, for example. This was something I had noticed myself and had assumed was a limitation of the platform or of the feature teams that publish VC Events.

Recently, this question came up again from a customer who was looking for a way to trigger an alarm every time a VM Folder was created. I took another look at this and came to learn about a more generic type of Event, called a Task Event, that can be used to create an Alarm for use cases where a native VC Event may not exist.
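
Purely as a hedged sketch of that idea, the pyVmomi snippet below defines an alarm whose expression keys off a Task Event rather than a dedicated VC Event. The eventTypeId value and the comparison attribute/value are my assumptions for illustrating folder creation, so verify them against your environment and the vSphere API documentation before relying on them.

```python
# Hedged pyVmomi sketch: define an alarm keyed off a Task Event instead of a
# dedicated vCenter Event. The eventTypeId and the comparison attribute/value
# below are assumptions for illustration; verify them in your own environment.
import ssl
from pyVim.connect import SmartConnect
from pyVmomi import vim

si = SmartConnect(host="vcenter.example.com", user="administrator@vsphere.local",
                  pwd="VMware1!", sslContext=ssl._create_unverified_context())
content = si.RetrieveContent()

expression = vim.alarm.EventAlarmExpression(
    eventTypeId="TaskEvent",           # assumed identifier for Task Events
    status="yellow",
    comparisons=[vim.alarm.EventAlarmExpression.Comparison(
        attributeName="info.descriptionId",   # assumption: task description id
        operator="equals",
        value="Folder.createFolder")])

spec = vim.alarm.AlarmSpec(
    name="VM Folder Created",
    description="Triggers when a folder creation task event occurs",
    enabled=True,
    expression=expression,
    setting=vim.alarm.AlarmSetting(toleranceRange=0, reportingFrequency=0))

# Define the alarm at the root folder so it applies across the inventory
content.alarmManager.CreateAlarm(entity=content.rootFolder, spec=spec)
```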

I recently received a question from one of our VMware Cloud on AWS (VMC) field folks who was looking to programmatically retrieve the SDDC Public IP Address, which is shown under the NSX-T Networking & Security Overview page within the VMC Console, as shown in the screenshot below.

This actually had me stumped for a bit, as I was not able to find anything mentioned in the NSX-T Policy API documentation. My last resort before pinging the NSX Engineers was to use one of my favorite browser tools, Chrome Developer Tools, which allows me to inspect all requests made to a specific web page and can also be helpful in figuring out which REST APIs the UI is using.

It turns out that for this particular page, the information was not actually coming from the NSX-T Policy API but rather from another endpoint, specifically /cloud-service/api/v1/infra/sddc-user-config, which I am guessing has to do with the fact that some of this information is really AWS-specific, such as the Public IP Address. In any case, once I realized what the endpoint was and that I could still use the VMC NSX-T Reverse Proxy to retrieve the details, it was pretty straightforward.
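
For completeness, here is a quick Python sketch of that call, assuming you already have a CSP auth token and the SDDC's NSX-T reverse proxy URL (both placeholders below, obtained the same way as in the earlier Policy API sketch); I simply print the raw JSON rather than guess at the exact field names in the response.

```python
# Quick sketch: retrieve the sddc-user-config payload (which includes the
# SDDC Public IP) through the VMC NSX-T Reverse Proxy. The token and the
# reverse proxy URL are placeholders.
import json
import requests

CSP_TOKEN = "xxxx"                                  # access token from the CSP token exchange
NSX_PROXY_URL = "https://<nsx-reverse-proxy-url>"   # nsx_api_public_endpoint_url of the SDDC

config = requests.get(
    f"{NSX_PROXY_URL}/cloud-service/api/v1/infra/sddc-user-config",
    headers={"csp-auth-token": CSP_TOKEN}).json()

# Inspect the raw payload and pick out the Public IP field from the response
print(json.dumps(config, indent=2))
```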


Author

William Lam is a Staff Solutions Architect working in the VMware Cloud on AWS team within the Cloud Platform Business Unit (CPBU) at VMware. He focuses on Automation, Integration and Operation of the VMware Software Defined Datacenter (SDDC).