VMware Hands-on Labs - HOL-2013-01-SDC

VMware Tech Preview - Disclaimer

This session may contain product features that are currently under development.

This session/overview of the new technology represents no commitment from VMware to deliver these features in any generally available product.

Features are subject to change, and must not be included in contracts, purchase orders, or sales agreements of any kind.

Technical feasibility and market demand will affect the final delivery.

Pricing and packaging for any new technologies or features discussed or presented have not been determined.


Introduction

Welcome to the vSphere 7 with Kubernetes Lightning Lab!

We have developed Lightning Labs to help you learn about VMware products in small segments of time. In this lab, you will go through the process of creating a Kubernetes Supervisor cluster within VMware's vSphere 7 with Kubernetes platform.

What is vSphere 7 with Kubernetes?

vSphere 7 with Kubernetes empowers IT Operators and Developers to accelerate innovation by converging containers and VMs into VMware's vSphere platform with native Kubernetes. VMware has leveraged Kubernetes to rearchitect vSphere and extend its capabilities to all modern and traditional applications.

vSphere 7 with Kubernetes transforms vSphere into the app platform of the future.

Enterprises can now accelerate the development and operation of modern apps on VMware vSphere while continuing to take advantage of existing investments in technology, tools, and skillsets. By leveraging Kubernetes to rearchitect vSphere, vSphere 7 with Kubernetes will enable developers and IT operators to build and manage apps composed of containers and/or virtual machines. This approach gives enterprises a single platform to operate existing and modern apps side-by-side.

vSphere 7 with Kubernetes exposes a new set of services that developers can consume through the Kubernetes API. The VMware Tanzu Kubernetes Grid Service for vSphere allows developers to manage the lifecycle of Kubernetes clusters on demand. The Network Service enables integrated Load Balancing, Ingress, and Network Policy for developers' Kubernetes clusters.

The Storage Service integrates vSphere Cloud Native Storage into Kubernetes to provide stateful application support with Persistent Volumes backed by vSphere Volumes.

The vSphere Pod Service takes advantage of the other services to deliver Pods natively on ESXi. The primary place that customers will run Pods is in the upstream-aligned, fully conformant clusters deployed through the Tanzu Kubernetes Grid Service. The Pod Service complements the TKG Service for specific use cases where the application components need the security and performance isolation of a VM in a Pod form factor.

And finally, vSphere 7 with Kubernetes has a native registry service that can be used to deploy container images as Kubernetes pods.

Application-focused management means that policy is attached to namespaces that contain developers' applications. Operations teams now have a holistic view of an application by managing the namespace that contains all of the application objects.

Video: vSphere 7 with Kubernetes Demo (3:42)

See vSphere 7 with Kubernetes in action in this brief demonstration!

Module 1 - Deploying a Supervisor Kubernetes Cluster (15 minutes)

Introduction

The Supervisor cluster is a special kind of Kubernetes cluster that uses ESXi hosts as its worker nodes instead of Linux VMs. This is achieved by integrating a Kubelet (our implementation is called the Spherelet) directly into ESXi. The Spherelet doesn't run in a VM; it runs directly on ESXi.

Supervisor Cluster

Workloads deployed on the Supervisor, including Pods, each run in their own isolated VM on the hypervisor. To accomplish this, we have added a new container runtime to ESXi called the CRX. The CRX is like a virtual machine that includes a Linux kernel and a minimal container runtime inside the guest. But since this Linux kernel is coupled with the hypervisor, we are able to make a number of optimizations to effectively paravirtualize the container.

The supervisor includes a Virtual Machine operator that allows Kubernetes users to manage VMs on the Supervisor. You can write deployment specifications in YAML that mix container and VM workloads in a single deployment that share the same compute, network and storage resources.
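As a rough illustration, a VM managed through the VM operator could be declared in YAML like the sketch below. The API version, field names, and the class/image/storage names are assumptions for illustration based on the vmoperator CRDs, not confirmed syntax for this build:

```yaml
# Hypothetical VirtualMachine manifest managed by the VM operator.
# All names and field values below are illustrative assumptions.
apiVersion: vmoperator.vmware.com/v1alpha1
kind: VirtualMachine
metadata:
  name: ubuntu-vm
  namespace: hol
spec:
  imageName: ubuntu-20.04          # a Machine Image, backed by a Content Library item
  className: best-effort-small     # a Machine Class defining CPU/memory sizing
  storageClass: high-performance-ssd
  powerState: poweredOn
```

Because this is a normal Kubernetes object, it could sit alongside Deployments and Services in the same set of YAML files and be applied with kubectl.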

The VM operator is just an integration with vSphere's existing virtual machine lifecycle service, which means that you can use all of the features of vSphere with Kubernetes-managed VM instances. Features like RLS settings, Storage Policy, and Compute Policy are supported.

In addition to VM management, the operator provides APIs for Machine Class and Machine Image management. To the VI admin, Machine Images are just Content Libraries.

In the following interactive simulation, we will demonstrate how to deploy a Supervisor cluster in a vSphere environment.

Namespaces

Using vSphere 7 with Kubernetes, admins now have the ability to create Namespaces within vSphere. The vSphere Namespace is an abstraction to which admins attach policy and which they then assign to development teams. More specifically, authentication and authorization for Namespaces are enabled through vSphere Single Sign-On, and administrators align Storage and Network policy with corresponding Kubernetes constructs through the Namespace. Administrators are able to create and manage these Namespaces directly through the vSphere Web Client.

This part of the lab is presented as a Hands-on Labs Interactive Simulation. This will allow you to experience steps that are too time-consuming or resource-intensive to do live in the lab environment. In this simulation, you can use the software interface as if you are interacting with a live environment.

In this stage of the lab, we will be walking through the creation of a Kubernetes cluster from the perspective of an infrastructure administrator.

When finished, click the “Return to the lab” link to continue with this lab.

The lab continues to run in the background. If the lab goes into standby mode, you can resume it after completing the module.

ISim Notes Admin Persona - Do Not Publish

This part of the lab is presented as a Hands-on Labs Interactive Simulation. This will allow you to experience steps that are too time-consuming or resource-intensive to do live in the lab environment. In this simulation, you can use the software interface as if you are interacting with a live environment.

The orange boxes show where to click, and the left and right arrow keys can also be used to move through the simulation in either direction.

vSphere 7 with Kubernetes is the new generation of vSphere for modern applications and is referred to as the Supervisor cluster. This fundamentally changes how developers get access to vSphere infrastructure and how IT Operations provides governance.

In the following scenario, the application platform team has requested resources for a new application that is being worked on by the development team. The IT operations team will enable a Supervisor Cluster on an existing ESXi Cluster. Once the Supervisor cluster is enabled, resources are allocated to the development team via the creation of a Namespace. Let's start by enabling the Supervisor Cluster in vCenter.

Enable Supervisor Cluster

Click Menu

Click Workload Platform

Click the scroll bar

Click I'm Ready

This is a list of ESXi clusters that are compatible with being enabled as a Supervisor Cluster. Currently that means HA and DRS are enabled on the cluster.

Click Site01-Cluster01

Click OK

Enabling the Supervisor Cluster means that a Kubernetes control plane will be deployed onto the ESXi Cluster. This will be a highly available, multi-master deployment with etcd and the Kubernetes API stacked into each node. For this small environment, we are deploying a single-Master control plane. The selection on this screen defines the size of the Master VM and an estimate of the number of pods that could be deployed.

Click Tiny

Click Next

Next we add networking to the Control Plane configuration. Each Master node will have multiple network interfaces. The first one is the Management network and supports traffic to vCenter.

Click Select Network

Click VM Network

Click the Starting Master IP field and type 192.168.120.10

Click in the Subnet Mask field and type 255.255.255.0

Click in the Gateway field and type 192.168.120.1

Click in the DNS Servers field and type 192.168.110.10

Click in the DNS Search Domains field and type corp.local

Click in the NTP Servers field and type 10.166.1.120

Click the scroll bar

The other network interfaces support traffic to the Kubernetes API and to the Pods/Services that are deployed on the Supervisor cluster. This network is supported by NSX. Choose the NSX Distributed Switch and Edge. The Master Nodes will be assigned Virtual IPs (VIPs) and be fronted by a load balancer with its own VIP. Internal traffic from the Kubernetes nodes (the ESXi hosts) to the Masters also goes through a load balancer.

Click Select a VDS for the Namespace network

Click the available VDS in the list

Click on Select an Edge Cluster

Click edge-cluster-01

Click in the DNS Servers field and type 192.168.110.10

Pod and Service CIDRs are the network segments that are assigned to the Kubernetes Pods and Services that are created on the cluster. These IPs are private to the cluster and do not get routed. Routing into and out of the cluster happens via the segments defined with Ingress and Egress CIDRs. The Ingress CIDR IPs are the VIPs assigned to Master Nodes and its Load Balancer, as well as any Kubernetes Service that must be accessed from outside the cluster. VIPs are also assigned to each Namespace with an SNAT rule for any outbound traffic. These VIPs come from the Egress CIDR.

Click in the Ingress CIDRs field and type 192.168.124.0/28

Click in the Egress CIDRs field and type 192.168.124.32/28

Click Next

Next we will choose where storage objects are placed. Note that storage placement is done via vSphere Storage Policies rather than by individual Datastores. Choose the Storage policy that defines where the Master VMs will be placed.

Click Select Storage for the Master Node

Click high-performance-ssd

Click OK

Ephemeral disks are the volumes that are created for the native Pods running on ESXi hosts. Choose where these should reside.

Click Select Storage for the Ephemeral Disks

Click high-performance-ssd

Click OK

As part of supporting pods running natively on ESXi, we have created the capability to cache images that are downloaded for the containers running in the pods. Subsequent pods using the same image will pull from the local cache rather than the external container registry. Choose the storage policy that defines where this cache should be located.

Click Select Storage for the Image Cache

Click high-performance-ssd

Click OK

Click Next

Click Finish

Click Menu

Click Hosts and Clusters. There is a new resource pool called Namespaces. It will include inventory objects created on this Supervisor cluster.

Click the Namespaces resource pool. Note that the Kubernetes MasterAPI VM is running, which means Kubernetes has been enabled directly on the cluster Site01-Cluster01.

Create a Namespace

Now that the Supervisor cluster has been enabled, we will create a new Namespace. A vSphere Namespace is an extension of the core Kubernetes namespace construct and allows vSphere admins to attach policy that spans both vSphere and Kubernetes. All developer workloads run in the namespaces to which they are assigned.

Click Menu

Click Workload Platform

Click on the Namespaces tab

Click Create Namespace

Click Site01-Datacenter01

Click Site01-Cluster01

Click in the Name field and type hol (Note: We use lower case because it is a Kubernetes namespace)

Click Create. This creates a namespace in the Supervisor cluster and a vCenter namespace object.

Click Menu

Click Host and Clusters

Click hol under the Namespaces resource pool to view the Summary page

Add Developer Access

We will now grant edit permissions for the namespace to a user named Fred who is a member of the development team. A Kubernetes role binding is created and the application platform team will be able to create objects in the Kubernetes namespace. Users will be authenticated to the cluster using vSphere Single Sign-On.

Click Add Permissions

Click Select Domain

Click vsphere.local

Click in the User/Group field and type Fred

Click fred in the search results

Click Select Role

Click Can Edit

Click OK

Add Storage Policy to Namespace

vSphere administrators can associate storage policies to this namespace. Once assigned, this will automatically create a Kubernetes Storage Class on the Supervisor cluster and associate it to this policy. It also causes an unlimited resource quota to be assigned for the Namespace on that storage class. Storage limits can be configured and are enforced through Kubernetes Storage Class resource quotas assigned to the namespace. These storage classes are used in the placement of Kubernetes persistent volumes that are created within the Namespace.

Click Add Storage

Click high-performance-ssd

Click OK

Add CPU and Memory Limits

CPU and Memory limits can also be set for the namespace. Each namespace is backed by a resource pool where these limits are enforced.

Click the Configure tab

Click Resource Limits

Click Edit

Click in the CPU field and type 3000

Click in the Memory field and type 1000

Click in the high-performance-ssd field and type 2000

Click OK

Click the Summary tab and review details

In the next simulation, see how members of the development team can use the new namespace.

This part of the lab is presented as a Hands-on Labs Interactive Simulation. This will allow you to experience steps that are too time-consuming or resource-intensive to do live in the lab environment. In this simulation, you can use the software interface as if you are interacting with a live environment.

In this stage of the lab, we will be continuing to configure the same Kubernetes cluster from the perspective of a developer.

When finished, click the “Return to the lab” link to continue with this lab.

The lab continues to run in the background. If the lab goes into standby mode, you can resume it after completing the module.

ISim Notes Developer Persona - Do Not Publish

This part of the lab is presented as a Hands-on Labs Interactive Simulation. This will allow you to experience steps that are too time-consuming or resource-intensive to do live in the lab environment. In this simulation, you can use the software interface as if you are interacting with a live environment.

The orange boxes show where to click, and the left and right arrow keys can also be used to move through the simulation in either direction.

Download and install the Plugin

In the following scenario, we will focus on the Developer persona and use the Kubernetes cluster to deploy, manage, and secure a container deployment.

Click Open Link to access the CLI Tools

We will use a Linux VM as our client and need to download the appropriate client plugin.

Click Show More Options

Click Linux

Click Plugin for Linux

Click Copy Link Address

Click Putty Icon

We will SSH into the Ubuntu cli-vm using PuTTY.

Click cli-vm

Click Load

Click Open

The wget command downloads the plugin zip file through the proxy server in the Supervisor cluster control plane.

Unzip the plugin file. Note that the bin directory now contains both the kubectl and the kubectl-vsphere plugin executables. This directory must be in your system PATH. We have already set that up for you.

Click to paste unzip vsphere-plugin.zip and hit Enter

Click to type A and hit Enter

Application Deployment on the Supervisor Cluster

Supervisor Namespaces are integrated with vSphere Single Sign-On through the vSphere Kubernetes plugin.

The login command is redirected through an authenticating proxy in the Supervisor Control Plane to the PSC in vCenter. The PSC returns a JSON Web Token (JWT) that is stored in the client config file and authenticates the user to the Supervisor Cluster Master on all subsequent kubectl commands.

The config file contains the contexts for the namespaces the user has access to, along with the JWTs needed to authenticate to the Kubernetes API.
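The generated file follows the standard kubeconfig schema. A simplified sketch is shown below; the server address, the user-name format, and the (abbreviated) token are illustrative assumptions, not exact output:

```yaml
# Illustrative sketch of ~/.kube/config after kubectl vsphere login.
apiVersion: v1
kind: Config
clusters:
- name: 192.168.124.1
  cluster:
    server: https://192.168.124.1:6443   # assumed Supervisor Cluster API VIP
contexts:
- name: hol
  context:
    cluster: 192.168.124.1
    user: fred@vsphere.local
    namespace: hol                       # one context per accessible Namespace
current-context: hol
users:
- name: fred@vsphere.local
  user:
    token: eyJhbGciOi...                 # JWT issued at login; refreshed on re-login
```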

Click to type cat ~/.kube/config and hit Enter

Next we want to see the Namespaces that user Fred has access to

Click to type kubectl config get-contexts and hit Enter

Now we set the current Namespace to be hol. This means Fred won’t have to append the -n hol flag to each kubectl command.

Click to type kubectl config use-context hol and hit Enter

To verify that your context is set correctly and that the API is responding, execute the following command. You should see No Resources Found because nothing has been created in this new Namespace.

Click to type kubectl get all and hit Enter (Should get No Resources Found response)

Let’s jump back into our admin persona and take a look at NSX. NSX objects are automatically created as we do things like create clusters, add Namespaces or deploy application objects on our cluster. We will log into NSX-Mgr and look at some of those objects.

Click to open a new Chrome tab

Click nsx-mgr.corp.local bookmark

Click username and type admin

Click password and type VMware1!base

Click LOG IN

Click Advanced Networking & Security

You will see Logical Switches (now called segments) for each Namespace. The first one in the list should be your Namespace.

Click on the square box to select Logical Switch

Click on the arrow to Expand Subnets

Click to Scroll Down - Notice the CIDR for your Namespace (10.244.7.0/24). This segment came from the Pod CIDR defined at cluster creation

Click on Routers - Note the T0 router for north/south routing and a single T1 for your cluster

Click on NAT

Click the Logical Router

Click domain-cB:4122c457-a31b-4749-aaa7-a8b4f9a03adb

Note that there is an SNAT IP for each Namespace in the cluster. Find the CIDR for your Namespace and note that all traffic from Pods in this Namespace will have that translated IP. These IPs come from the Egress CIDR range and must be routable from your corporate network.

Click IPAM

Click the domain-cB:4122c457-a31b-4749-aaa7-a8b4f9a03adb-ip... to see your Pod CIDR subnets

Click on Load Balancers

Click the Small domain-cB:4122c457-a31b-4749-aaa7-a8b4f9a03a... This is the load balancer for access to the MasterAPI from external users.

Click on Virtual Servers and note the IP.

Click to Scroll to the right. This is the IP you logged in with. The LB labelled clusterip-domain…. is used for calls from outside the cluster to the Master. This is in preparation for Multi-Master support that will be in a later build.

Click to Scroll back to the left

Click on Security.

Click on Distributed Firewall

Click the plus to expand the line that starts ds-domain-cB:4122c457-a31b-...

Click to Scroll Down

Notice the Deny-all Drop rule that will prevent all ingress/egress by default. Keep this in mind as we deploy an application.

Click the Putty on the taskbar to go back to the CLI-VM

Click to type cd $HOME/labs and hit Enter

We are going to start by creating a Persistent Volume Claim. This command will cause a Kubernetes persistent volume and a vSphere volume to be created. The volume will be bound to the volume claim as a single step.

Click to type cat nginx-claim.yaml and hit Enter
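The claim might look something like the following sketch; the claim name, requested size, and storage class reference are assumptions for illustration (the storage class corresponds to the policy added to the Namespace earlier):

```yaml
# Hypothetical sketch of nginx-claim.yaml.
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: nginx-claim
spec:
  accessModes:
    - ReadWriteOnce
  storageClassName: high-performance-ssd   # Storage Class created when the policy was assigned
  resources:
    requests:
      storage: 1Gi                         # assumed volume size
```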

Click to type kubectl apply -f nginx-claim.yaml and hit Enter

Click to type kubectl get pvc and hit Enter - Notice volume bound to the claim

Now we will create a pod that mounts that persistent volume onto the pod when it is deployed.

Click to type cat nginx-pers.yaml and hit Enter
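A manifest that mounts the claim might look like this sketch; the labels, mount path, and image tag are assumptions (the Deployment name matches the pod name seen later in the simulation):

```yaml
# Hypothetical sketch of nginx-pers.yaml.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx-pers
spec:
  replicas: 1
  selector:
    matchLabels:
      app: nginx-pers
  template:
    metadata:
      labels:
        app: nginx-pers
    spec:
      containers:
      - name: nginx
        image: nginx
        volumeMounts:
        - name: nginx-data
          mountPath: /usr/share/nginx/html   # persistent volume mounted into the container
      volumes:
      - name: nginx-data
        persistentVolumeClaim:
          claimName: nginx-claim             # binds to the claim created in the previous step
```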

Click to type kubectl apply -f nginx-pers.yaml and hit Enter

Click to type kubectl get all and hit Enter

The pod has been created but is still Pending while the image is downloaded and the container is created.

Click to type kubectl describe pod/nginx-pers-646cdfdbbd-b58gv and hit Enter to show you the status of the pod startup

Now the Pod is running and the persistent volume has been successfully attached.

Click the tab at the top to vCenter

Click Menu

Click Hosts and Clusters

Click on nginx-pers-646cdfdbbd.. Native Pod

This is a new summary page for vSphere Native Pods. Notice that there is no console and that you cannot take action on these objects except through the Kubernetes API.

Click the hol Namespace Resource Pool

Kubernetes events are now captured in vCenter. VI Admins can collaborate with developers on Kubernetes-specific troubleshooting.

Click Monitor - Note Kubernetes events in vSphere client

Container Native Storage (CNS) in vCenter provides the VI Admin with the ability to associate vSphere storage volumes with the associated Kubernetes objects.

Click Storage

Click Persistent Volumes

Click the Volume Name

Click icon next to volume name for details

Click Basic - This will display some basic information about the volume

Click to Scroll Down to view more information

Click to Scroll Back Up

Click Kubernetes Objects

Click to Scroll Down to see the Native Pod it's attached to and the other Kubernetes information

Click to Scroll Back Up

Interaction with vSphere Native Pods is the same as with traditional Kubernetes pods. kubectl provides the capability to jump into individual containers within the pod. We will do that here.

NSX objects are automatically created as part of the lifecycle of vSphere Native Pods. We will now create a deployment and a Load Balancer service. An NSX load balancer is automatically created and associated with the service.

Click to type cat nginx-lbsvc.yaml and hit Enter to deploy replica set with LB
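The file likely combines a Deployment and a Service of type LoadBalancer; in the sketch below, the names, labels, replica count, and ports are assumptions for illustration:

```yaml
# Hypothetical sketch of nginx-lbsvc.yaml.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx-lb
spec:
  replicas: 2
  selector:
    matchLabels:
      app: nginx-lb
  template:
    metadata:
      labels:
        app: nginx-lb
    spec:
      containers:
      - name: nginx
        image: nginx
        ports:
        - containerPort: 80
---
apiVersion: v1
kind: Service
metadata:
  name: nginx-lb
spec:
  type: LoadBalancer       # triggers creation of an NSX load balancer virtual server
  selector:
    app: nginx-lb
  ports:
  - port: 80
    targetPort: 80
```

The EXTERNAL-IP assigned to the Service comes from the Ingress CIDR range configured when the cluster was enabled.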

Click to type kubectl apply -f nginx-lbsvc.yaml and hit Enter

Click to type kubectl get all and hit Enter

Click to type kubectl get svc -o wide and hit Enter. Notice that the EXTERNAL-IP status is <pending>. Let's check again to see whether an IP has been assigned.

Click to paste kubectl get svc -o wide and hit Enter

Click to open a new Tab in Chrome.

Click to type the URL http://192.168.124.2 - Note that you are blocked. Namespace Ingress is Denied by default

Click the Putty on the taskbar to return to the session

Click to type cat enable-policy.yaml and hit Enter. This will create a network policy that enables Ingress/Egress to/from all Pods.
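An allow-all policy of this kind might look like the sketch below; the empty pod selector and empty ingress/egress rules are the standard Kubernetes way to match and allow all traffic, and the policy name is taken from the rule section seen later in NSX:

```yaml
# Hypothetical sketch of enable-policy.yaml.
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: hol-allow-all-whitelist
spec:
  podSelector: {}     # empty selector: applies to all Pods in the namespace
  policyTypes:
  - Ingress
  - Egress
  ingress:
  - {}                # empty rule: allow all ingress
  egress:
  - {}                # empty rule: allow all egress
```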

Click to type kubectl apply -f enable-policy.yaml and hit Enter

Kubernetes Network Policy is implemented through NSX. Deploying this network policy causes distributed firewall rules to be created in NSX that enable Ingress to the application we just deployed.

Click the NSX-Manager tab in Chrome

Click on Distributed Firewall

Click on plus sign to expand hol-allow-all-whitelist. Note that all-ingress-allow has been added.