The Beginner’s Guide to VMware Pivotal Container Service (PKS)

If you're one of those people who prefers to spend their weekends away from a screen, breathing in fresh air or taking a walk in the park, there's a chance you aren't a weirdo and/or that you haven't heard about Pivotal Container Service, one of VMware's latest offerings in the container space. To be fair, VMware has quite a few of them. What you need to know about this one, though, is that Pivotal Container Service (or PKS for short) is jointly developed by VMware, Pivotal and Google, and it helps you build multiple Kubernetes clusters, with a bunch of cool features, and manage them fairly easily.

At this point, Kubernetes is considered the new gold standard for deploying and managing container workloads. I won't get too deep into why Kubernetes has value; many others have done that much better than I could. But what I can tell you is that developers want Kubernetes. In many organizations, if you're not offering that possibility to your development team, they might not patiently wait for you to get your act together and start looking into it. They will simply get what they need in other ways, whether that's one of the ton of public cloud options or a do-it-yourself Kubernetes build inside VMs.

Why Pivotal Container Service?

You might be wondering why you would use PKS rather than manually installing Kubernetes in VMs to get this done. Simply put, building and managing Kubernetes manually is hard. Many Kubernetes-based offerings, and even some entire companies, are based on that fact alone! If you want to understand what I mean by "hard", or if you enjoy crying tears of anguish, follow Kelsey Hightower's very popular guide, Kubernetes the Hard Way.

For anyone who is (relatively) chemically balanced, let's walk through a few reasons why you should consider PKS.

Vanilla Kubernetes, constantly compatible with GKE: Upstream, constantly compatible Kubernetes is a big deal. Tools port easily between PKS and other Kubernetes environments, which is not always true if you use a vendor that gives its own flavor to its K8s distribution. As soon as GKE releases a new stable version of Kubernetes, PKS normally follows suit with exactly the same one shortly after.

Advanced networking features: An important aspect of PKS is the networking portion, powered by NSX-T. Even if you are very comfortable with NSX for vSphere, NSX-T is a different playing field altogether. Concepts from NSX-V do not necessarily carry over 1:1 to NSX-T under a different name; some concepts were entirely redesigned and might require you to forget what you know and start fresh. Since this is an important subject in PKS, here are a few quick tips:

If you already use NSX-V and are looking to add NSX-T because of PKS, you can run both versions of NSX with a single vCenter, but you'll need dedicated ESXi clusters for each.

If you're short on time, this is the list of NSX-T constructs you'll want to look into, at a bare minimum, before you make any design decisions:

The NSX-T Container Plugin, or NCP for short, constantly watches the Kubernetes API to find out when a user creates a Kubernetes cluster or Namespace. The NCP is the glue that allows NSX-T to really work its magic.

Logical Switches. A ton of these are created when you create a new K8s cluster or add a namespace to an existing one.

Edge clusters and their limitations: each Edge size supports a different number of PKS K8s clusters.

Tier 0 routers. A T0 router normally serves as the single point of entry and exit into and out of your Kubernetes clusters.

Tier 1 routers are used extensively by PKS. Every Kubernetes cluster creates a few and attaches them to Logical Switches, also created by NSX-T. Every K8s Namespace you create also comes with its own new T1 router.

NSX Distributed Firewall is consumed as a service directly through the Kubernetes API.

NSX load balancing is integrated with the Kubernetes API via the NCP and is configured when the associated Kubernetes constructs are requested.
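From the user's side, both the distributed firewall and the load balancer are driven by plain Kubernetes objects that the NCP translates into NSX-T configuration. Here is a minimal sketch of the two manifests, expressed as Python dicts; the names and labels ("web", "allow-web-ingress") are invented for illustration:

```python
# Sketch of the standard Kubernetes objects that NCP watches to program
# NSX-T. Names and labels here are illustrative, not PKS defaults.

# A NetworkPolicy like this is realized as NSX Distributed Firewall rules.
network_policy = {
    "apiVersion": "networking.k8s.io/v1",
    "kind": "NetworkPolicy",
    "metadata": {"name": "allow-web-ingress"},
    "spec": {
        "podSelector": {"matchLabels": {"app": "web"}},
        "ingress": [{"ports": [{"protocol": "TCP", "port": 80}]}],
    },
}

# A Service of type LoadBalancer is realized on the NSX-T load balancer.
service = {
    "apiVersion": "v1",
    "kind": "Service",
    "metadata": {"name": "web"},
    "spec": {
        "type": "LoadBalancer",
        "selector": {"app": "web"},
        "ports": [{"port": 80, "targetPort": 8080}],
    },
}

print(network_policy["kind"], service["spec"]["type"])
```

The point is that nothing here is NSX-specific: you declare vanilla Kubernetes resources, and NCP does the translation behind the scenes.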

DDI (DNS, DHCP & IPAM) : The IPAM functionality is used to configure an IP pool from which /24 networks are pulled for each new K8S cluster and Namespace.
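To picture what that IPAM behavior looks like, here is a small, self-contained sketch that carves /24 networks out of a larger pool, the way one is handed to each new cluster or Namespace. The pool range is an example, not a PKS default:

```python
import ipaddress

# Example IP block; your actual pool is whatever you configure in NSX-T.
pool = ipaddress.ip_network("172.16.0.0/16")

# Carve the pool into /24s, handing one out per new cluster or Namespace.
subnets = pool.subnets(new_prefix=24)

cluster_net = next(subnets)    # first allocation: 172.16.0.0/24
namespace_net = next(subnets)  # next allocation: 172.16.1.0/24

print(cluster_net, namespace_net)
```

A /16 pool like this one yields 256 such /24 allocations, which is worth keeping in mind when you size the pool against the number of clusters and Namespaces you expect.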

Harbor registry: you always need a container registry, and Harbor is an on-premises, enterprise-ready one, as well as a project that is part of the Cloud Native Computing Foundation (CNCF). Harbor is pretty widely used in the industry on its own, but the integration with PKS makes it that much sweeter. Every time you create a new Kubernetes cluster via PKS, Harbor automatically gives it a "Project", which allows for segregation of multiple tenants that share the same PKS/Harbor instance.
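To give a feel for what a per-cluster "Project" means in practice, here's a sketch of the request body Harbor's classic v1 REST API expects when creating a project, and how the project name ends up in your image paths. The registry hostname and names are invented, and newer Harbor versions moved this endpoint to /api/v2.0/projects, so check your version:

```python
import json

# Hypothetical values, for illustration only.
registry = "harbor.lab.example.com"
project = "k8s-cluster-01"  # PKS creates one Harbor project per cluster

# Harbor v1 API: POST /api/projects with a JSON body like this.
endpoint = f"https://{registry}/api/projects"
body = json.dumps({"project_name": project, "metadata": {"public": "false"}})

# The project name becomes the first path segment of every image you push.
image_ref = f"{registry}/{project}/myapp:1.0"

print(endpoint)
print(image_ref)
```

That first path segment is what keeps tenants apart: users who can't see the project can't pull its images.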

Persistent volumes: persistent volumes are a breeze to configure and consume. Consumption works through another VMware open source project called Hatchway, which lets a Kubernetes user request persistent volumes self-service, directly through the Kubernetes API, backed by standard VMDK disks. Pretty simple, yet pretty powerful and convenient.
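A sketch of what that self-service flow looks like from the Kubernetes side: a StorageClass pointing at the in-tree vSphere provisioner, and a PersistentVolumeClaim against it. The class and claim names are made up; behind the scenes, each bound claim becomes a VMDK on your datastore:

```python
# Sketch of the Kubernetes objects a user submits; the vSphere volume
# integration turns the bound claim into a VMDK. Names are illustrative.

storage_class = {
    "apiVersion": "storage.k8s.io/v1",
    "kind": "StorageClass",
    "metadata": {"name": "thin-disk"},
    "provisioner": "kubernetes.io/vsphere-volume",  # in-tree vSphere plugin
    "parameters": {"diskformat": "thin"},
}

pvc = {
    "apiVersion": "v1",
    "kind": "PersistentVolumeClaim",
    "metadata": {"name": "data-volume"},
    "spec": {
        "accessModes": ["ReadWriteOnce"],
        "storageClassName": "thin-disk",
        "resources": {"requests": {"storage": "10Gi"}},
    },
}

print(storage_class["provisioner"], pvc["spec"]["storageClassName"])
```

The user never touches vCenter; they ask for 10Gi through the Kubernetes API and get a disk.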

Integration with GCP: PKS allows you to provision K8s clusters in your vSphere datacenter, or do the same in GCP, using Google Compute Engine to provision instances and BOSH to install the Kubernetes cluster software on those instances.

Exposing or consuming add-ons from the Pivotal Services Marketplace: this component allows you to expose and consume services via the Open Service Broker API.
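The Open Service Broker API is a small REST contract: a broker advertises its services at GET /v2/catalog, and the platform provisions them with PUT /v2/service_instances/{id}. Here's a sketch of the catalog shape a marketplace add-on would expose; the service, plan and ID values are invented examples:

```python
# Minimal shape of an Open Service Broker catalog response
# (GET /v2/catalog). IDs and names below are invented examples.
catalog = {
    "services": [
        {
            "id": "a1b2c3",           # hypothetical service ID
            "name": "example-mysql",  # hypothetical marketplace add-on
            "description": "MySQL as a marketplace service",
            "bindable": True,
            "plans": [
                {"id": "p1", "name": "small", "description": "1 node"},
            ],
        }
    ]
}

# A platform like PKS reads this catalog to list the available add-ons.
names = [s["name"] for s in catalog["services"]]
print(names)
```

Because the contract is standardized, any broker that speaks it can plug its services into the marketplace.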

Integration with tools you know: an attractive part of using PKS in your existing vSphere datacenter is that you get to stay in your comfortable plush slippers. You can use vRealize Operations to monitor your Kubernetes clusters, vRealize Log Insight to collect logs from PKS and the K8s clusters, and if you're lucky enough to have Wavefront, you get the Cadillac DeVille of PKS monitoring solutions: it auto-discovers new Kubernetes clusters, comes with a bunch of cool dashboards and much more.

Get started with Installing PKS

As it stands today, the PKS installation is a fairly long process with a few moving parts, but it will no doubt get better with time. In the meantime, here are a few great resources to help you get started:

How does PKS stack up against the competition?

The answer is not as straightforward as it may seem. Nowadays, it seems like everyone and their mother is launching a product based on Kubernetes in one way or another. It's easy to get lost in a sea of vendors with similar solutions when learning about a new type of product. I find it helpful to learn about the top competing products in that space and understand what added value each one gives to users.

Most of the major Kubernetes offerings are not primarily focused on on-premises deployments; they are built by and for the public cloud. AWS, Azure and Google Cloud each have their own, and VMware also has one as part of its VMware Cloud on AWS suite of products, called VMware Kubernetes Engine (VKE).

So the crown in the Kubernetes-as-a-service-on-premises space is still up for grabs. Here are a few of the bigger names in the game that are reaching for it:

Red Hat OpenShift

OpenShift takes a slightly different approach to the Kubernetes problem: it's geared towards installing and managing a Kubernetes cluster rather than offering Kubernetes clusters as a service. Word on the street is that OpenShift is a great product, though; if it fits your use case, it's probably worth looking into.

Heptio is a company that is gaining steam fast and already has a cool set of Kubernetes-related projects to work with, and the fact that their Kubernetes offering is subscription-based can make your life a little easier than deploying it all yourself.

Another big player that could pose a real threat to PKS in the near future is the recently announced Google Kubernetes Engine (GKE) On-Prem. Needless to say, if Google really sets its mind to it and plays its cards right, it could take home a big slice of the pie, even in private cloud environments.

Integration with other VMware products that you already use

There are a few interesting ways you can interact with PKS other than the PKS CLI. Here are some of them:

vCenter HTML5 vSphere Client Fling (coming soon!) – This will allow you to keep an eye on your Kubernetes clusters at a high level.

vRealize Operations Management Pack for Kubernetes – This will help you monitor Kubernetes clusters created by PKS, obvs. Keep in mind, monitoring is not automatically added for newly created clusters; you'll have to cook up some of your own automation for that.

Wavefront integration for PKS – Wavefront is probably the best integration on here, with autodiscovery of new clusters and very eye-friendly dashboards.

You can bet that these integrations will just get better and more numerous with time.

The Bottom Line

I'm not exactly out here saying that Pivotal Container Service is the best thing since sliced bread (because those who know me know I'm a huge fan of bread), but PKS is pretty darn close, and it gives VMware shops an easily accessible gateway drug for getting Kubernetes clusters up and running quickly in their organization.

There are a few very clever concepts and open source tools that, when combined, make for a great product on paper and in your lab. Soon we'll know whether it can survive the battle in enterprise-grade production environments.