Upgrading PKS with NSX-T from 1.0.x to 1.1

Yesterday, Pivotal Container Service 1.1 dropped and, as it’s something I’ve been actively learning in my lab, I wanted to jump on the upgrade straight away. PKS with NSX-T is a really hot topic right now and I think it’s going to be a big part of the future cloud-native application (CNA) landscape.

My Lab PKS 1.0.4 deployment is configured as a “NO-NAT with Logical Switch (NSX-T) Topology” as depicted in the diagram below (from the PKS documentation). My setup has these network characteristics:

The PKS control plane is deployed inside of the NSX-T network. Both the PKS control plane components (VMs) and the Kubernetes Nodes use routable IP addresses.

I used William Lam’s excellent series on PKS with NSX-T to configure a lot of the settings, so I am going to assume a familiarity with that series. If not, I suggest you start there to get a good understanding of how everything is laid out.

It will be no surprise, given my impending move to the VMware PSO NSX Practice, that this morning I’ve been focussing on NSX-T. The two sessions I attended were the Introduction to NSX-T Architecture and Integrating NSX-T with Kubernetes. In a weird twist of scheduling, the Kubernetes session was before the introduction session, but it worked out OK.

I found the Kubernetes session really enjoyable and felt the speakers delivered a great overview of the integration and how the two products work together. I was pleasantly surprised by how familiar a lot of the concepts were, coming from an NSX-v and vRealize Automation background. It’s similar, but different! This is something I can really get my teeth into.

Moving into the NSX-T architecture session, Kevin and Dimitri walked through the components and functions of NSX-T and demonstrated integration with OpenStack as a cloud management platform to automate deployments. Again, I was pleased with the familiarity of the concepts and terminology.

The Speed of Change in IT Infrastructure (Opinion)

Over the past six months I have been dwelling more and more on the obvious speed of change and development in IT infrastructure. What do I mean? Well, each year there is a new hotness: the next thing or innovation you are told you need, or should have.

In most cases these innovations and new technologies are genuinely groundbreaking, and they certainly offer new opportunities for the infrastructure masses.

I am all for progress. If you are not moving forward and regularly looking for sensible ways to improve what you do and the infrastructure you use, then I really do think you are in the wrong industry.

However, what I am beginning to take exception to is the insistence, each time, that if you are not adopting the next new thing within a short space of time, you are informed that you will be irrelevant before you know it (insert suitable words that exclaim you are not one of the cool kids).

vRA7.2 and vSphere Integrated Containers

One of the cool new features released with vRealize Automation 7.2 was the integration of VMware Admiral (container management) into the product. Recently, VMware also made version 1 of vSphere Integrated Containers (VIC) generally available (GA), so I thought it was time I started playing around with the two.

In this article I’m going to cover deploying VIC to my vSphere environment and then adding that host to the vRA 7.2 container management.

Deploying vSphere Integrated Containers

VIC is deployed using a command-line interface, which deploys a vApp and a container host VM onto your ESXi host or vSphere cluster. There are a LOT of different ways to configure VIC, so I strongly suggest you read and digest the VIC Installation Guide. For the sake of simplicity, I’m going to deploy as basic a setup as I can figure out.
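As a rough sketch of what that command-line deployment looks like, here is an illustrative `vic-machine create` invocation. All of the target, credential, network, and datastore values below are placeholders for my lab, and the exact flag set varies between VIC releases, so treat this as an outline and check the Installation Guide for your version rather than copying it verbatim.

```shell
# Sketch only: deploy a basic Virtual Container Host (VCH) with vic-machine.
# Every value below (vCenter address, credentials, cluster, datastore,
# port group, VCH name) is a lab-specific placeholder.
./vic-machine-linux create \
  --target 'vcenter.lab.local/Datacenter' \
  --user 'administrator@vsphere.local' \
  --compute-resource 'Cluster' \
  --image-store 'Datastore1' \
  --bridge-network 'vic-bridge' \
  --name 'vch01'
```

On success, `vic-machine` reports the Docker API endpoint of the new VCH, which you can then point a standard Docker client at to run containers as VMs.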