VMworld 2016 in Las Vegas is over and there is a ton of content available already. The General Sessions (Monday & Tuesday) give a great overview of the key announcements, and the Monday keynote in particular goes beyond VMware technology and takes a look at the future of IT. Pat Gelsinger also introduced VMware Cloud Foundation and the Cross-Cloud Architecture.

I just got back from a fantastic VMworld 2016 in Las Vegas and want to share some very good news with you. The team has already put VMworld session recordings online (540+ recordings so far!) – and this time, they will be accessible to everyone.

So if you were not able to attend VMworld in Las Vegas or want to review some of the announcements in more detail, you can head over to VMworld.com and check out the content anytime. It’s also great preparation for VMworld in Barcelona later this year! Check out the Schedule Builder for VMworld Barcelona and prepare your experience today!

Last week, I had an interesting conversation with my friend Michael about vSphere Integrated Containers (VIC) in its current version 0.4. We discussed some of the key concepts and how they relate to other container implementations out there. I decided to summarize the key observations in a little more detail here, as I expect this information to be interesting for operations teams once they start running VIC.

Please note: this is based on the currently available open source VIC project in version 0.4, running on vSphere 6.0 in my homelab. For simplicity, I decided to go with a “standalone ESXi” installation of my Virtual Container Host (VCH) in this example.

More details about the inner workings can be found in the VIC 0.4 blog posts by Cormac that are also listed in the link section below. In this post, I’d like to focus on the topic of state information and how it is handled in VIC 0.4.

First of all, it is important to understand the difference between VCHs in VIC and other (in this case Linux-based) container solutions. While each container in an N:1 model (containers:Linux kernel) has its own private namespace, the underlying shared kernel provides the container control plane that can look into containers and perform process-related actions (start, stop, …). Runtime environment and control plane are directly coupled.
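To make this coupling tangible, here is a minimal sketch on a plain Linux Docker host (the container name “demo” and the nginx image are just examples picked for illustration):
docker run -d --name demo nginx    # start a container on the shared kernel
docker top demo                    # the control plane looks into the container
ps -ef | grep nginx                # the same processes are visible from the host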

In VIC, the runtime/execution environment of a container is a so-called containerVM (based on Photon OS), which is decoupled from its “control plane”, the Virtual Container Host itself. This creates a new layer of abstraction in which both the communication flow and the state information need to be captured and made available.

To establish a secure communication path between these two components, VIC also introduces the concept of a Tether to connect into the actual containerVM. This concept is part of the Port Layer Abstractions that allow VIC to be extensible. More details are described on the VIC Container Abstractions documentation page.

In the VMX file of the VCH, we also find the boot image (appliance.iso) that was transferred during the deployment of the VCH:
ide0:0.deviceType = "cdrom-image"
ide0:0.fileName = "appliance.iso"
ide0:0.present = "TRUE"

The general approach for storing state information is described in the Configuration persistence mechanism overview documentation. According to this, VIC actually makes use of the vSphere extraConfig and guestinfo mechanisms to store relevant information. But where do extraConfig and guestinfo actually reside? In a normal vSphere VM, this information is stored in the VMX file of the VM (and remember, a container in VIC actually is a VM – the containerVM).
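Since a containerVM is just a VM, you can verify this directly on the ESXi host. As a minimal sketch (assuming SSH access to the host; the datastore name and VM folder are placeholders from my homelab), grep the containerVM’s VMX file for guestinfo entries:
grep guestinfo /vmfs/volumes/datastore1/<containerVM-name>/<containerVM-name>.vmx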

Starting a simple “hello-world” container should trigger the whole workflow that also creates a new VM. But let’s go through it step by step:
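From the Docker client side, the workflow looks roughly like this (a sketch – the VCH endpoint address and port are placeholders; the exact port depends on your VCH deployment):
docker -H <vch-ip>:2375 run hello-world
docker -H <vch-ip>:2375 ps -a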

So our container ran as ID 2cf7f483bf6e. How does that containerVM actually look on our standalone ESXi host and even more interestingly, where does the information about the container (from docker ps -a) come from?
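If you want to locate the containerVM yourself, one quick way (assuming SSH access to the ESXi host) is to list all registered VMs and grep for the container ID:
vim-cmd vmsvc/getallvms | grep 2cf7f483bf6e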

First of all, there is a newly created VM named 2cf7f483bf6e7f32daa53f51ca388d5fb153f78d3a74d313318099086638ad58 – just as expected. Looking at its VMX file, we’ll find a lot of the session information that we already saw in the docker ps -a output:
guestinfo./common/name = "jolly_panini"
guestinfo./sessions|2cf7f483bf6e7f32daa53f51ca388d5fb153f78d3a74d313318099086638ad58/common/name = "jolly_panini"
guestinfo./sessions|2cf7f483bf6e7f32daa53f51ca388d5fb153f78d3a74d313318099086638ad58/cmd/Path = "/hello"
guestinfo./repo = "hello-world"

In summary, all container state information is kept close to the containerVM, stored in its VMX file. VCH and containerVM use the ISO files that are transferred during the vic-machine install process. VIC also introduces a new level of abstraction between control plane and execution environment that allows VIC to be extensible for future use cases.

I just had to reset my homelab Intel NUC’s ESXi 6.0 network configuration because I wanted to test a specific setting in vSphere Integrated Containers. Unfortunately, the Intel NUC only has one physical uplink, and that uplink (and the VMkernel Portgroup) was configured on a Distributed vSwitch – I needed it on a Standard vSwitch for the test. Migrating the VMkernel Portgroup from the Distributed to a Standard vSwitch was a little challenging, and I didn’t want to set up an external monitor to use the Direct Console User Interface (DCUI). But with the help of William’s ESXi virtual appliance and some hints in the vSphere documentation, I was able to reproduce the necessary keyboard inputs and perform the reset blindly, with only a USB keyboard attached to the NUC. Instead of summarizing it only for myself, I thought I’d share it here as I couldn’t find similar instructions on Google.

Please don’t do this in a production environment – blindly configuring a system isn’t a good idea.

What would actually be going on if you could see the DCUI? First, you need to press F2 (potentially combined with “fn” or similar, depending on your keyboard) to get into ESXi’s DCUI System Customization:

It will ask you to authenticate first (pressing TAB – <root_password> – ENTER):

Then, you need to go to “Network Restore Options” in the System Customization menu (pressing DOWN – DOWN – DOWN – DOWN – ENTER):

And in the “Network Restore Options”, you’ll have the option to “Restore Standard Switch” (pressing DOWN – ENTER – F11):

After selecting “Restore Standard Switch”, you’ll need to confirm the resulting dialog with “F11”, and then a new vSwitch will be created on your host. Mine worked like a charm: I found a new Standard vSwitch with vmk0 using my “old” management IP address for ESXi.
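To recap, here is the complete blind keystroke sequence in one place (assuming ESXi 6.0 and its default DCUI menu order – the menu positions may differ in other versions):
F2 – TAB – <root_password> – ENTER – DOWN – DOWN – DOWN – DOWN – ENTER – DOWN – ENTER – F11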