Support for additional Docker commands. For the list of Docker commands that this release supports, see Supported Docker Commands in Developing Container Applications with vSphere Integrated Containers.
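Once a VCH is deployed, the standard Docker client runs the supported commands against it as against any other Docker endpoint. For illustration only: the address below is a placeholder, and a VCH typically listens on port 2376 with TLS enabled (2375 without), so adjust to your deployment:

    docker -H 192.168.100.10:2376 --tls info
    docker -H 192.168.100.10:2376 --tls run -d -p 8080:80 nginx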

You can also use vic-machine upgrade to upgrade existing Virtual Container Hosts (an example invocation follows the note below). From the Upgrade Guide:

When you upgrade a running VCH, the VCH goes temporarily offline, but container workloads continue as normal during the upgrade process. Upgrading a VCH does not affect any mapped container networks that you defined by setting the vic-machine create --container-network option. The following operations are not available during upgrade:

You cannot access container logs

You cannot attach to a container

NAT-based port forwarding is unavailable

IMPORTANT: Upgrading a VCH does not upgrade any existing container VMs that the VCH manages. For container VMs to boot from the latest version of bootstrap.iso, container developers must recreate them.
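If you want to try this, a minimal upgrade invocation looks roughly like the sketch below. The vCenter address, credentials, thumbprint, and VCH name are all placeholders, and the exact flag set can differ between releases, so verify it with vic-machine upgrade --help for your build:

    vic-machine-linux upgrade \
        --target vcenter.lab.local \
        --user administrator@vsphere.local \
        --thumbprint <vcenter-certificate-thumbprint> \
        --name my-vch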

With the release of vSAN 6.6 and vCenter 6.5d, you might want to test VIC 1.1 in your lab environment and leverage it to build a great platform for your development teams.

There is also a new demo video that shows the product and the updated user interfaces in more detail.

Time to update the lab!


Only a few days ago, the vSphere Integrated Containers team released the newest version 0.7 on GitHub and Bintray. I just want to summarize a few resources for testing this release and document some gotchas that have already been raised. Remember: this code is still a beta release, so don't deploy it to production immediately. You can also read up on the announcement of VIC as part of vSphere 6.5 in the official press release from VMworld.

During the installation, you can now specify a fixed IP address instead of DHCP for your Virtual Container Host (VCH) – this is one of the new features in the 0.7 release. Please make sure to use --dns-server with your vic-machine command to set the DNS server address in the VCH. Otherwise, it will fall back to the network gateway as the DNS server, which results in timeout errors during the installation. This is already documented as an issue at https://github.com/vmware/vic/issues/3060.
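Putting the pieces together, a static IP deployment with an explicit DNS server could look something like the sketch below. All addresses and names are made up, and the static IP flag names evolved across the early releases, so double-check vic-machine create --help for your version:

    vic-machine-linux create \
        --target vcenter.lab.local \
        --user administrator@vsphere.local \
        --name vch-01 \
        --client-network-ip 192.168.10.50/24 \
        --client-network-gateway 192.168.10.1 \
        --dns-server 192.168.10.2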

If you deploy VIC in your environment and encounter any issues, please open an issue on GitHub (https://github.com/vmware/vic/issues). You can also reach out to me via Twitter and I'll try to get back to you as soon as possible.


I just had to reset my homelab Intel NUC's ESXi 6.0 network configuration because I wanted to test a specific setting in vSphere Integrated Containers. Unfortunately, the Intel NUC only has one physical uplink, and that uplink (and the VMkernel Portgroup) was configured on a Distributed vSwitch – I needed it on a Standard vSwitch for the test. Migrating the VMkernel Portgroup from the Distributed to a Standard vSwitch was a little challenging, and I didn't want to set up an external monitor to use the Direct Console User Interface (DCUI). But with the help of William's ESXi virtual appliance and some hints in the vSphere documentation, I was able to reproduce the necessary keyboard inputs and perform the migration with only a USB keyboard attached to the NUC. Instead of summarizing it only for myself, I thought I'd share it here, as I couldn't find similar instructions on Google.

Please don't do this in a production environment; blindly configuring a system isn't a good idea.

What would actually be going on if you could view the DCUI? First, you need to press F2 (potentially combined with "fn" or similar, depending on your keyboard) to get into ESXi's DCUI system management.

It will ask you to authenticate first (pressing TAB – <root_password> – ENTER).

Then, you need to go to "Network Restore Options" in the System Customization menu (pressing DOWN – DOWN – DOWN – DOWN – ENTER).

And in the "Network Restore Options" menu, you'll have the option to "Restore Standard Switch" (pressing DOWN – ENTER – F11).

After selecting "Restore Standard Switch", you'll need to confirm a new dialog with F11, and then a new vSwitch will be created on your host. Mine worked like a charm: I found a new Standard vSwitch with vmk0 using my "old" management IP address for ESXi.
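To summarize the whole procedure, the complete blind key sequence (assuming the default ESXi 6.0 DCUI menu layout, with none of the entries moved) is:

    F2                               (enter DCUI system management)
    TAB, <root_password>, ENTER      (authenticate)
    DOWN, DOWN, DOWN, DOWN, ENTER    (Network Restore Options)
    DOWN, ENTER, F11                 (Restore Standard Switch)
    F11                              (confirm the new dialog)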

The vSphere ESXi hypervisor provides a high-performance and competitive platform that effectively runs many Tier 1 application workloads in virtual machines. By default, ESXi has been heavily tuned for driving high I/O throughput efficiently by utilizing fewer CPU cycles and conserving power, as required by a wide range of workloads.

However, Telco and NFV application workloads are different from typical Tier 1 enterprise application workloads: they tend to be latency sensitive, jitter sensitive, or to demand high packet rates or aggregate bandwidth (often in combination), and therefore need to be tuned for best performance on vSphere ESXi.

This white paper summarizes the findings and recommends best practices to tune the different layers of an application’s environment for Telco and NFV workloads.