Microsoft's been pretty busy integrating Docker containers into Windows Server 2016, and some of the terminology can be confusing. Today, I'm going to teach you what Hyper-V containers are and how to use them in the context of Windows Server 2016 Technical Preview 4 (TP4).

First of all, recall that a Docker container is an isolated application and/or operating system instance with its own private services and libraries.

Windows Server 2016 supports two types of Docker containers. Windows Server containers are containers intended for "high trust" environments, where you as a systems administrator aren't as concerned about data leakage among containers running on the same host or leakage between the containers and the host operating system.

By contrast, Hyper-V containers are Docker containers that are more fully isolated from (a) other containers and (b) the container host computer. As you can see in the following architectural drawing, what sets Hyper-V containers apart from Windows Server containers is that Hyper-V containers have their own copy of the Windows operating system kernel and a dedicated user space.

Architectural diagram of Windows Server containers

The main confusion I've had in the past concerning Hyper-V containers is mistaking the containers for Hyper-V virtual machines. As you'll see in a moment, Hyper-V containers do not appear to the container host's operating system as VMs. Hyper-V is simply the tool Microsoft used to provide higher isolation for certain container workloads.

One more point before we get started with the demo: the container deployment model (Windows Server vs. Hyper-V containers) is independent of the underlying container instance and image. For instance, you can build a container image that runs an ASP.NET 5 Web application and then deploy containers from that image by using either the Windows Server or Hyper-V container type.

Okay, this is potentially confusing, so pay close attention to the following Visio drawing and my accompanying explanation. We're using the Windows Server 2016 TP4 build, which I've downloaded to my Windows 8.1 administrative workstation.

To enable the Docker container functionality in the TP4 image, we need to perform three actions:

1. Install the Hyper-V server role in the TP4 VM.
2. Restart the VM and define an external virtual switch.
3. Download and run Microsoft's container host creation script.
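The first action is a one-liner from an elevated PowerShell prompt inside the TP4 VM. Here's a minimal sketch (the feature name is the standard one for the Hyper-V role; nothing about it is container-specific):

# Add the Hyper-V role so the TP4 VM can host the nested container host VM
Install-WindowsFeature -Name Hyper-V -IncludeManagementTools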

Is your mind wrapped around what we plan to do? Starting from our hardware host (A), we deploy a Windows Server 2016 TP4-based VM (B) and run the setup script, which creates a container host VM (C). Finally, we can play with the containers themselves (D).

Restart the VM and open the Hyper-V Manager tool. We need to ensure we have an external switch defined; the container host VM creation script will fail if we don't. I will show you my external switch, appropriately named External Switch, in the following screenshot:

We need an external Hyper-V switch to build our container host
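If you don't yet have an external switch, a single New-VMSwitch call takes care of it. A sketch, assuming your physical NIC is named 'Ethernet' (substitute your own adapter name):

# Bind an external virtual switch to the physical NIC (adapter name assumed)
New-VMSwitch -Name 'External Switch' -NetAdapterName 'Ethernet' -AllowManagementOS $true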

Next, we'll download the VM creation script and save it to our C: drive as a .ps1 script file:
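Something like the following should do it; treat the shortlink and the script's parameter names as assumptions based on the TP4 quick-start documentation of the time (wget here is just PowerShell's built-in alias for Invoke-WebRequest):

# Download the container host creation script to the C: drive
wget -Uri 'https://aka.ms/tp4/New-ContainerHost' -OutFile 'C:\New-ContainerHost.ps1'
# Build the container host VM; -HyperV preps it for Hyper-V containers
C:\New-ContainerHost.ps1 -VmName conhost1 -WindowsImage ServerDatacenterCore -HyperV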

Once the script completes, we need to start an elevated PowerShell session in the container host VM, which I named conhost1.

Although we can use either native Docker commands or Windows PowerShell to manage containers, I choose to stick with PowerShell for the sake of today's example. Run the following statement to see all the container-related PowerShell commands:

Get-Command -Module Containers

Let's validate that the Server Core and Nano Server container OS images are available to us:
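A quick Get-ContainerImage shows them; the output below is illustrative of what the TP4 (build 10586) media ships with:

Get-ContainerImage

Name              Publisher    Version      IsOSImage
----              ---------    -------      ---------
NanoServer        CN=Microsoft 10.0.10586.0 True
WindowsServerCore CN=Microsoft 10.0.10586.0 True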

Note: I couldn't get a new Hyper-V container to start in my environment (remember that at this point we're dealing with super pre-release code). Thus, I'll start by creating the container as a Windows Server container, and then we'll convert it on the fly later.
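Here's a sketch of that creation step. I'm assuming the NAT switch name ('Virtual Switch') that the TP4 setup script creates by default; the Windows Server container type is simply what you get when you omit the -RuntimeType parameter:

# Create and start a Windows Server container from the Server Core OS image
New-Container -Name corecont -ContainerImageName WindowsServerCore -SwitchName 'Virtual Switch'
Start-Container -Name corecont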

Before we connect to the corecont container, let's quickly check the container host's IPv4 address (you'll see why in a moment):

Get-NetIPAddress | Select-Object -Property IPv4Address

IPv4Address
-----------
172.16.0.1

In the preceding output, I decided to show you only the internal NAT address that the container host shares with its local containers.

Now we can use PowerShell remoting to log into the new container and check the container's IPv4 address:

Enter-PSSession -ContainerName 'corecont' -RunAsAdministrator
[corecont]: PS C:\> Get-NetIPAddress | Select-Object -Property IPv4Address

IPv4Address
-----------
172.16.0.2

What I wanted to show you there is that the container is indeed a separate entity from the container host.

Let's now exit our PowerShell remote session to return to the container host:

[corecont]: PS C:\> Exit-PSSession

We'll use Set-Container and the trusty -RuntimeType parameter to change this running container's isolation level on the fly, which is wicked cool:

Set-Container -Name corecont -RuntimeType HyperV

As a sanity check, run Get-VM on the container host to prove to yourself that our corecont container is not an honest-to-goodness virtual machine:

Get-VM

We can also verify that our newly converted Hyper-V container is indeed completely isolated from the host. The Csrss.exe process represents the user-mode side of the Win32 subsystem. You should find that a Windows Server container's Csrss process shows up in a process list on the container host; by contrast, a Hyper-V container's Csrss process should not.

Sadly, as of this writing I simply could not get my Hyper-V containers to behave correctly. Alpha code and all. Sigh.

At the least, though, I can demonstrate the concept in reverse by testing another Windows Server container I built named corecont2.

I'll connect to the corecont2 container and run a filtered process list. The session below is a sketch of what you should see:
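Enter-PSSession -ContainerName 'corecont2' -RunAsAdministrator
[corecont2]: PS C:\> Get-Process -Name csrss | Select-Object -Property ProcessName, Id

ProcessName    Id
-----------    --
csrss         968

Make a note of that Csrss process ID (968, in my case); it's about to reappear.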

Finally, I'll exit the remote session and run the same command on the container host:

PS C:\> Get-Process -Name csrss | Select-Object -Property ProcessName, Id

ProcessName    Id
-----------    --
csrss         392
csrss         468
csrss         968
csrss        2660

Aha! You can see process ID 968 from the perspective of the container host. I submit to you that once Microsoft tunes their Hyper-V container code in a future Windows Server 2016 TP build, running the previous commands will reveal that Hyper-V containers' Csrss process IDs do not show up from the perspective of the container host.

Hi Stephen. Yes, of course you can join Windows Server containers to an Active Directory domain. For instance, you can use sconfig in a Server Core container, or djoin.exe in a Nano Server container. Thanks for reading! Tim

Has anyone done this on an AMD box? I have Windows Server 2016 with Hyper-V installed on AMD bare metal and cannot get containers to start. I'm guessing nested virtualization is needed and only supported on Intel processors.

Hey David. That's weird, cuz according to Microsoft, nested virtualization in Windows 10 supports AMD-V (ref: https://blogs.technet.microsoft.com/virtualization/2015/10/13/windows-insider-preview-nested-virtualization/). Remember that the Windows Server 2016 bits are in a preview state, so I wouldn't be surprised if the nested virtualization is only partially functional.
