Sunday, September 27, 2015

In the last blog post, I talked about the architecture of container images and how to use them in much the same way our kids use Lego bricks.

Today, I want to shift focus a bit and talk more about managing the container life-cycle using Docker in the Windows Server Technical Preview 3.

If you have any challenges or problems in your IT
business today and ask me for advice, I would most likely point you to
something that adds more abstraction.

Abstraction is key, and is how we have solved big and
common challenges so far in this industry.

When we covered the architecture of containers in part 1,
we compared it with server virtualization.

Both technologies are solving the same challenges. However, they are doing it at different abstraction layers.

With cloud computing we have had the IaaS service model for a long time already, helping organizations speed up their processes and development by leveraging this service model in a private cloud, a public cloud, or both, in a hybrid cloud.

However, being able to spin up new virtual machines isn’t
necessarily the answer to all the problems in the world.

Sure, it makes you more agile and lets you utilize your resources far better compared to physical machines, but it is still a machine. A machine requires management at the OS level, such as patching, backup, configuration and more. Since you also have access at the OS level, you might end up in a situation where you have to take actions that involve networking as well.

This is very often where it gets complex for organizations with a lot of developers.

They need to focus on, learn and adopt new skillsets just to be able to test their applications.

Wouldn’t it be nice if they didn’t have to care about
this jungle of complexity at all, knowing nothing about the environment they
will be shipping software into?

Given that different people are involved in developing software and managing the environment the software runs in, the challenges grow together with the organization itself, and scale becomes a problem.

This is where containers come to the rescue. Or do they?

Containers have a good approach, since all applications within a container look the same on the outside from the host environment's perspective.

We can now wrap our software within a container and ship the container image to a shared repository, without dealing with any of the complexity that a managed OS normally requires from us.

I have seen this in action, and here's an example that normally triggers people's interest:

1) A developer creates something new, or simply commits some changes to their version control system (GitHub, VSO etc.).

2) A new image (a Docker image in this case) is built with the application (see the sketch after this list).

3) The new Docker image goes through the entire testing and approval process.

4) The image is committed to a shared repo.

5) The new Docker image is deployed into production.
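
To make steps 2 and 4 a bit more concrete, here is a hedged sketch of what that build-and-push flow could look like with the Docker client. The image name and registry address are placeholders I made up, and Dockerfile support on Windows in TP3 was still limited:

docker build -t myapp:1.0 .                        # step 2: build a new image containing the application
docker tag myapp:1.0 registry.contoso.local/myapp:1.0
docker push registry.contoso.local/myapp:1.0       # step 4: commit the image to a shared repo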

This seems like a well-known story we all have heard in
the IaaS world, right?

Please note that no infrastructure was touched from the developer's perspective during these steps.

This was just one example of how real-world organizations are using containers today, and I will cover more good use cases as we move forward in this blog series.

It is important that we're honest and admit that new technologies that give us more and more capabilities, features and possibilities will at the same time introduce some new challenges as well.

With containers, we can easily end up in a scenario that reminds us a bit of the movie "Inception" ( https://en.wikipedia.org/wiki/Inception ). It might be hard to know exactly where you are when you are working with, and have access to, all the different abstraction layers.

In Technical Preview 3 of Windows Server 2016, Windows
Server containers can be managed both with PowerShell and Docker.

What exactly is Docker?

Docker has been around for years and enables automated deployment of applications into containers by providing an additional layer of abstraction and automation of OS virtualization on Linux, Mac OS and Windows.

Just as with Windows Server containers, Docker provides
resource isolation by using namespaces to allow independent containers to run
within a single Linux instance, instead of having the overhead of running and
maintaining virtual machines.

Although Linux containers weren't something new (they had been around for years already), Docker made them accessible to the general IT pro by simplifying the tooling and workflows.

In Windows Server 2016 TP3, Windows Server Containers can be deployed using both the Docker APIs and the Docker client. Later, Hyper-V Containers will be available too.

The important thing to note is that Linux containers will (always) require Linux APIs from the host kernel itself, and Windows Server Containers will require Windows APIs from the host Windows kernel. So although you can't run Linux containers on Windows or vice versa, you can manage all of these containers with the same Docker client.
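
For example, the Docker client's -H flag lets you point the same client at different Docker daemons. A minimal sketch, where the host names and the unencrypted port 2375 are placeholder assumptions and the TP3 daemon configuration may differ:

docker -H tcp://winhost.contoso.local:2375 ps     # list Windows Server Containers on a Windows host
docker -H tcp://linuxhost.contoso.local:2375 ps   # list Linux containers on a Linux host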

So, getting back to the topic here: how do we manage containers?

Since Docker came first, this blog post will focus on the management experience using Docker in TP3.

Note: In TP3, we are not able to see or manage containers if they were created outside of our preferred management solution. Containers created with Docker can only be managed by using Docker, and containers created with PowerShell can only be managed by using PowerShell.

During my testing on TP3, I ran into many issues/bugs related to container management.

Before we get to the recipe, I would like to point out that the following has already been done:

1) I downloaded the image from Microsoft that contains the Server Core image with the container feature enabled, in addition to Docker.

2) I joined the container host to my AD domain.

3) I enabled the server for remote management and opened some required firewall ports (see the sketch after this list).

4) I learned that everything I would like to test regarding Docker should be performed on the container host itself, logged on through RDP.
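
For steps 2 and 3, a minimal sketch of the host preparation, run in an elevated PowerShell session, could look like the following. The domain name and the firewall rule are just examples I made up; which ports you need depends on what you want to reach (WinRM, RDP, the web sites you publish, and so on):

Add-Computer -DomainName "contoso.local" -Credential (Get-Credential) -Restart   # join the AD domain
Enable-PSRemoting -Force                                                          # enable remote management over WinRM
New-NetFirewallRule -DisplayName "HTTP (TCP 80)" -Direction Inbound -Protocol TCP -LocalPort 80 -Action Allow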

Once I've logged into the container host, I run the following command to see my images:

docker images

This shows two images.

Next, I run the following command:

docker ps

This will list all the running containers on the system (note that Docker is only able to see containers created by Docker).

The next thing I'd like to show off is how to pull an image from the Docker Hub and then run it on my container host. First I get an overview of all the images that are compatible with my system:

docker search server

I see that microsoft/iis seems like a good option in my case, so I run the following command to download it:

docker pull microsoft/iis

This will first download the image and then extract it.

In the screenshot below, you can see all the steps I have taken so far and the output. Obviously the last part didn't work as expected, and I wasn't able to pull the image down to my TP3 container host.

So, heading back to basics, I create a new container based on an existing image:

docker run -it --name krnesedemo windowsservercore powershell

This will:

1) Create a new container based on the Windows Server Core image

2) Name the container "krnesedemo"

3) Start an interactive PowerShell session, since -it was specified. Note that this is one of the reasons why you have to run this locally on the container host; the command doesn't work remotely

This will literally take seconds, and then my new
container is ready with a PowerShell prompt.

Below you can see that I am running some basic cmdlets to verify that I am actually in the container context and not on the container host.

Also note the error I get after installing the Web-Server feature. This is a known issue in TP3 where you have to run some cmdlets several times in order to get the right result. Executing it a second time shows that it went as planned.
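
For reference, the checks I run inside the interactive container session are along these lines; a rough sketch, not an exact transcript of the screenshot:

hostname                                  # the container's generated name, which should differ from the host's
Get-WindowsFeature -Name Web-Server       # confirm the role is not installed yet
Install-WindowsFeature -Name Web-Server   # in TP3 this may have to be run twice before it succeeds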

After exiting the session (exit), I will be back at the
container host’s cmdline session.

I run the following command to see all the containers, including those that have exited:

docker ps -a

This shows that the newly created container "krnesedemo" ran PowerShell in an interactive session, when it was started, and when I exited it.

Now, I want to commit the changes I made (installing the Web-Server role) and create a new image with the following command:

docker commit krnesedemo demoimage

In my environment, this command takes a few minutes to complete. I also experienced some issues when the container was still running prior to executing this command, so my advice would be to run docker stop <container name> before committing it.
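
Putting those pieces together, the commit flow I ended up with looks roughly like this (the container and image names are simply the ones used in this demo):

docker stop krnesedemo              # stop the container first, to avoid the issues mentioned above
docker commit krnesedemo demoimage  # capture the changes (the Web-Server role) as a new image
docker images                       # verify that demoimage now shows up in the local repository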

After verifying that the image has been created (see picture below), I run the following command to create a new container based on the newly created image:

docker run -it --name demo02 demoimage powershell

We have now successfully created a new container based on our newly created image, and through the interactive session we can also verify that the Web-Server role is present.

Next time I
will dive more into the PowerShell experience and see how you can leverage your
existing skillset to create a good management platform for your Windows
Containers.

Monday, September 7, 2015

In Part One, I covered the concept of containers and compared it to server virtualization in a Microsoft context.

Today, I want to highlight the architecture of container
images and how you can use them as building blocks to speed up deployment.

Before we start

If you have a background in Server Virtualization, you
are probably very familiar with VM templates.

A VM template is a sysprep'd image that is generalized and can be deployed over and over again. It is normally configured with its required components and applications and kept up to date with the latest patches.

A VM template contains the complete operating system (and possibly its associated data disk(s)) and has been used by administrators and developers for years when they want to be able to rapidly test and deploy their applications on top of those VMs.

With Containers, this is a bit different. In the previous
blog post I explained that Containers are basically what we call “OS
Virtualization” and with Windows Server Containers the kernel is shared between
the container host and its
containers.

So, a container image is not the same as a VM image.

Container Image

Think of a container image as a snapshot/checkpoint of a
running container that can be re-deployed many times, isolated in its own user
mode with namespace virtualization.

Since the kernel is shared, there is no need for the container image to contain the OS partition.

When you have a running container, you can either stop
and discard the container once you are done with it, or you can stop and
capture the state and modifications you have made by transforming it into a
container image.

We have two types of container images. A Container OS image is the first layer in the potentially many image layers that make up a container. This image contains the OS environment and is also immutable, which means it cannot be modified. A container image, on the other hand, captures the state of a container you have stopped and is layered on top of the OS image (or on top of other container images).

A container image is stored in the local repository so that you can re-use it as many times as you'd like on the container host. It is also possible to store images in a remote repository, making them available to multiple container hosts.

Let us see how the image creation process works with Windows Server Containers.

Working with Container Images

In the current release, Windows Server Containers can be managed with the Docker client and PowerShell.

This blog post will focus on the PowerShell experience and show which cmdlets you need to run in order to build images, just as easily as you would by playing with Lego.

First, we will explore the properties of a Container Image. An image contains a Name, a Publisher and a Version.

We execute the following cmdlet and store the result in a variable: $conimage = Get-ContainerImage -Name "WinSrvCore"

Next, we create a new container based on this image, again storing the result of the cmdlet in a variable: $con = New-Container -Name "Demo" -ContainerImage $conimage -SwitchName "VM"

Once the container is deployed, we will start it and invoke a command that installs the Web-Server role within this container ( Invoke-Command -ContainerId $con.ContainerId -RunAsAdministrator { Install-WindowsFeature -Name Web-Server } ). The picture below shows that the blue Lego block is now on top of the brown one (as in layers).

As described earlier in this blog post, we can stop the running container and create an image if we want to keep the state. We do that by executing New-ContainerImage -Container $con -Name Web -Publisher KRN -Version 1.0

If we now execute Get-ContainerImage, we have two images: one that has only ServerCore, and another that has ServerCore with the Web-Server role installed.

We will repeat the process and create a new container based on the newly created Container Image.

In this container, we will install a web application too. The grey Lego block on top of the blue shows that this is an additional layer.

We then stop the running container again and create another container image, this one containing the web application too.

In the local repository, we now have three different container images in a layered architecture.
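
To summarize the flow end to end, here is a rough sketch of the cmdlets used above, assuming the TP3 Containers PowerShell module, a base image named "WinSrvCore" and a virtual switch named "VM"; the Start-Container/Stop-Container calls and exact parameters may differ slightly in your build:

$conimage = Get-ContainerImage -Name "WinSrvCore"
$con = New-Container -Name "Demo" -ContainerImage $conimage -SwitchName "VM"
Start-Container $con    # or Start-Container -Name "Demo", depending on the build
Invoke-Command -ContainerId $con.ContainerId -RunAsAdministrator { Install-WindowsFeature -Name Web-Server }
Stop-Container $con
New-ContainerImage -Container $con -Name Web -Publisher KRN -Version 1.0
Get-ContainerImage      # now lists both the WinSrvCore and the Web images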

Hopefully you found this useful, and I will soon be back with part three of this blog series.

Sunday, September 6, 2015

It is Sunday evening. I am doing what I’ve been doing for
the last 5-6 years. I am literally preparing myself for the upcoming week at
work.

There are so many things to do, so much to learn, so much to share and achieve during a normal week of work.

In today's industry, things are happening at a cadence we've never seen before. Cloud computing has definitely led to an extreme pace of innovation that is really hard to keep up with, unless you are all in and have chosen the right battles to fight.

I am currently the CTO for one of the largest SIs in the Nordics, which of course requires a lot from me. In addition, I am also a Microsoft MVP within Cloud & Datacenter, which takes a lot of my time outside of work hours.

Attending several conferences each and every year,
visiting customers around the entire globe and spending a lot of my time doing
research & development, I had over 200 flights in 2015.

It is all good. I have been doing this for the last 5-6 years. I am used to it and I can admit it is also my passion.

At home, I have my significant other (Kristine) who takes care of our house and our 4 kids. Ideally, she would love to see me able to spend more time at home with her and our kids. But she knows who I am and what I do. We have an agreement that I am able to focus more on my work, while she takes care of the things that are happening at home. Again, I have been doing this for the last 5-6 years and it's all good.

I grew up in a valley where my parents had a farm with cattle. The situation there was quite similar to what we are practicing in our family. My dad was doing the work on the farm while my mum took care of our house and me and my two elder brothers.

My brothers and I helped him as well, and as much, as we could when we were growing up. We learned that quality was important. That we had to do things properly. If we took any shortcuts, it would backfire. We learned it back then. We learned it the hard way.

I learned a lot from growing up on a farm under these circumstances. No matter what happened, my dad always had to do his work. At the end of the day, all of his cows were depending on him. They needed their daily care, food, water and much more. He could never say "I don't feel well today, I have to call in sick". No, it wasn't possible.

He did his job in order to provide for his family, which
was the most important thing in his life.

The same thing can be said about my father-in-law. His work was the foundation for his family. Working in logistics, he went to work every day and answered any call he might receive outside of work hours to help his co-workers, no matter what the situation was. He did his work with pride too.

It is September now, the 9th month of the year. We have had some significant losses during the last 10 months. My father-in-law passed away in December after losing the battle against liver cancer. He fought to the bitter end. Although he had cancer and was under treatment (chemotherapy), he was showing up at work every day, doing more than people expected of him, given the circumstances.

He wasn’t aware of any other way. He had been doing this
for over 30 years, and it was all good.

Late in March this year, it was my father's turn. He passed away suddenly, unexpectedly and shockingly for all of us. The last thing he did before he passed away was to complete a project that he had promised our two sons.

A couple of years ago, my father almost lost his right
hand in an accident while working on his farm. Due to this injury, he had to
let go of his cattle as he couldn’t take care of them anymore.

However, he still had a lot of buildings and land to maintain, and he kept up with that.

He had been doing this for over 40 years, and it was all
good.

Looking back at these two events makes me sad. It is always very hard and tough when you lose someone you love, someone who plays a significant role for the entire family.

Back when it happened, I was sad, I was devastated. I did
my best to take care of my own family, our children. I still am and we work
through our losses every day as a unit.

But I didn’t stop. I continued to work. I sat down on every Sunday, just like this one and prepared for the upcoming week.

I learned a lot from these two men. My core values cannot be questioned and are something I live and breathe every day. I take care of the ones I love, I provide for them and I do what I am.

I honor these men through my actions, my commitments and
my passion. I have been doing this for the last 5-6 years, and it’s all good.

You have heard a lot about it lately: Microsoft is speeding up its container investment, and we can see the early beginnings in Windows Server 2016 Technical Preview 3.

But before we start to go deep into the container technology in TP3, I would like to add some more context so that you can more easily absorb and understand exactly what is going on here.

Server Virtualization

Container technologies belong to the virtualization category, but before we explain the concept and technology that gives us "containerization", we will take a few steps back and see where we are coming from.

Server (virtual machine) virtualization is finally
mainstream for the majority of the industry by now.

We have been using virtualization in order to provide an
isolated environment for guest instances on a host to increase machine density,
enable new scenarios, speed up test & development etc.

Server virtualization gave us an abstraction where every virtual machine believes it has its own CPU, I/O resources, memory and networking.

In the Microsoft world, we first started with server
virtualization using a type 2 hypervisor, such as Virtual Server and Virtual PC
– where all the hardware access was emulated through the operating system
itself, meaning that the virtualization software was running in user mode, just
as every other application on that machine.

So a type 2 hypervisor has, in essence, two hardware abstraction layers, which makes it a bad candidate for real-world workloads.

This changed with Hyper-V in Windows Server 2008, where
Microsoft introduced their first type 1 hypervisor.

Hyper-V is a microkernelized hypervisor that implements a
shared virtualization stack and a distributed driver model that is very
flexible and secure.

With this approach, Microsoft finally had a hypervisor that could run workloads considered "always-on", and one based on the x64 architecture.

I don't have to go through the entire story of Hyper-V, but to summarize: Hyper-V these days reminds you a bit of VMware, only it is better!

As stated earlier, server virtualization is key and a common requirement for cloud computing. In
fact, Microsoft wouldn’t have such a good story today if it wasn’t for the investment
they made in Hyper-V.

If you look closely, the Cloud OS vision with the entire “cloud
consistency” approach derives from the hypervisor itself.

Empowering IaaS

In Azure today, we have many sophisticated offerings around the Infrastructure as a Service delivery model, focusing on core compute, networking and storage capabilities. Microsoft has also taken this a step further with something called VM extensions, so that during provisioning time, or post deployment, we can interact with the virtual machine operating system to perform some really advanced tasks. Examples here could be deployment and configuration of a complex LoB application.

Microsoft Azure and Windows Azure Pack (Azure technologies on-prem) have been focusing on IaaS for a long time, and today we have literally everything we need to use any of these cloud environments to rapidly instantiate new test & dev environments, spin up virtual machine instances in isolated networks and fully leverage the software-defined datacenter model that Microsoft provides.
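
As a hedged illustration of a VM extension (using the classic Azure PowerShell module that was current in 2015; the service, VM, storage account and script names below are placeholders I made up), the Custom Script extension can be attached to an existing VM roughly like this:

Get-AzureVM -ServiceName "MyService" -Name "MyVM" |
    Set-AzureVMCustomScriptExtension -FileUri "https://mystorage.blob.core.windows.net/scripts/install-app.ps1" -Run "install-app.ps1" |
    Update-AzureVM    # pushes the updated configuration, and the script runs inside the guest OS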

But what do we do when virtual machines aren't enough? What if we want to be even more agile? What if we don't want to sit down and wait for the VM to be deployed, configured and available before we can verify our test results? What if we want to maximize our investments even further and increase hardware utilization to the maximum?

This is where containers come in handy and provide us with OS virtualization.

OS Virtualization

Many people have already started to compare Windows Server Containers with technologies such as Server App-V and App-V (for desktops).

Neither of these comparisons is really accurate, as Windows Server Containers cover a lot more and have some fundamental differences when looking at the architecture and use cases.

The concept, however, might be similar, as the App-V technologies (both for server and desktop) aimed to deliver isolated application environments, each in its own sandbox. Things could either be executed locally or streamed from a server.

Microsoft will give us two options when it comes to
container technology:

Windows Server Containers and Hyper-V Containers.

Before you get confused or start to raise questions: you can run both Windows Server Containers and Hyper-V Containers within a VM (where the VM is the container host). However, using Hyper-V Containers would require that Hyper-V is installed.

In Windows Server Containers, the container is a process that executes in its own isolated user mode of the operating system, but where the kernel is shared between the container host and all of its containers.

To achieve isolation between the containers and the container host, namespace virtualization is used to provide independent session namespace and kernel object namespace isolation per container.

In addition, each container is isolated behind a network
compartment using NAT (meaning that the container host has a Hyper-V Virtual
Switch configured, connected to the containers).
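
As a hedged sketch of what that NAT setup enables (the NAT name, addresses and ports below are placeholders, and the exact networking configuration in TP3 may differ), a container's internal port 80 can be published on the host with the built-in NetNat cmdlets:

Get-NetNat    # shows the NAT instance backing the container network
Add-NetNatStaticMapping -NatName "ContainerNat" -Protocol TCP -ExternalIPAddress 0.0.0.0 -ExternalPort 80 -InternalIPAddress 172.16.0.2 -InternalPort 80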

For applications executing in a container process, all file and registry changes are captured through their respective drivers (a file filter driver and a registry filter). System state is shown as read-only to the application.

With this architecture, Windows Server Containers are very likely an ideal approach for applications within the same trust boundary, since the host kernel and APIs are shared among the containers. Windows Server Containers are the most optimized solution when reduced start-up time is important to you.

On the other hand, we also have something called Hyper-V Containers (this is not
available in Technical Preview 3).

A Hyper-V Container provides the same capabilities as a Windows Server Container, but has its own (isolated) copy of the Windows kernel and memory directly assigned to it. There are of course pros and cons with every type of technology, and with Hyper-V Containers you will achieve more isolation and better security, but with less efficient start-up times and lower density compared to Windows Server Containers.

The following two pictures show the difference between server virtualization and OS virtualization (Windows Server Containers).

Server Virtualization

OS Virtualization

So, what are the use cases for Windows Server Containers?

It is still early days with Windows Server 2016 Technical
Preview 3 so things are subject to change.

However, there are things we need to start to think about
right now when it comes to how to leverage containers.

If you take a closer look at Docker (which has been doing
this for a long time already), you might get a hint of what you can achieve
using container technology.

Containers aren't necessarily the right solution for all kinds of applications, scenarios and tools you may think of, but they give you a unique opportunity to speed up testing and development, and to effectively enable DevOps scenarios that embrace continuous delivery.

Containers can be spun up in seconds, and we all know that having many new "objects" in our environment can also lead to a demand for control and management, which in turn introduces us to a new toolset.

I am eager to share more of my learning of Windows Server
Containers with you, and will shortly publish part two of this blog series.