During Ignite earlier this year, Nano Server was
introduced by the legend himself, Mr. Snover.

Let us be very clear: Nano Server is not even comparable
to Server Core, which Microsoft has been pushing since its release, and where
you run a full Windows Server without any graphical user interface. However,
some of the concepts are the same and apply to Nano as well.

Some of the drivers behind Nano Server were based on customer
feedback, and you might be familiar with the following statements:

- Reboots impact my business

Think about Windows Server in general, not just Hyper-V
in a cluster context – which more or less deals with reboots.

Very often you would find yourself in a situation where
you had to reboot a server due to an update of a component you in fact weren't
using, or weren't even aware was installed on the server (that's a different
topic, but you get the point).

- What's up with the server image? It's way too big!

From a WAP standpoint, using VMM as the VM Cloud
Provider, you have been doing plenty of VM deployments. You normally have to
sit and wait for several minutes just for the data transfer to complete. Then there's
the VM customization if it's a VM Role, and so on and so forth. Although things
have been improving over the last few years with Fast-File-Copy and support for ODX,
the image size is still very big. And don't forget: this affects backup, restore and
DR scenarios too, in addition to the extra cost on our networking fabric
infrastructure.

- Infrastructure requires too many resources

I am running and operating a large datacenter today,
where I have effectively been able to standardize on only the server roles and
features I need. However, the cost per server is too high when it comes to
utilization, and that really makes an impact on VM density.

Nano Server is designed for the cloud, which means it's
effective and follows a "zero-footprint" model. Server roles and
optional features live outside of the Nano Server image itself, as
stand-alone packages that we add to the image by using DISM. More about that
later.
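As a sketch of what that looks like, the following adds the Compute package to an offline image with DISM (the mount directory and package path are assumptions based on the TP3 media layout; adjust them to your environment):

```powershell
# Mount the Nano Server image (paths are illustrative assumptions)
dism /Mount-Image /ImageFile:C:\Nano\compute.vhd /Index:1 /MountDir:C:\mount

# Add the Hyper-V (Compute) package from the TP3 media
dism /Add-Package /Image:C:\mount /PackagePath:G:\NanoServer\Packages\Microsoft-NanoServer-Compute-Package.cab

# Commit the changes and unmount
dism /Unmount-Image /MountDir:C:\mount /Commit
```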

Nano Server is a “headless”, 64-bit only, deployment
option for Windows Server that according to Microsoft marketing is refactored
to focus on “Cloud OS Infrastructure” and “Born-in-the-cloud applications”.

The key roles and features we have today are the
following:

- Hyper-V

Yes, this is (if you ask me) the key – and the flagship
when it comes to Nano Server. You might remember the stand-alone Hyper-V Server
that was based on the Windows kernel but only ran the Hyper-V role? Well,
Nano Server is much smaller, and its Hyper-V package shares the exact same
architecture as the hypervisor we know from the GUI-based Windows Server
edition.

- Storage (SOFS)

As you probably know already, compute without storage is
quite useless, given the fact that virtual machines are nothing but a set of
files on a disk. :)

With a package for storage, we are able to instantiate
several Nano Servers with the storage role to act as storage nodes based on
Storage Spaces Direct (shared-nothing storage). This is very cool and will of
course qualify for its own blog post in the near future.

- Clustering

Both Hyper-V and Storage (SOFS) rely (in many
situations) on the Windows Failover Clustering feature. Luckily, the clustering
feature serves as its own package for Nano Server, and we can effectively
enable critical infrastructure roles in an HA configuration using clustering.

- Windows Container

This is new in TP3 – and I suggest you read Aidan’s blog
about the topic. However, you won’t be able to test/verify this package on Nano
Server in this TP, as it is missing several of its key requirements and dependencies.

- Guest Package

Did you think that you had to run Nano Server on your
physical servers only? Remember that Nano is designed for the “born-in-the-cloud
applications” too, so you can of course run them as virtual machines. However,
you would have to add the Guest Package to make them aware that they are
running on top of Hyper-V.

In addition, we have packages for OEM Drivers (a package of
all the drivers in Server Core), OneCore ReverseForwarders and Defender.

Remote Management

Nano Server is all about being effective: leveraging the
cloud computing attributes, being scalable and achieving more. In order
to do so, we must understand that Nano Server is all about remote management.

With only a subset of Win32 support, PowerShell Core and ASP.NET 5,
we aren't able to use Nano Server for everything.
But that is also the point here.

Although Nano is refactored to run on CoreCLR, we have
full PowerShell language compatibility and remoting. Examples here are
Invoke-Command, New-PSSession, Enter-PSSession etc.
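A minimal sketch of that remoting workflow against a Nano Server (the computer name and credentials are placeholders):

```powershell
# Computer name and credentials are placeholders
$cred = Get-Credential
$session = New-PSSession -ComputerName 'nanohosttp3' -Credential $cred

# Run a one-off command remotely
Invoke-Command -Session $session -ScriptBlock { Get-Service }

# Or drop into an interactive session
Enter-PSSession -Session $session
```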

Getting started with Nano Server for Compute

Alright, so let us get over to some practical examples on
how to get started with Nano Server for Compute, and how to actually do the configuration.

I must admit that the experience of installing and configuring
Nano wasn't state of the art in TP2.

Now, in TP3, you can see that we have the required
scripts and files located on the media itself, which simplifies the process.

1. Mount the media and dot-source the 'convert-windowsimage.ps1'
and 'new-nanoserverimage.ps1' scripts in a PowerShell ISE session

2. Next, see the following example on how to create
a new image for your Nano Server (this will create a VHD that you can either upload to a WDS server if you want to
deploy it on a physical server, or mount to a virtual machine)

3. By running the cmdlet, you should have a new
image

In our example, we uploaded the VHD to our WDS server (thanks to
Flemming Riis for facilitating this).

If you pay close attention to the $paramHash table, you
can see the following:

$paramHash = @{
    MediaPath                  = 'G:\'
    BasePath                   = 'C:\nano\new'
    TargetPath                 = 'C:\Nano\compute'
    AdministratorPassword      = $pass
    ComputerName               = 'nanohosttp3'
    Compute                    = $true
    Clustering                 = $true
    DriversPath                = "c:\drivers"
    EnableIPDisplayOnBoot      = $true
    EnableRemoteManagementPort = $true
    Language                   = 'en-us'
    DomainName                 = 'drinking.azurestack.coffee'
}

Compute = $true and Clustering = $true.

This means that both the Compute and the Clustering packages will be added to the image. In addition,
since we are deploying this on a physical server, we learned the hard way
(thanks again, Flemming) that we needed some HP drivers for the network and storage
controllers. We are therefore pointing to the location (DriversPath = "c:\drivers") where we extracted the drivers, so they
get added to the image.
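With the hash table defined, the image is created by splatting it to the script we dot-sourced earlier (the media path below is an assumption; use the script's location on your media):

```powershell
# Dot-source the script from the mounted media (path is an assumption)
. 'G:\new-nanoserverimage.ps1'

# Create the Nano Server image using the parameters above
New-NanoServerImage @paramHash
```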

Through this process, we are also pre-creating the
computer name object in Active Directory, as we want to domain-join the box to "drinking.azurestack.coffee".

If you pay attention to the guide at Technet, you can see
how you can set a static IP address on your Nano Server. We have simplified the
deployment process in our fabric as we are rapidly deploying and decommissioning
compute on the fly, so all servers get their IP config from a DHCP server.

Once the servers were deployed (this literally took under
4 minutes!), we could move forward and verify that everything was as we desired.

1) Nano Servers were joined to the domain

2) We had remote access to the Nano Servers

Since Nano Server is all about remote management, we used
the following PowerShell cmdlets in order to configure the compute nodes,
create the cluster etc.
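The cmdlets themselves aren't listed here, but a sketch of that remote configuration could look like this (cluster, node and VM names are placeholders):

```powershell
# Create a failover cluster from the Nano compute nodes (names are placeholders)
New-Cluster -Name 'nanocluster' -Node 'nanohost1','nanohost2' -NoStorage

# Create and start a Nano Server VM on one of the nodes, remotely
Invoke-Command -ComputerName 'nanohost1' -ScriptBlock {
    New-VM -Name 'nanovm01' -MemoryStartupBytes 512MB -VHDPath 'C:\VMs\nanovm01.vhd'
    Start-VM -Name 'nanovm01'
}
```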

The following screen shot shows the Nano Cluster that is running
a virtual machine with Nano Server installed:

NB: I am aware
that my PowerShell cmdlets didn't configure any VM switch as part of the
process. In fact, I have reported that as a bug, as it is not possible to do so
using the Hyper-V module. The VM switch was created successfully using the
Hyper-V Manager console.

Happy Nano’ing, and I will cover more later.

(I also hope that I will see you during our SCU session
on this topic next week)

In the modern world
where organizations are facing new challenges to be more competitive, they are
looking for better ways to improve the quality and efficiency of their IT
Service delivery using the ITIL framework. Gain valuable insights and best
practices on how you can adopt the ITIL framework to Microsoft System Center
and OMS from real world experiences together with Savision’s Jonas Lenntun, and
Microsoft MVPs Robert Hedblom, Kristian Nese, Kevin Greene and Thomas Maurer.

On Tuesday, I will
have the “Early Morning Discussion – Microsoft Azure Stack” together with
Thomas Maurer.

In this session we
will walk you through how Nano Server is changing the fundamental way we look
at fabric servers and workloads. Nano Server will change the way we build
servers and solve fundamental challenges we have encountered over the
past years while embracing cloud fundamentals.

I can guarantee you a lot of breathtaking demos during
this session.

(Although the expected level of this session will be 200,
there will definitely be a lot of PowerShell code to cover, since Nano Server
is a headless x64 server without any local console).

On Wednesday, I
will go solo and talk about “Modern Application Modeling and Configuration for
Infrastructure Clouds”.

For more than two
decades, the way to manage applications on enterprise distributed systems has
followed consistent patterns, and has proven to be very effective. But new
paradigms have emerged and are changing how IT is delivering business value,
and how IT interacts with business units and end users. Among these new
paradigms are: cloud computing (including multi-tenancy and self-service),
DevOps, outsourcing, hosting, and more. These paradigms come with different
layers and assignments of responsibilities that underlying technologies must
implement for the end-to-end process to remain efficient, scalable and
flexible. This session goes through these changes, explains how Microsoft
solutions are adapting to them, and summarizes the vision for modern
application management in infrastructure as a service (whether on-prem, or in
the public cloud or both).

This should be a very interesting session to follow,
where we will walk down memory lane and see where we eventually end up
and how to deal with it.

Later on
Wednesday, I will do my last session – and I am really looking forward to this
one, as it is about a subject that is very close to my heart: “Deep-dive on
Azure Resource Manager”.

Join me to take the
shortcut on Azure Resource Manager (ARM). ARM will definitely have an impact
on your career, and probably has already. Once Azure Stack arrives on-prem, we
will have a true consistency through ARM that will change the way we are
modeling and delivering our services to the clouds. During this session, you
will learn how a template is constructed and how to create and deploy your
cloud resources.

Please note the following:

The ARM session is level 400 – and also a side session. That
means there will only be room for 15 people.

After the session,
I really need to jump into a taxi and get to the airport.

Monday, August 3, 2015

One of the most frequently asked questions I get from my
customers is something like this:

“We have a multi-tenant environment where everything is
now software-defined, including the network by using network virtualization. As
a result of that, we can no longer provide value added services to these
customers, as we don’t have a network path into the environments”.

I won’t get into all of those details, but a common
misunderstanding nowadays is that both enterprises and service providers expect
that they will be able to manage their customers in the same way as they always
have been doing.

The fact that many organizations are now building their
cloud infrastructure with several new capabilities, such as network
virtualization and self-servicing, makes this very difficult to achieve.

I remember back at TechDays in Barcelona, when I got the
chance to talk with one of the finest Program Managers at Microsoft, Mr. Ben
Armstrong.

We had a discussion about this, and he was (as always)
aware of these challenges and said he had some plans to simplify service management
in a multi-tenant environment directly in the platform.

As a result of that, we can now play around with PowerShell Direct in Windows Server
2016 Technical Preview.

Background

Walking down memory lane, we used to have Virtual
Server and Virtual PC when we wanted to play around with virtualization in the
Microsoft world. Both of these solutions were what we call a "type 2 hypervisor",
where all hardware access was emulated through the operating system that
was actually running the virtual instances.

With Windows Server 2008, we saw the first version of
Hyper-V which was truly a type 1 hypervisor.

Central to the architecture of Hyper-V – and also the reason why
I am telling you all of this – is something called the VMBus.

The VMBus is a communication mechanism (high-speed
memory) used for interpartition communication and device enumeration on systems
with multiple active virtualized partitions. The VMBus is responsible for the
communication between the parent partition (the Hyper-V host) and the child
partition(s) (virtual machines with Integration Components installed/enabled).

As you can see, the VMBus is critical for communication
between host and virtual machines, and we are able to take advantage of this
channel in several ways already.

In Windows Server 2012 R2, we got the following:

- Copy-VMFile

Copy-VMFile lets you copy file(s) from a source path to a
specific virtual machine running on the host. This is all done within the
context of the VMBus, so there's no need for network connectivity to the
virtual machines at all. For this to work, you must enable "Guest
Services" on the target VMs as part of the integration services.

Here’s an example on how to achieve this using
PowerShell:

# Enable guest services

Enable-VMIntegrationService -Name 'Guest Service Interface' -VMName mgmtvm -Verbose
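With the guest service enabled, copying a file from the host into the VM over the VMBus could then look like this (the VM name and paths are placeholders):

```powershell
# Copy a file into the VM via the VMBus - no network connectivity required
Copy-VMFile -Name mgmtvm -SourcePath 'C:\temp\app.msi' `
    -DestinationPath 'C:\temp\app.msi' -CreateFullPath -FileSource Host
```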

Another feature that was shipped with Windows Server 2012
R2 was something called “Enhanced Session Mode”. This would leverage a RDP
session via the VMBus.

Using RDP, we could now logon to a virtual machine
directly from Hyper-V Manager and even copy files in and out of the virtual
machine. In addition, USB and printing would also now be possible – without any
network connectivity from the host to the virtual machines.

And now back to the point. With Windows Server 2016, we
will get PowerShell Direct.

With PowerShell Direct we can now in an easy and reliable
way run PowerShell cmdlets and scripts directly inside a virtual machine
without relying on technologies such as PowerShell remoting, RDP and VMConnect.

Leveraging the VMBus architecture, we are literally
bypassing all the requirements for networking, firewall, remote management –
and access settings.
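A minimal sketch of PowerShell Direct in use (note the -VMName parameter instead of -ComputerName; the VM name is a placeholder):

```powershell
# Credentials for the guest OS itself, not the host
$cred = Get-Credential

# Run a one-off command directly inside the VM, over the VMBus
Invoke-Command -VMName 'nanovm01' -Credential $cred -ScriptBlock { Get-Service }

# Or work interactively
Enter-PSSession -VMName 'nanovm01' -Credential $cred
```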

However, there are some
requirements at the time of writing this:

- You must be connected to a Windows 10 or a
Windows Server technical preview host with virtual machines that are running
Windows 10 or Windows Server technical preview as the guest operating system

- You must be logged in with Hyper-V admin credentials on the host

- You need user credentials for the virtual machine!

- The virtual machine that you want to connect to
must run locally on the host and be booted

Clearly, it should be obvious that both the host and the
guest need to be on the same OS level. The reason for this is that VMBus is
relying on the virtualization service client in the guest – and the
virtualization service provider on the host, which need to be the same version.

But what’s interesting to see here is that in order to
take advantage of PowerShell Direct, we need to have user credentials for the
virtual machine’s guest operating system itself.

Also, if we want to perform something awesome within that
guest, we probably need admin permissions too – unless we are able to dance
around that with JEA, but I haven't been able to test that yet.

VERBOSE: Time taken for configuration job to complete is 115.028
seconds

In this example I am using one of the built-in DSC
resources in Windows Server. If I wanted to do more advanced configuration that
would require custom DSC resources, I would have to copy those resources to the
guest using the Copy-VMFile cmdlet first. All in all, I am able to do a lot
around VM management with the new capabilities through the VMBus.
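As a sketch of that pattern, using the built-in File resource (the configuration, VM name and paths are hypothetical):

```powershell
# A simple configuration using the built-in File DSC resource
Configuration DemoConfig {
    Node 'localhost' {
        File AppFolder {
            Type            = 'Directory'
            DestinationPath = 'C:\Apps'
            Ensure          = 'Present'
        }
    }
}

# Compile to a MOF on the host, copy it in via the VMBus, then apply it
DemoConfig -OutputPath 'C:\DSC'
Copy-VMFile -Name 'nanovm01' -SourcePath 'C:\DSC\localhost.mof' `
    -DestinationPath 'C:\DSC\localhost.mof' -CreateFullPath -FileSource Host
Invoke-Command -VMName 'nanovm01' -Credential $cred -ScriptBlock {
    Start-DscConfiguration -Path 'C:\DSC' -Wait -Verbose
}
```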

So, what can we expect to see now that we have the
opportunity to provide management directly, native in the compute platform
itself?

Let me walk you through a scenario here where the tenant
wants to provision a new virtual machine.

In Azure Pack today, we have a VM extension through the
VM Role. If we compare it to Azure and its new API through Azure Resource
Manager, we have even more extensions to play around with.

These extensions give us an opportunity to do more than
just OS provisioning. We can deploy and configure advanced applications just
the way we want to.

Before you continue to read this, please note that I am
not saying that PowerShell Direct is a VM extension, but still something useful
you can take advantage of in this scenario.

So a tenant provisions a new VM Role in Azure Pack, and the
VM Role is designed with a checkbox that says "Enable Managed Services".

Now, depending on how each service provider would like to
define their SLAs etc., the tenant has now made it clear that they want managed
services for this particular VM Role, and hence needs to share/create credentials
for the service provider to interact with the virtual machines.

I’ve already been involved in several engagements in this
scope and I am eager to see the end-result once we have the next bits fully
released.

Thanks to the Hyper-V team with Ben and Sarah, for
delivering value added services and capabilities on an ongoing basis!