Virtualization and some coffee

Monday, March 2, 2015

Every now and then, there comes a time when I really need to ramp up on certain things.

It can be a new technology, a new product, or a new way of doing things.

This kind of journey is never easy, and I am the kind of person who doesn’t stop before I have reached a certain level of satisfaction. I expect a lot from myself and have crazy self-discipline.

Starting early this year, I went deep into DSC to learn
more about something that will be impossible to avoid in the next couple of
months.

Before continuing,
I just want you to know that this will not be yet another blog post that
explains the importance of Powershell, which you need to learn ASAP or else you
will "flip burgers in the future".

Working with Azure Pack and Azure over the last few years has made me much more creative.

Instead of relying on out-of-the-box products where we were limited to the actions provided by the GUI, we can now easily create our own custom solutions, integrating several APIs, modules and so on to create new opportunities for our business.

Let us stop for a second on Azure. Microsoft Azure.

We have been talking about the Cloud OS and cloud consistency for over a year now, and we should all be very familiar with Microsoft’s vision and strategy around this topic.

“Mobile first, cloud first” in particular gives us a hint that whatever comes next will appear in Microsoft Azure first.

In the context of DSC, we can already leverage some Azure VM extensions and features in our IaaS VMs today.

And that is really the background of this blog post.

Microsoft Azure provides us with several VM extensions, either directly from Microsoft or from third parties, that enable security, runtime, debugging, management and other features that will boost your productivity when working with IaaS VMs in Azure.

When you deploy a virtual machine in the Azure portal,
you can decide whether or not the VM Extension should be enabled.

We have several extensions available, all depending on
what we are trying to achieve.

The extensions I find most interesting belong to the category of “Deployment and Configuration Management”.

First, let us talk about a VM extension for “MSEnterpriseApplication”.

Using this extension, we effectively implement the features that support VM Role resource extensions – the same ones we can leverage on-premises with Azure Pack and Service Provider Foundation.

To add this extension, the VM must already exist in Azure
and have the Azure Guest Agent pre-installed.

Running the following cmdlet from the Azure module gives us more details about the extension.
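Here is a sketch using the classic (Service Management) Azure module; the publisher string is my assumption, so drop the filter if it doesn’t match what you see in an unfiltered listing:

    # List the details (publisher, version, description) of the extension
    Get-AzureVMAvailableExtension -ExtensionName "MSEnterpriseApplication" |
        Format-List ExtensionName, Publisher, Version, Description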

With this extension enabled in the VM, we can use the VM Role Authoring Tool to author our resource extension (that is, the package we normally import into VMM, which contains the application payload). The latest version lets us deploy directly to Azure.

If you would rather use Powershell, you should view the Powershell functionality of the tool and save only the portion of the script that assigns a value to $plainSettings to a text file.

From here, you can store the content of the text file in a variable ($plainSettings) and update your VM as sketched below.
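A minimal sketch with placeholder file, service and VM names; the extension name, publisher and version strings are my assumptions, so verify them against the extension listing above:

    # Read the saved settings and push them to the VM as the extension's public configuration
    $plainSettings = Get-Content -Path .\ResourceExtensionSettings.txt -Raw
    Get-AzureVM -ServiceName "MyCloudService" -Name "MyVM" |
        Set-AzureVMExtension -ExtensionName "MSEnterpriseApplication" `
            -Publisher "Microsoft.SystemCenter" `
            -Version "1.0" `
            -PublicConfiguration $plainSettings |
        Update-AzureVM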

So the fact that we now have a single tool where we can author and deploy our resource extensions (application payload) to IaaS VMs in both WAP and Azure is good news. However, it is not idempotent.

This is where Desired State Configuration comes into the
picture.

Built on the Common Information Model (CIM) and using Windows Remote Management (WinRM) as the communication mechanism, DSC is like putting your Powershell scripts on steroids.

I know I will get a lot of Powershell experts on my neck
here, but that is at least one way to visualize what DSC is.

Let us say you create a script, deploy it to a node and
then you are done.

If someone makes changes to that configuration afterwards, the Powershell script would neither notice nor care.

A Desired State Configuration can ensure that there won’t be any configuration drift, by (for example) applying and monitoring the configuration.
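To make the contrast concrete, here is a minimal configuration sketch; unlike a plain script, the declared state below can be re-applied and monitored by DSC:

    Configuration WebServer
    {
        Node "localhost"
        {
            # DSC will keep ensuring that IIS is present,
            # instead of installing it once and forgetting about it
            WindowsFeature IIS
            {
                Ensure = "Present"
                Name   = "Web-Server"
            }
        }
    }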

That enforcement is handled by the Local Configuration Manager (LCM), which you can consider an “agent”, although it is not an agent per definition.

So, looking at the capabilities of DSC, we can quickly understand
how important this will be for any in-guest management solution moving forward.

To use the Azure Powershell DSC VM extension, you must have the Azure Powershell module installed. The DSC extension handler has a dependency on Windows Management Framework (WMF) version 5 – which is currently in preview and only supported on Windows Server 2012 R2. WMF 5.0 will automatically be installed in your IaaS VM as a Windows Update once the extension is enabled, and requires a reboot.

The following cmdlets are specific to DSC:

Publish-AzureVMDscConfiguration – uploads a DSC script to Azure blob storage, which can later be applied to your IaaS VMs using the Set-AzureVMDscExtension cmdlet

Get-AzureVMDscExtension – gets the settings of the DSC extension on a particular VM

Remove-AzureVMDscExtension – removes the DSC extension from a VM

Set-AzureVMDscExtension – configures the DSC extension on a VM

Here’s a very easy example of how to apply a DSC script to your VM in Azure, assuming you have already created the script.
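This sketch uses placeholder names and assumes WebServer.ps1 contains a configuration function named “WebServer”:

    # Upload the configuration (packaged as a .zip) to Azure blob storage
    Publish-AzureVMDscConfiguration -ConfigurationPath .\WebServer.ps1

    # Point the DSC extension on the VM at the uploaded archive
    $vm = Get-AzureVM -ServiceName "MyCloudService" -Name "MyVM"
    $vm = Set-AzureVMDscExtension -VM $vm `
            -ConfigurationArchive "WebServer.ps1.zip" `
            -ConfigurationName "WebServer"
    $vm | Update-AzureVM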

Once this cmdlet is executed, the following will happen
within the VM:

1) WMF 5.0 (the latest version) is downloaded and installed on the server

2) The extension handler looks in the specified Azure container (which is defined when you connect with your subscription) for the .zip file

3) The archive is unpacked, any dependent modules are moved into the PS module path, and the specified configuration function runs

Add to this that the extension also accepts parameters, and you get an understanding of how flexible, dynamic and powerful the DSC VM extension will be.
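Continuing the sketch from above, parameters are passed to the configuration function through a hashtable (the parameter name below is a placeholder from my example configuration):

    # Pass arguments into the configuration at apply time
    $vm = Set-AzureVMDscExtension -VM $vm `
            -ConfigurationArchive "WebServer.ps1.zip" `
            -ConfigurationName "WebServer" `
            -ConfigurationArgument @{ NodeName = "localhost" }
    $vm | Update-AzureVM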

Now, this was all about Microsoft Azure.

What about the things that are taking place in Azure
Pack?

I briefly mentioned the VM Role Authoring Tool earlier in this blog post, and it will play an important role in this setting.

The research I have been doing this year isn’t easy to fit within a single blog post, especially not if I were to describe all the errors and mistakes I have made as part of this journey :)

I have been trying to simulate the Azure experience in Windows Azure Pack, but unfortunately, that is an impossible challenge as we don’t have the same possibilities when it comes to interaction through the API. I am only able to achieve some of the good parts, but that again will qualify for some blog posts in the near future.

Before you start thinking “no, it is not that hard to simulate the exact experience”, I would like to remind you that everything I do in this context will always use Network Virtualization with NVGRE, so there is no data channel from the datacenter into the tenant environment whatsoever.

If you find this interesting and want to learn more about DSC with Azure and Azure Pack, I have to point out the spectacular blog post series by Ben Gelens, where he has done a very good job explaining the complete setup of an entire DSC environment (using pull), including the authoring of the required VM Role.

Monday, February 23, 2015

Ever since the release of Windows Azure Pack, I’ve been a
strong believer of software-defined datacenters powered by Microsoft
technologies. Especially the story around NVGRE has been interesting and
something that Windows Server, System Center and Azure Pack are really
embracing.

In this blog post, I will:

· Show how you should design VMM to deliver – and use dedicated VLANs for your tenants

· Show how to structure and design your hosting plans in Azure Pack

· Customize the plan settings to avoid confusion

How to design VMM to deliver – and use dedicated VLANs to your tenants

Designing and implementing a solid networking structure
in VMM can be quite a challenging task.

We normally see that during setup and installation of
VMM, people don’t have all the information they need. As a result, they have
already started to deploy a couple of hosts before they are actually paying
attention to:

1) Host groups

2) Logical networks

3) Storage classifications

Needless to say, it is very difficult to make changes to this afterwards, when you have several objects in VMM with dependencies and deep relationships.

So let us just assume that we are able to follow the
guidelines and pattern I’ve been using in this script:

The fabric controller script will create host groups based on physical locations, with child host groups that contain different functions.
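A minimal sketch of that pattern using the VMM module, with placeholder location and function names:

    # Parent host group per physical location, child host groups per function
    $location = New-SCVMHostGroup -Name "Oslo"
    New-SCVMHostGroup -Name "IaaS"       -ParentHostGroup $location
    New-SCVMHostGroup -Name "Management" -ParentHostGroup $location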

For all the logical networks in that script, I am using “one
connected network” as the network type.

This creates a 1:many mapping of VM networks to each logical network, and simplifies scalability and management.

For the VLAN networks though, I will not use the “one connected network” type, but rather “VLAN-based independent networks”.

This
will effectively let me create a 1:1 mapping of a VM network to a specific
VLAN/subnet within this logical network.
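A sketch of the two network types as the VMM cmdlets express them; the names are placeholders, and the switch values mirror what the VMM console generates when you view the script:

    # "One connected network" - here with network virtualization for NVGRE
    New-SCLogicalNetwork -Name "Cloud Network" `
        -LogicalNetworkDefinitionIsolation $false `
        -EnableNetworkVirtualization $true -UseGRE $true -IsPVLAN $false

    # "VLAN-based independent networks" - used for the tenant VLANs
    New-SCLogicalNetwork -Name "Tenants VLAN" `
        -LogicalNetworkDefinitionIsolation $true `
        -EnableNetworkVirtualization $false -UseGRE $false -IsPVLAN $false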

The following screenshot shows the mapping and the design
in our fabric.

Now the big question: why VLAN-based independent network
with a 1:1 mapping of VM network and VLAN?

As I will show you really soon, the type of logical
network we use for our tenant VLANs gives us more flexibility due to isolation.

When we are adding the newly created logical network to a
VMM Cloud, we simply have to select the entire logical network.

But when we are
creating Hosting Plans in Azure Pack admin portal/API, we can now select the
single and preferred VM Network (based on VLAN) for our tenants.

The following screenshot from VMM shows our Cloud that is
using both the Cloud Network (PA network space for NVGRE) and Tenants VLAN.

So once we have the logical network enabled at the cloud
level in VMM, we can move into the Azure Pack section of this blog post.

Azure Pack is multi-tenant by definition and lets you – together with VMM and the VM Cloud resource provider – scale and modify the environment to fit your needs.

When using NVGRE as the foundation for our tenants, we are able to use Azure Pack “out of the box” and have a single hosting plan – based on the VMM cloud where we added our logical network for NVGRE – and tenants can create and manage their own software-defined networks. For this, we only need a single hosting plan, as every tenant is isolated on their own virtualized network.

Of course – there might be other valid reasons to have different hosting plans, such as SLAs, VM Roles and other service offerings. But for NVGRE, everyone can live in the same plan.

This changes once you are using VLANs. If you have a dedicated
VLAN per customer, you must add the dedicated VLAN to the hosting plan in Azure
Pack. This will effectively force you to create a hosting plan per tenant, so
that they are not able to see/share the same VLAN configuration.

The following architecture shows how this scales.

In the hosting plan in Azure Pack, you simply add the dedicated VLAN to the plan, and it will be available once the tenant subscribes to it.

Bonus info:

With Update Rollup 5 for Azure Pack, we now have a new setting that simplifies life for all the VLAN tenants out there!

I’ve always said that “if you give people too much
information, they’ll ask too many questions”.

It seems like the Azure Pack product group agrees, as we now have a new setting at the plan level in WAP that says “disable built-in network extension for tenants”.

So let us see how this looks in the tenant portal when we are accessing a hosting plan that has this setting enabled:

This will ease the confusion for these tenants, as they were never able to manage any network artefacts in Azure Pack when VLANs were used anyway. They will, of course, still be able to deploy virtual machines/roles into the VLAN(s) that are available in their hosting plan.

Sunday, February 15, 2015

I just assume that you have read Marc van Eijk's well-described blog post about the new enhancement in Update Rollup 5 for SCVMM, where we can now effectively turn off differencing disks for all our new VM Role deployments with Azure Pack.

As a result of this going public, I have uploaded a new version of my SCVMM Fabric Controller script, which will now add another custom property to all the IaaS clouds in SCVMM, assuming you want static disks to be the default.
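For illustration, a sketch of what that part of the script does; the property name below is a placeholder only, so use the actual name from Marc's post:

    # Create the custom property for clouds if it doesn't exist, then stamp every cloud
    $prop = Get-SCCustomProperty -Name "DisableDiffDisks"
    if (-not $prop) {
        $prop = New-SCCustomProperty -Name "DisableDiffDisks" -AddMember @("Cloud")
    }
    foreach ($cloud in Get-SCCloud) {
        Set-SCCustomPropertyValue -InputObject $cloud -CustomProperty $prop -Value "true"
    }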

Monday, February 2, 2015

Sharing VNet between subscriptions in Azure Pack

From time to time, I get into discussions with customers
on how to be more flexible around networking in Azure Pack.

Today, each subscription is a boundary: a co-admin can have access to multiple subscriptions, but you are not allowed to “share” anything between those subscriptions, such as virtual networks.

So here’s the scenario.

A tenant subscribes to multiple subscriptions in Azure
Pack. Each subscription is based on its associated Hosting Plan, which is something
that is defined and exposed by the service administrator (the backend side of
Azure Pack). A Hosting Plan can contain several offerings, such as VM Clouds,
web site Clouds and more. The context as we move forward is the VM Cloud.

Let us say that a customer has two subscriptions today. Each subscription has its own tenant administrator.

Monday, January 26, 2015

While deploying Windows Azure Pack, several factors play their part when it comes to the design and layout of the solution. As you may be aware, Windows Azure Pack contains a lot of different sites, APIs and resource providers – all so that you can enable and realize Azure technologies within your own datacenter.

It’s more than a glorified self-service portal, so the requirements for design, load and scale can be overwhelming for some customers.

Before I get to the big point of this blog post, I would
like to put it into some context first.

Normally at customer sites, we see the following
different designs when it comes to Windows Azure Pack.

Express

Organizations that just want to test and play around deploy the single-install, express setup of Windows Azure Pack. This installs all the sites and APIs onto a single virtual machine, and the organization can easily add resource providers to start testing this powerful cloud-enablement tool.

Although I have seen some examples where the Express setup has been used in production, it is far from what we recommend. The public-facing parts of Azure Pack, such as the Tenant Public API, Tenant Site and eventually the Tenant Authentication Site, are directly exposed on the internet. Having everything on the same virtual machine will increase the attack surface as well as lead to performance, HA and scale issues.

Configuration requirements using this design:

There aren’t any hard requirements using the Express solution, as we like to think that people are only using it in lab and test. However, if you want to make it available and actually use it across firewalls, you will have to perform the following (see the sketch after this list):

· Reconfigure the tenant site (FQDN, certificate and port)

· Reconfigure the tenant authentication site (FQDN, certificate and port)

· Reconfigure the tenant public API (FQDN, certificate and port)

Optional:

· Reconfigure the admin site (FQDN, certificate and port)

· Reconfigure the admin authentication site (FQDN, certificate and port)
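As a sketch, each of these reconfigurations follows the same pattern with the MgmtSvcConfig module (the FQDN and SQL server name are placeholders); repeat for the “AuthSite” and “TenantPublicAPI” namespaces:

    # Repoint the tenant site to a public FQDN on port 443
    Import-Module MgmtSvcConfig
    Set-MgmtSvcFqdn -Namespace "TenantSite" `
        -FullyQualifiedDomainName "cloud.contoso.com" -Port 443 `
        -Server "SQLWAP"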

Basic

For some of the smaller customers where HA is not the most important thing, we often see a basic implementation of Windows Azure Pack. This means we have a single virtual machine running the high-privileged services – such as the Admin API, Admin Site, Tenant API and eventually the Admin Authentication Site – together with the default resource providers. This virtual machine is located behind the firewall and, in most cases, within the same Active Directory domain as its resource providers (SCVMM+SPF, SQL, Service Bus, Web Sites etc.).

For the public-facing part (the parts mentioned before, directly exposed on the internet), they use another, dedicated virtual machine, which might be located in a DMZ and available on the internet.

Of course, both the high-privileged VM and the internet
facing VM are running on a Hyper-V cluster so that the VMs themselves are
highly available.

Configuration requirements using this design:

I strongly recommend using a highly available WAP design
whenever you plan to put it into production. But in this design, the only
presence of HA is at the hypervisor level.

Minimal distribution

The most common design of Windows Azure Pack – and normally what I am at least recommending – is where we have at least two virtual machines for the high-privileged services, configured as highly available behind a load balancer, and the same for the internet-facing part.

This will indeed require load balancers and VIPs, but
also some additional reconfiguration when it comes to the Azure Pack
environment.

Configuration requirements using this design:

Having the high-privileged services as well as the internet-facing parts scaled across several virtual machines helps us address performance, availability and scale issues. To make this work, you will have to reconfigure each site and API – as listed for the Express design – to use the load-balanced FQDNs instead of individual machine names.

So whenever you plan to scale out and ensure HA across
all sites and APIs, you have to reconfigure the components as mentioned with
the Minimal Distribution design. The same rules apply if you intend to be more
drastic around this, having dedicated VMs for each and every site and API. The
reconfiguration is still mandatory.

Windows Azure Pack has been available for over a year now, and the majority of organizations are adopting the VM Cloud resource provider. The good thing here is that even if you have scaled out the SPF endpoint, you simply add the endpoint to the admin API and everything is handled.

There’s really not much reconfiguration required if you
have configured SPF correctly with FQDN and certificates upfront.

What’s more of a concern is when you want to add resource
providers such as SQL server(s) and/or MySQL server(s).

By default, when you install the first high-privileged server with the admin API, admin site and so on, you also get the default resource providers, such as SQL, MySQL, Usage, Monitoring, Service Bus and Marketplace. Their FQDNs are bound to the computer name of this machine.

Once you add a second – or even a third – VM that should be located behind a load balancer together with the first VM, these resource providers must also be reconfigured so that they point not toward an individual virtual machine, but toward an FQDN that is associated with a VIP behind the load balancer.

Reconfiguring the default Resource Providers – and why that can be a pain

In order to reconfigure the Windows Azure Pack portals, APIs and resource providers, we have to instrument the databases in a supported way. The supported way is through Powershell, and together with my good friend Flemming Riis, I have covered how to reconfigure the high-privileged services – as well as the internet-facing parts – in some earlier blog posts.

As a result, I won’t cover it all over again, but rather refer to those URLs, hoping you will notice them, read them, and then continue reading this blog post, as I am about to reach my point.

You are probably familiar with the reconfiguration of the tenant and admin stuff by now, and understand that we have several sets of APIs and portals involved. At the end of the day, everything here should interact nicely together, being able to reach each other and expose the right set of information to both an administrator and a tenant.

If we look at the resource providers directly in the database, we can see that we have several endpoints for each and every resource provider.

We have an endpoint for the resource provider when it
comes from the admin API, and we have an endpoint for the resource provider
when coming from the tenant site and API.

In addition, each resource provider has endpoints for usage and notification too.
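To close with a sketch of how that reconfiguration looks; the connection string, encryption key and URLs below are placeholders, and the SQL resource provider is just an example:

    # Load the resource provider from the WAP store database
    $connString = "Data Source=SQLWAP;Initial Catalog=Microsoft.MgmtSvc.Store;Integrated Security=True"
    $key        = "<your WAP encryption key>"
    $rp = Get-MgmtSvcResourceProvider -Name "sqlservers" -IncludeSystemResourceProviders `
            -ConnectionString $connString -EncryptionKey $key

    # One forwarding address per consumer: admin, tenant, usage and notification
    $rp.AdminEndpoint.ForwardingAddress        = "https://wapadminapi.contoso.com:30010/"
    $rp.TenantEndpoint.ForwardingAddress       = "https://wapadminapi.contoso.com:30010/"
    $rp.UsageEndpoint.ForwardingAddress        = "https://wapadminapi.contoso.com:30022/"
    $rp.NotificationEndpoint.ForwardingAddress = "https://wapadminapi.contoso.com:30023/"

    # Write the updated endpoints back to the database
    Set-MgmtSvcResourceProvider -ResourceProvider $rp -ConnectionString $connString -EncryptionKey $key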