Monday, January 26, 2015

While deploying Windows Azure Pack, several factors play a part in the design and layout of the solution. As you may be aware, Windows Azure Pack contains a number of different sites, APIs and resource providers – all so that you can enable and realize Azure technologies within your own datacenter.

It’s more than a glorified self-service portal, so the requirements around design and load can be overwhelming for some customers.

Before I get to the big point of this blog post, I would
like to put it into some context first.

Normally at customer sites, we see the following
different designs when it comes to Windows Azure Pack.

Express

Organizations that just want to test and play around deploy the single-install, express setup of Windows Azure Pack. This installs all the sites and APIs onto a single virtual machine, and the organization can easily add resource providers to start testing this powerful cloud-enabled toolset.

Although I have seen some examples where the Express setup has been used in production, it is far from what we recommend. The public-facing parts of Azure Pack, such as the Tenant Public API, Tenant Site and, where used, the Tenant Authentication Site, are directly exposed on the internet. Having everything on the same virtual machine increases the attack surface and leads to performance, HA and scale issues.

Configuration requirements using this design:

There aren’t any hard requirements for the Express solution, as we like to think that people only use it in lab and test environments. However, if you want to make it available and actually use it across firewalls, you will have to perform the following:

·Reconfigure tenant site (FQDN, certificate and port)

·Reconfigure tenant authentication site (FQDN, certificate and port)

·Reconfigure tenant public API (FQDN, certificate and port)

Optional:

·Reconfigure admin site (FQDN, certificate and port)

·Reconfigure admin authentication site (FQDN, certificate and port)

Basic

For some of the smaller customers where HA is not the most important thing, we often see a basic implementation of Windows Azure Pack. This means a single virtual machine running the high-privileged services – such as the Admin API, Admin Site, Tenant API and, where used, the Admin Authentication Site – together with the default resource providers. This virtual machine is located behind the firewall and, in most cases, within the same Active Directory domain as its resource providers (SCVMM+SPF, SQL, Service Bus, Web Sites etc.).

For the public-facing part (the parts mentioned before, directly exposed on the internet), they use a separate, dedicated virtual machine, which might be located in a DMZ and available on the internet.

Of course, both the high-privileged VM and the internet
facing VM are running on a Hyper-V cluster so that the VMs themselves are
highly available.

Configuration requirements using this design:

I strongly recommend using a highly available WAP design
whenever you plan to put it into production. But in this design, the only
presence of HA is at the hypervisor level.

Minimal Distribution

The most common design of Windows Azure Pack, and normally the one I recommend, is where we have at least two virtual machines for the high-privileged services, configured as highly available behind a load balancer, and the same for the internet-facing part.

This will indeed require load balancers and VIPs, but
also some additional reconfiguration when it comes to the Azure Pack
environment.

Configuration requirements using this design:

Having the high-privileged services as well as the internet-facing parts scaled across several virtual machines helps us address performance, availability and scale issues.

You will have to perform the same reconfiguration as listed for the Express design – tenant site, tenant authentication site and tenant public API, plus the admin counterparts – with the FQDNs now resolving to the VIPs on the load balancer rather than to individual machines.

So whenever you plan to scale out and ensure HA across all sites and APIs, you have to reconfigure the components as mentioned for the Minimal Distribution design. The same rules apply if you intend to be more drastic, having dedicated VMs for each and every site and API. The reconfiguration is still mandatory.
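Once the FQDNs point at load-balanced VIPs, the federation trust between the portals and the authentication sites must be refreshed as well, so that logins redirect to the VIP addresses rather than an individual node. A sketch under the same assumed FQDNs as before:

```powershell
# Assumptions: tenant auth site at auth.contoso.com, tenant site at manage.contoso.com.
$connectionString = 'Data Source=SQL01;Initial Catalog=Microsoft.MgmtSvc.Store;Integrated Security=SSPI'

# Point the tenant portal at the (load-balanced) authentication site
Set-MgmtSvcRelyingPartySettings -Target Tenant -ConnectionString $connectionString `
    -MetadataEndpoint 'https://auth.contoso.com/FederationMetadata/2007-06/FederationMetadata.xml'

# Point the authentication site back at the (load-balanced) tenant portal
Set-MgmtSvcIdentityProviderSettings -Target Membership -ConnectionString $connectionString `
    -MetadataEndpoint 'https://manage.contoso.com/FederationMetadata/2007-06/FederationMetadata.xml'
```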

Windows Azure Pack has been available for over a year now, and the majority of organizations are adopting the VM Cloud resource provider. The good thing here is that even if you have scaled out the SPF endpoint, you simply register that endpoint with the admin API and everything is handled.

There’s really not much reconfiguration required if you
have configured SPF correctly with FQDN and certificates upfront.
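If you want to verify which SPF endpoint the VM Cloud resource provider is actually forwarding to, you can query the configuration store through PowerShell. The 'systemcenter' namespace name and the connection string below are assumptions based on a default installation:

```powershell
$connectionString = 'Data Source=SQL01;Initial Catalog=Microsoft.MgmtSvc.Store;Integrated Security=SSPI'

$rp = Get-MgmtSvcResourceProvider -ConnectionString $connectionString |
    Where-Object { $_.Name -eq 'systemcenter' }

# These forwarding addresses should point at your (scaled-out) SPF endpoint
$rp.AdminEndpoint.ForwardingAddress
$rp.TenantEndpoint.ForwardingAddress
```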

What’s more of a concern is when you want to add resource
providers such as SQL server(s) and/or MySQL server(s).

By default, when you install the first high-privileged server with the admin API, admin site and so on, you also get the default resource providers, such as SQL, MySQL, Usage, Monitoring, Service Bus and Marketplace. Their FQDNs are bound to the computer name of this machine.

Once you add the second – or even a third – VM behind a load balancer together with the first VM, these resource providers must also be reconfigured so that you are not pointing towards an individual virtual machine, but towards an FQDN associated with a VIP behind the load balancer.
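A sketch of that reconfiguration – enumerating the resource providers and replacing the computer name in each forwarding address with the load-balanced FQDN. The host names here are assumptions for illustration, and you should test this against a non-production store first:

```powershell
# Assumptions: first WAP VM named 'WAPADMIN01', VIP FQDN 'adminapi.contoso.com'.
$connectionString = 'Data Source=SQL01;Initial Catalog=Microsoft.MgmtSvc.Store;Integrated Security=SSPI'
$oldHost = 'WAPADMIN01'
$newHost = 'adminapi.contoso.com'

foreach ($rp in Get-MgmtSvcResourceProvider -ConnectionString $connectionString) {
    foreach ($endpoint in @($rp.AdminEndpoint, $rp.TenantEndpoint,
                            $rp.UsageEndpoint, $rp.NotificationEndpoint)) {
        if ($endpoint -and $endpoint.ForwardingAddress) {
            # Swap the computer name for the VIP FQDN, keeping scheme/port/path
            $endpoint.ForwardingAddress = [Uri]($endpoint.ForwardingAddress.AbsoluteUri -replace $oldHost, $newHost)
        }
    }
    # Write the updated resource provider back to the configuration store
    Set-MgmtSvcResourceProvider -ResourceProvider $rp -ConnectionString $connectionString
}
```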

Reconfiguring the default Resource Providers – and why that can be a pain

In order to reconfigure the Windows Azure Pack portals, APIs and resource providers, we have to update the databases in a supported way. The supported way is through PowerShell, and together with my good friend Flemming Riis, I have covered how to reconfigure the high-privileged services – as well as the internet-facing parts – in some earlier blog posts.

As a result, I won’t cover it all again, but rather refer to those URLs, hoping you will notice them, read them and then continue with this blog post, as I am about to reach my point.

You are probably familiar with the reconfiguration of the tenant and admin parts by now, and understand that we have several sets of APIs and portals involved. At the end of the day, everything here should interact nicely, being able to reach each other and expose the right set of information to both an administrator and a tenant.

If we look at the resource providers directly in the database, we can see that there are several endpoints for each and every resource provider.

We have an endpoint for the resource provider when it
comes from the admin API, and we have an endpoint for the resource provider
when coming from the tenant site and API.

In addition, each resource provider has endpoints for usage and notification too.
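To see those endpoint types side by side for every resource provider, something like this works as a quick overview (same assumed connection string as before):

```powershell
$connectionString = 'Data Source=SQL01;Initial Catalog=Microsoft.MgmtSvc.Store;Integrated Security=SSPI'

Get-MgmtSvcResourceProvider -ConnectionString $connectionString |
    Select-Object Name,
        @{ n = 'Admin';        e = { $_.AdminEndpoint.ForwardingAddress } },
        @{ n = 'Tenant';       e = { $_.TenantEndpoint.ForwardingAddress } },
        @{ n = 'Usage';        e = { $_.UsageEndpoint.ForwardingAddress } },
        @{ n = 'Notification'; e = { $_.NotificationEndpoint.ForwardingAddress } } |
    Format-Table -AutoSize
```

Any endpoint still showing a computer name instead of a load-balanced FQDN is a candidate for the reconfiguration described above.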