How can this guide help you? As a medium-sized hosting provider, you can use this solution guide to understand the solution design and implementation steps we recommend to deploy a scalable network infrastructure that supports infrastructure as a service (IaaS). Provisioning tenant networks can be expensive to operate and complex to manage.

This guide helps you deploy a prescriptive and tested IaaS virtual network infrastructure solution that is cost-effective, flexible, scalable, and easy to manage. In addition, it provides your tenants with a simpler, cost-effective way to connect their datacenters to yours to deploy their hybrid cloud solutions.

The following diagram illustrates the problem that this guide addresses: an individual gateway must be provisioned for each tenant, which requires significant configuration, and VLANs scale only to about 1,000 tenants.

This section describes the scenario, problem, and goals for an example organization.

Scenario

A medium-sized hosting provider offers IaaS to its customers. Based on customer demand, it recently started offering a virtual network service.

The hosting provider's Marketing Department has promoted the virtual networking service so successfully that customer demand for it is increasing rapidly.

Problem statement

The hosting provider’s current virtual network service offering doesn’t scale well, and is inefficient and expensive to operate. For example:

Their current design requires two gateways for every tenant (for redundancy), and each pair of gateways requires a public IP address. As the number of tenants has increased, the number of gateways required to support them has increased linearly. This is difficult for the hosting provider to manage. Adding two gateways per tenant is not a cost-effective solution for them.
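The linear growth described above can be made concrete with a rough cost model. The sketch below uses the two-gateways-plus-one-public-IP-per-tenant ratio from this section; the tenant counts are hypothetical.

```python
# Sketch of how the per-tenant gateway design scales (hypothetical tenant counts).
# Each tenant needs 2 gateways (for redundancy), and each gateway pair needs 1 public IP.
def per_tenant_design(tenants):
    gateways = 2 * tenants
    public_ips = tenants  # one public IP address per gateway pair
    return gateways, public_ips

for tenants in (50, 200, 500):
    gateways, ips = per_tenant_design(tenants)
    print(f"{tenants} tenants -> {gateways} gateways, {ips} public IPs")
```

At 500 tenants the provider would be operating 1,000 gateways, which is the management and cost burden the rest of this guide is designed to eliminate.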

If a tenant needs to connect multiple sites, then each tenant site also requires a separate gateway.

They're not currently using an industry-standard routing protocol, so an administrator must manually maintain network routes. This is inefficient and prone to configuration errors.

The current design uses VLANs for network isolation. Their network switches support only 1,000 VLANs, which limits their ability to scale beyond that. Moving a tenant virtual machine to a host in a different physical location often requires an IP address change and switch reconfiguration. This makes moving tenant virtual machines very difficult and leaves little flexibility in their datacenter infrastructure.
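The scaling gap between the two isolation technologies is visible in the size of their identifier fields: the 802.1Q VLAN tag carries a 12-bit VLAN ID, while NVGRE carries a 24-bit Virtual Subnet ID (VSID). The arithmetic below is a sketch of that comparison; note that the 1,000-VLAN figure above is a limit of this provider's particular switches, not of the VLAN protocol itself.

```python
# 802.1Q VLAN IDs are 12 bits; IDs 0 and 4095 are reserved.
vlan_ids = 2**12 - 2   # 4094 usable VLANs at the protocol level
# NVGRE carries a 24-bit Virtual Subnet ID (VSID) in the GRE key field.
nvgre_vsids = 2**24    # about 16.7 million virtual subnets
print(vlan_ids, nvgre_vsids)
```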

Organization goals

The hosting provider needs high availability, cost efficiency, and simplified management to deliver better, cost-competitive services that meet increased customer demand. They want to implement a new solution with the following attributes:

The ability to deploy gateways that can connect multiple tenant networks and multiple sites per tenant at the same time.

The ability to use an industry-standard routing protocol, and a scalable virtual network isolation protocol that isn't limited by current VLAN technologies.

The ability to provide isolated tenant networks using a technology that scales well as the number of tenants and their workloads increase.

A manageable virtual network design that has an easy-to-use management interface that allows them to manage their virtual networks, IP address spaces, and gateways all in one location. This makes it easier and more efficient for them to manage many tenants at a time.

The ability to provide a common self-service portal for tenants, which allows them to efficiently place their computing resources where they best meet their business needs.

The ability to provide easy-to-follow guidance for their customers so that they can easily connect their on-premises network to the hosting provider's through a secure site-to-site virtual private network (VPN). This guidance will include router configuration details covering required protocols, settings, and endpoint addresses.

The following diagram shows the recommended design for this solution, which connects each tenant's network to the hosting provider's multi-tenant gateway using a single site-to-site VPN tunnel. This enables the hosting provider to support approximately 100 tenants on a single gateway cluster, which decreases both management complexity and cost. Each tenant must configure their own gateway to connect to the hosting provider gateway. The gateway then routes each tenant's network data and uses the Network Virtualization using Generic Routing Encapsulation (NVGRE) protocol for network virtualization.

Multi-tenant networking solution design

The following table lists the elements that are part of this solution design and describes the reason for the design choice.

Solution design element

Why is it included in this solution?

Windows Server 2012 R2

Provides the operating system base for this solution. We recommend using the Server Core installation option to reduce the attack surface and to decrease software update frequency.

Windows Server 2012 R2 Gateway

Is integrated with Virtual Machine Manager to support simultaneous, multi-tenant site-to-site VPN connections and network virtualization using NVGRE. For an overview of this technology, see Windows Server Gateway.

Failover Clustering

All the physical hosts are configured as failover clusters for high availability, as are many of the virtual machine guests that host management and infrastructure workloads. The site-to-site VPN gateway can be deployed in a 1+1 configuration for high availability. For more information about Failover Clustering, see Failover Clustering overview.

Scale-out File Server

Provides file shares for server application data with reliability, availability, manageability, and high performance. This solution uses two scale-out file servers: one for the domain that hosts the management servers and one for the domain that hosts the gateway servers. These two domains have no trust relationship. The scale-out file server for the gateway domain is implemented as a virtual machine guest cluster; a separate one is needed because a scale-out file server can't be accessed from an untrusted domain.

Site-to-site VPN

Provides a way to connect a tenant site to the hosting provider site. This connection method is cost-effective, and VPN software is included with Remote Access in Windows Server 2012 R2. (Remote Access brings together the Routing and Remote Access service (RRAS) and DirectAccess.) VPN software and hardware are also available from multiple suppliers.

Windows Azure Pack

Provides a self-service portal for tenants to manage their own virtual networks. Windows Azure Pack provides a common self-service experience, a common set of management APIs, and an identical website and virtual machine hosting experience. Tenants can take advantage of common interfaces (such as Service Provider Foundation), which frees them to move their workloads where it makes the most sense for their business or for their changing requirements. Though Windows Azure Pack is used for the self-service portal in this solution, you can use a different self-service portal if you choose.

Provides Service Provider Foundation (SPF), which exposes an extensible OData web service that interacts with VMM. This enables service providers to design and implement multi-tenant self-service portals that integrate IaaS capabilities that are available on System Center 2012 R2.

You’ll want to ensure that your design is fault tolerant and is capable of supporting your stated availability terms.

Tenant virtual machine Internet access requirements

Consider whether or not your tenants want their virtual machines to have Internet access. If so, you will need to configure the NAT feature when you deploy the gateway.

Infrastructure physical hardware capacity and throughput

You’ll need to ensure that your physical network has the capacity to scale out as your IaaS offering expands.

Site-to-site connection throughput

You’ll need to investigate the throughput you can provide your tenants and whether site-to-site VPN connections will be sufficient.

Network isolation technologies

This solution uses NVGRE for tenant network isolation. You'll want to investigate whether you have, or can obtain, hardware that can optimize this protocol, such as network interface cards and switches.
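One reason offload-capable hardware matters is the encapsulation overhead NVGRE adds to every tenant packet. The figures below are a sketch based on the standard untagged Ethernet, IPv4 (no options), and GRE-with-key header sizes; the 1500-byte underlay MTU is an assumption for illustration.

```python
# NVGRE wraps each tenant frame in outer Ethernet + IPv4 + GRE (with key) headers.
outer_ethernet = 14  # bytes, untagged outer Ethernet header
outer_ipv4 = 20      # bytes, IPv4 header without options
gre_with_key = 8     # 4-byte base GRE header + 4-byte key carrying the 24-bit VSID
overhead = outer_ethernet + outer_ipv4 + gre_with_key
print(overhead)  # bytes added per encapsulated frame

# With a standard 1500-byte MTU on the physical network, the room left for the
# inner Ethernet frame shrinks unless the underlay MTU is raised:
inner_frame = 1500 - (outer_ipv4 + gre_with_key)
print(inner_frame)
```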

Authentication mechanisms

This solution uses two Active Directory domains for authentication: one for the infrastructure servers, and one for the gateway cluster and the scale-out file server for the gateway. If you don't have an Active Directory domain available for the infrastructure, you'll need to prepare a domain controller before you start deployment.

To help with capacity planning, you need to determine your tenant requirements. These requirements will then impact the resources that you need to have available for your tenant workloads. For example, you might need more Hyper-V hosts with more RAM and storage, or you might need faster LAN and WAN infrastructure to support the network traffic that your tenant workloads generate.

Use the following questions to help you plan for your tenant requirements.

Design consideration

Design effect

How many tenants do you expect to host, and how fast do you expect that number to grow?

Determines how many Hyper-V hosts you’ll need to support your tenant workloads.

Using Hyper-V Resource Metering may help you track historical data on the use of virtual machines and gain insight into the resource use of the specific servers. For more information, see Introduction to Resource Metering on the Microsoft Virtualization Blog.

What kind of workloads do you expect your tenants to move to your network?

Can determine the amount of RAM, storage, and network throughput (LAN and WAN) that you make available to your tenants.

What is your failover agreement with your tenants?

Affects your cluster configuration and other failover technologies that you deploy.

Plan your failover cluster strategy based on your tenant requirements and your own risk tolerance. For example, the minimum we recommend is to deploy the management, compute, and gateway hosts as two-node clusters. You can choose to add more nodes to your clusters, and you can guest cluster the virtual machines running SQL, Virtual Machine Manager, Windows Azure Pack, and so on.
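As a rough illustration of how the answers to the sizing questions above drive host counts, the sketch below turns per-tenant workload figures into a Hyper-V host estimate. Every number here (VMs per tenant, RAM per VM, RAM per host, reserve fraction) is a hypothetical planning input, not a recommendation.

```python
import math

# All figures below are hypothetical planning inputs.
tenants = 100
vms_per_tenant = 4
ram_per_vm_gb = 8
host_ram_gb = 256
reserve_fraction = 0.25  # head-room for the host OS, failover, and growth

usable_ram = host_ram_gb * (1 - reserve_fraction)
total_vm_ram = tenants * vms_per_tenant * ram_per_vm_gb
hosts_needed = math.ceil(total_vm_ram / usable_ram)
print(hosts_needed)
```

The same shape of calculation applies to storage and network throughput; Hyper-V Resource Metering can supply the historical per-VM figures to replace the guesses above.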

For the SQL high availability option for this solution, we recommend AlwaysOn Failover Cluster Instances. With this design, all the cluster nodes are located in the same network, and shared storage is available, which makes it possible to deploy a more reliable and stable failover cluster instance. If shared storage is not available and your nodes span different networks, AlwaysOn Availability Groups might be a better solution for you.

Determine your gateway requirements

You need to plan how many gateway guest clusters are required. The number you need to deploy depends on the number of tenants that you need to support. The hardware requirements for your gateway Hyper-V hosts also depend on the number of tenants that you need to support and on their workload requirements.

For capacity planning purposes, we recommend one gateway guest cluster per 100 tenants.
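Using the one-cluster-per-100-tenants guideline above, the number of gateway guest clusters is a simple ceiling division. The tenant counts below are hypothetical.

```python
import math

def gateway_clusters(tenants, tenants_per_cluster=100):
    """Gateway guest clusters needed under the 100-tenants-per-cluster guideline."""
    return math.ceil(tenants / tenants_per_cluster)

for t in (80, 100, 101, 350):
    print(t, gateway_clusters(t))
```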

The design for this solution is for tenants to connect to the gateway through a site-to-site VPN. Therefore, we recommend deploying a Windows Server gateway using a VPN. You can configure a two-node Hyper-V host failover cluster with a two-node guest failover cluster using predefined service templates available on the Microsoft Download Center (for more information, see How to Use a Server Running Windows Server 2012 R2 as a Gateway with VMM).

Design consideration

Design effect

How will your tenants connect to your network?

If tenants connect through a site-to-site VPN, you can use Windows Server Gateway as your VPN termination and gateway to the virtual networks.

This is the configuration that is covered by this planning and design guide.

If you use a non-Microsoft VPN device to terminate the VPN, you can use Windows Server Gateway as a forwarding gateway to the tenant virtual networks.

If a tenant connects to your service provider network through a packet-switched network, you can use Windows Server Gateway as a forwarding gateway to connect them to their virtual networks.

Important

You must deploy a separate forwarding gateway for each tenant that requires a forwarding gateway to connect to their virtual network.

Plan your network infrastructure

For this solution, you use Virtual Machine Manager to define logical networks, VM networks, port profiles, logical switches, and gateways to organize and simplify network assignments. Before you create these objects, you need to have your logical and physical network infrastructure plan in place.

In this step, we provide planning examples to help you create your network infrastructure plan.

The diagram shows the networking design that we recommend for each of the physical nodes in the management, compute, and gateway clusters.

Networking design for cluster nodes

You need to plan for several subnets and VLANs for the different types of traffic that are generated, such as management/infrastructure, network virtualization, external (outbound), clustering, storage, and live migration. You can use VLANs to isolate the network traffic at the switch.

For example, this design recommends the networks listed in the following table. Your exact line speeds, addresses, VLANs, and so on may differ based on your particular environment.

Subnet/VLAN plan

Purpose | Line speed (Gb/s) | Address | VLAN | Comments
Management/Infrastructure | 1 | 172.16.1.0/23 | 2040 | Network for management and infrastructure. Addresses can be static or dynamic and are configured in Windows.
Network Virtualization | 10 | 10.0.0.0/24 | 2044 | Network for the VM network traffic. Addresses must be static and are configured in Virtual Machine Manager.
External | 10 | 131.107.0.0/24 | 2042 | External, Internet-facing network. Addresses must be static and are configured in Virtual Machine Manager.
Clustering | 1 | 10.0.1.0/24 | 2043 | Used for cluster communication. Addresses can be static or dynamic and are configured in Windows.
Storage | 10 | 10.20.31.0/24 | 2041 | Used for storage traffic. Addresses can be static or dynamic and are configured in Windows.
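A plan like this can be sanity-checked in a few lines, confirming that the subnets don't overlap and that each VLAN ID is unique. The sketch below encodes the sample plan above; adjust it to your own plan before relying on it.

```python
import ipaddress
from itertools import combinations

# Subnet/VLAN plan from the table above.
plan = {
    "Management/Infrastructure": ("172.16.1.0/23", 2040),
    "Network Virtualization":    ("10.0.0.0/24",   2044),
    "External":                  ("131.107.0.0/24", 2042),
    "Clustering":                ("10.0.1.0/24",   2043),
    "Storage":                   ("10.20.31.0/24", 2041),
}

# strict=False normalizes a host-style entry such as 172.16.1.0/23
# to its containing network (172.16.0.0/23).
nets = {name: ipaddress.ip_network(subnet, strict=False)
        for name, (subnet, _) in plan.items()}

# VLAN IDs must be unique per network.
vlans = [vlan for _, vlan in plan.values()]
assert len(vlans) == len(set(vlans)), "duplicate VLAN ID in plan"

# Subnets must not overlap.
for (a, na), (b, nb) in combinations(nets.items(), 2):
    assert not na.overlaps(nb), f"{a} overlaps {b}"
print("plan OK")
```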

VMM logical network plan

This design recommends the logical networks listed in the following table. Your logical networks may differ based on your particular needs.

Name | IP pools and network sites | Notes
External | Rack01_External: 131.107.0.0/24, VLAN 2042 | All Hosts
Host Networks | Rack01_LiveMigration: 10.0.3.0, VLAN 2045; Rack01_Storage: 10.20.31.0, VLAN 2041 | All Hosts
Infrastructure | Rack01_Infrastructure: 172.16.0.0/24, VLAN 2040 | All Hosts
Network Virtualization | Rack01_NetworkVirtualization: 10.0.0.0/24, VLAN 2044 | All Hosts

VMM VM network plan

This design uses the VM networks listed in the following table. Your VM networks may differ based on your particular needs.

Name | IP pool address range
External | None
Live migration | 10.0.3.1 – 10.0.3.254
Management | None
Storage | 10.20.31.1 – 10.20.31.254

After you install Virtual Machine Manager, you can create a logical switch and uplink port profiles. You then configure the hosts on your network to use a logical switch, together with virtual network adapters attached to the switch. For more information about logical switches and uplink port profiles, see Configuring Ports and Switches for VM Networks in VMM.

This design uses the following uplink port profiles, as defined in VMM:

VMM uplink port profile plan

Name

General property

Network configuration

Rack01_Gateway

Load Balancing Algorithm: Host Default

Teaming mode: LACP

Network sites:

Rack01_External, Logical Network: External

Rack01_LiveMigration, Logical Network: Host Networks

Rack01_Storage, Logical Network: Host Networks

Rack01_Infrastructure, Logical Network: Infrastructure

Network Virtualization_0, Logical Network: Network Virtualization

Rack01_Compute

Load Balancing Algorithm: Host Default

Teaming mode: LACP

Network sites:

Rack01_External, Logical Network: External

Rack01_LiveMigration, Logical Network: Host Networks

Rack01_Storage, Logical Network: Host Networks

Rack01_Infrastructure, Logical Network: Infrastructure

Network Virtualization_0, Logical Network: Network Virtualization

Rack01_Infrastructure

Load Balancing Algorithm: Host Default

Teaming mode: LACP

Network sites:

Rack01_LiveMigration, Logical Network: Host Networks

Rack01_Storage, Logical Network: Host Networks

Rack01_Infrastructure, Logical Network: Infrastructure

This design deploys the following logical switch using these uplink port profiles, as defined in VMM:

VMM logical switch plan

Name: VMSwitch
Extension: Microsoft Windows Filtering Platform
Uplink port profiles: Rack01_Compute, Rack01_Gateway, Rack01_Infrastructure
Virtual ports: High bandwidth, Infrastructure, Live migration workload, Low bandwidth, Medium bandwidth

The design isolates the heaviest traffic loads on the fastest network links. For example, the storage network traffic is isolated from the network virtualization traffic on separate fast links. If you must use slower network links for some of the heavy traffic loads, you could use NIC teaming.

If you use Windows Azure Pack for your tenant self-service portal, there are numerous options you can configure to offer your tenants. This solution includes some of the VM Cloud features, but there are many more options available to you—not only with VM Clouds, but also with Web Site Clouds, Service Bus Clouds, SQL Servers, MySQL Servers, and more. For more information about Windows Azure Pack features, see Windows Azure Pack for Windows Server.

After reviewing the Windows Azure Pack documentation, determine which services you want to deploy. Because this solution uses Windows Azure Pack only as an optional component, it utilizes only some of the Web Site Clouds features, using an Express deployment with all the Windows Azure Pack components installed on a single virtual machine. If you use Windows Azure Pack as your production portal, however, you should use a distributed deployment and plan for the additional resources required.

Use a distributed deployment if you decide to deploy Windows Azure Pack in production. If you want to evaluate Windows Azure Pack features before deploying in production, use the Express deployment. For this solution, you use the Express deployment to demonstrate the Web Site Clouds service. You deploy Windows Azure Pack on a single virtual machine located on the compute cluster so that the web portals can be accessed from the external (Internet) network. Then, you deploy a virtual machine running Service Provider Foundation on a virtual machine located on the management cluster.

The design includes failover clusters to provide high availability and scalability for the solution.

The following diagram shows the four types of failover clusters that are deployed. Each failover cluster isolates the roles required for the solution.

The following table shows the physical hosts that we recommend for this solution. The number of nodes was chosen to represent the minimum needed to provide high availability. You can add physical hosts to further distribute the workloads to meet your specific requirements. Each host has four physical network adapters to support the networking isolation requirements of the design. We recommend that you use a 10 Gb/s or faster network infrastructure; 1 Gb/s might be adequate for infrastructure and cluster traffic.

When you deploy Hyper-V hosts and virtual machines, it is extremely important to apply all available updates for the software and operating systems used in this solution. If you don't do this, your solution may not function as expected.

You can use the steps in this section to implement the solution. Make sure to verify the correct deployment of each step before proceeding to the next step.

This second Active Directory domain will host your Hyper-V host gateway servers and a scale-out file server for gateway storage. For security reasons, this second Active Directory domain should have no trust relationship with your infrastructure domain.

Important

Ensure both domains can resolve names in the other domain. For example, you can configure a forwarder at each DNS server to point to the DNS server in the other domain.

Deploy the storage nodes and clusters for the management domain.

A scale-out file server hosts the storage for this solution as file shares. This scale-out file server is configured on physical hosts in the management domain. An additional scale-out file server for the gateway domain is implemented later, in virtual machines on the management cluster. For more information about deploying a scale-out file server, see Deploy Scale-Out File Server.

Deploy the management nodes and clusters.

Note

You’ll need to create a temporary virtual switch using Hyper-V Manager so you can install and configure your virtual machines. After VMM is installed, you can define a logical switch in VMM, delete the virtual switch defined in Hyper-V, and configure your hosts to use a virtual switch based on the logical switch defined in VMM.

This host cluster will host the SQL server, VMM, Service Provider Foundation (SPF) server, and scale-out file server (for the gateway domain) virtual machines. The scale-out file server for the gateway domain is implemented in virtual machines and joined to the gateway domain. For more information, see the following topics:

Add a library server, using a share on your scale-out file server. For more information, see How to Add a VMM Library Server or VMM Library Share. When you are prompted to type the computer name, type the name you used when you configured the scale-out file server role. Do not use the cluster name.

Important

When you add a library server, ensure that you use a user account that is different from your VMM service account. If you don’t do this, VMM will silently fail to add the library server and you won’t see any job history indicating an error has occurred.

Disable the Create logical networks automatically setting before you add any hosts. You’ll manually create logical networks with specific settings later. This setting is located in Settings, Network Settings.

You should add the Scale-Out File Server cluster in the Fabric, Storage, File Servers category. You should add the management cluster (and eventually the compute cluster) under All Hosts. To help organize the hosts, you should create additional host groups (for example, Compute, and Management) and place the appropriate clusters in the host groups.

Important

When you deploy a scale-out file server for the gateway domain, you need to open the public Windows Remote Management (HTTP-In) port on both nodes of the guest cluster. This port needs to be opened because the VMM server and gateway cluster exist in separate, untrusted domains and that port is not open by default for the Public profile.

After you add the cluster, you can configure storage locations for the virtual machines that are deployed to nodes in the cluster. Open the Properties page for the cluster and add a share from your scale-out file server on the File Share Storage page.

Create the planned logical networks and associated IP pools.

For this solution, you can create a logical network for External (Internet), Infrastructure, Host Networks (with Cluster IP Pool and Live Migration IP Pool), and Network Virtualization networks. Note that these are sample names—you can use your own names according to your plan. Create the appropriate IP pools for each logical network according to your plan, making sure that the IP address ranges don’t overlap with any existing IP addresses in use.
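One way to check the no-overlapping-ranges requirement before entering the pools in VMM is a quick script. The Live migration and Storage ranges below are the sample ranges from the VM network plan earlier; the Clustering pool and the `ranges_overlap` helper are hypothetical, for illustration only.

```python
import ipaddress
from itertools import combinations

# Sample IP pool ranges (first address, last address) from the plan tables;
# the Clustering entry is a hypothetical pool for the cluster subnet.
pools = {
    "Live migration": ("10.0.3.1", "10.0.3.254"),
    "Storage":        ("10.20.31.1", "10.20.31.254"),
    "Clustering":     ("10.0.1.1", "10.0.1.254"),
}

def ranges_overlap(a, b):
    """True if two inclusive integer ranges share any value."""
    (a1, a2), (b1, b2) = a, b
    return max(a1, b1) <= min(a2, b2)

# Convert dotted addresses to integers so ranges compare numerically.
as_ints = {
    name: (int(ipaddress.ip_address(lo)), int(ipaddress.ip_address(hi)))
    for name, (lo, hi) in pools.items()
}
for (na, ra), (nb, rb) in combinations(as_ints.items(), 2):
    assert not ranges_overlap(ra, rb), f"{na} overlaps {nb}"
print("no overlapping IP pools")
```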

You configure the Host Networks logical network as a VLAN-based independent network, and configure the others as One connected network.

Select the Microsoft Windows Filtering Platform for the Extensions, select Team for the Uplink mode, and add the three uplink port profiles that you created previously.

Add the following virtual ports: High bandwidth, Infrastructure, Live migration workload, Low bandwidth, and Medium bandwidth.

Create a teamed virtual switch on a management node.

Add a virtual switch to the management host cluster node. This is the node that doesn’t have any virtual machines associated with it.

To do this in VMM, locate the host node in the Fabric, Servers pane, open the Properties page, and add a virtual switch on the New Virtual Switch page.

Add your two fastest physical adapters to form a team and choose the Infrastructure Uplink port profile. Then add two virtual network adapters for Live Migration and Storage.

When you’re done, verify that your virtual switch looks similar to the following:

Important

You might need to make some configuration changes to the physical switch ports that these network adapters are connected to. If you’re using LACP for teaming, you’ll need to configure the switch ports for LACP. If your switch ports are configured in Access Mode (for untagged packets), you need to configure them in Trunk Mode, because tagged packets will be coming from the teamed adapters.

For troubleshooting purposes, you can use the following Windows PowerShell cmdlets:

Get-NetLbfoTeam, Get-NetLbfoTeamMember, and Get-NetLbfoTeamNic

To see other related cmdlets, type Get-Command *lbfo*.

Configure your migration settings.

Now that you have your live migration adapter configured on the virtual switch, you can configure your migration settings on each node's Properties, Migration Settings page. Configure your desired settings, and ensure that your live migration subnet address has been added and is at the top of the list. The subnet is entered as a single IP address with a 32-bit mask: x.x.x.x/32. So, if your live migration virtual network adapter's address is 10.0.3.6, the Migration Settings page may look similar to the following:
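The /32 form described above can be illustrated with Python's ipaddress module: a 32-bit mask makes the "subnet" a single-host network. The 10.0.3.6 address is the example from the text, and 10.0.3.0/24 is the sample live migration subnet from the plan tables.

```python
import ipaddress

# A live migration entry is a single IP with a 32-bit mask, e.g. 10.0.3.6/32.
entry = ipaddress.ip_network("10.0.3.6/32")
print(entry.num_addresses)    # 1 -- exactly one host
print(entry.network_address)  # 10.0.3.6

# The adapter itself sits in the wider live migration subnet:
subnet = ipaddress.ip_network("10.0.3.0/24")
print(ipaddress.ip_address("10.0.3.6") in subnet)  # True
```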

Live migrate your virtual machines.

Now that you have a host configured with a virtual switch configured using VMM, you can migrate your virtual machines to it so you can prepare the other node the same way.

To migrate your virtual machines, in VMM, select the VMs and Services workspace, select the node in your management cluster that has the virtual machines running on it, right-click the running virtual machine, and click Migrate Virtual Machine. Select the other node and move the virtual machine.

Delete the virtual switch that was originally created using Hyper-V Manager.

Now that you have moved the virtual machines, you can delete the original virtual switch that you created with Hyper-V Manager.

Create a new teamed virtual switch using VMM.

After you delete the old virtual switch, you can create a new teamed virtual switch like you did with the previous node. Follow the previous step to create the virtual switch on this node using VMM.

Live migrate some virtual machines back.

Now that you have both nodes configured with a teamed virtual switch using VMM, you can migrate some of the virtual machines back. For example, move one of the SQL guest cluster nodes so that you have the guest cluster nodes split across the host cluster nodes. Do this for all the other guest clusters.

After this step is complete, you should have both of your management host cluster nodes installed with the management virtual machines and the host node networking configured through VMM.

You can install the compute Hyper-V cluster in much the same way that you installed the management cluster:

Deploy the Hyper-V hosts and join the management domain.

Cluster the hosts and add the cluster to your VMM Compute Host group.

Create the teamed virtual switch and the live migration and storage virtual adapters for both host nodes like you did for both of the management nodes. When you team the physical adapters, use the Compute Uplink port profile for the adapters.

Add file share storage.

Configure a storage location for the virtual machines deployed to nodes in the cluster. Open the Properties page for the cluster and add a share from your scale-out file server on the File Share Storage page.

Deploy the gateway.

To deploy the Windows Server gateway in Windows Server 2012 R2, you deploy a dedicated Hyper-V host cluster and then deploy the gateway virtual machines using VMM. The Windows Server gateway provides a connection point for multiple tenant site-to-site VPN connections. You follow a similar procedure to deploy the physical hosts, but then you use a VMM service template to deploy the guest cluster virtual machines.

To deploy the Windows Server gateway, use the following procedure:

Deploy the Hyper-V hosts and join the gateway domain.

Cluster the hosts and add the cluster to your VMM Gateway Host group.

Create the teamed virtual switch and the live migration and storage virtual adapters for both host nodes like you did for both of the management and compute nodes. When you team the physical adapters, use the Gateway Uplink port profile for the adapters.

Add file share storage.

Configure a storage location for the virtual machines that are deployed to nodes in the cluster. Open the Properties page for the cluster and add a share from your scale-out file server on the File Share Storage page.

Ensure that you have a file share available from VMM (where you have a Windows Server 2012 R2 .vhd or .vhdx file available). This file will be used by the VMM service template to deploy the gateway virtual machines.

Configure hosts as gateway hosts.

You must configure each gateway Hyper-V host as a dedicated network virtualization gateway. In VMM, right-click a gateway host and click Properties. Click Host Access and select the check box This host is a dedicated network virtualization gateway, as a result it is not available for placement of virtual machines requiring network virtualization.

The service template that you use to deploy the gateway includes a Quick Start Guide document. This document includes information about how to set up the infrastructure for the gateway deployment. This information is similar to the information provided in this solution guide. You can skip the infrastructure steps in the Quick Start Guide that are already covered in this solution guide.

When you reach the final configuration steps and run the Add a network service wizard, your Connection String page will look similar to the following:

And the Connectivity property of your gateway network service will look similar to the following:

After this step is complete, verify that two jobs in the log have completed successfully:

Update network service device

Add connection network service device

Tip

If you need to deploy a gateway guest cluster on a regular basis (for example, to address resource demands), you can customize the service template using the Service Template Designer. For example, you can customize the OS Configuration settings to join a specific domain, use a specific product key, or use a specific computer name configuration.

Caution

Do not modify the gateway service template to make the virtual machines highly available. The gateway service template intentionally leaves the Make this virtual machine highly available check box in the Advanced\Availability area unchecked. The virtual machines are configured as nodes of a guest cluster, but it’s important to not change this setting. Otherwise, during failover, the customer addresses (CA) won’t associate with the new provider address (PA) and the gateway will not function properly.

Verify gateway functionality.

Verify that there is connectivity between a test virtual machine and the hosts located on a test tenant network.

Use the following steps to verify that your gateway and VM networks are functioning correctly.

Establish a site-to-site VPN connection.

How you connect your test tenant network will vary depending on the equipment you use to establish the VPN connection. Remote Access (which combines DirectAccess and the Routing and Remote Access Service (RRAS)) is one way to connect to your gateway. To see an example procedure using RRAS to connect to the gateway, see “Install RRAS on Contoso EDGE1 and create a site-to-site VPN connection to GatewayVM1 running on HNVHOST3” in Test Lab Guide: Windows Server 2012 R2 Hyper-V Network Virtualization with System Center 2012 R2 VMM.

Tip

For other VPN devices, the connectivity requirements are similar to the Windows Azure VPN connection requirements. For more information, see About VPN Devices for Virtual Network.

View the site-to-site VPN connection on your gateway.

After you establish the VPN connection, you can use Windows PowerShell commands, together with the new ping options introduced in Windows Server 2012 R2, to verify the VPN connection.
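For example, on an RRAS-based gateway you can check the state of the site-to-site interface from PowerShell, and the ping -p option added in Windows Server 2012 R2 lets you ping a provider address (PA) directly. This is a sketch; the PA address shown is hypothetical:

```powershell
# On the gateway virtual machine: list site-to-site interfaces and their state.
# ConnectionState should show "Connected" for an established tunnel.
Get-VpnS2SInterface | Format-Table Name, Destination, ConnectionState

# From a Hyper-V host: ping a provider address directly with the -p option.
# 10.10.0.5 is a hypothetical PA on the provider network.
ping -p 10.10.0.5
```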

After you verify that you have a successful site-to-site connection to your gateway, you can deploy a test virtual machine and connect it to the test VM network on your hosting service provider network.

After you deploy your test virtual machine, you should verify that it has network connectivity to remote resources in the tenant on-premises network over the Internet through the multi-tenant site-to-site gateway.
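One way to check this from inside the test virtual machine is with ping or Test-NetConnection against a resource in the tenant's on-premises network. The address below is a hypothetical on-premises server:

```powershell
# Run inside the test VM on the tenant VM network.
# 192.168.1.10 is a hypothetical server in the tenant's on-premises network.
Test-NetConnection -ComputerName 192.168.1.10

# Trace the path to confirm traffic traverses the site-to-site gateway.
Test-NetConnection -ComputerName 192.168.1.10 -TraceRoute
```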

A tenant self-service portal allows your tenants to create their own virtual networks and virtual machines with minimal hosting service provider involvement. Service providers can design and implement multi-tenant self-service portals that integrate the IaaS capabilities available in System Center 2012 R2. Service Provider Foundation (SPF) exposes an extensible OData web service that interacts with VMM.

Windows Azure Pack is a Microsoft self-service portal solution that integrates with VMM using SPF. It offers a web site portal similar to Windows Azure, so if your tenants are also Windows Azure customers, they will already be familiar with the user interface presented in Windows Azure Pack. To demonstrate Windows Azure Pack features for this solution, an express Windows Azure Pack deployment is used. This deploys the required features on a single server. If you want to deploy Windows Azure Pack in production, you should use the distributed deployment. For more information, see Windows Azure Pack installation requirements.

After you have completed the procedure to register the SPF endpoint for virtual machine clouds, you should see the cloud that you created in VMM on the Windows Azure Pack administrator portal.

From the Windows Azure Pack administrator portal, author a plan that you can use to test with. For example, you could author a plan called Gold Plan with the following properties:

Properties    Settings

Name          Gold Plan

Services      Virtual Machine Clouds

After the plan is created, click it to continue the configuration. Click the Virtual Machine Clouds service and configure the VMM Management Server, Virtual Machine Cloud, and usage limits. Click Save to complete the virtual machine clouds configuration. Click the back button and finally click Change Access to make the plan public.

You may have tenants that want to deploy virtual machines that are directly connected to the Internet. They may have connection needs that require no NAT in the connection path.

Or, you may have tenants that need connectivity directly to a physical network—for example, a VLAN with co-located hardware, or a packet-switched network such as a multiprotocol label switching (MPLS) network.

You can support these requirements using a forwarding gateway connected to a VM network used exclusively for directly connected virtual machines. You then create subnets on the VM network for each tenant. You can use extended port access control lists to isolate each of the tenant virtual machines and control the network traffic in and out of their virtual machines.
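Extended port ACLs are applied per virtual network adapter with the Hyper-V PowerShell cmdlets. The following sketch uses a hypothetical tenant VM name and subnet range to allow inbound HTTPS while blocking traffic from a neighboring tenant's subnet:

```powershell
# "Tenant1-Web" is a hypothetical VM on the directly connected VM network.
# Allow inbound HTTPS; the higher weight gives this rule precedence.
Add-VMNetworkAdapterExtendedAcl -VMName "Tenant1-Web" -Action Allow `
    -Direction Inbound -LocalPort 443 -Protocol TCP -Weight 100

# Deny inbound traffic from another tenant's subnet (hypothetical range).
Add-VMNetworkAdapterExtendedAcl -VMName "Tenant1-Web" -Action Deny `
    -Direction Inbound -RemoteIPAddress 10.20.2.0/24 -Weight 10
```

Because rules are evaluated by weight, you can layer broad deny rules beneath narrow allow rules for each tenant.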

Here’s how to do it:

Deploy a gateway using the service template in the same way as in the original solution.

Note the cluster front end IP address and name of the new VM gateway cluster. You will use this information in the connection string used in the next step.

Create a new network service in VMM to deploy the forwarding gateway service. Use a connection string similar to the following:
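The forwarding gateway connection string again uses semicolon-delimited key=value pairs, typically adding a direct-routing flag and the front end address you noted in the previous step. The values below are hypothetical, and key names may vary by VMM version:

```
VMHost=FGWHostCluster.contoso.com;GatewayVM=FGWVMCluster.contoso.com;BackendSwitch=BackendSwitchName;DirectRoutingMode=True;FrontEndServerAddress=131.107.0.5
```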

If your newly deployed forwarding gateway is not properly forwarding packets to the VM subnet configured for direct routing, double-check that you followed the previous procedure correctly. If you are still experiencing problems, make sure the frontend interfaces on the forwarding gateway are configured for forwarding. To do this, use the following steps:

Log on to one of the forwarding gateway guest cluster virtual machines.

From a Windows PowerShell administrator command prompt, use Get-NetIPInterface to examine the IP interfaces. Note the ifIndex number for the interface associated with your frontend network.
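If the frontend interface shows Forwarding as Disabled, you can enable it with Set-NetIPInterface. The ifIndex value 12 below is a placeholder for the index you noted:

```powershell
# List interfaces; note the ifIndex and Forwarding state of the frontend adapter.
Get-NetIPInterface | Format-Table ifIndex, InterfaceAlias, AddressFamily, Forwarding

# Enable forwarding on that interface (12 is a hypothetical ifIndex).
Set-NetIPInterface -InterfaceIndex 12 -Forwarding Enabled
```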