a blog by Sander Berkouwer

Last week, I showed you how to perform a simple Hybrid Identity implementation with AD FS on-premises. While this scenario is easy and fast to deploy, it also has a couple of downsides. One of them is the risk of ‘AD FS Unavailability’ and the inability to authenticate to cloud resources when the on-premises environment is unreachable from the Internet.

My weapon of choice to mitigate that risk is Azure Traffic Manager.

In this blog post I share how I see Azure Traffic Manager play its role in making Active Directory Federation Services (AD FS) on Windows Server 2016 and Windows Server version 1709 highly available through geo-redundancy and, thus, failing over between multiple locations if need be.

Traffic Manager uses the Domain Name System (DNS) to direct client requests to the most appropriate endpoint in its configuration, based on the traffic-routing method of your choice and the health of the endpoints. It provides automatic failover.

The four ways Azure Traffic Manager helps AD FS

Azure Traffic Manager is capable of helping make Active Directory Federation Services (AD FS) in multiple locations highly available with any of its four routing methods. Let’s look at these methods using real-life examples:

Performance
Your organization has multiple offices worldwide with corresponding datacenters. Traffic Manager can direct user authentication traffic from Internet-based clients to endpoints in different geographic locations. Azure Traffic Manager directs them to the “closest” endpoint in terms of the lowest network latency.

Geographic
Your organization has deployed AD FS in multiple Azure regions. Like the ‘Performance’ method, Traffic Manager can direct user authentication traffic from Internet-based clients to endpoints in different geographic locations. This time it’s not based on the lowest latency, but on the actual geographic location.

Weighted / Round-Robin
You have multiple AD FS deployments, but they vary in compute power and bandwidth. Traffic Manager can distribute user authentication traffic from Internet-based clients to any combination of Azure and non-Azure endpoints using weights your organization defines. You could use this method to direct 20% of the traffic to the on-premises AD FS implementation in your first datacenter, 20% to your other datacenter and the rest of the traffic to your organization’s Azure Infrastructure-as-a-Service-based AD FS implementation.

Priority
Your organization has a primary AD FS deployment and one or more standby deployments. Traffic Manager directs all user authentication traffic to the primary endpoint and only fails over to the next endpoint in line when the primary becomes unavailable.

While all four Traffic Manager methods have their pros and cons, my customers mainly prefer the Priority method. They chose to implement AD FS in the first place because they need to be in reach of the auditing data to prove they are in control of their authentication mechanisms. The choice for on-premises AD FS comes from the wish to take advantage of the x-ms-client-ip and insidecorporatenetwork claim types and/or automatic Workplace Join and/or Azure AD Join, based on domain membership.

However, I recently configured the Performance routing method for a customer, so we’ll use that method for this example, because it’s way more exciting. Here’s an overview:

Networking

In addition to the network traffic depicted above, we need to make sure our Web Application Proxies can communicate over plain HTTP towards the Internet (or at least towards the Azure Traffic Manager probing IP address ranges).

Name resolution

Normally, we’d point DNS to the external IP address of the load balancer on the edge of the perimeter network featuring the Web Application Proxies. However, since Azure Traffic Manager leverages DNS, we’ll need to create a CNAME record for our sts.domain.tld AD FS farm name in the external DNS zone, pointing towards Azure Traffic Manager. For instance, we’ll create a CNAME record for sts.domain.tld to domainsts.trafficmanager.net.
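In a BIND-style zone file, that CNAME record would look something like this (the names are the examples used throughout this post):

```
; fragment of the external zone for domain.tld
sts    IN    CNAME    domainsts.trafficmanager.net.
```

How you actually create the record depends on where your external DNS zone is hosted; most registrars and DNS providers offer an equivalent through their management portal.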

Next, we’ll need to configure proper name resolution. Azure Traffic Manager uses DNS records to locate the endpoints, so the external IP addresses of the Web Application Proxies you want to integrate with Azure Traffic Manager will need to have a fully-qualified domain name (FQDN) attached.

My suggestion is to use the same naming convention used for the AD FS farm name, but add a location or region to it. This way, an sts.domain.tld farm name would feature EastUsSTS.domain.tld and WestEuropeSTS.domain.tld endpoints, for example.

Step 1: Configuring the second AD FS server

In contrast to the simple deployment scenario, we’ll deploy more than one Active Directory Federation Services (AD FS) server. We don’t have to worry about the scope of the group Managed Service Account (gMSA), because the AD FS server takes care of that.

Deploy a new server running Windows Server 2016 or Windows Server 1709, join it to the Active Directory domain as adfs2.domain.tld and import the AD FS service communications certificate.

Step 2: Configuring the second Web App Proxy

Adding additional Web Application Proxies to an AD FS farm is no different from adding the first.

In the scenario of multiple datacenters, however, you might want to pay specific attention to the contents of the HOSTS file on the Web Application Proxies, to point them to the closest AD FS server instead of the AD FS server in the other datacenter.

The following PowerShell one-liner provides a good way to add a line to the HOSTS file using an elevated PowerShell (ISE) session:
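A sketch of such a one-liner, assuming the nearest AD FS server answers on 10.0.1.5 and the farm name is sts.domain.tld (substitute your own values):

```powershell
# Append a HOSTS entry pointing the AD FS farm name to the nearest AD FS server
Add-Content -Path "$env:windir\System32\drivers\etc\hosts" -Value "`r`n10.0.1.5`tsts.domain.tld"
```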

Enter the credentials of a local administrator account on the AD FS server at the login screen and then the Web Application Proxy is set up.

Step 3: Configuring both Web App Proxies

For optimum load-balancing, we’ll need proper probes. We need to probe the infrastructure for something that is available only when the infrastructure is up and the functionality (the service) is running properly. A mere ping won’t suffice.

While you’d need to install KB2975719 on Windows Server 2012 R2 to enable the /adfs/probe endpoint, in Active Directory Federation Services (AD FS) on Windows Server 2016 this endpoint is available and enabled by default.

Note:
You won’t see the /adfs/probe/ endpoint in the list of endpoints in AD FS Management (.msc), under Service, Endpoints.

However, the /adfs/probe/ endpoint is not reachable by default on Web Application Proxies, so we’ll need to perform a manual action to allow it through the Windows Firewall.

For this purpose, create the following Windows Firewall rule using Windows PowerShell on each of the Web Application Proxy servers you want Azure Traffic Manager to be able to probe. Log on to each Web Application Proxy with an account with local admin privileges, start an elevated PowerShell (ISE) window and issue the following lines of code:
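A sketch of such a rule, assuming the probe is served over plain HTTP on TCP port 80 (the display name is an arbitrary example):

```powershell
# Allow Azure Traffic Manager to reach the /adfs/probe/ endpoint over plain HTTP
New-NetFirewallRule -DisplayName "ATM Probe (HTTP-In)" `
                    -Direction Inbound -Protocol TCP -LocalPort 80 -Action Allow
```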

Note:
You should be aware that this rule allows Azure Traffic Manager to probe the status of each of the Web Application Proxies, and, thus, the availability of the connection and running services on these servers, but not the AD FS services on the AD FS Servers. For the purpose of geo-redundancy, however, this should be sufficient.
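To verify that a Web Application Proxy responds before handing it to Azure Traffic Manager, you can request the probe endpoint yourself; a healthy node answers with HTTP status 200. The hostname below is an example:

```powershell
# Expect StatusCode 200 from a healthy Web Application Proxy
(Invoke-WebRequest -Uri "http://WestEuropeSTS.domain.tld/adfs/probe/" -UseBasicParsing).StatusCode
```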

Step 4: Adding DNS records

Internal DNS

When you want your AD FS farm name to be available for users on-premises, add another DNS A record for the second AD FS server in the internal DNS zone:
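Assuming the farm name sts.domain.tld and a second AD FS server at 10.0.1.6 (example values), the record could be added with the DnsServer PowerShell module on a DNS server:

```powershell
# Add an A record for the AD FS farm name pointing at the second AD FS server
Add-DnsServerResourceRecordA -ZoneName "domain.tld" -Name "sts" -IPv4Address "10.0.1.6"
```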

Step 5: Configuring Azure Traffic Manager

If you have multiple tenants, choose the right tenant by clicking on your name or e-mail address in the top right corner. Select the tenant you want to use from the bottom of the context menu.

Click on the big green plus sign in the left navigation menu to add products and services to your tenant.

In the Search the Marketplace field, search for Traffic Manager profile by beginning to type its name. When Azure suggests it to you, click on it, or use the down arrow key to go to it and then press Enter.

A new blade appears named Traffic Manager Profile. It contains information on what Traffic Manager does, information on its publisher and help links. Click on the Create button on the bottom of the blade.

A new blade appears, labeled Create Traffic Manager profile.

Type and select the following information:

Type the DNS name of your Traffic Manager profile, for instance DomainSTS. The suffix trafficmanager.net will be appended to it to form the FQDN your external DNS zone has a CNAME for: DomainSTS.trafficmanager.net.

Create a new Azure Resource Manager (ARM) resource group, or, if you already have a resource group for Traffic Manager profiles and other load-balancing/high-availability resources, reuse it by selecting it from the drop-down list.

When you create a new resource group, select a Resource group location from the drop-down list; otherwise, continue with the next step.

Select the Pin to dashboard option for your convenience.

Click Create on the bottom of the blade.

You will be redirected back to the Azure Portal dashboard, where you’ll see the profile being provisioned. After that, you’ll be taken into the configuration of your freshly created Azure Traffic Manager profile. If not, click its tile on the dashboard to continue.

In the left navigation blade, click on Configuration under SETTINGS.

In the Configuration blade that appears, click on the Path field under Endpoint monitor settings. Change it to /adfs/probe/.

Click Save on the top of the blade.

In the left navigation blade, click on Endpoints.

Follow the + Add link on the top of the blade.

A new blade appears, labeled Add endpoint.

From the Type drop-down list, select External endpoint.

Type something meaningful as the name. This is a good opportunity to exercise the organization’s naming convention. You might also simply type the hostname of the Web Application Proxy.

For the Fully-qualified Domain Name (FQDN), enter the DNS name you assigned to the external IP address of the Web Application Proxy load balancer in that particular location.

Select a Location from the drop-down list. This determines the end-user device locations to be directed to this particular endpoint.

Click OK.

Now, add any additional endpoints you’d like to include in the Azure Traffic Manager profile, like EastUsSTS.domain.tld. After that, your Endpoints overview should look like this:
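The portal steps above can also be scripted with the Az.TrafficManager PowerShell module. A sketch using the example names from this post (resource group name is an assumption):

```powershell
# Create the profile with the Performance routing method and the /adfs/probe/ monitor path
New-AzTrafficManagerProfile -Name "DomainSTS" -ResourceGroupName "TrafficManagerRG" `
    -TrafficRoutingMethod Performance -RelativeDnsName "domainsts" -Ttl 30 `
    -MonitorProtocol HTTP -MonitorPort 80 -MonitorPath "/adfs/probe/"

# Add one external endpoint per Web Application Proxy deployment
New-AzTrafficManagerEndpoint -Name "WestEuropeSTS" -ProfileName "DomainSTS" `
    -ResourceGroupName "TrafficManagerRG" -Type ExternalEndpoints `
    -Target "WestEuropeSTS.domain.tld" -EndpointLocation "West Europe" -EndpointStatus Enabled
```

Repeat the endpoint cmdlet for each additional location, such as EastUsSTS.domain.tld.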

The New-NetFirewallRule cmdlet has knowledge of protocols beyond TCP, UDP and ICMP: its -Protocol parameter also accepts IP protocols by number. Application protocols like HTTP, HTTPS and SMTP, however, are distinguished by their port numbers, so a rule for the probe endpoint still needs to specify port 80. For more smart examples, see the Microsoft documentation on New-NetFirewallRule.

From Sander Berkouwer, September 9, 2018 at 1:43 PM

Sander, for the AD FS servers, are you using WID? If so, do you know if this is officially supported by Microsoft? I’m willing to live with the fact that the WID database on failover is going to be read-only until the primary AD FS comes online, and forgo SQL geo-clusters.

From Name, October 17, 2018 at 11:22 PM

Hi,

Yes, WID is supported.
SQL replication is used by AD FS in this scenario.

From Sander Berkouwer, October 22, 2018 at 1:00 PM

So what do you mean by “SQL replication is used by AD FS in this scenario”? Your setup is an extension of your WID setup from “the simple Hybrid Identity implementation”, is it not? There does not appear to be any SQL involved.

From Name, October 22, 2018 at 10:16 PM

Windows Internal Database (WID) is a variant of SQL Server Express 2005–2014 that is included with Windows Server.
As per the 6th remark on this Microsoft blogpost, WID uses SQL Replication between AD FS Servers.

From Sander Berkouwer, October 23, 2018 at 7:13 AM

That’s what I thought you meant, just wanted to clarify. Thanks for your feedback, excellent article.
