
Welcome back fellow geeks to the third post in my series covering Azure AD Privileged Identity Management (AAD PIM). In my first post I provided an overview of the service and in my second post I covered the initial setup and configuration of PIM. In this post we're going to take a look at role activation and approval as well as look behind the scenes to see if we can figure out what makes the magic of AAD PIM work.

The lab I'll be using consists of a non-domain-joined Microsoft Windows 10 Professional version 1803 virtual machine (VM) running on Hyper-V in my home lab. The VM has a local user configured that is a member of the Administrators group. I'll be using Microsoft Edge and Google Chrome as my browsers and running Telerik's Fiddler to capture the web conversation. The users in this scenario are sourced from the Journey Of The Geek tenant; one is licensed with Office 365 E5 and EMS E5 and the other with just EMS E5. The tenant is not synchronized from an on-premises Windows Active Directory. The user Homer Simpson has been made eligible for the Security Administrator role.

With the intro squared away, let’s get to it.

The first thing I'll do is navigate to the Azure Portal and authenticate as Homer Simpson. As expected, since the user is not Azure MFA enforced, he is allowed to authenticate to the Azure Portal with just a password. Once I'm in the Azure Portal I need to go into AAD PIM, which I do from the shortcut I added to the user's dashboard.

Navigating to the My roles section of the menu I can see that the user is eligible for the Security Administrator Azure Active Directory (AAD) role.

Selecting the Activate link opens up a new section where the user will complete the necessary steps to activate the role. As you can see from my screenshot below, the Security Administrator role is one of the roles Microsoft considers high risk, and it enforces step-up authentication via Azure MFA. Selecting the Verify your identity before proceeding link opens up another section that informs the user they need to verify their identity with an MFA challenge. If the user isn't already configured for MFA, they will be set up for it at this stage.

Homer Simpson is already configured for MFA so after the successful response to the MFA challenge the screen refreshes and the Activation button can now be clicked.

After clicking the Activation button I enter a new section where I can configure a custom start time, configure an activation duration (up to the maximum configured for the role), provide ticketing information, and provide an activation reason. As you can see I've adjusted the max duration for an activation from the default of one hour to three hours and have configured a requirement to provide a ticket number. This could be mapped back to your internal incident or change management system.
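For the curious, the options on this screen map to a fairly simple request shape. Here's a hedged sketch of the kind of body such an activation submits. The field names follow the modern Microsoft Graph PIM API (roleAssignmentScheduleRequests), which post-dates this walkthrough, so treat them as illustrative rather than what the portal sent at the time:

```python
from datetime import datetime, timezone


def build_activation_request(role_definition_id, principal_id, reason,
                             ticket_number, duration_hours=3):
    """Sketch of a PIM self-activation request body.

    Field names mirror the Microsoft Graph roleAssignmentScheduleRequests
    resource and are illustrative; the portal used an older PIM API when
    this post was written.
    """
    return {
        "action": "selfActivate",
        "principalId": principal_id,
        "roleDefinitionId": role_definition_id,
        "directoryScopeId": "/",
        "justification": reason,
        # maps back to your internal incident/change management system
        "ticketInfo": {"ticketNumber": ticket_number},
        "scheduleInfo": {
            "startDateTime": datetime.now(timezone.utc).isoformat(),
            # ISO 8601 duration; capped by the maximum configured on the role
            "expiration": {"type": "afterDuration",
                           "duration": f"PT{duration_hours}H"},
        },
    }
```

The three-hour duration and ticket number configured above would land in scheduleInfo and ticketInfo respectively.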

After filling in the required information I click the Activate button, the screen refreshes back to the main request screen, and I'm informed that activation for this role requires approval. In addition to modifying the activation duration and requiring a ticket number, I also configured the role to require approval.

At this point I opened an instance of Google Chrome and authenticated to Azure AD as a user who is in the Privileged Role Administrator role. Opening up AAD PIM with this user, navigating to the My roles section, and looking at the Active roles shows the user is a permanent member of the Security Administrator, Global Administrator, and Privileged Role Administrator roles.

I then navigate over to the Approve requests section. Here I can see the pending request from Homer Simpson requesting activation of the Security Administrator role. I'm also provided with the user's reason and start and end time. I'd like to see Microsoft add a column for the user's ticket number; my approving user may want to reference the ticket for more detail on why the user is requesting the role.

At this point I select the pending request and click the Approve button. A new section opens where I need to provide the approval reason after which I hit the Approve button.

After approving, the blue synchronization-like icon refreshes to a green check mark indicating the approval has been processed and the user's role is now active.

If I navigate to the My audit history section, I can see the approval of Homer's request has been logged, as well as the reasoning I provided for my approval.

If I bounce back to the Microsoft Edge browser instance that Homer Simpson is logged into and navigate to the My requests section, I can see that my activation has been approved and is now active.

At this point I have requested the role and the role has been approved by a member of the Privileged Role Administrators role. Let's try modifying an AIP policy. Navigating back to Homer Simpson's dashboard, I select the Azure Information Protection icon and receive the notification below.

What happened? Navigating to Homer Simpson's mailbox shows the email confirming the role has been activated.

What gives? To figure out the answer to that question, I’m going to check on the Fiddler capture I started before logging in as Homer Simpson.

In this capture I can see my browser sending my bearer token to various AIP endpoints and receiving a 401 return code with an error indicating the user isn’t a member of the Global Administrators or Security Administrators roles.

I'll export the bearer token, base64 decode it, and stick it into Notepad. Let's refresh the web page and try accessing AIP again. As we can see, AIP opens without issue this time.

At this point I dumped the bearer token from the failure and the bearer token from a success and compared the two as seen below. The iat, nbf, and exp claims simply represent times specific to the token (issued-at, not-before, and expiration). I can't find any documentation on the aio or uti claims. If anyone has information on those two, I'd love to see it.
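If you want to repeat this comparison yourself, the claims segment of a bearer token is just base64url-encoded JSON, so it can be decoded with a few lines of Python instead of Notepad. A minimal sketch (no signature validation is performed; this is strictly for inspecting claims):

```python
import base64
import json


def decode_jwt_payload(token):
    """Decode the claims (middle) segment of a JWT bearer token.

    Performs no signature validation -- this is only for inspecting
    claims such as iat, nbf, and exp, as done with Fiddler here.
    """
    payload = token.split(".")[1]
    # base64url segments drop their '=' padding; restore it before decoding
    payload += "=" * (-len(payload) % 4)
    return json.loads(base64.urlsafe_b64decode(payload))
```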

I thought it would be interesting at this point to deactivate my access and see if I could still access AIP. To deactivate a role, the user simply accesses AAD PIM, goes to My roles, and looks at the Active roles section as seen below.

After deactivation I went back to the dashboard and was still able to access AIP. After refreshing the browser I was unable to access AIP. Since I didn't see any obvious cookies or access tokens being created or deleted, my guess at this point is that applications that use Azure AD or Office 365 roles have some method of receiving data from AAD PIM. A plausible scenario is that an application receives a bearer token and queries Azure AD to see if the user is a member of one of the relevant roles for the application. Perhaps for eligible roles there is an additional piece of information indicating the timespan the user has the role activated, and that time is checked against the time the bearer token was issued. That would explain my experience above, because the bearer token my browser sent to AIP was obtained prior to activating my role. I verified this by comparing the bearer token issued from the delegation endpoint at first login to the one sent to AIP after I tried accessing it after activation. Only after a refresh did I obtain a new bearer token from the delegation endpoint.
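My working theory above boils down to a simple time comparison. This is purely a hypothetical model of the behavior I observed, not documented AAD PIM logic:

```python
def token_reflects_activated_role(token_iat, activation_start, activation_end):
    """Hypothetical check modeling the observed behavior: a bearer token
    only carries an activated (eligible) role if it was issued during the
    activation window. All times are Unix timestamps, like the iat claim.
    """
    return activation_start <= token_iat <= activation_end
```

This would explain both the initial 401 (token issued before activation) and why AIP briefly kept working after deactivation until the browser refresh fetched a new token.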

Well folks that’s it for this blog entry. If you happen to know the secret sauce behind how AAD PIM works and why it requires a refresh I’d love to hear it! See you next post.

Welcome back to part 2 of my series on Azure Active Directory Privileged Identity Management (AAD PIM). In the first post I covered the basics of the service. If you haven’t read it yet, take a few minutes to read through it because I’ll be jumping right into using the service going forward. In this post I’m going to cover the setup process for AAD PIM.

Before you can begin using AAD PIM, you’ll need to purchase a license that includes the capability. As we saw in my last post, at this time that means a standalone Azure AD Premium P2 or Enterprise Mobility + Security E5 license. Once the license is registered as being purchased by your tenant, you’ll be able to setup AAD PIM.

Your first step is to log into the Azure Portal. After you’ve logged in you’ll want to click the Create a Resource button and search for Azure AD Privileged Identity Management.

Select the search result and the AAD PIM application will be displayed with a Create button. Click the Create button to spin the service up for your tenant.

It will only take a few seconds and you’ll be informed the service has successfully been spun up and you’ll be given the option to add a link to your dashboard.

The global admin who added AAD PIM to the tenant will become the first member of the Privileged Role Administrator role. This is a new role that was introduced with the service. Members of this role are your administrators of AAD PIM and have full read and write access to it. Be aware that other global admins, security administrators, and security readers only have read access to it. As soon as you successfully spin up the service, you'll want to add another Privileged Role Administrator as a backup.

Upon opening AAD PIM for the first time, you’ll receive a consent page as seen below. The consent process requires confirmation of the user’s identity using Azure MFA. If the user isn’t enabled for it, it will be configured at this point.

After successfully authenticating with Azure MFA, the screen will update to show the status check was completed as seen below. This is a great example of Microsoft exercising the concept of step-up authentication. The user may have authenticated to the Azure Portal with a password or perhaps a still-valid session cookie. By prompting for an Azure MFA challenge, the assurance that the user is who they claim to be is that much higher, reducing the risk of the user accessing such sensitive configuration options.

After clicking the Consent button the service becomes fully usable. The primary menu options are displayed as seen in the picture below. For the purpose of this post we’re going to click on the Azure AD directory roles option under the Manage section.

The Manage section of the menu is refreshed and a number of new options are displayed. Before I jump into the Wizard, I’ll navigate through each option in the section to explain its purpose.

The Roles option gives us a view of all of the users who are members of privileged roles within Azure AD and Office 365. The activation column shows whether the user is a permanent or eligible admin. The expiration column shows any user that is eligible and has actively requested and been approved for temporary access to the privileged role. As you can see from the screenshot from my test tenant, I have a number of users in the global admin role, which is a big no-no. We'll remediate that in a bit using the Wizard.

Selecting the Add user button brings up a new screen where new users can be configured for privileged roles. Microsoft has done a good job of giving AAD PIM the capability of managing a multitude of Azure AD and Office 365 roles. Adding users to roles through this tool will automatically make the user eligible for the role rather than a permanent member, as adding them through other means would.

The Filter button allows for robust filtering options including the permission state (all, eligible, permanent), activation state (all, active, or inactive), and by role. The Refresh button’s function is obvious and the group option allows you to group the data either by user or by role. The Review button allows you to kick off an access review which we’ll cover in a later post. Lastly we have the Export button which exports the data to a CSV.

The Users option under the Manage section presents the same options as the Roles option except it takes a user-centric view.

The Alerts option under the Manage section displays the alerts referenced here. You can see it is alerting me to the fact I have too many permanent global admins configured for my tenant. I also have the option to run a manual scan rather than waiting on the next automatic scan.

The Access Reviews option under the Manage section is used to create new access reviews. I'll cover that capability in a future post.

Skipping over the Wizard option for a moment, we have the Settings option. Here we can configure a variety of settings for roles, alerts, and access reviews.

Let’s dig into the settings for roles first.

Here we can configure the default settings for all roles as well as settings specific to one role. When a user successfully activates a privileged role, the membership in that role is time bound with a default of one hour. If after doing some baselining we find one hour is insufficient, we could bump it up to something higher. We can also configure notifications to notify administrators of activation of a role. There is also the option to require an incident or request reference that may map back to an internal incident management or request management system. Azure MFA can be required when a user activates a role. You’ll want to be aware that the MFA setting is automatically enforced for roles Microsoft views as critical such as global administrator.

Finally we have the option to require an approval. If you've played around with AAD PIM since preview, you may remember the approval workflow. For some reason the product team removed it when AAD PIM originally went generally available. This effectively meant users could elevate their access whenever they wanted. Sure, they weren't permanent members, but there were no checks and balances. For organizations with a high security posture it made AAD PIM of little value and forced the on-demand management of privileged roles to be done using complicated PowerShell scripts or third-party tools integrated with the Graph API. It's wonderful to see the product team responded to customer feedback and has added the feature back.

Toggling the require approval option to Enable adds a section where you can select approvers for requests for the role.

Moving on to the Alerts settings we have the ability to configure parameters for some of the alerting as can be seen from the examples below.

The default values for the configurable thresholds around the "There are too many global administrators" alert should be a good wake-up call to organizations as to the risk Microsoft associates with global admin access. The thresholds for the "Roles are being activated too frequently" alert should be left at the defaults until the behavior of your user base is better understood. This will help you identify deviations from standard behavior indicating a possible threat, as well as identify opportunities to improve the user experience by bumping up the activation time span for users holding privileged roles for whom the hour-long default activation time is insufficient.

Lastly we have the Access Review settings. Here we can enable or disable mail notifications to reviewers at the beginning and end of an access review. Reminders can also be sent to reviewers who have not completed a review they are a part of. A very welcome feature is the option to require reviewers to provide reasons for their approvals during a review. This can be helpful to capture business requirements as to why a user needs continued access to a role. Finally, the default access review duration can be adjusted.

Now on to the Wizard. The Wizard is a great tool to use when you first configure AAD PIM in order to get it up and running and begin capturing behavioral patterns. The steps within the Wizard are outlined below.

The Discover privileged roles step displays a simple summary of the privileged roles in use and the number of permanent and eligible users. We can see from the below that my tenant has exceeded one of the default thresholds for the "There are too many global admins" alert: either more than 3 global admins or global admins exceeding 10% of users. Selecting any of the roles displays a listing of the users holding permanent or eligible membership in the role.
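The alert evaluation can be approximated in a few lines. Microsoft's exact logic is internal, so treat this as a sketch using the default thresholds described above:

```python
def too_many_global_admins(admin_count, total_users,
                           max_admins=3, max_fraction=0.10):
    """Approximation of the 'There are too many global administrators'
    alert: fires when either the absolute count or the percentage-of-users
    threshold is exceeded. The evaluation logic is an assumption; the
    default thresholds mirror those shown in the portal.
    """
    return (admin_count > max_admins
            or admin_count > max_fraction * total_users)
```

For example, a tenant with 4 global admins out of 100 users trips the absolute threshold, while 3 admins in a 10-user tenant trips the percentage one.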

Clicking the Next button brings us to the "Convert users to eligible" step where we can begin converting permanent members to eligible members. From a best practices perspective, you should keep at least two permanent members in the Privileged Role Administrator role to avoid being locked out if one account becomes unavailable. You can see that I'm making Ash Williams and Jason Voorhies eligible members of the global admin role.

After clicking the Next button I’m moved to the “Review the changes to your users in the privileged roles” step. I commit the changes by hitting the OK button and my two users are now setup as eligible members of the roles.

As you’ve seen throughout the post AAD PIM is incredibly easy to configure. I firmly believe that the only successful security solutions moving forward will be solutions that are simple to use and transparent to the users. These two traits will allow security professionals to focus less of their time on convoluted solutions and more time working directly with the business to drive real value to the organization.

I’m going to start something new with a quick bulleted list of key learning points that I came across while performing the lab and doing the research for the post.

- AAD PIM can be configured after the first Azure AD Premium P2 or EMS E5 license is associated with the tenant.

- Be aware that at this time Microsoft does not enforce a technical control to prevent all users from benefiting from PIM, but the licensing requirements require an individual license for each user benefiting from the feature. Make sure you're compliant with the licensing requirements and don't build processes around what technical controls exist today. They will change.

- Once AAD PIM is activated by the first global admin, immediately assign a second user permanent membership in the Privileged Role Administrators role.

That's it folks. In the next post in my series I'll take a look at what the user experience is like for a requestor and an approver. I'll also look at some Fiddler captures to see if I can capture any detail as to how/if the modified privileges are reflected in the logical security token.

Welcome to the fifth entry in my series on the evolution of Microsoft’s Active Directory Rights Management Service (AD RMS) to Azure Information Protection (AIP). We’ve covered a lot of material over this series. It started with an overview of the service, examined the different architectures, went over key planning decisions for the migration from AD RMS to AIP, and left off with performing the server-side migration steps. In this post we’re going to round out the migration process by performing a staged migration of our client machines.

Before we jump into this post, I’d encourage you to refresh yourself with my lab setup and the users and groups I’ve created, and finally the choices I made in the server side migration steps. For a quick reference, here is the down and dirty:

- Users Jason Voorhies and Ash Williams will be using a Windows 10 client machine with Microsoft Office 2016 named GIWCLIENT1

- Users Theodore Logan and Michael Myers will be using a Windows 10 client machine with Microsoft Office 2016 named GIWCLIENT2

- Users Jason Voorhies and Theodore Logan are in the Information Technology Windows Active Directory (AD) group

- Users Ash Williams and Michael Myers are in the Accounting Windows AD group

- Onboarding controls have been configured for a Windows AD group named GIW AIP Users, of which Jason Voorhies and Ash Williams are members

Prepare The Client Machine

To take advantage of the new features AIP brings to the table, we'll need to install the AIP client. I'll be installing the AIP client on GIWCLIENT1 and leaving the RMS client installed by Office 2016 on GIWCLIENT2. Keep in mind the AIP client includes the RMS client (sometimes referred to as MSIPC) as well.

If you recall from my last post, I skipped a preparation step that Microsoft recommended for client machines. The step has you download a ZIP containing some batch scripts that are used for performing a staged migration of client machines and users. The preparation script Microsoft recommends running prior to any server-side configuration is Prepare-Client.cmd. In an enterprise environment it makes sense, but for this very controlled lab environment it wasn't needed prior to server-side configuration. It's a simple script that modifies the client registry to force the RMS client on the machines to go to the on-premises AD RMS cluster even if they receive content that's been protected using an AIP subscription. If you're unfamiliar with the order in which the MSIPC client discovers an AD RMS cluster, I did an exhaustive series a few years back. In short, hardcoding the information in the registry will prevent the client from reaching out to AIP and potentially causing issues.

As a reminder I’ll be running the script on GIWCLIENT1 and not on GIWCLIENT2. After the ZIP file is downloaded and the script is unpackaged, it needs to be opened with a text editor and the OnPremRMSFQDN and CloudRMS variables need to be set to your on-premises AD RMS cluster and AIP tenant endpoint. Once the values are set, run the script.
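To make the effect of Prepare-Client.cmd concrete, here's a sketch of the kind of service-location overrides it writes for the RMS client. The key paths and value layout are illustrative, not the script's literal contents:

```python
def prepare_client_overrides(on_prem_rms_fqdn, cloud_rms_url):
    """Illustrative model of the redirection Prepare-Client.cmd sets up:
    lookups for the AIP (cloud) endpoint are pointed back at the
    on-premises AD RMS cluster until the client is migrated. Registry
    paths here are a sketch, not the batch script's exact keys.
    """
    licensing = f"https://{on_prem_rms_fqdn}/_wmcs/licensing"
    base = r"HKCU\Software\Microsoft\MSIPC\ServiceLocation"
    return {
        rf"{base}\EnterprisePublishing": licensing,
        # redirect the cloud licensing URL to the on-prem cluster
        rf"{base}\LicensingRedirection\{cloud_rms_url}": licensing,
    }
```

The Migrate-Client script covered later effectively reverses this mapping, pointing clients at Azure RMS instead.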

Install the Azure Information Protection Client

Now that the preparation step is out of the way, let’s get the AIP client installed. The AIP client can be downloaded directly from Microsoft. After starting the installation you’ll first be prompted as to whether you want to send telemetry to Microsoft and use a demo policy. I’ll be opting out of both (sorry Microsoft).

After a minute or two the installation will complete successfully.

At this point I log out of the administrator account and log in as Jason Voorhies. Opening Windows Explorer and right-clicking a text file shows we now have the Classify and protect option to protect and classify files outside of Microsoft Office.

I thought it would be fun to see what the client machine's behavior would be after the AIP client was installed but before I had finished Microsoft's recommended client-side configuration steps. Recall that GIWCLIENT1 has previously been bootstrapped for the on-premises AD RMS cluster, so let's reset the client after observing the current state of both machines.

Notice on GIWCLIENT1 the DefaultServer and DefaultServerUrl values in HKCU\Software\Microsoft\Office\16.0\Common\DRM do not exist even though the client was previously bootstrapped for the on-premises AD RMS instance. On GIWCLIENT2, which has also been bootstrapped, the entries are defined.

I’m fairly certain AIP cleared these out when it tried to activate when I started up Microsoft Word prior to performing these steps.

Navigating to HKCU\Software\Classes\Local Settings\Software\Microsoft\MSIPC shows a few slight differences as well. On GIWCLIENT1 there are two additional entries: one for the discovery point for Azure RMS and one for JOG.LOCAL's AD RMS cluster. The JOG.LOCAL entry exists on GIWCLIENT1 and not on GIWCLIENT2 because of the baseline testing I did previously.

Let's take a look at the location where the RMS client stores its certificates, which is %LOCALAPPDATA%\Microsoft\MSIPC. On both machines we see the expected copy of the public-key CLC certificate, the machine certificate, the RAC, and use licenses for documents that have been opened. Notice that even though the AD RMS cluster is running in Cryptographic Mode 1, the machine still generates a 2048-bit key as well.

Now that the RMS client is reset on GIWCLIENT1, let's go ahead and see what happens when the RMS client tries to do a fresh activation after having AIP installed but the client-side configuration not yet completed.

After opening Microsoft Word I select to create a new document. Notice that the labels displayed in the AIP bar include a custom label I had previously defined in the AIP blade.

I then go back to the File tab on the ribbon and attempt to use the classic way of protecting a document via the Restrict Access option.

After selecting the Connect to Rights Management Servers and get templates option, the client successfully bootstraps back to the on-premises AD RMS cluster, as can be seen from the certificates available to the client and the fact that all necessary certificates were re-created in the MSIPC directory.

That's Microsoft Office, but what about the scenario where I attempt to use the AIP client add-in for Windows Explorer?

To test this behavior I created a PDF file named testfile.pdf. Right-clicking and selecting the Classify and protect option opens the AIP client to display the default set of labels as well as a new GIW Accounting Confidential label.

If I select that label and hit Apply I receive the error below.

The template can’t be found because the client is trying to pull it from my on-premises AD RMS cluster. Since I haven’t run the scripts to prepare the client for AIP, the client can’t reach the AIP endpoints to find the template associated with the label.

The results of these tests tell us two things:

Installing the AIP Client on a client machine that already has Microsoft Office installed and configured for an on-premises AD RMS cluster won’t break the client’s integration with that on-premises cluster.

The AIP client at some point authenticated to the Geek In The Weeds Azure AD tenant and pulled down the classification labels configured for my tenant.

In my next post I’ll be examining these findings more deeply by doing a deep dive of the client behavior using a combination of procmon, Fiddler, and WireShark to analyze the AIP Client behavior.

Performing Client-Side Configuration

Now that the client has been successfully installed, we need to override the behavior that was put in place with the Prepare-Client batch file earlier. If we wanted to redirect all clients across the organization that were using Office 2016, we could use the DNS SRV record option listed in the migration article. This option indicates Microsoft has added some new behavior to the RMS client installed with Office 2016 such that it will perform a DNS lookup of the SRV record to see if a migration has occurred.

For the purposes of this lab I'll be using the Microsoft batch scripts I referenced earlier. To override the behavior we put in place earlier with the Prepare-Client.cmd batch script, we'll need to run both the Migrate-Client and Migrate-User scripts. I created a group policy object (GPO) that uses security filtering to apply only to GIWCLIENT1 to run the Migrate-Client script as a Startup script, and a GPO that uses security filtering to apply only to the GIW AIP Users group which runs the Migrate-User script as a Logon script. This ensures only GIWCLIENT1, Jason Voorhies, and Ash Williams are affected by the changes.

You may be asking: what do the scripts do? The goal of the two scripts is to ensure the client machines the users log into point the users to Azure RMS rather than an on-premises AD RMS cluster. The scripts do this by adding and modifying registry keys used by the RMS client prior to the client searching for a service connection point (SCP). The users will be redirected to Azure RMS when protecting new files as well as when consuming files that were previously protected by an on-premises AD RMS cluster. This means you had better have performed the necessary server-side migration I went over previously, or else your users are going to be unable to consume previously protected content.

We’ll dig more into AIP/Office 2016 RMS Client discovery process in the next post.

Preparing Azure Information Protection Policies

Prior to testing the whole package, I thought it would be fun to create some AIP policies. By default, Microsoft provides you with a default AIP policy called the Global Policy. It comes complete with a reasonably standard set of labels, with a few of the labels having sublabels that apply protection in some circumstances. Due to the migration path I undertook as part of the demo, I had to enable protection for the All Employees sublabels of both the Confidential and Highly Confidential labels.

In addition to the global policy, I also created two scoped policies. One scoped policy applies to users within the GIW Accounting group and the other applies to users within the GIW Information Technology group. Each policy introduces another label and sublabel as seen in the screenshots below.

Both of the sublabels include protection restricting members of the relevant groups to the Viewer role only. We’ll see these policies in action in the next section.

Testing the Client

Preparation is done, the server-side migration is complete, and our test clients and users have now completed the documented migration process. The migration scripts performed the RMS client reset, so no need to repeat that process.

For the first test, let's try applying protection to the testfile.txt file I created earlier. Selecting the Classify and protect option opens up the AIP client and shows me the labels configured in my tenant that support classification and protection. Recall from the AIP client limitations that different file types have different limitations. You can't exactly append metadata to the content of a text file now, can you?

Selecting the IT Staff Only sublabel of the GIW IT Staff label and hitting the apply button successfully protects the text file and we see the icon and file type for the file changes. Opening the file in Notepad now displays a notice the file is protected and the data contained in the original file has been encrypted.

We can also open the file with the AIP Viewer which will decrypt the document and display the content of the text file.

Next we test in Microsoft Word 2016 by creating a new document named AIP_GIW_ALLEMP and classifying it with the Highly Confidential All Employees sublabel. The sublabel adds protection such that all users in the GIW Employees group have Viewer rights.

Opening the AIP_GIW_ALLEMP Word document that was protected by Jason Voorhies is successful and it shows Ash Williams has viewer rights for the file.

Last but not least, let's open a document we previously protected with AD RMS named GIW_GIWALL_ADRMS.DOCX. We're able to successfully open this file because we migrated the TPD used by AD RMS up to AIP.

At this point we've performed all necessary steps of the migration. What's left now is cleanup and planning for how you'll complete the rollout to the rest of your user base. Not bad, right?

Over the next few posts I'll be doing a deep dive of the RMS client behavior when interacting with Azure Information Protection. We'll do some procmon captures of the behavior of the client when it's performing its discovery process, as well as examine the web calls it makes using Fiddler. I'll also spend some time examining the AIP blade and my favorite feature of AIP, Tracking and Revocation.

Welcome to the third post in my series exploring the evolution of Active Directory Rights Management Service (AD RMS) into Azure Information Protection (AIP). My first post provided an overview of the service and some of its usages, and my second post covered how the architecture of the solution has changed as the service has shifted from traditional on-premises infrastructure to a software-as-a-service (SaaS) offering. Now that we understand the purpose of the service and its architecture, let's explore what a migration will look like.

For the post I’ll be using the labs I discussed in my first post. Specifically, I’ll be migrating lab 2 (the Windows Server 2016 lab) from using AD RMS to Azure Information Protection. I’ve added an additional Windows 10 Professional machine to that lab for reasons I’ll discuss later in the post. The two Windows 10 machines are named CLIENT1 and CLIENT2. Microsoft has assembled some guidance which I’ll be referencing throughout this post and using as the map for the migration.

With the introduction done, let’s dig in.

Before we do any button pushing, there’s some planning necessary to ensure a successful migration. The key areas of consideration are:

Possibly most impactful to an organization is the planning that goes into how the migration will affect collaboration with partner organizations. Back in the olden days of on-premises AD RMS, organizations would leverage the protection and control that came with AD RMS to collaborate with trusted partners. This was accomplished through trusted user domains (TUDs) or federated trusts. With AIP the concept of TUDs and additional infrastructure to support federated trusts is eliminated and instead replaced with the federation capabilities provided natively via Azure Active Directory.

Yes folks, this means that if you want the same level of collaboration you had with AD RMS using TUDs, both organizations will need to have an Azure Active Directory (Azure AD) tenant with a license that supports the Azure Rights Management Service (Azure RMS). In a future post in the series, we’ll check out what happens when the partner organization doesn’t migrate to Azure AD and attempts to consume the protected content.

Tenant Key Storage

The tenant key can be thought of as the key to the kingdom in the AIP world. For those of you familiar with AD RMS, the tenant key serves the same function as the cluster key. In the on-premises world of AD RMS the cluster key was either stored within the AD RMS database or on a hardware security module (HSM).

When performing a migration to AIP, you have a few options for storing the tenant key. If your cluster key was stored within the AD RMS database, you can migrate the key using some simple PowerShell commands. If you opted to use HSM storage for your cluster key, you’re going to be looking at the bring your own key (BYOK) scenario. Finally, if you have a hard requirement to keep the key on-premises you can explore the hold your own key (HYOK) option.

For this series I’ve configured my labs with a cluster key that is stored within the AD RMS database (or software key as MS is referring to it). The AD RMS cluster in my environment runs in cryptographic mode 1, so per MS’s recommendation I won’t be migrating to cryptographic mode 2 until after I migrate to AIP.

AIP Client Rollout

Using AIP requires that the AIP Client be installed. The AD RMS Client that comes pre-packaged with Microsoft Office can protect content but can’t take advantage of the labels and classification features of AIP. You’ll need to consider this during your migration process and ensure any middleware that uses the AD RMS Client is compatible with the AIP Client. The AIP Client is compatible with on-premises AD RMS, so you don’t need to be concerned with backwards compatibility.

As I mentioned above, I have two Windows 10 clients named CLIENT1 and CLIENT2. In the next post I’ll be migrating CLIENT2 to the AIP Client and keeping CLIENT1 on the AD RMS Client. I’ll capture and break down the calls with Fiddler so we can see what the differences are.

If you want to migrate to AIP but still have a ways to go before you can migrate Exchange and SharePoint to their SaaS counterparts, have no fear. You can leverage the protection capabilities of AIP (aka the Azure RMS component) by using the Microsoft Rights Management Service Connector. The connector requires some lightweight infrastructure to handle the communication between Exchange/SharePoint and AIP.

I probably won’t be demoing the RMS Connector in this series, so take a read through the documentation if you’re curious.

We’ve covered an overview of AIP, the different architectures of AD RMS and AIP, and now have covered key planning decisions for a migration. In the next post in my series we’ll start getting our hands dirty by initiating the migration from AD RMS to AIP. Once the migration is complete, I’ll be diving deep into the weeds and examining the behavior of the AD RMS and AIP clients via Fiddler captures and AD RMS client debugging (assuming it still works with the AIP client).

Hi there. Welcome to the second post in my series exploring the evolution of Active Directory Rights Management Service (AD RMS) into Azure Information Protection (AIP). In the first post of the series I gave a brief overview of the important role AIP plays in Microsoft’s Cloud App Security (CAS) offering. I also covered the details of the lab I will be using for this series. Take a few minutes to read that post to familiarize yourself with the lab details because it’ll be helpful as we progress through the series.

I went back and forth as to what topic I wanted to cover for the second post and decided it would be useful to start at a high level, comparing the components in a typical Windows AD RMS implementation to those used when consuming AIP. I’m going to keep the explanation of each component brief to avoid re-creating existing documentation, but I will provide links to official Microsoft documentation for each component I mention. With that intro, let’s begin.

The infrastructure required in an AD RMS implementation is pretty minimal, but the complexity is in how all of the components work together to provide the solution. At a very high level it is similar to any other web-based application, consisting of a web server, application code, and a data backend. The web-based application integrates with a directory to authenticate users and to get information about the user that is used in authorization decisions. In the AD RMS world the components map to the following products:

Nodes providing the AD RMS service are organized into a logical container called an AD RMS Cluster. Like most web applications AD RMS can be scaled out by adding more nodes to the cluster to improve performance and provide high availability (HA). If using MS SQL for the data backend, traditional methods of HA can be used such as SQL clustering, database mirroring, and log shipping. You can plop your favorite load balancer in front of the solution to help distribute the application load and keep track of the health of the nodes providing the service.

Beyond the standard web-based application components we have some that are specific to AD RMS. Let’s take a deeper look at them.

AD RMS Cluster Key

The AD RMS cluster key is the most critical part of an AD RMS implementation, the “key to the kingdom”, as it is used to sign the Server Licensor Certificate (SLC) which contains the corresponding public key. The SLC is used to sign certificates created by AD RMS that allow for consumption of AD RMS-protected content as well as being used by AD RMS clients to encrypt the content key when a document is newly protected by AD RMS.

The AD RMS cluster key is shared by all nodes that are members of the AD RMS cluster. It can be stored within the MS SQL database/WID or on a supported hardware security module for improved security.

AD RMS Client / AD RMS-Integrated Server Applications

Applications are great, but you need a method to consume them. Once content is protected by AD RMS it can only be consumed by an application capable of communicating with AD RMS. In most cases this is accomplished by using an application that has been written to use the AD RMS Client. The AD RMS Client comes pre-installed on Windows Vista and up desktop operating systems and Windows Server 2008 and up server operating systems.

The AD RMS client performs tasks such as bootstrapping (sometimes referred to as activation). I won’t go into the details because I wouldn’t do nearly as good a job as Dan does in the bootstrapping link. In short, it generates some keys and obtains some certificates from the AD RMS service that facilitate protecting and consuming content.

AD RMS-integrated server applications such as Microsoft SharePoint Server and Microsoft Exchange Server provide server-level services that leverage the capabilities provided by AD RMS to protect data such as files stored in a SharePoint library or emails sent through Microsoft Exchange.

AD RMS Policy Templates

While not a component of the system architecture, AD RMS policy templates are an AD RMS concept that deserves mention in this discussion. Templates can be created by an organization to provide a standard set of use rights applicable to a type of data, and a common use case is creating multiple templates for different data types. For example, you may want one template that allows trusted partners to view a document but not print or forward it, while another template may restrict view rights to the accounting department.

In AD RMS the policies are stored in the AD RMS database but are accessible via a call to the web service. Optionally they can be exported from the database and distributed by other means, such as a Windows file share.

As you can see there are a lot of moving parts to an on-premises Windows AD RMS implementation. Some of the components mentioned above can get even more complicated when the need to collaborate across organizations or support mobile devices arises.

How does AIP compare? For the purposes of this post, I’m going to focus that comparison on Azure RMS, which provides the protection capability of AIP. Azure RMS is a software-as-a-service (SaaS) offering from Microsoft that replaces (yes Microsoft, let’s be honest here) AD RMS. It is licensed on a per-user basis via a stand-alone, Enterprise Mobility + Security P1/P2, or qualifying Office 365 license.

The architecture of Azure RMS is far simpler than what existed for AD RMS. Like most SaaS services, there is no on-premises infrastructure required except in very specific scenarios, such as hold-your-own-key (HYOK) or integrating Azure RMS with an on-premises Microsoft Exchange Server, Microsoft SharePoint Server, or servers running Windows Server and File Classification Infrastructure (FCI) using the RMS Connector. This means you won’t be building any servers to hold the RMS role or SQL Servers to host configuration and logging information. The infrastructure is now managed by Microsoft and the RMS service is provided over HTTP/HTTPS.

Azure RMS shifts its directory dependency to Azure Active Directory (AAD). It uses the tenant with which the Azure RMS licenses are associated for authentication and authorization of users. As with any AAD use case, you can still authenticate users against your on-premises Windows Active Directory if you’ve configured your tenant for federated authentication, and source data from an on-premises directory using Azure Active Directory Connect.

The cluster key, client, integrated applications, and policies are still in place and work similarly to on-premises AD RMS, with some changes to both function and names.

Azure Information Protection Tenant Key

The AD RMS cluster key has been renamed to the Azure Information Protection tenant key. The tenant key serves the same purpose as the AD RMS cluster key: it is used to sign the SLC certificate and to decrypt information sent to Azure RMS that was encrypted with the public key in the SLC. The differences between the two are really around how the key is generated and stored. By default the tenant key is generated by Microsoft (note that Microsoft generates a 2048-bit key instead of the 1024-bit key used for new installations of AD RMS) and is associated with your Azure Active Directory tenant. Other options include bring-your-own-key (BYOK), HYOK, and a special case where you are migrating from AD RMS to Azure RMS. I’ll cover HYOK and the migration case in future posts.

Azure Information Protection Client

The AD RMS client is replaced with the Azure Information Protection Client. The client performs the same functions as the AD RMS Client but allows for integration with either on-premises AD RMS or Azure RMS. In addition, the client introduces functionality around Azure Information Protection, including a classification bar for Microsoft Office, a Do Not Forward button for Microsoft Outlook, an option in Windows File Explorer to classify and protect files, and PowerShell modules that can be used to bulk classify and protect files. In a future post in this series I’ll be doing a deep dive of the client behavior including analysis of its calls to the Azure Information Protection endpoints via Fiddler.

Unlike the AD RMS client of the past, the Azure Information Protection Client is supported on mobile operating systems such as iOS and Android. Additionally, it supports a wider variety of file types than the AD RMS client supported.

Azure RMS-Integrated Server Applications

Like its predecessor, Azure RMS can be consumed by server applications such as Microsoft Exchange Server and Microsoft SharePoint Server via the RMS Connector. There is native integration with Office 365 products including Exchange Online, SharePoint Online, and OneDrive for Business, and it is extensible to third-party applications via Cloud App Security (I’ll demonstrate this after I complete this series). Like all good SaaS offerings, there is also an API that can be leveraged to add the functionality to custom-developed applications.

Rights Management Templates

Azure RMS continues to use the concept of rights management templates like its predecessor. Instead of being stored in a SQL database, the templates are stored in Microsoft’s cloud. Templates created in AD RMS can also be imported into Azure RMS for continued use. Classification labels in AIP are backed by templates whenever a label applies protection with a pre-defined set of rights. I’ll demonstrate both the import process and label-backed protection in later posts in this series.

Far simpler in the SaaS world, isn’t it? In addition to simplicity, Microsoft delivers more capabilities, tighter integration with its collaboration tools, and expansion of those capabilities to third-party applications through a robust API and integration with Cloud App Security.

Today I continue my series of posts that cover a behind the scenes look at how Active Directory Federation Service (AD FS) and the Microsoft Web Application Proxy (WAP) interact. In my first post I explained the business cases that would call for the usage of a WAP. In my second post I did a deep dive into the WAP registration process (MS refers to this as the trust establishment with AD FS and the WAP). In this post I decided to cover how user certificate authentication is achieved when AD FS server is placed behind the WAP.

AD FS offers a few different options to authenticate users to the service including Integrated Windows Authentication (IWA), forms-based authentication, and certificate authentication. Readers who work in environments with sensitive data where assurance of a user’s identity is important should be familiar with certificate authentication in the Microsoft world. If you’re unfamiliar with it I recommend you take a read through this Microsoft article.

With the recent release of the National Institute of Standards and Technology (NIST) Digital Identity Guidelines 800-63, which reworks the authenticator assurance levels (AAL) and relegates passwords to AAL1 only, organizations will be looking for other authenticator options. Given the maturity of authenticators that make use of certificates, such as the traditional smart card, it’s likely many organizations will look at opportunities for how existing equipment and infrastructure can be further utilized. So it’s all the more important that we understand how AD FS certificate authentication works.

I’ll be using the lab I described in my first post. I made the following modifications/additions to the lab:

Configured the Active Directory Certificate Services (AD CS) certificate authority (CA) to include a certificate revocation list (CRL) distribution point (CDP). The CRLs will be served up via an IIS instance with the address crl.journeyofthegeek.com. This is the only CDP listed in the certificates. Certificates created during my original lab setup that are installed within the infrastructure do not include a CDP.

Added a non-domain-joined Windows 10 computer which will be used as the endpoint the test user accesses the federation service from.

Tool-wise I used ProcMon, Fiddler, API Monitor, and WireShark.

So what did I discover?

Prior to doing any type of user interaction, I setup the tools I would be using moving forward. On the WAP I started ProcMon as an Administrator and configured my filters to capture only TCP Send and TCP Receive operations. I also setup WireShark using a filter of ip.addr==192.168.100.10 && tcp.port==80. The IP address is the IP of the web server hosting my CRLs. This would ensure I’d see the name of the process making the connection to the CDP as well as the conversation between the two nodes.

** Note that the machine will cache the CRLs after they are successfully downloaded from the CDP. It will not make any further calls until the CRLs expire. To get around this behavior while I was testing I ran the command certutil -setreg chain\ChainCacheResyncFiletime @now as outlined in this article. This forces the machine to pull the CRLs again from the CDP regardless of whether or not they are expired. I ran the command as the LOCAL SYSTEM security principal using psexec.

The final step was to start Fiddler as the NETWORK SERVICE security principal using the command psexec -i -u “NT AUTHORITY\Network Service” “C:\Program Files (x86)\Fiddler2\Fiddler.exe”. Remember that Fiddler needs the public key certificate in the appropriate file location as I outlined in my last post. Recall that the Web Application Proxy Service and the Active Directory Federation Service running on the WAP both run as that security principal.

Once all the tools were in place I logged into the non-domain joined Windows 10 box and opened up Microsoft Edge and popped the username of my test user into the username field.

After home realm discovery occurred within Azure AD, I received the forms-based login page of my AD FS instance.

Let’s take a look at what’s happened on the WAP so far.

In the initial HTTP CONNECT session the WAP makes to the AD FS farm, we see the ClientHello handshake occur, during which the WAP presents its client certificate to authenticate itself to the AD FS server as described in my last post.

Once the secure session is established the WAP passes the HTTP GET request to the AD FS server. It adds a number of headers to the request which AD FS consumes to identify the client is coming from the WAP. This information is used for a number of AD FS features such as enforcing additional authentication policies for Extranet access.

The WAP also passes a number of query strings, a few of which are interesting. The first is the client-request-id, a unique identifier for the session that AD FS uses to correlate event log errors with the session. The username is obvious and shows the user principal name that was entered in the username field on the O365 login page. The wa query string shows a value of wsignin1.0, indicating the usage of WS-Federation. The wtrealm indicates the relying party identifier of the application, in this case Azure AD.

The wctx query string is quite interesting and needs to be parsed a bit on its own. Breaking down the value in the parameter we come across three unique parameters.

LoginOptions=3 indicates that the user has not selected the “Keep me signed in” option. If the user had selected that checkbox a value of 1 would have been passed and AD FS would create a persistent cookie which would exist even after the browser closes. This option is sometimes preferable for customers when opening documents from SharePoint Online so the user does not have to authenticate over and over.

The estsredirect contains the encoded and signed authentication request from O365. I stared at API monitor for a few hours going API call by API call trying to identify what this looks like once it’s decoded, but was unsuccessful. If you know how to decode it, I’d love to know. I’m very curious as to its contents.
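To make the layering of these parameters easier to see, here’s a small Python sketch that parses a request like the one above and then splits out the wctx parameters. The URL and its values are made-up examples (the real client-request-id and estsredirect values are long opaque blobs), but the double-parsing of wctx is the point:

```python
from urllib.parse import parse_qs, urlparse

# Hypothetical example request; real values are long opaque blobs.
url = ("https://sts.journeyofthegeek.com/adfs/ls/"
       "?client-request-id=5c4c864c-0000-0000-0000-000000000000"
       "&username=homer%40journeyofthegeek.com"
       "&wa=wsignin1.0"
       "&wtrealm=urn%3Afederation%3AMicrosoftOnline"
       "&wctx=LoginOptions%3D3%26estsredirect%3DrQIIA...")

query = parse_qs(urlparse(url).query)
wa = query["wa"][0]            # wsignin1.0 -> WS-Federation sign-in
wtrealm = query["wtrealm"][0]  # relying party identifier (Azure AD)

# wctx is itself a percent-encoded set of key=value pairs, so parse it again
wctx = parse_qs(query["wctx"][0])
login_options = int(wctx["LoginOptions"][0])  # 3 = "Keep me signed in" not selected
```

Parsing wctx a second time is what surfaces LoginOptions and the estsredirect blob as distinct values.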

The WAP next makes another HTTP GET to the AD FS server, this time including the additional query string pullStatus, which is set equal to 0. I’m clueless as to the function of this; I couldn’t find anything on it. The only other thing that changes is the referer.

My best guess on the above two sessions is that the first session is where AD FS performs home realm discovery and maybe some processing to determine if there are any special configurations for the WAP, such as limited or expanded authentication options (device authN, cert authN only). The second session is simply the AD FS server presenting the authentication methods configured for Extranet users.

The user then chooses the “Sign in with an X.509 certificate” (I’m not using SNI to host both forms and cert authN on the same port) and the WAP then performs another HTTP CONNECT to port 49443 which is the certificate authentication endpoint on the AD FS server. It again authenticates to the AD FS server with its client certificate prior to establishing the secure tunnel.

In the third session we see an HTTP POST to the AD FS server with the same query parameters as our previous request, but this time providing a JSON object with the key/value pair AuthMethod=CertificateAuthentication in the body.

The next session is another HTTP POST with the same JSON object content plus the key/value pairs AuthMethod=CertificateAuthentication and RetrieveCertificate=1 in the body. The AD FS server sends a 307 Temporary Redirect to the /adfs/backendproxytls/ endpoint on the AD FS server.

Prior to the redirect completing successfully, we see the calls to the CDP endpoint for the full and delta CRLs.

I was curious as to which process was pulling the CRLs and identified it was LSASS.EXE from the ProcMon capture.

At the /adfs/backendproxytls/ endpoint the WAP performs another HTTP POST this time posting a JSON object with a number of key value combinations.

The interesting key/value types included in the JSON object are the nested JSON object for Headers, which contains all the WAP headers I covered earlier, and the QueryString JSON object, which contains all the query strings I covered earlier. The SerializedClientCertificate contains the certificate the user provided after selecting certificate authentication. The AD FS server then sends back a cookie to the WAP. This is the cookie representing the user’s authentication to the AD FS server, as detailed in this link.
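To make the shape of that request concrete, here’s a small Python sketch of the JSON object the WAP posts to /adfs/backendproxytls/. The Headers, QueryString, and SerializedClientCertificate keys match what showed up in the capture; the individual header names and all values below are placeholders I’ve filled in for illustration, not the literal contents of my session.

```python
import json

# Rough shape of the body the WAP posts to /adfs/backendproxytls/.
# Key names beyond Headers, QueryString, and SerializedClientCertificate
# are illustrative; all values are placeholders.
backend_request = {
    "Headers": {
        # WAP-added headers of the kind described earlier in the post
        "X-MS-Proxy": "WAP01",
        "X-MS-Forwarded-Client-IP": "203.0.113.25",
        "X-MS-Endpoint-Absolute-Path": "/adfs/ls/",
    },
    "QueryString": {
        # query strings described earlier in the post
        "client-request-id": "5c4c864c-0000-0000-0000-000000000000",
        "username": "homer@journeyofthegeek.com",
        "wa": "wsignin1.0",
    },
    # Base64-encoded certificate the user selected during certificate authN
    "SerializedClientCertificate": "MIIC...",
}

body = json.dumps(backend_request)
```

The nested structure is the interesting part: the WAP is essentially re-serializing the entire front-end conversation (headers, query strings, and the user’s certificate) into a single object for the AD FS server to evaluate.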

The WAP then performs a final HTTP GET back at the /adfs/ls/ endpoint, including the previously described headers and query strings as well as providing the cookie it just received. The AD FS server responds by providing the assertion requested by Microsoft along with the MSISAuthenticated, MSISSignOut, and MSISLoopDetectionCookie cookies, which are described in the link above.

What did we learn?

The certificate is checked at both the WAP and the AD FS server to ensure it is valid and issued from a trusted certificate authority. Remember to verify you trust the certificate chain of any user certificates on both the AD FS servers and WAPs.

CRL Revocation checking is enabled by default and is performed on both the AD FS server and the WAP. Remember to verify the locations in your CDP are available by both devices.

The AD FS servers use the LSALogonUser function in the secur32.dll library to perform standard certificate authentication to Active Directory Domain Services. I didn’t include this, but I captured this by running API monitor on the AD FS server.

In short, if you’re going to use device authentication or user certificate authentication make sure you have your PKI components in order.

I recently had a use case come across my desk where I needed to do a SAML integration with a SaaS provider. The provider required a number of pieces of information about the user beyond the standard unique identifier. The additional information would be used to direct the user to the appropriate instance of the SaaS application.

In the past fifty or so SAML integrations I’ve done, I’ve been able to source my data directly from the Active Directory store. This was because Active Directory was authoritative for the data or there was a reliable synchronization process in place such that the data was being sourced from an authoritative source. In this scenario, neither option was available. Thankfully the data source I needed to hit to get the missing data exposed a subset of its data through a Microsoft SQL view.

I have done a lot in AD FS over the past few years from design to operational support of the service, but I had never sourced information from a data source hosted via MS SQL Server. I reviewed the Microsoft documentation available via TechNet and found it to be lacking. Further searches across MS blogs and third-party blogs provided a number of “bits” of information but no real end to end guide. Given the lack of solid content, I decided it would be fun to put one together so off to Azure I went.

For the lab environment, I built the following:

Active Directory forest name – geekintheweeds.com

Server 1 – SERVERDC (Windows Server 2016)

Active Directory Domain Services

Domain Name System (DNS)

Active Directory Certificate Services

Server 2 – SERVER-ADFS (Windows Server 2016)

Active Directory Federation Services

Microsoft SQL Server Express 2016

Server 3 – SERVER-WEB (Windows Server 2016)

Microsoft IIS

On SERVER-WEB I installed the sample claims application referenced here. Make sure to follow the instructions in the blog to save yourself some headaches. There are plenty of blogs out there that discuss building a lab consisting of the services outlined above, so I won’t cover those details.

On SERVER-ADFS I created a database named hrdb within the same instance as the AD FS databases. Within the database I created a table named dbo.EmployeeInfo with five columns named givenName, surName, email, userName, and role, all of data type nvarchar(MAX). The userName column contained the unique values I used to relate a user object in Active Directory back to a record in the SQL database.
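For reference, the table can be created with T-SQL along these lines (a sketch rather than my exact script; the brackets around role simply guard against keyword collisions):

```sql
-- Run within the hrdb database on the SQL instance AD FS can reach
CREATE TABLE dbo.EmployeeInfo (
    givenName nvarchar(MAX),
    surName   nvarchar(MAX),
    email     nvarchar(MAX),
    userName  nvarchar(MAX),  -- unique value tying the AD user to this record
    [role]    nvarchar(MAX)   -- value consumed by the SaaS provider
);
```

Populate it with a few INSERT statements whose userName values match the sAMAccountNames of your test users.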

Once the database was created and populated with some sample data and the appropriate Active Directory user objects were created, it was time to begin to configure the connectivity between AD FS and MS SQL. Before we go creating the new attribute store, the AD FS service account needs appropriate permissions to access the SQL database. I went the easy route and gave the service account the db_datareader role on the database, although the CONNECT and SELECT permissions would have probably been sufficient.

After the service account was given appropriate permissions, the next step was to configure the database as an attribute store in AD FS. To do that I opened the AD FS management console, expanded the Service node, right-clicked on Attribute Stores, and selected the Add Attribute Store option. I used mysql as the store name and selected the SQL option from the drop-down box. My SQL was a bit rusty so the connection string took a few tries to get right.
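If your SQL is as rusty as mine was, a connection string along these lines should get you going (this assumes a local named SQL Express instance and Windows integrated authentication; your server and instance names will differ):

```
Server=.\SQLEXPRESS;Database=hrdb;Integrated Security=SSPI
```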

I then created a new claim description to hold the role information I was pulling from the SQL database.

The last step in the process was to create some claim rules to pull data from the SQL database. Pulling data from a MS SQL datastore requires the use of custom claim rules. If you’re unfamiliar with the custom claim language, the following two links are two of the best I’ve found on the net:

The first claim rule I created was a rule to query Active Directory via LDAP for the SAM-Account-Name attribute. This is the attribute I would be using to query the SQL database for the user’s unique record.

Next up was my first custom claim rule, where I queried the SQL database for the record whose userName column matched the SAM-Account-Name value I pulled in the earlier step and requested back the value in the email column of the returned record. Since I wanted to do some transforming of the information in a later step, I added the claim to the incoming claim set.

I then issued another query for the value in the role column.
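Pieced together, the three rules looked roughly like the following. Treat this as a sketch rather than copy/paste material: the claim type URIs for samaccountname and the custom role claim are examples (substitute whatever claim descriptions exist in your environment), while mysql is the attribute store name created above.

```
// Rule 1: query Active Directory for sAMAccountName
c:[Type == "http://schemas.microsoft.com/ws/2008/06/identity/claims/windowsaccountname", Issuer == "AD AUTHORITY"]
 => add(store = "Active Directory",
        types = ("http://schemas.xmlsoap.org/claims/samaccountname"),
        query = ";sAMAccountName;{0}", param = c.Value);

// Rule 2: look up the user's email in the SQL attribute store
c:[Type == "http://schemas.xmlsoap.org/claims/samaccountname"]
 => add(store = "mysql",
        types = ("http://schemas.xmlsoap.org/ws/2005/05/identity/claims/emailaddress"),
        query = "SELECT email FROM dbo.EmployeeInfo WHERE userName = {0}", param = c.Value);

// Rule 3: look up the user's role in the SQL attribute store
c:[Type == "http://schemas.xmlsoap.org/claims/samaccountname"]
 => add(store = "mysql",
        types = ("http://geekintheweeds.com/claims/role"),
        query = "SELECT role FROM dbo.EmployeeInfo WHERE userName = {0}", param = c.Value);
```

Note the use of add() rather than issue(): the results land in the incoming claim set, which is what allows the transform rules in the next step to convert them to the Common Name and out-of-the-box role claim types before issuing them.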

Finally, I performed some transforms to verify I was getting the appropriate data that I wanted. I converted the email address claim type to the Common Name type and the custom claim definition role I referenced above to the out of the box role claim definition. I then hit the endpoint for the sample claim app and… VICTORY!

Simple, right? Well, it would have been if this information had been documented in a single place. Either way, I had some good lessons learned that I will share with you now:

Do NOT copy and paste claim rules. I chased a number of red herrings trying to figure out why my claim rule was being rejected. More than likely the copy/paste added an invalid character I was unable to see.

Brush up on your MS SQL before you attempt this. My SQL was super rusty and it caused me to go down a number of paths which wasted time. Thankfully, my coworker Jeff Lee was there to add some brain power and help work through the issues.

Before I sign off, I want to thank my coworker Jeff Lee for helping out on this one. It was a great learning experience for both of us.


About Me

Hi there! My name is Matt Felton and I am a long-time geek with a passion for technology. I have over 10 years of experience in the industry spanning the technology stack. Over the past few years I’ve had the opportunity to dig deeper into security and identity, which I’ve been more than happy to do.

I started Journey Of The Geek over six years ago when I saw an opportunity to provide in-depth technical deep dives that peel back the onion on technologies and products. I enjoy sharing what I’ve learned and giving back to the industry. Plus, there is no better way to learn a topic than to teach it.

I hope you enjoy and if you have questions feel free to reach out via the comments, LinkedIn, or Twitter.

DISCLAIMER

All views expressed on this site are my own and do not represent the opinions of any entity whatsoever of which I have been, am now, or will be affiliated.