I recently decided to look into how to add a "Login from Facebook" button to a website, in particular a website protected by Tivoli Access Manager WebSEAL (though this really isn't all that important). Given that Facebook's graph API is based on the draft OAuth 2.0 specification, this seemed an interesting technology investigation. The surprise that awaited me was (for a change) a very pleasant one - this integration is very simple, and took just a few hours to implement from start to finish. I also wanted to perform the integration using Java/JSP rather than using Python or PHP, which have helper SDKs from Facebook.

My primary piece of required reading was the Facebook Authentication documentation, although this blog post by Facebook developer Luke Shepard was also very useful. My particular area of interest was in authenticating users to a web application and extracting the user's email address from their profile information at Facebook. The email address will be the username to login to WebSEAL. The email address must match a user identity in WebSEAL, or alternatively a Facebook ID can be used just like an Information Card or OpenID to link to an existing account and/or bootstrap a self-registration process as described in these developerworks articles:

Of course yet another choice exists, and that is to use the email address in a request to the TFIM STS to build a TAM credential without the user being in the TAM registry as described in my developerworks article on Using WebSEAL without a User Registry.

For ease of understanding in this article I'll just assume that the email address does match an existing Tivoli Access Manager username in WebSEAL, and we'll use the email address as a value in a response HTTP header as part of an EAI application junctioned behind WebSEAL. The important part to understand is how to securely obtain the user's email address from Facebook using OAuth and the graph API - what you do with it after that is up to you.
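To make the EAI hand-off concrete, here is a minimal Java sketch of the headers such an application returns to WebSEAL. The header names shown are the common defaults and must match the EAI settings in your WebSEAL configuration file; the class and method names are my own, for illustration only:

```java
import java.util.LinkedHashMap;
import java.util.Map;

public class EaiHeaders {
    // Build the response headers that WebSEAL consumes from a trusted EAI
    // application. WebSEAL authenticates the session as the supplied user id
    // and then redirects the browser to the supplied URL.
    public static Map<String, String> build(String userId, String redirectUrl) {
        Map<String, String> headers = new LinkedHashMap<String, String>();
        headers.put("am-eai-user-id", userId);        // e.g. the email address obtained from Facebook
        headers.put("am-eai-redir-url", redirectUrl); // post-authentication landing page
        return headers;
    }
}
```

In a JSP or servlet these values would simply be set with response.setHeader() when responding to the EAI trigger URL.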

Registering an OAuth Client Application

The first step to the integration was to register my OAuth client with Facebook and obtain an application key and secret. Interestingly your application URL doesn't even need to be internet-facing as Facebook only ever redirects to it via the browser. As long as your browser can resolve the application URL, that is good enough. As a result you may use a completely internal target application environment provided it can connect out to the internet to contact Facebook. I used a locally hosted WebSEAL and WebSphere to test the integration while writing this article.

To register my OAuth client application, I visited the Facebook Developers Setup link, and registered an application as shown:

Site Name: My Test Site

Site URL: https://www.myrp.ibm.com/

Locale: English (US)

What comes back is the information about your OAuth client registration, plus example code for people interested in the JavaScript integration. The important pieces of information for this integration are the App ID and App Secret.

Understanding the OAuth Message Flow

The OAuth message flow used for this Facebook integration can be broken down into four steps:

Redirect to Facebook to obtain authorization from the user to access their basic account information and email address

Allow the user to grant authorization and return with an authorization code

Use the authorization code to request an authorized access token directly from Facebook

Use the authorized access token to get the user's profile information directly from Facebook
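The redirect and token-exchange URLs used in steps 1 and 3 can be sketched in Java. The two graph API endpoint URLs below reflect my reading of the Facebook authentication documentation at the time of writing, and the class itself is an illustrative assumption rather than code from the sample application:

```java
import java.io.UnsupportedEncodingException;
import java.net.URLEncoder;

public class FacebookOAuthUrls {
    // Assumed endpoints from the Facebook (draft OAuth 2.0) authentication docs
    static final String AUTHORIZE_ENDPOINT = "https://graph.facebook.com/oauth/authorize";
    static final String TOKEN_ENDPOINT = "https://graph.facebook.com/oauth/access_token";

    // Step 1: the URL the browser is redirected to for user authorization.
    // scope=email requests permission to read the user's email address.
    public static String buildAuthorizeUrl(String appId, String redirectUri)
            throws UnsupportedEncodingException {
        return AUTHORIZE_ENDPOINT
            + "?client_id=" + URLEncoder.encode(appId, "UTF-8")
            + "&redirect_uri=" + URLEncoder.encode(redirectUri, "UTF-8")
            + "&scope=email";
    }

    // Step 3: the URL used to exchange the returned authorization code
    // for an authorized access token.
    public static String buildTokenUrl(String appId, String appSecret,
            String redirectUri, String code) throws UnsupportedEncodingException {
        return TOKEN_ENDPOINT
            + "?client_id=" + URLEncoder.encode(appId, "UTF-8")
            + "&redirect_uri=" + URLEncoder.encode(redirectUri, "UTF-8")
            + "&client_secret=" + URLEncoder.encode(appSecret, "UTF-8")
            + "&code=" + URLEncoder.encode(code, "UTF-8");
    }

    public static void main(String[] args) throws Exception {
        System.out.println(buildAuthorizeUrl("myAppId", "https://www.myrp.ibm.com/jct/fb/fblogin.jsp"));
    }
}
```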

Putting the Message Flow Into a JSP

I wrote a few small Java helper classes to handle retrieving data from a URL using the standard J2SE HttpsURLConnection, and some helper code for parsing query string parameters from a response. I also leveraged free open source code for parsing JSON objects in Java, from http://www.json.org/java/.
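To illustrate the query-string helper (this is a sketch of the idea, not the exact helper class shipped with the sample), the token endpoint responds with a body such as access_token=...&expires=..., which can be parsed like this:

```java
import java.io.UnsupportedEncodingException;
import java.net.URLDecoder;
import java.util.HashMap;
import java.util.Map;

public class QueryStringUtil {
    // Parse a response body such as "access_token=abc123&expires=3600"
    // into a name/value map, URL-decoding each component.
    public static Map<String, String> parse(String queryString)
            throws UnsupportedEncodingException {
        Map<String, String> params = new HashMap<String, String>();
        if (queryString == null || queryString.length() == 0) {
            return params;
        }
        for (String pair : queryString.split("&")) {
            int eq = pair.indexOf('=');
            if (eq > 0) {
                params.put(URLDecoder.decode(pair.substring(0, eq), "UTF-8"),
                           URLDecoder.decode(pair.substring(eq + 1), "UTF-8"));
            }
        }
        return params;
    }
}
```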

My WebSEAL server was addressed via https://www.myrp.ibm.com, with a junction /jct to a WebSphere server where the sample application fb.ear was installed.

Putting these pieces together resulted in this single fblogin.jsp which manages the entire authentication process:

Installation and Configuration of the Solution

This section covers installation and configuration of the example code provided with this article.

Install the EAR to WebSphere and Update fblogin.jsp

This is quite straightforward - just download and install the example fb.ear to your WebSphere server. When you download the example code, your browser may wish to save the fb.ear file as a .zip; if that happens, just rename it to fb.ear after download. You will need to edit the fblogin.jsp file that is part of the application to insert your own App ID and App Secret, and to modify the hostname of the WebSEAL server that is used in both the redirectURL and also in the am-eai-redir-url response header for EAI. Check that the EAI headers in your WebSEAL configuration file match those in the sample application.

Configure WebSphere for SSL Trust

The helper classes that connect to Facebook via HttpsURLConnection will leverage WebSphere's J2SE SSL implementation, including server certificate verification. Configuration is needed in WebSphere to install the SSL signer certificate of https://graph.facebook.com, otherwise a "No trusted certificate found" error will result. There are many ways to do this, and the technique I show here is not the best, but probably the quickest for setting up a test environment (for a more robust solution, investigate WebSphere Dynamic outbound endpoint SSL configurations). In the WebSphere administration console, navigate to:

You will see a list of default signer certificates including default, dummyclientsigner and dummyserversigner. Click on the button: Retrieve from port

Enter settings as shown, then click: Retrieve signer information

You will then see the facebook certificate information. Press OK to save the alias in the NodeDefaultTrustStore. After the Save, your updated signer certificate list should include the facebook alias as shown:

Configure WebSEAL Junction, TAM policy and Trigger URL for EAI

The WebSEAL server requires a junction to the WebSphere server. In this example I created /jct as shown:

Testing the Solution

To initiate the test, simply access the fblogin.jsp in a browser via WebSEAL. In my case this was using the URL:

https://www.myrp.ibm.com/jct/fb/fblogin.jsp

The first time you access the page, authorization is required at Facebook. After authentication (if needed), you should be prompted with a Facebook authorization page similar to this:

After allowing authorization, authentication should complete automatically. Provided you have an account in WebSEAL matching your email address and the epac demo application is properly configured, the very next page you see should be the epac program as an authenticated user:

Subsequent tests should not require explicit authorization as Facebook will remember your first decision. If you wish to "unremember" this decision, login to Facebook and proceed to the Account->Privacy Settings page, then under Applications and Websites click on Edit your settings, as shown:

From this page you can click on Remove unwanted or spammy applications and delete your remembered authorizations.

Conclusion

The Facebook graph API utilizes the OAuth 2.0 (draft) specification to provide a simple interface to allow a user to delegate authorized access to parts of their user profile. The interface is very straightforward for consumers to use, as evidenced by the short time in which I was able to develop a solution. A lot more information can be accessed than just the user's basic profile and email address that I have shown here. There are also optimizations that could be implemented around caching authorization tokens for performance reasons. Error flows should also be considered in more detail should you wish to use this solution yourself. Many further integration scenarios are possible, including using Facebook information for account linking and self-registration at a website just like OpenID and Information Cards. I believe this is just the tip of the iceberg and am looking forward to working more on a variety of OAuth integration scenarios.

I've received several enquiries lately about performing single sign-on (SSO) from on-premise environments, typically intranets or extranets, to Office 365 software-as-a-service (SaaS) subscriptions. Microsoft's documentation, specifically for directory integration with Office 365, is targeted at customers that utilize Active Directory (AD) in-house, and guidance is provided for provisioning users to Office 365 from AD using a directory synchronization tool and performing SSO using Active Directory Federation Services (ADFS). In this article I will explain how to provision users without needing to have them stored in AD, and how to perform SSO to Office 365 using IBM's Federated Identity Manager (FIM) instead of ADFS. The applicability of these guidelines may change over time as either IBM or Microsoft update management interfaces; however, the steps were working in August 2012 when I prepared this article. Note also that IBM's federation support extends only to the WS-Federation protocol and other published FIM product capabilities; IBM does not control the interfaces to Office 365 or how they may evolve over time. The techniques described in this article are not part of Microsoft Office 365's official documentation, and this technique has not yet been endorsed by Microsoft as part of their "Works with Office 365" program, although it largely makes use of Microsoft PowerShell tools for configuration. That said, I am pursuing that with the Microsoft Office 365 team and will publish any progress.

I would like to acknowledge that several colleagues assisted in developing this material, and specifically recognize Budi Mulyono for his work on this topic.

Let's start with a diagram depicting the desired runtime state:

Technical Summary

At a high level, these are the technical design and implementation steps that I needed to perform:

Establish an Office 365 subscription

Install the Windows PowerShell and Microsoft Online Services Module

Decide on a UPN naming convention

Decide on a UUID/ImmutableID strategy

Configure a FIM federation for SSO to Office 365

Register a domain with Office 365 and register federation endpoints

Provision users to Office 365

Test Single Sign-on

Configuration and design details

Establish an Office 365 subscription

Somebody else in my team got me an Office 365 subscription, so by the time I got engaged I already had login credentials. Logging in at https://portal.microsoftonline.com gave me access to a portal with administrative functions.

Install the Windows PowerShell and Microsoft Online Services Module

Two components need to be installed, first the Microsoft Online Services Sign-In Assistant, and then the Microsoft Online Services Module for Windows PowerShell. Both 32 and 64 bit versions are available.

You can then start the Microsoft Online Services Module for Windows PowerShell, which we will use later to provision users to Office 365.

Before using any of the cmdlets, you must first connect to your domain with the cmdlet Connect-MsolService. This will pop up a dialog box for login using the same administrator username and password as your Office 365 subscription:

You should do this now to ensure you have valid credentials.

Decide on a UPN naming convention

Office 365 users are identified by two pieces of information - a UPN (of the format username@domainname) and an ImmutableID (a never-recycled UUID for a user account). The UPN will typically be your local account username appended with @domainname for a registered domain you own. If users in your local registry are already represented in this format then there will be no need to map the username during SSO to append the "@domainname". At this stage just determine how you are going to logically map a local account username to the UPN format. The identity mapping rule in the FIM federation will need to implement your mapping when we configure the federation later.
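As a minimal Java sketch of such a mapping (assuming the simple username + "@domainname" convention; the class name is my own and the real logic would live in your FIM mapping rule):

```java
public class UpnMapper {
    // Map a local account username to the UPN format username@domainname.
    // If the local name is already in UPN format, it is returned unchanged.
    public static String toUpn(String localUsername, String domainName) {
        if (localUsername.indexOf('@') >= 0) {
            return localUsername;
        }
        return localUsername + "@" + domainName;
    }
}
```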

Decide on a UUID/ImmutableID strategy

As you'll see later when provisioning users and when performing SSO to Office 365 there are two important pieces of information that need to be shared and consistent between the on-premise user registry and Office 365's online registry. These are a UPN (format: username@domainname) and ImmutableID.

The UPN is fairly straightforward and has already been covered.

The ImmutableID requires a little more consideration. This should be a non-recycled unique identifier for the account. AD accounts have this concept built in and transparent to user administration. Tivoli Access Manager accounts also have this same concept built in (in TAM it's called the principal UUID). Accounts in Office 365 require a unique identifier be set during provisioning, and this same unique identifier must be passed as an attribute in the SAML assertion used during WS-Federation SSO at runtime. The question is - what value should we use for the UUID and where is it stored so that later it can be retrieved at runtime?

There are lots of choices, but I'm going to describe two and you can then decide for yourself what makes sense in your environment:

Using TAM principal UUID for ImmutableID

Using the FIM alias service to manage ImmutableID

Using TAM principal UUID for ImmutableID

This technique is only applicable if you use Tivoli Access Manager WebSEAL as your point of contact for FIM, with standard user authentication from the TAM registry, as otherwise your credentials will not have a principal UUID attribute in them at runtime. The idea is that you need to figure out the principal UUID that TAM assigned to a user account, then use that same UUID when provisioning the Office 365 account and when performing SSO using WS-Federation in FIM. This technique is attractive because when doing SSO at runtime the UUID will already be in the TAM credential, and it's trivial to write a FIM mapping rule to retrieve it and insert it into the SAML assertion. One drawback to this technique is that you don't really have any way to tell at runtime whether a user has actually been provisioned to Office 365 before attempting SSO. The presumption is that provisioning is done before a user attempts SSO, otherwise SSO will fail at Office 365 (and the error doesn't really say "user not provisioned" or anything useful like that). For a TAM domain "default" and a user "shane", here's how to find the TAM principal UUID with an ldapsearch:

If you are familiar with the format of a TAM credential (e.g. read my Practical TAM Authorization API article), the secUUID will be the value of the AZN_CRED_PRINCIPAL_UUID attribute. We will use this later both to provision the Office 365 account and in a FIM mapping rule for WS-Federation SSO. For now you need to find the principal UUID for each of the users you intend to provision into Office 365.

Using the FIM alias service to manage ImmutableID

Note: Using this technique first requires you to have configured the alias service for FIM. Please see FIM documentation for how to do this for either the LDAP or JDBC alias service implementations.

If you are not using TAM as your point of contact, or you don't want to use the TAM principal UUID as the Office 365 ImmutableID, then you need to store and manage a different UUID for this purpose. FIM provides a generic alias service for just this kind of situation. It is most commonly leveraged in SAML 2.0 federations with persistent name identifiers, however it is equally applicable in account linking scenarios such as this one. I've demonstrated its use in previous account linking and self-registration articles.

The supported programmatic interface to the FIM alias service is via the APIs made available in the IDMappingExtUtils class, which can be called from JavaScript and Java mapping rules in the FIM Security Token Service (STS). That means that to populate the alias service you will need to create an STS trust chain for that purpose and call those APIs. Here's how to do that.

First create an STS chain with the following structure:

STSUU (validate)

Default Map (map)

STSUU (issue)

The chain selection criteria can be set to anything that is going to match the RST request message that we'll send to the STS. I suggest using the default "Validate" request type (it will match my example RST below), and set:

AppliesTo Address: http://appliesto/alias

Issuer Address: http://issuer/alias

This screenshot shows the chain configuration from my test environment:

For the default map module, use the following javascript mapping rule:

To provision an alias for a user, use the technique described in my Using CURL to send requests to the FIM Security Token Service article, with an RST such as this. Note that you should substitute in your own values for the username and alias (ImmutableID) attributes for each user you wish to provision. The federationID can be any value, however it is important that you use the same value consistently both for this step and for the mapping rule used later for the federation. Here is an example RST to start with:

Note that by changing the operation attribute value to fetch and re-sending the RST, you can test that the alias service value was stored correctly (look for an aliasValue attribute in the returned STSUU). Similarly, the mapping rule also supports the delete operation.

By now you should know which technique you are going to use for ImmutableIDs for the federation, and in the case of the alias service approach you should know how to provision those IDs to the alias service. You can use any unique, case-sensitive value you wish for the alias, provided it is acceptable as an ImmutableID by Microsoft. Using uuidgen is a fairly good way to get a unique id for this purpose. We now have the information we need to provision users to the Office 365 online service.
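If you prefer Java over uuidgen, an equivalent using the standard library (class name is my own) is:

```java
import java.util.UUID;

public class ImmutableIdGenerator {
    // Generate a unique, case-sensitive value suitable for use as an
    // alias/ImmutableID, equivalent to running uuidgen on the command line.
    public static String newImmutableId() {
        return UUID.randomUUID().toString();
    }
}
```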

Configure a FIM federation for SSO to Office 365

Creating the federation is divided into two parts - creating the federation configuration and adding the partner configuration.

Creating a WS-Federation passive profile identity provider federation

Using the FIM management console, create a WS-Federation passive profile federation using a SAML 1.1 token. Establish your point of contact URL in the normal manner, with or without a junction depending on whether you are using WebSEAL or WebSphere as your point of contact. In my example federation I used the federation name office365 with a point of contact of https://profile.ibmidentitydemo.com/FIM/sps.

The identity mapping rule is particularly important as it is responsible for populating the UPN and ImmutableID attributes. This JavaScript mapping rule should serve as a good example; a flag at the top toggles between the two ImmutableID techniques mentioned earlier in this article:

You can tune the security token NotBefore and NotOnOrAfter timeframe to allow for clock skew. During initial testing it is a good idea to make these larger values to avoid this being the reason an assertion is rejected. In my example configuration I set the values to 600 seconds each, allowing a 20-minute total validity window. Settings for assertion signatures should be left at their default values. It is important to note your final WS-Federation endpoint as it will be used later when verifying your DNS domain with Office 365. In my federation this was https://profile.ibmidentitydemo.com/FIM/sps/office365/wsf.

This screen capture shows my federation properties:

Adding Office 365 as a federation partner

Use the wizard in the FIM console to add a partner to your office365 federation. When entering WS-Federation configuration data, use:

WS-Federation Realm: urn:federation:MicrosoftOnline

WS-Federation Endpoint: https://login.microsoftonline.com/login.srf

Maximum request lifetime: -1

The signing key you use for your federation should be a real public/private key pair that you have either purchased from a commercial certificate authority or generated from your own internal CA. Do not use the testkey as shown in these examples.

This screen capture shows my federation partner properties:

After creating the federation and partner and re-loading configuration, don't forget to run the tfimcfg utility to update TAM policy if you are using WebSEAL as a point of contact.

This completes the FIM configuration requirements for Office 365 integration.

Register a domain with Office 365 and register federation endpoints

The next important step on the path to SSO is to register and verify your domain with Office 365. This has to be a DNS domain that you own. Domain verification is a multi-step process which requires you to prove that you own the domain. Verification of the domain will also result in establishing the SSO endpoints and signing certificate related to our on-premise FIM federation for users from that domain. Users that will SSO from your intranet environment to Office 365 will be members of this domain. You cannot provision those users using the portal web interface, nor can those users be assigned a password and login via the portal. They will only be able to login via SSO.

First you must already own a registered DNS domain and be able to control its MX or TXT records. We will use Windows PowerShell commands to register and verify the domain, as federated domains cannot be configured via the management portal.

Start by adding the new domain to your Office 365 subscription. From the Microsoft Online Services Module for Windows PowerShell command prompt, login to your domain with Connect-MsolService if you are not already authenticated. Then add the new domain as a federated domain with the command:

New-MsolDomain -authentication federated -name <yourdomain>

For example:

At this point you need to get a piece of randomized information that Microsoft generates and add it to a TXT record or an MX record at your DNS registrar to prove you own the domain. The cmdlet Get-MsolDomainVerificationDns is used for this purpose. Here is an example of retrieving the TXT record information:

You now need to go to your DNS registrar and add a TXT record for the hostname @.<yourdomain> with the value as shown. In my example this was MS=ms92116141. How you do this will be dependent upon the management interface provided by your domain registrar.

After some period of time to allow for DNS propagation (suggested at least 15 minutes) you can attempt to verify your domain with the cmdlet Confirm-MsolDomain. This command takes a lot of parameters, and here is the example that I used for my federation (all on one line - line breaks are only shown here for formatting and readability purposes):

The IssuerUri, LogOffUri and PassiveLogOnUri are all the same value and point to the WS-Federation endpoint of the on-premise FIM installation we set up earlier. The SigningCertificate is a single long string containing the PEM-formatted public certificate corresponding to the key that is used to sign SAML assertions in the federation. This should be a real public/private key pair that you have purchased or generated with an internal CA. The command returns silently on success:

At the end of the process you should be able to go back into the management portal at https://portal.microsoftonline.com, list your domains, and see a verified domain in your list, as shown here for our domain federativo.com:

Remember: Users that will SSO from your intranet environment to Office 365 will be members of this domain. You cannot provision those users using the portal web interface, nor can those users be assigned a password and login via the portal. They will only be able to login via SSO.

Provision users to Office 365

Using Windows PowerShell, first make sure you have logged in using the Connect-MsolService cmdlet.

After successful connection you can determine your license (needed for user provisioning) with the cmdlet Get-MsolAccountSku:

You can now provision users with the cmdlet New-MsolUser. For example (replace the parameters with your own domain, user, UPN and ImmutableID, and enter the command on a single line; the line breaks are only shown here for readability):

Notice that the user is NOT provisioned with a password. They can only login via federation. You now have a user provisioned to Office 365 with a UPN and ImmutableID set and are ready to test SSO.

Test Single Sign-on

Finally we can test that single sign-on from our on-premise FIM environment works to Office 365. Start by visiting the sign-in portal at https://portal.microsoftonline.com/

You will be redirected to login at login.microsoftonline.com. In the User ID field, enter your provisioned user id (e.g. shane@federativo.com) and press TAB to go to the password field. At this point you should see the screen change to indicate that you do not need to provide a password for this user and instead must login via federation:

Pressing the Sign in at Federativo.com link should direct you to your on-premise FIM federation for authentication:

Completing the login at your FIM server should result in being redirected back to Office 365 where you will now be authenticated:

Conclusion

Use of a standardized protocol such as WS-Federation passive profile makes this single sign-on possible. The FIM WS-Federation integration with Office 365 is a little complicated to establish and requires sophisticated use of a set of command-line tools on Windows, but once configured works seamlessly at runtime. Further automation would be useful for account provisioning and reconciliation and I anticipate refinements in this over time.

Not too long ago I wrote a developerworks article on Using Tivoli Access Manager for eBusiness WebSEAL without a user registry. In the article I provided a working Java Servlet which acts as a WebSEAL Enhanced Authentication Interface (EAI) application. The servlet leverages Tivoli Federated Identity Manager (TFIM) to build a Tivoli Access Manager (TAM) Privilege Attribute Certificate (PAC) without requiring the TAM user registry configuration to point to your actual user registry.

Ok - enough with the acronyms. One of my recent follow-up activities was to replicate this EAI application with an XSL accelerator configuration on a Datapower SOA appliance. In this post I'll show you how it works. The information here is primarily targeted at folks with previous Datapower experience. TAM experience is essential, and won't be covered. The only real TAM-ish difference between the solution with Datapower and the Servlet solution is that the junction from WebSEAL is created to a datapower endpoint instead of to a WebSphere application service instance.

Overview

First, a quick recap of the solution architecture:

The solution component discussed in this post is the EAI Application (this time on Datapower). The work I've done was developed and tested on an XI50, however it should be equally applicable to the XS40. The EAI is an XSL Proxy Service with a custom processing policy.

Datapower Configuration

Create a new XSL Proxy Service object using the web administration panel, as shown in the following screenshot:

Name the proxy, then set the Proxy Type to Loopback Proxy. A loopback proxy indicates the response is generated by datapower itself, which will be done by the stylesheet configured in the custom processing policy. Take note of the Device Port - this will be the port that the WebSEAL junction should point to. The relevant configuration settings are shown in this screenshot:

Now configure a new Proxy Policy, by clicking on the + next to the Proxy Policy drop-down list. A new dialog box will open to configure the policy. Name the policy, then click on New Rule. A match rule (diamond with the = inside) will be added to the policy configuration wizard, and to that you should add an Advanced policy object by dragging it to the policy editing line just to the right of the match rule. Do the same for a Transform object, so that you have a policy rule with three objects, as shown here:

Double-click on the match rule to edit it, and add a URL matching rule. This determines which URLs match your request. For testing purposes I just set mine to '*', which matches everything:

After successfully configuring the match rule, you should see the surrounding yellow highlighting vanish from the object in the policy editor. Next, double-click on the advanced object in the processing policy. Be sure to select the Convert Query Params to XML radio button:

Press Next, then Done to select the default settings for the object. Finally, double-click on the Transform object in the policy editor, and upload this stylesheet. After uploading the stylesheet, use the advanced tab to configure the TFIM parameters used by the stylesheet:

Note that in the example stylesheet, the request to the TFIM STS contains a UsernameToken with a username and password rather than an STSUniversalUser. There is a commented-out example with an STSUniversalUser as well. The stylesheet (and the TFIM STS configuration) will need customization based on your own real authentication requirements, however all the building blocks are in place, and no Java code needs to be written with this approach.

After you have configured the transform object, the processing policy configuration is complete, so press Apply Policy, then Close Window to return to the main XSL Proxy configuration panel. Press Apply, and your XSL proxy is ready to use.

WebSEAL Configuration

WebSEAL configuration is required to junction to the datapower endpoint (remember the Device Port), and set an EAI trigger URL to match the URL you will use to datapower.

Details of the Processing Policy Stylesheet

The main take-home from this article is the example stylesheet. You will almost certainly need to customize it, however it is quite readable and easy enough to understand with just a modest knowledge of XSL. It has the ability both to display the login form and to process the parameters when submitted from that form. You can modify the number and names of these parameters however you need to; just look at how the username and password are used in the existing example.

The advanced object in the processing policy (which is executed before the stylesheet) is also an interesting component of the solution. Using the excellent "Probe" debugging tool of Datapower, you can see the before and after formats of the request of a POST'd form. Understanding this process is quite important to understanding how the FORM post parameters are referenced in the example stylesheet. Here are a couple of screenshots from the probe that show the details:

FORM Parameters BEFORE conversion to XML

FORM Parameters AFTER conversion to XML

Final Thoughts

This technique describes how to use a Datapower SOA appliance as a simple web application, which in this instance performs a WS-Trust token exchange with the TFIM STS and sets HTTP response headers which are consumed by WebSEAL as authentication data. Many variants are possible, and the Datapower SOA appliance provides a rich set of XSL extension functions which can be utilized for those purposes. I am certainly no expert on Datapower capabilities, but found the platform very easy to work with. The Probe utility in particular is excellent and really aided with debugging exercises.

If you have any questions on using this technique, of course feel free to contact me.

I was recently asked to demonstrate a use case for securing Software-as-a-Service (SaaS) offerings, in particular GoogleApps. The concept is fairly straightforward - there are a growing number of hosted online services that corporations and individuals may subscribe to instead of self-hosting or using desktop applications. GoogleApps is one example, offering hosted email (GMail), online document authoring, management and sharing (Google Docs), website authoring (Google Sites) and shared calendar services (Google Calendar).

Subscribers to GoogleApps Premier Edition associate a GoogleApps account with a DNS domain, then configure user accounts for one or more users within that domain. Default authentication for a user to their GoogleApps account is via a username and password managed by GoogleApps itself. The focus of this article is to look at an alternate SSO offering from GoogleApps which leverages SAML 2.0, and how to configure Tivoli Federated Identity Manager (TFIM) as the Identity Provider to a GoogleApps account. This is particularly useful for corporations who offer employees an authentication portal today and would like to maintain that portal as a single sign-on for users to bridge to cloud-based services such as GoogleApps.

Technical Overview

GoogleApps offers single sign-on services via a very basic subset of SAML 2.0. For the SAML-savvy reader, here's the highlights:

Single sign-on is achieved with either SP-initiated or unsolicited IDP-initiated SAML 2.0 single sign-on.

For SP-initiated SSO, the HTTP Redirect binding is used for the AuthnRequest message, which Google does not sign.

The single sign-on response uses the HTTP POST binding (also commonly known as browser-post) and a digital signature is required on the assertion.

The GoogleApps assertion consumer service expects the user's GoogleApps account email address as the NameID in the SAML Subject.

One of the major advantages of using a browser-post profile for single sign-on is that only the browser needs to be able to contact both the identity provider and service provider URLs. There are no configured SOAP backchannels, and no other direct communication is needed between GoogleApps and the partner identity provider. This makes the solution work very well for scenarios where the identity provider is located on a VPN or other non-public server that only the user's browser can access (e.g. an intranet environment).

This diagram (linked from the Google SSO page) describes the message flow for the SAML 2.0 browser-post SSO when it is SP-initiated:

The IDP-initiated flow is very similar, just a subset: the user starts at the identity provider and clicks a link there, which invokes step 5 onward.

An example of the SAML assertion used in the flow (with the signature removed for readability) is shown here. Note that the NameID element within the subject carries the GoogleApps identity during the single sign-on:
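The essential structure can be sketched in a few lines. The following builds a heavily stripped-down, unsigned assertion fragment (a real assertion also carries Issuer, Conditions and a digital signature) and extracts the NameID the way a consumer would:

```java
import java.io.ByteArrayInputStream;
import javax.xml.parsers.DocumentBuilderFactory;
import org.w3c.dom.Document;
import org.w3c.dom.NodeList;

// Parses a skeletal, unsigned SAML 2.0 assertion and pulls out the NameID
// value that GoogleApps matches against the account email address.
public class NameIdExtractor {

    static final String SAML_NS = "urn:oasis:names:tc:SAML:2.0:assertion";

    public static String extractNameId(String assertionXml) {
        try {
            DocumentBuilderFactory dbf = DocumentBuilderFactory.newInstance();
            dbf.setNamespaceAware(true);
            Document doc = dbf.newDocumentBuilder()
                    .parse(new ByteArrayInputStream(assertionXml.getBytes("UTF-8")));
            NodeList ids = doc.getElementsByTagNameNS(SAML_NS, "NameID");
            return ids.getLength() > 0 ? ids.item(0).getTextContent() : null;
        } catch (Exception e) {
            throw new RuntimeException(e);
        }
    }

    public static void main(String[] args) {
        String assertion =
              "<saml:Assertion xmlns:saml=\"" + SAML_NS + "\">"
            + "<saml:Subject>"
            + "<saml:NameID Format=\"urn:oasis:names:tc:SAML:1.1:nameid-format:emailAddress\">"
            + "user@yourdomain.com"
            + "</saml:NameID>"
            + "</saml:Subject>"
            + "</saml:Assertion>";
        System.out.println(extractNameId(assertion));
    }
}
```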

The rest of this entry will focus on the technical configuration requirements to get the SSO working at both GoogleApps and on Tivoli Federated Identity Manager.

SAML 2.0 Configuration Overview

SAML 2.0 integrations require the sharing of trusted configuration data (typically a set of identifiers, URL endpoints and signing certificates) between the parties involved in the federation. Integration between TFIM and GoogleApps is no different, and the following sections provide written guidelines for what it takes to get GoogleApps and TFIM to interoperate. Configuration is necessary at both Google and on the TFIM server. The information is really reference-level, and not a tutorial for the completely uninitiated. Prior experience with SAML 2.0 and TFIM are pre-requisites to get the most out of these sections.

GoogleApps Configuration

After configuring your GoogleApps domain account and establishing some users, single sign-on is enabled via the Advanced Tools of the GoogleApps control panel. This is an all-or-nothing approach: either single sign-on is enabled (to ONE IDP partner only), or local authentication is used. Unlike other product-based SAML 2.0 offerings, GoogleApps doesn't deal with standard metadata document formats, and instead prompts for the minimal necessary sub-components needed for service provider enablement. The configuration panel has settings with the following properties:

Configuration Option

Notes

Enable Single Sign-on

A checkbox that must be selected for single sign-on to be used.

Sign-in page URL

This should be the single sign-on service URL for the TFIM SAML 2.0 identity provider federation, something like:

https://myidp.ibm.com/FIM/sps/saml20idp/saml20/login

Sign-out page URL

A URL your users should be redirected to if they click Logout on the GoogleApps pages. It can be any URL; in this example I've pointed it to the logout URL of the identity provider, which is a Tivoli Access Manager WebSEAL server:

https://myidp.ibm.com/pkmslogout

Change password URL

A URL users should be redirected to if they select the change password option from GoogleApps. In this example I've pointed it to the WebSEAL change password functionality:

https://myidp.ibm.com/pkmspasswd

Verification Certificate

This is a control that allows you to upload a PEM-formatted text file containing the public signing verification key for signed SAML assertions sent from the identity provider. This should be the public key matching the signing certificate configured for assertions in the TFIM SAML 2.0 identity provider federation.

Use a domain specific issuer

This is an optional checkbox that controls the entity ID that Google will send in AuthnRequests (and what is expected in the AudienceRestrictionCondition of assertions). I recommend it always be checked so that the same identity provider may be used to provide single sign-on services to more than one GoogleApps domain.

That's really all there is to configuring GoogleApps as a SAML 2.0 SSO service provider for Tivoli Federated Identity Manager. Obviously the URLs may vary for your own TFIM installation. Now when you access a GoogleApps URL like http://docs.google.com/a/<yourdomain.com> you will be redirected to the sign-in page URL with an AuthnRequest using the HTTP redirect binding.

There is however some configuration data needed about the GoogleApps service provider that is to be shared with the TFIM identity provider. TFIM expects this data in a standard SAML 2.0 metadata XML file. As GoogleApps doesn't provide such a file, it must be constructed manually. The file format is fairly trivial, and an example you can use as a template is shown below. The two variable portions are the pieces where you see yourdomain.com.
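As a starting point, the template can be generated programmatically. The entity ID and assertion consumer service URL below follow Google's documented domain-specific issuer convention; treat them as assumptions and verify against the current GoogleApps SSO documentation:

```java
// Generates a minimal SAML 2.0 SP metadata template for a GoogleApps
// domain. The entityID and AssertionConsumerService Location values are
// assumptions based on Google's domain-specific issuer convention;
// verify them against your own GoogleApps SSO settings.
public class GoogleAppsMetadata {

    public static String metadataFor(String domain) {
        return
          "<EntityDescriptor xmlns=\"urn:oasis:names:tc:SAML:2.0:metadata\""
        + " entityID=\"google.com/a/" + domain + "\">"
        + "<SPSSODescriptor"
        + " protocolSupportEnumeration=\"urn:oasis:names:tc:SAML:2.0:protocol\">"
        + "<AssertionConsumerService index=\"1\""
        + " Binding=\"urn:oasis:names:tc:SAML:2.0:bindings:HTTP-POST\""
        + " Location=\"https://www.google.com/a/" + domain + "/acs\"/>"
        + "</SPSSODescriptor>"
        + "</EntityDescriptor>";
    }

    public static void main(String[] args) {
        System.out.println(metadataFor("yourdomain.com"));
    }
}
```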

Tivoli Federated Identity Manager 6.2, fixpack 3 or later should be used. At the time of writing this fixpack is not yet publicly available, so if your need is more urgent, please contact me to discuss options. A solution can be made to work with earlier versions of TFIM, however it requires manual provisioning of persistent name identifiers in the TFIM alias service.

When using the TFIM federation wizard to create SAML 2.0 identity provider configuration, use a Basic Web SSO profile. In reality only the single sign-on service will be used, so you can later disable single logout if you wish.

On the Signature Options panel, do NOT check the box which requires signatures for incoming messages. This is important for SP-initiated SSO as Google does not sign AuthnRequest messages.

Still on the Signature Options panel, leave the radio button selection for outgoing messages on the default: Typical set of outgoing SAML messages and assertions are signed. The signing key you select should be the private key matching the certificate uploaded in the GoogleApps SSO configuration.

All other options are set at their default values.

The most important part of the remaining federation configuration is the identity mapping options. Google requires the email address of the GoogleApps user be transmitted in the NameID element of the SAML Subject. This lends itself to the email address NameID format, and the TFIM mapping rule is responsible for populating the STSUniversalUser principal name with this email. Following is an example XSLT mapping rule for TFIM which will work as-is if the user names of the TFIM environment exactly match the GoogleApps usernames. You will also see a commented out section which allows you to do a different identity mapping where just the email address suffix is modified (or added) to the TFIM usernames (e.g. TFIM-user = shane (or shane@ibm.com), GoogleApps-user = shane@yourdomain.com).
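The mapping logic itself is simple enough to express in a few lines. This Java sketch mirrors the behaviour described above (it is not the XSLT rule itself, and the domain value is a placeholder):

```java
// Mirrors the logic of the XSLT mapping rule described above: if the TFIM
// principal name already carries an email suffix it is replaced, otherwise
// the GoogleApps domain suffix is appended. "yourdomain.com" below is a
// placeholder for your real GoogleApps domain.
public class GoogleAppsNameMapper {

    public static String toGoogleAppsUser(String tfimUser, String googleDomain) {
        int at = tfimUser.indexOf('@');
        String localPart = (at >= 0) ? tfimUser.substring(0, at) : tfimUser;
        return localPart + "@" + googleDomain;
    }

    public static void main(String[] args) {
        System.out.println(toGoogleAppsUser("shane", "yourdomain.com"));
        System.out.println(toGoogleAppsUser("shane@ibm.com", "yourdomain.com"));
    }
}
```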

After configuring the federation and partner, there are some manual settings which MUST be made to the TFIM federation configuration file (feds.xml) to enable successful interoperability with GoogleApps:

In the identity provider's self configuration, there is a multi-valued property called SAML2.NameIDFormat. Add an additional value for urn:oasis:names:tc:SAML:1.1:nameid-format:unspecified. This is necessary for SP-initiated SSO to work, as the GoogleApps SP sends an AuthnRequest with the unspecified NameID format.

Again in the identity provider's self configuration, add a new property called SAML2.DefaultNameIDFormat with a value of urn:oasis:names:tc:SAML:1.1:nameid-format:emailAddress. This will instruct the federation to use the emailAddress NameID format by default when it receives an AuthnRequest with the unspecified format. Again this is only needed to make SP-initiated SSO work properly.

These manual configuration updates should be possible with the TFIM command-line interface, and when that is verified I will update the blog entry with the relevant commands. For now the recommendation is to manually edit <websphere_profile_root>/config/itfim/<tfim_domain>/etc/feds.xml.

After the configuration file is updated, either restart the WebSphere server, or simply use the TFIM console to navigate to the Domain Management -> Runtime Node Management panel, and Reload Configurations for the domain.

Configuration is now complete, and you should be able to SSO between TFIM and your GoogleApps domain.

Invoking Single Sign-on

The SP-initiated SSO flow is invoked simply by trying to access the target application, for example:

http://docs.google.com/a/<yourdomain.com>

The IDP-initiated flow is a little different as you need to tell the TFIM IDP which SSO profile and NameID format to use. A typical IDP-initiated SSO link would look something like:

There are a variety of extensions and variants possible to the solution described above. Consider some of these:

TFIM can easily be configured to act as an IDP aggregator, allowing bridging of different SSO protocols from different authentication domains, or proxying to various alternate IDPs of the user's choice, whilst still being configured with a 1:1 SAML 2.0 relationship to a particular GoogleApps domain. For example, you might allow users to login to YOUR authentication portal with OpenID, Information Cards or other SSO protocols, and they are then transparently signed on to GoogleApps via SAML 2.0.

If you have a large body of corporate users, and suspect only a subset of them will leverage GoogleApps, you could use the GoogleApps Provisioning API from within a TFIM identity provider mapping rule to do just-in-time provisioning of the users to GoogleApps rather than bulk-loading or manually provisioning all the accounts. This would be a very effective component of a complete corporate solution and demonstrates the flexibility of the TFIM identity mapping solution. The TFIM identity mapping plug-in point could also be used to invoke a Tivoli Directory Integrator (TDI) and/or Tivoli Identity Manager (TIM) workflow for more complex provisioning scenarios.

Conclusion

This article demonstrates how a simple SAML 2.0 single sign-on flow can be used to secure access to GoogleApps cloud services from Tivoli Federated Identity Manager. This is just the tip of the iceberg for cloud services security, but it has the very desirable attributes of being a simple integration that will work out of the box in small volumes without any custom code development. Larger deployments would require some auto-provisioning integration, however all the required building blocks are already in place. Other cloud services offerings like Microsoft Azure services also have federation gateways in place, and it seems like a natural capability for all the cloud vendors to offer. Whilst each may use its own federation protocol, Tivoli Federated Identity Manager, with its best-of-breed broad range of protocol offerings, is uniquely positioned to offer seamless authentication portal integration to any concurrent set of cloud services.

Tivoli Federated Identity Manager OAuth Demonstrations

This article guides you through how to try out a new capability of the Tivoli Federated Identity Manager self service demonstration site, which I introduced in a previous article. In particular, the site now demonstrates the OAuth 2.0 service provider capabilities that are new in TFIM 6.2.2. The site includes a couple of demonstration clients, and you can also write your own client to work with this site. I have also provided on this very page a simple AJAX client, pre-registered with the demonstration site, that works with the OAuth 2.0 implicit grant flow.

Next, go to the Manage your Attributes page on the demonstration site and add an attribute called email and another called phone, with any values you like. Both email and phone will be used as requested scopes in the OAuth flow. If you don't have values for them in your attribute list on the demonstration site then that's OK; they just won't be available in the protected resource response.

I have found current versions of Firefox and Chrome work with default settings.

For Internet Explorer, CORS is disabled by default for sites in the Internet Zone. To enable it:

Go to Tools -> Internet Options -> Security

Click on Custom Level

Find Miscellaneous -> Access data sources across domains

Select "Enable" or "Prompt", then OK

Provided you are using a CORS-capable browser, start the OAuth flow by clicking on the Redirect for Authorization link below. After you have authenticated to the demonstration site and granted access you will be returned to this page with the access token in a URL fragment. The access token will also appear in the Access Token entry field below. You can then press Get Resource to retrieve the protected resource. Note that you can continue to press Get Resource to re-retrieve the JSON profile until the access token expires. If you were to open another browser and go to the Manage your attributes page and modify an attribute value, then return to this browser and press Get Resource you will immediately see the changed attribute value in the protected resource JSON data.
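The Redirect for Authorization link is simply a URL of a standard shape for the implicit grant. The sketch below builds such an authorization request; the endpoint path, client_id and redirect_uri are placeholder values, not the demonstration site's real ones:

```java
import java.io.UnsupportedEncodingException;
import java.net.URLEncoder;

// Builds an OAuth 2.0 implicit grant authorization request URL
// (response_type=token). All concrete values passed in main() are
// placeholders; the demonstration site's real endpoint and client
// registration differ.
public class ImplicitGrantUrl {

    public static String authorizeUrl(String endpoint, String clientId,
            String redirectUri, String scopes) {
        return endpoint
            + "?response_type=token"   // implicit grant: token in URL fragment
            + "&client_id=" + encode(clientId)
            + "&redirect_uri=" + encode(redirectUri)
            + "&scope=" + encode(scopes);
    }

    private static String encode(String s) {
        try {
            return URLEncoder.encode(s, "UTF-8");
        } catch (UnsupportedEncodingException e) {
            throw new IllegalStateException(e); // UTF-8 is always available
        }
    }

    public static void main(String[] args) {
        System.out.println(authorizeUrl(
            "https://demo.example.com/sps/oauth20/authorize",
            "ajaxclient", "https://demo.example.com/thispage", "email phone"));
    }
}
```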

Demonstration AJAX OAuth Client Using Implicit Grant Flow

Operations

OAuth Clients on the Demonstration Site

The demonstration site itself also has a number of built-in OAuth clients showing different types of OAuth flows. The OAuth Services page has links and instructions for each of these demonstration clients as well as the ability for you to self-register your own OAuth client application. The page also contains comprehensive information for OAuth-aware developers on everything you need to know to write your own OAuth client to interact with the demonstration site OAuth service provider.

Take some time to explore each of these clients and watch how they work. Ultimately they will all retrieve the same protected resource however the way in which each obtains an access token is unique to that specific OAuth flow.

Driving an OAuth flow manually

In this section I will demonstrate how you can drive the most common OAuth flow (the authorization code flow) manually with a browser and curl.

You will then be prompted to authenticate and authorize the client, as shown:

After consent approval you are typically redirected back to the client with the authorization code in a query parameter; however in this case our client has no registered redirect URI, so the authorization code will be displayed on the screen for manual transfer to the client:

In the demonstration environment authorization codes expire after 300 seconds, so you have a little time to do this next step. We will now act as the client and use curl to exchange the authorization code for an access token and refresh token:
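The curl exchange works because the token endpoint accepts a standard OAuth 2.0 form POST. As a sketch (the client credentials and code value in main() are placeholders, not real demonstration-site values), the body of that request can be assembled as follows:

```java
import java.io.UnsupportedEncodingException;
import java.net.URLEncoder;

// Builds the form body a confidential client POSTs to the token endpoint
// to exchange an authorization code for access and refresh tokens
// (OAuth 2.0 authorization code grant). The values used in main() are
// placeholders for illustration only.
public class TokenRequest {

    public static String body(String code, String clientId, String clientSecret) {
        return "grant_type=authorization_code"
            + "&code=" + encode(code)
            + "&client_id=" + encode(clientId)
            + "&client_secret=" + encode(clientSecret);
    }

    private static String encode(String s) {
        try {
            return URLEncoder.encode(s, "UTF-8");
        } catch (UnsupportedEncodingException e) {
            throw new IllegalStateException(e); // UTF-8 is always available
        }
    }

    public static void main(String[] args) {
        // e.g. curl -d "<this body>" <token endpoint URL>
        System.out.println(body("SplxlOBeZQQYbYS6WxSbIA", "myclient", "mysecret"));
    }
}
```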

Conclusion

This article has given you some insight into the OAuth capabilities on the TFIM demonstration site, and hopefully you have the information necessary to write and test your own OAuth clients against this site. While the protected resource currently on offer is very trivial (simply the ability to get a user's attributes, with the attribute names representing OAuth scopes), you can imagine this being used for protected resources that are actually REST APIs performing operations at the service provider (e.g. Twitter's ability to have a third party application post a tweet on behalf of a user). Should you have any questions or feedback on TFIM's OAuth capabilities, or would like to arrange a real time demonstration, please feel free to contact me.

In previous postings on this blog and developerworks I've provided example Java code to programmatically communicate with a Security Token Service (STS) using the WS-Trust protocol. Tivoli Federated Identity Manager includes an STS that supports both WS-Trust 1.2 and WS-Trust 1.3 specifications. This posting discusses different WS-Trust Java client implementations, with a focus on moving to the WebSphere WS-Trust Client API.

Technical Overview

In my previous writings I have demonstrated two other WS-Trust clients. These are:

The Higgins WS-Trust Client shipped with TFIM 6.2 and later. This was used in:

Both of these options support WS-Trust 1.2 only and will run on WebSphere 6.1 and later. The Higgins client requires several utility JAR files however it is portable outside of the WebSphere JRE environment. The home-grown example is smaller, but requires your own maintenance for optional WS-Trust elements that I haven't already included in the example code.

IBM's preferred direction is to consolidate on a single WS-Trust client that has been developed as part of the WebSphere APIs. This WS-Trust Client API is available from WebSphere version 7.0.0.7 (that's WebSphere 7.0, fixpack 7) and later, and is not available in WebSphere 6.1. There is a lot of information about the WebSphere WS-Trust Client API in the Information Center. The goal of my post is not to repeat that information, but instead to show you some specific examples of how to use the WebSphere WS-Trust client to communicate with the Tivoli Federated Identity Manager STS. I recommend you read the WebSphere Information Center reference material on the WS-Trust Client API in parallel with the rest of this article.

This post contains downloadable examples at the end which include source and provide WebSphere WS-Trust client equivalents of both the examples I have previously done with alternate STS clients.

General WebSphere Configuration

You will notice that the WebSphere WS-Trust client API is built on JAX-WS and uses a completely different configuration model for transport and message-level security than JAX-RPC. Of particular interest for communicating with the TFIM STS is how we go about configuring SSL connections and optional basic-authentication support for the client. In the JAX-WS model in the J2EE container this is done external to the application code with Policy Sets and Bindings, a little like how SSL configurations can be managed outside of application code for JSSE clients.

Configuring the TFIM STS for Restricted Access

If your STS is listening on completely unprotected HTTP with no authentication requirement (on a trusted network), then no specific Policy Set and Binding is required. However, for the purposes of this demonstration my STS is only accessible via SSL, and requires basic-authentication using the user stsclient with a password. To configure this constraint on my STS I installed and deployed the TFIMRuntime on WebSphere 7, then under Applications -> Application Types -> WebSphere enterprise applications -> ITFIM Runtime -> Security role to user/group mapping I changed the TrustClientInternalRole to just the user stsclient. This userid had to be created in the WebSphere registry. Of course group-level access control is probably more appropriate; this was just an example.

Given these constraints, we need to configure a policy set and binding to suit.

Information on configuring WebSphere policy sets and bindings for STS communications can be found here in the WebSphere Information Center and this was used as a guide for producing the instructions below.

Configuring a Policy Set for STS Communications

The policy set controls HTTP and SSL transport general connection parameters. Using the WebSphere console under Services -> Policy Sets -> System policy sets, I created a new TFIMWSTrustPolicySet with the following resources:

HTTP transport

SSL transport

The HTTP transport resource of the policy set is configured as shown:

The SSL transport resource of the policy set is configured as shown:

Configuring a Policy Set Binding for STS Communications

The policy binding controls specific connection runtime data. In our case this is the basic-authentication username and password, and the SSL certificate chain for server-authentication verification of the SSL connection. If you were using a mutually-authenticated SSL connection for the STS, it would be configured here also. Using the WebSphere console under Services -> Policy Sets -> General client policy set bindings, I created a new TFIMWSTrustClient policy set binding with the following resources:

HTTP transport

SSL transport

The HTTP transport resource of the policy set binding is configured as shown. Note the inclusion of the stsclient user and password for basic-authentication support:

The SSL transport resource of the policy set binding is configured as shown. Note that because my application that is acting as the WS-Trust client is actually running on the same WebSphere as the ITFIMRuntime I was able to use the NodeDefaultSSLSettings as-is. If you were running the STS on a different machine with a different SSL endpoint you would want to create your own SSL configuration settings object with the trusted certificate chain of the SSL cert at the STS.

The WebSphere WS-Trust client can be used for either WS-Trust 1.2 or WS-Trust 1.3. This is controlled by the Namespace URI passed to the ProviderConfig and RequesterConfig constructors.

Similarly the SOAP version can be controlled with configuration on the RequesterConfig. TFIM expects SOAP 1.1 for WS-Trust 1.2, and SOAP 1.2 for WS-Trust 1.3.

Initialization of a ProviderConfig

Several configuration properties can be set on the ProviderConfig object. The important configuration items for communications with the TFIM STS are:

Configuration Property

Description

WS-Trust Namespace URI

Passed in the constructor to ProviderConfig and will be either Namespace.WST12 or Namespace.WST13

The URL to the STS

Also passed in the constructor of the ProviderConfig. In my test environment this is https://localhost:9443/TrustServer/SecurityTokenService (for WS-Trust 1.2) or https://localhost:9443/TrustServerWST13/services/RequestSecurityToken (for WS-Trust 1.3)

Policy Set Name

Set with a call to providerConfig.setPolicySetName("YourPolicySet"). This is only needed if you have a Policy Set such as described earlier.

Policy Set Binding Name

Set with a call to providerConfig.setBindingName("YourPolicySetBindingName"). This is only needed if you have a Policy Set Binding such as described earlier.

Policy Set Binding Scope

Set with a call to providerConfig.setBindingScope("domain"). This is only needed if you have a Policy Set Binding such as described earlier. It can be one of "domain" or "application" depending on how you configure your policy sets. There is more information on this in the WebSphere InfoCenter previously referenced.
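Putting the table's settings together, the initialization might look like the following sketch. This is not runnable outside WebSphere 7.0.0.7 or later, it uses only the calls named in this article, and the exact constructor signatures should be verified against the Information Center:

```java
// Sketch only: consolidates the ProviderConfig settings from the table
// above, plus the RequesterConfig namespace notes. Verify the exact
// constructor signatures against the WebSphere Information Center.
ProviderConfig providerConfig = new ProviderConfig(
        Namespace.WST13,
        "https://localhost:9443/TrustServerWST13/services/RequestSecurityToken");
providerConfig.setPolicySetName("TFIMWSTrustPolicySet");
providerConfig.setBindingName("TFIMWSTrustClient");
providerConfig.setBindingScope("domain");

// The RequesterConfig is constructed with the same WS-Trust namespace,
// and also controls the SOAP version: TFIM expects SOAP 1.1 for
// WS-Trust 1.2 and SOAP 1.2 for WS-Trust 1.3.
RequesterConfig requesterConfig = new RequesterConfig(Namespace.WST13);
```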

Notes on Configuring TFIM STS Chains

The WebSphere WS-Trust client API will set the RequestType element in the RequestSecurityToken message based on the WS-Trust Namespace in use, and the method called on the WSTrustClient object (validate or issue).

To ensure a correct match between client requests and the STS configuration, pay particular attention to the RequestType parameter that you select for Chain Mapping criteria when configuring a chain in the TFIM STS. If the WebSphere WS-Trust client will be used with WS-Trust 1.3 to make validate calls, then the RequestType MUST be set to Validate OASIS URI as shown here:

Similarly if you are using the WS-Trust 1.2 namespace for the WebSphere WS-Trust client, then it must be set to Validate.

Download 2 contains a replacement STS Mapping Module that uses the WebSphere WS-Trust client API to perform the same function as the download from Complex Federation Identity and Attribute Mapping for Tivoli Federated Identity Manager. This download is a JAR file; if your browser saves it as a .zip file, rename it to a .jar file. The bundle uses a slightly different class and package name from the original so that they can co-exist on the same system; I wanted to avoid naming conflicts due to the risk of stamping on existing configurations. You will need to create a new module instance and include that in your federations as the custom mapping module.

The client code is slightly different in each of the downloads as the STS Mapping Module example (Download 2) requires a class loader switch to call the WebSphere WS-Trust client in the J2EE container and outside of the TFIM OSGi runtime environment.

Use of the STS Mapping Module example also requires modification of the OSGi connector MANIFEST.MF file to include additional package exports that are not included in the shipped MANIFEST.MF. This file resides at <FIM_INSTALL_ROOT>/plugins/com.tivoli.am.fim.osgi.connector_6.2.x/META-INF/MANIFEST.MF. Updates to this file are discussed in the Advanced Development considerations of the STS Module Development Tutorial. A utility application is provided as part of the com.tivoli.am.fim.sdk downloadable package to generate the updated manifest file from the existing MANIFEST.MF deployed in the FIM plugins directory and the MANIFEST.MF contained in com.tivoli.am.fim.demo.stsmap2_1.0.0.jar. Essentially, the following 7 exported packages must be added to the OSGi connector's MANIFEST.MF:

com.ibm.websphere.wssecurity.wssapi

com.ibm.websphere.wssecurity.wssapi.token

com.ibm.wsspi.wssecurity.core.token.config

com.ibm.wsspi.wssecurity.trust.config

com.ibm.websphere.wssecurity.wssapi.trust

com.ibm.wsspi.wssecurity.wssapi

org.apache.axis2.util
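For reference, manifest Export-Package values are comma-separated, with continuation lines beginning with a single space. The additions would therefore take a shape like this (the existing export list is elided with a placeholder):

```
Export-Package: <existing exports>,
 com.ibm.websphere.wssecurity.wssapi,
 com.ibm.websphere.wssecurity.wssapi.token,
 com.ibm.wsspi.wssecurity.core.token.config,
 com.ibm.wsspi.wssecurity.trust.config,
 com.ibm.websphere.wssecurity.wssapi.trust,
 com.ibm.wsspi.wssecurity.wssapi,
 org.apache.axis2.util
```

As noted above, the SDK's utility application will generate this merged manifest for you, which is less error-prone than hand-editing.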

Conclusion

The WebSphere WS-Trust client API is a powerful, flexible interface for calling security token services and will be the strategic direction for IBM customers who wish to invoke STS services programmatically. This post demonstrates how to use the API to make validation requests to the TFIM STS, and I recommend that all Tivoli Federated Identity Manager customers on WebSphere 7 who require this functionality consider this approach rather than the legacy STS clients I have previously documented.

Overview

The December 2011 release of Tivoli Federated Identity Manager 6.2.2 includes support for the OAuth 1.0 and 2.0 delegated authorization protocols. Included in this support, for each protocol, are three extensible programming interfaces that may be used to manage certain configuration and persistence information used for OAuth flows. The supported extension interfaces are:

Title

Description

Link to development material

Client Configuration Provider

Permits externalization of the storage and management of OAuth client registration data, including client identifiers, secrets and redirect URIs.

Trusted Clients Manager

Permits externalized storage and query of resource owner authorization decisions. This can be used to decide whether a resource owner needs to be prompted for consent-to-authorize at the authorize endpoint. It will also be queried when a resource owner wishes to browse their stored consent decisions.

As you can see from the links in the table above, there is already detailed development information available for these extension points. The purpose of this particular article is to provide TFIM customers with example ready-to-use plugins (including source) for database-based implementations of these extension points. The examples included in this article are not directly supported by IBM; however, the extension points that they utilize are fully supported. Source code is provided, allowing you to build, modify and extend the examples as needed for your own requirements. The example code utilizes JDBC APIs to store and retrieve information from DB2, and it should be relatively straightforward to deploy this same pattern for other database types.

There are several simple reasons why these plugins are shipped as example code rather than directly in TFIM itself as supported implementations.

You will find that you can implement some interesting customized behaviour in these extensions – for example limiting the number of access or refresh tokens that are issued to a client per resource owner, and permitting fine-grained revocation interfaces.

TFIM does not ship a database license for general use – this is something you either already have and use in your organization or need to acquire if you want to use database storage for the OAuth extensions.

There are far more database and storage technologies in the market than we can economically test and individually support.

Based on this extensive variability a conscious decision was made to establish support at the point of the programming extension points.

TFIM does ship default implementations for each of these OAuth extension points, and for a variety of scenarios these will be sufficient. A persistence-based approach such as that demonstrated in these JDBC implementations will be best suited to scenarios where one or more of the following apply:

The rest of this article covers design and deployment guidelines for the example JDBC plugins in a DB2 environment. The plugins may be used independently; however, there is one optional revocation capability demonstrated in the trusted clients manager extension that requires you to also use the JDBC implementation for the token cache extension.

Operating System and Database Preparation

First create an operating-system user called fimjdbc. This will be the username used to connect to the database. For DB2 it’s important this username is 8 characters or less. Set the password to non-expiring and something you can remember. This will need to be configured in WebSphere later.

Creating Database, Tables, Indexes and Grants

The commands in this section show all the tables necessary for BOTH the OAuth 1.0 and 2.0 implementations used by the example plugins. Of course you can change these table schemas, so long as you also make sure the SQL commands executed from the example plugins are modified to suit. Note that the trusted clients manager implementation uses the same table for both OAuth versions, however the token cache and external clients manager use different tables for OAuth 1.0 and 2.0. You only need to create tables for the version of OAuth you are using, however it causes no problems if you create all the tables.

Next you need to determine the TCP/IP port used by your DB2 instance if you don’t already have it recorded:

db2=> get dbm cfg

In the output look for line similar to:

TCP/IP Service name (SVCENAME) = db2c_db2inst1

If the value is a string name such as shown above, look up /etc/services to find the port number:

# grep db2c_db2inst1 /etc/services
db2c_db2inst1 50003/tcp

Here you can see the port number on my server is 50003. This will be used later in the WebSphere configuration.

WebSphere Configuration

JDBC Provider Configuration

Next use the WebSphere administration console to configure a JDBC provider and data source. Navigate to Resources -> JDBC -> JDBC Providers, then:

Pick a scope (I used server)

Click New. The wizard will start.

Select:

Database Type: DB2.

Provider Type: DB2 Universal JDBC Provider.

Implementation Type: Connection pool data source.

Click Next.

Set paths for the JDBC driver. These will be dependent on your DB2 installation:

DB2_UNIVERSAL_JDBC_DRIVER_PATH = /home/db2inst1/sqllib/java

DB2_UNIVERSAL_JDBC_DRIVER_NATIVEPATH = /home/db2inst1/sqllib/lib

Click Next, then Finish.

Save the config.

Authentication alias

Now create a J2C authentication alias for connecting to the database. Navigate to Security->Secure administration, applications and infrastructure->Java Authentication and Authorization Service->J2C authentication data:

Click New.

Set:

Alias: fimalias

UserID: fimjdbc <===== This is the operating system user we created originally.

Password: That user's password.

Click OK.

Save Config.

Data Source Configuration

Now create the data source that the plugins will use. Navigate to Resources->JDBC->Data Sources.

Pick a scope (server is what I used)

Click New. The wizard will start.

Set:

Data source name: FIM OAuth JDBC Service

JNDI Name: jdbc/OAuthDB

Click Next.

Select an existing JDBC Provider. You should be able to select the DB2 Universal JDBC Driver Provider configured earlier.

Click Next.

Set:

Database name: OAuthDB

Driver type: 4

Server name: <DB2 server> - maybe localhost

Port number: <See the comments on SVCENAME above - in my example this is 50003>

Click Next. When prompted for security aliases, select fimalias (created earlier) as the component-managed authentication alias, then click Next, Finish, and save the configuration.

Use the TFIM console to navigate to your OAuth federation properties panel. The custom plugins should now be available in the menu options for each of the plug-in extension types, as shown in the snippets below:

JDBC External Client Provider Configuration

JDBC Token Cache Configuration

JDBC Trusted Clients Manager Configuration

Implementation Details for the Sample Plug-ins

External Client Provider

The JDBC external client provider is very simple. The plug-in provides runtime retrieval only of OAuth client configuration data from the database. Any updates to the database (including registering new clients) must be done by your own processes external to TFIM. TFIM will check that a client is valid and enabled during the authorization and token phases of an OAuth flow, as well as each time an access token is validated. If you wish to revoke all access for a client you can either delete the client or simply set the enabled column in the database for that client to 0. The only configuration for the module is the JNDI name of the data source, which defaults to jdbc/OAuthDB.
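As a concrete illustration of the runtime contract described above, here is a minimal in-memory sketch (class and method names are mine, not the plug-in API): clients are registered and revoked by an external process, and TFIM simply asks whether a client exists and is enabled. The real plug-in answers that question with a SELECT through the jdbc/OAuthDB data source.

```java
import java.util.HashMap;
import java.util.Map;

// Hypothetical in-memory stand-in for the JDBC external client provider.
// TFIM looks clients up at runtime and treats a missing or disabled client
// as invalid; registration and revocation happen outside TFIM.
public class ClientProviderSketch {

    public static final class OAuthClient {
        public final String clientId;
        public boolean enabled;  // maps to the "enabled" column (1 or 0)
        public OAuthClient(String clientId, boolean enabled) {
            this.clientId = clientId;
            this.enabled = enabled;
        }
    }

    private final Map<String, OAuthClient> table = new HashMap<>();

    // External registration process writing a row to the client table.
    public void register(String clientId) {
        table.put(clientId, new OAuthClient(clientId, true));
    }

    // What the plug-in checks during authorization, token exchange and
    // access-token validation.
    public boolean isValidAndEnabled(String clientId) {
        OAuthClient c = table.get(clientId);
        return c != null && c.enabled;
    }

    // Revocation option 1: the equivalent of UPDATE ... SET enabled = 0.
    public void disable(String clientId) {
        OAuthClient c = table.get(clientId);
        if (c != null) c.enabled = false;
    }

    // Revocation option 2: the equivalent of deleting the client row.
    public void delete(String clientId) {
        table.remove(clientId);
    }
}
```

Either revocation option takes effect on the client's very next request, because the plug-in re-checks the table on every validation rather than caching client state.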

As a bonus piece of example code I have included a WebSphere EAR application, OAuthJDBC.ear, with full source, that provides very rudimentary client management. You can add and edit both OAuth 1.0 and 2.0 clients with this application. The application assumes the data source name is jdbc/OAuthDB. The URLs to access the application management pages are:

OAuth 1.0: https://yourwebsphere/OAuthJDBC/oauth10Clients.jsp

OAuth 2.0: https://yourwebsphere/OAuthJDBC/oauth20Clients.jsp

Token Cache

The token cache implementations facilitate storage and retrieval of OAuth tokens and grants (including authorization codes and refresh tokens for OAuth 2.0). The token cache implementations include a rudimentary cleanup thread which will periodically remove expired tokens from the database. The plugins support a "cleanup interval" configuration parameter for this purpose, which defaults to 300 seconds. Another example configuration item shown in this plug-in for OAuth 2.0 is a facility to restrict to one the number of refresh tokens issued to a client for a particular resource owner. That means that whenever a client obtains a new refresh token for a particular resource owner (for example from a new authorization code flow initiated by the client), any existing active refresh token for that client/resource-owner will be deleted. This is an example of how you can implement some of your own constraints regarding grants and tokens within the plug-ins. The module also has a configuration item for the JNDI name of the data source, which defaults to jdbc/OAuthDB.
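The two policies described above can be sketched in memory as follows (a hypothetical model, not the shipped JDBC code): expired tokens are purged as the cleanup thread would do, and storing a refresh token displaces any existing refresh token for the same client/resource-owner pair.

```java
import java.util.Iterator;
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

// Hypothetical in-memory sketch of the token cache behaviour. The real
// plug-in persists tokens via JDBC; this models only the two policies:
// periodic removal of expired tokens, and at most one active refresh
// token per client/resource-owner pair.
public class TokenCacheSketch {

    public static final class Token {
        public final String id;
        public final String type;          // e.g. "refresh_token" or "access_token"
        public final String clientId;
        public final String resourceOwner;
        public final long expiresAtMillis;

        public Token(String id, String type, String clientId,
                     String resourceOwner, long expiresAtMillis) {
            this.id = id;
            this.type = type;
            this.clientId = clientId;
            this.resourceOwner = resourceOwner;
            this.expiresAtMillis = expiresAtMillis;
        }
    }

    private final Map<String, Token> tokens = new ConcurrentHashMap<>();

    // Storing a refresh token first deletes any existing refresh token
    // for the same client/resource-owner, enforcing the limit of one.
    public void store(Token t) {
        if ("refresh_token".equals(t.type)) {
            tokens.values().removeIf(existing ->
                "refresh_token".equals(existing.type)
                    && existing.clientId.equals(t.clientId)
                    && existing.resourceOwner.equals(t.resourceOwner));
        }
        tokens.put(t.id, t);
    }

    // Lookup treats an expired token the same as a missing one.
    public Token get(String id, long nowMillis) {
        Token t = tokens.get(id);
        return (t == null || t.expiresAtMillis <= nowMillis) ? null : t;
    }

    // What the cleanup thread would do every "cleanup interval" seconds.
    public int removeExpired(long nowMillis) {
        int removed = 0;
        for (Iterator<Token> it = tokens.values().iterator(); it.hasNext();) {
            if (it.next().expiresAtMillis <= nowMillis) {
                it.remove();
                removed++;
            }
        }
        return removed;
    }

    public int size() { return tokens.size(); }
}
```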

Trusted Clients Manager

The trusted clients manager extension point serves two purposes:

During authorize endpoint processing it is used both to look up and to store resource owner authorization consent decisions (including scopes) in the database. These decisions are used to determine whether a resource owner needs to be prompted for consent during the resource owner authorization step of an OAuth flow. Provided all the requested scopes have previously been consented to by the user, interactive consent will not be required more than once.

TFIM provides a trusted clients management URL endpoint for each federation. A resource owner can use this endpoint to view (and remove) their remembered consent decisions. TFIM only provides the ability to completely remove the remembered decision - not to modify specific attributes associated with it. That said, once you are storing these consent decisions in your own database you can always write your own UI for resource owners to manipulate consent decisions if you have specialized requirements. When a decision is removed, the resource owner is typically prompted for consent again on their next visit to the authorize endpoint.

The example plug-in included with this article offers an additional configuration option that only works if the federation is also configured to use the JDBC token cache. The option is a checkbox titled Auto-Revoke Tokens from JDBC Token Cache; when selected, the plug-in deletes all existing access tokens and grants for a particular client/resource-owner at the time the resource owner deletes their remembered consent decision for that client. This has the effect of immediately revoking the client's access, even if the client holds a current, non-expired access token.
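To make the consent and auto-revoke behaviour concrete, here is a hedged in-memory sketch (names are illustrative; the real plug-in issues SQL against the consent and token cache tables): re-prompting happens only when a requested scope was never approved, and removing a remembered decision optionally deletes the corresponding tokens as well.

```java
import java.util.HashMap;
import java.util.HashSet;
import java.util.Map;
import java.util.Set;

// Hypothetical sketch of the trusted clients manager plus auto-revoke:
// removing a remembered consent decision for a client also deletes that
// client's tokens for the same resource owner when the checkbox is set.
public class ConsentStoreSketch {

    // key = resourceOwner + "|" + clientId
    private final Map<String, Set<String>> consentedScopes = new HashMap<>();
    private final Map<String, Set<String>> tokensByOwnerClient = new HashMap<>();
    private final boolean autoRevoke;  // models the console checkbox

    public ConsentStoreSketch(boolean autoRevoke) { this.autoRevoke = autoRevoke; }

    private static String key(String owner, String clientId) {
        return owner + "|" + clientId;
    }

    public void rememberConsent(String owner, String clientId, Set<String> scopes) {
        consentedScopes.computeIfAbsent(key(owner, clientId), k -> new HashSet<>())
                       .addAll(scopes);
    }

    // Consent is only re-prompted when some requested scope was never approved.
    public boolean needsPrompt(String owner, String clientId, Set<String> requested) {
        Set<String> approved = consentedScopes.get(key(owner, clientId));
        return approved == null || !approved.containsAll(requested);
    }

    public void storeToken(String owner, String clientId, String tokenId) {
        tokensByOwnerClient.computeIfAbsent(key(owner, clientId), k -> new HashSet<>())
                           .add(tokenId);
    }

    // Resource owner removes their remembered decision via the management
    // endpoint; auto-revoke also drops all tokens and grants for the pair.
    public void removeConsent(String owner, String clientId) {
        consentedScopes.remove(key(owner, clientId));
        if (autoRevoke) {
            tokensByOwnerClient.remove(key(owner, clientId));
        }
    }

    public boolean hasActiveTokens(String owner, String clientId) {
        Set<String> t = tokensByOwnerClient.get(key(owner, clientId));
        return t != null && !t.isEmpty();
    }
}
```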

Debugging and Troubleshooting

All debugging of the modules can be done using the WebSphere trace facility (Troubleshooting -> Logs and Trace in the WebSphere administration console). The plug-ins themselves use a common Java package name, and as such you can trace all of their execution using the trace string:

com.tivoli.am.fim.demo.jdbcplugins.*=all

Downloads

Conclusion

The intent of this article is to provide TFIM customers wishing to leverage OAuth service provider capabilities in TFIM 6.2.2 with practical example implementation code for database-managed persistence of state and client configuration data. You should also note that by using custom implementations of the plug-in extension points you can implement advanced requirements around token lifetime management and revocation whilst still leveraging TFIM for endpoint management, token creation and a rich set of authorization enforcement points.

For any further information on the TFIM OAuth implementation or for advice on your own development of plug-ins, please contact me.

This entry is a collection of references to developerworks articles on advanced Tivoli Federated Identity Manager (TFIM) concepts, development, and integrations. I have also included a few of my articles related to Tivoli Access Manager (TAM), as I often use the concepts from both in various Tivoli security deployments that I am involved with. I hope you find this collection of articles useful, and if you would like me to write about any other aspects of TFIM or TAM, please let me know.

TFIM 6.2 now delivers an Eclipse-based development approach for authoring custom product extensions. TFIM now uses an OSGi plug-in runtime environment and developers can author several different types of plug-ins using standard Eclipse extension tooling. One of the primary TFIM extension interfaces for customers is the Security Token Service Module (STS Module). This extension point allows customers to easily author their own identity token or mapping modules in Eclipse (or Rational Application Developer) and export them as plug-ins that will work with TFIM. The STS Module Development Tutorial walks through the complete development process, from establishing an Eclipse environment for TFIM development to deploying and testing a plug-in. Examples are provided for both mapping modules and simple token modules.

This is actually a refresh of an article first delivered for TFIM 6.1.1, and it represents a new and improved way of providing IIS web server integration. Major enhancements include:

TFIM-BG 6.2 adds support for SAML 2.0 in addition to the existing support for SAML 1.0/1.1.

TFIM 6.2 (including TFIM-BG) introduces a new plug-in model which can locally interpret LTPA cookies set from a WebSphere / TFIM-BG environment. This allows for much simpler integration into Microsoft application environments after SSO from a 3rd party Identity Provider.

This article presents a Security Token Service mapping module which allows user identity data to be queried from Tivoli Identity Manager, and is particularly useful in SOA environments. The article was authored by Neil Readshaw.

This describes in detail how to configure the new Kerberos Junctions capability in Tivoli Access Manager 6.1. This capability leverages the new Kerberos Delegation STS module in TFIM 6.2 to generate Kerberos service tickets that allow WebSEAL to authenticate to a junctioned IIS server as the logged-in user.

This article describes how to replace the cookie-based OpenID Identity Provider trusted sites manager implementation with your own custom implementation. An example is provided which uses JDBC to store the user's trusted sites information.

This article describes how to use an Information Card or OpenID Relying-Party federation to enable the linking of a user-centric identity to a local account for reduced sign-on. Part 2 of the series will add self-registration capabilities with email verification.

This article describes a technique that allows you to leverage WebSEAL for enterprise WebSSO and authorization without having to tie the TAM registry to the corporate directory where users are authenticated. One redeeming quality of this integration compared to doing a many:1 mapping at authentication time to an existing TAM user is that TAM audit logs show correct per-user information as the user's real identity is maintained in the WebSEAL session credential.

This article provides integration code to allow you to graph realtime WebSEAL junction statistics data (txns/sec and milliseconds/txn) in Windows Performance Monitor. It leverages the TAM administration APIs in C++, and contains fully-functional binaries for TAM 6.0 as well as all the source code.

This tutorial provides working example code of using the TAM Java authorization APIs to decode a TAM credential and extract all of its attributes. A JSP equivalent of the sample TAMeB epac demo program is included. This is invaluable when working with TFIM on user identity mapping rules, or when authoring TAM authorization rules.

A while back I posted configuration notes for SAML 1.1 Integration with Salesforce.com. The integration was performed with a trial edition of Salesforce CRM. Since that time Salesforce have added support for SAML 2.0 as a single sign-on protocol. This article will highlight the configuration requirements for SAML 2.0.

If you haven't already done so, please read my earlier posting on the SAML 1.1 Integration with Salesforce.com as most of the configuration is identical and explanations won't be repeated in detail in this posting.

Technical Summary

For those familiar with TFIM and SAML 2.0, here are my technical summary observations from the integration:

The SSO is still performed with SAML Browser-POST profile using only IDP-initiated SSO. I was not able to find any way to trigger SP-initiated SSO for Salesforce CRM. It appears from reading the very detailed Salesforce Documentation that some Salesforce product offerings do support SP-initiated SSO, however I don't believe the trial edition of Salesforce CRM is one of them.

All of the same user mapping options are identical between the SAML 1.1 and SAML 2.0 offerings.

Some extra TFIM configuration is required to set the DefaultNameIdFormat attribute for the federation or partner to urn:oasis:names:tc:SAML:1.1:nameid-format:emailAddress

Configuration Details

TFIM SAML 2.0 Identity Provider Configuration

Use the TFIM console to create a SAML 2.0 Identity Provider federation using all of the default federation options, including the default ip_saml20.xsl mapping rule. We will override the mapping rule at the partner level later anyway. After configuration of the federation you can optionally disable artifact profiles as only the browser-post profile is used for this integration. For the example given here, the federation name I chose was saml20idp.

If you are using WebSEAL as your point-of-contact, don't forget to run the tfimcfg utility to set the TAM access control lists and WebSEAL configuration required for the federation to work.

There is no need to export the SAML 2.0 metadata for the TFIM IDP federation from the console as this is not used for configuration at salesforce.com

Salesforce.com SAML 2.0 Service Provider Configuration

After establishing my free trial Salesforce CRM account (via www.salesforce.com) and logging in, click the Setup link (click on any image to view it at actual size):

On the left navigation frame, navigate to Administration Setup -> Security Controls -> Single Sign-On Settings:

Enable SAML via the checkbox, then enter your SAML settings:

Parameter

Suggested/Default Value

Notes

SAML Enabled

Checked

This must be enabled to allow SAML authentication.

SAML Version

2.0

Both 1.1 and 2.0 are now supported.

Identity Provider Certificate

Signer cert from TFIM identity provider

This should be an uploaded PEM-encoded certificate file containing the public key matching the signing certificate at the TFIM identity provider.

SAML User ID Type

Assertion contains User's Salesforce username

There are two choices, and this is the selection you would use when the identities on your identity provider website match the usernames in your salesforce.com account. If the names do not match, then you would select Assertion contains the Federation ID from the User object and you would need to enter a Federation ID for each user in the Manage Users menu.

SAML User ID Location

User ID is in the NameIdentifier element of the Subject statement

Again there are two choices, and this is the most logical. The other choice is for advanced user-mapping scenarios and allows the value to be read from a nominated Attribute in the AttributeStatement of the SAML assertion.

Issuer

The Protocol ID value from your TFIM federation

This value must match the Issuer in the SAML assertion, which comes from the Provider ID value defined in the identity provider SAML 2.0 federation settings in TFIM.

After saving the configuration, download the SAML 2.0 metadata that is required to create a partner in TFIM from the link shown:

Adding Salesforce as a SAML 2.0 Partner to the TFIM Identity Provider Federation

You can use the standard partner wizard in the TFIM console to create the partner configuration using the metadata provided by Salesforce. All the default federation options can be used, although I recommend this mapping rule for your Salesforce partner: ip_saml_20_salesforce.xsl. After the partner is created, reload the TFIM configurations using the console.

Your work for the SAML 2.0 partner is not quite done however, as we need to set the DefaultNameIdFormat property for the partner; salesforce.com by default uses the "unspecified" nameid format, and we are going to perform IDP-initiated SSO. The TFIM command line interface is required for this step. Log in to wsadmin on the WebSphere instance running the TFIM Management Service (typically the dmgr node in a cluster):

To verify the name of your federation and partner, list all partners (my examples use a FIM Domain called idp):

Now edit the response file and update the DefaultNameIdFormat property. This example shows the TFIM 6.2.1 response file format; however, the property also exists in TFIM 6.2.0. Note that we are setting DefaultNameIdFormat to urn:oasis:names:tc:SAML:1.1:nameid-format:emailAddress. This simply means that the userid will be used as the NameIdentifier in the SAML Subject:

Testing and Debugging Considerations

Debugging was exactly the same as discussed in the SAML 1.1 integration. For issues on the Identity Provider, check the TFIM WebSphere logs. For issues at Salesforce.com, check the user authentication history under Setup->Administration Setup->Manage Users->Login History.

The Salesforce documentation for SAML federated single sign-on also contains some good debugging tips and best practices for your integration.

Conclusion

The TFIM SAML 2.0 integration with Salesforce CRM (at least with the trial I used) was quite straightforward, with only one advanced piece of configuration required for setting the DefaultNameIdFormat. Given the choice I would still use the SAML 1.1 configuration as it's simpler, unless I was using a Salesforce.com offering that supported SP-initiated single sign-on. In that case I would probably use SAML 2.0 just for user convenience, so that when a user bookmarked a salesforce.com page they would be automatically redirected to my IDP for authentication.

From a quick perusal of the Salesforce.com documentation there are a lot of other SAML 2.0 capabilities discussed which appear to be for offerings beyond those exposed in the trial edition of Salesforce CRM that I had access to. In any case the integration with TFIM appears straightforward and I would not expect any insurmountable interoperability issues.

Pulse 2013

Having just returned from our biggest IBM security show of the year in Las Vegas I thought it would be a good idea to share some of my personal highlights from Pulse 2013.

Identity and Access Management for Mobile Security

Many of those who attended with an interest in the IAM track of our security sessions would have seen my demonstration of mobile security for hybrid applications. In that demonstration I showed how various security technologies from IBM can be combined to provide a pattern for mobile security in hybrid applications. The technologies, and their business value include:

As with many of my demos, you can run it yourself from the hosted demonstration environment. After self-registration, navigate to Account Management -> Manage Mobile Application Instances (OAuth). There you will find a link to download the Android application. Use the browser on your Android device and you can download and install right from that web page. Also on the page is a button to obtain a registration code that you can scan in from the app on your phone. Watch the video - you'll get the idea.

Several people asked me how it all hangs together - that is probably a topic for a more in-depth article, but at least let me share this architecture diagram showing the various components in the solution and some of the native and web-based flows. I've turned the diagram into a short video with animations so you can follow a time-sequence of what the mobile application is really doing.

While on the demonstration site you may also wish to try out the One-time password and Browser risk-based-access demonstrations as well. These are self-guided, and are a good precursor to the mobile demonstration.

I am always interested in your feedback on these topics, so please feel free to contact me if you have something to share, or you can always comment directly on this blog.

IBM launches the MobileFirst brand

This actually happened during the timeframe of Mobile World Congress the week before Pulse, however the announcement of the MobileFirst brand is very interesting and timely. I know from talking to a number of customers and industry subject matter experts that security is an incredibly important part of any mobile strategy, and I believe that even simple patterns such as the one shown above demonstrate that IBM has a lot of capability in mobile. The brand announcement will drive further investment in this incredibly pervasive technology and I am looking forward to being a solutions contributor for mobile.

Business Partner Solutions for IAM

A number of IBM Security business partner solutions were on display in the expo hall. With no particular bias, I was very impressed at the quality and depth of value-added services that our business partners are offering in the identity and access management space, built on IBM security products. Some of the offerings I saw personally (alphabetical by company) included:

I also spoke to a variety of other business partners during the conference; apologies if I didn't get to see your offerings in detail or mention them here.

IAM web gateway appliance and integrated security demo

In November 2012 the IBM Security team released the web gateway appliance as part of the IBM Security Access Manager for Web bundle. Available in either hardware or virtual appliance form factors, this appliance combines our world class web reverse proxy (formerly known as WebSEAL) with X-Force backed threat capabilities in a WAF (web application firewall) allowing customers to use a single appliance for addressing a large set of both access and threat requirements. Also thrown in are some basic load balancing and config replication features plus a host of other new management interfaces. The customers I spoke to were very pleased to see this integrated approach to security delivery promising faster time to value and completely abstracting away the middleware software management problem into a firmware upgrade. This was one of the hottest new products discussed for IAM at Pulse and something I believe will see rapid adoption in the coming 12 months.

In an even more recent bundling announcement, the virtual appliance form of the web gateway appliance is included in the IBM Security Access Manager for Cloud and Mobile offering, along with all of the base capabilities required to implement the hybrid mobile security pattern I demonstrated above.

From a demonstration perspective, there is a great integrated security demo of a bunch of products including the web gateway appliance and QRadar. This was put together by David Druker, and you can view it here:

Privileged Identity Management

On the identity side of IAM I was impressed with the demonstrations of the new privileged identity management solution. This allows controlled and audited use of credentials used for privileged access and has tight integration with both our IBM Security Identity Manager server and the Enterprise Single Sign-on client authentication solution. Here's a demo of PIM:

Security Intelligence and BigData

This partnership between the QRadar folks in IBM Security and IBM BigInsights promises to combine comprehensive analytics of unstructured data (from BigInsights, based on Hadoop technology) with a world-class correlation and offense detection engine to deliver unprecedented visibility into business security incidents. This will permit predictive analytics and new forms of real-time threat detection. I don't pretend to grasp the complete details of the technology yet, but I understand the value proposition (and it's not for everyone) and I can see massive potential here for big companies with lots of historical information that today requires manual forensics. For more info check out this material: http://www-03.ibm.com/security/data/

There are many more topics I could talk about here - I learned a lot at this year's Pulse conference. I thought it was a great event and I'm glad I was able to share a little about what we are doing in the Identity and Access management portfolio in IBM Security as well.

My inbox ran hot today with both IBM employees and external contacts asking me for my opinion on a rather scathing article about OAuth 2.0 from former editor Eran Hammer-Lahav.

First my disclaimers. I am an architect and senior developer for a product in IBM (Tivoli Federated Identity Manager - aka TFIM) which implements OAuth (both 1.0 and 2.0). IBM sells this product and my opinions are therefore not completely unbiased, but I do believe in the comments I make in this post. I also contributed some of the content to the OAuth 2.0 threat model, so I have spent some time understanding OAuth 2.0 security considerations. These comments are my own and may not necessarily reflect IBM's long-term position, so take them as just another input for your consideration.

I also have great respect for the work Eran put into OAuth over the past several years on both 1.0 and 2.0 and he should be recognized for that.

I don't think OAuth 2.0 is as doom and gloom as portrayed in Eran's comments, and I think it is unfair to compare the complexity of using OAuth (1.0 or 2.0) to WS-* as a security mechanism for web APIs. Those technologies are light years apart in terms of complexity, ease of implementation, interoperability and market acceptance. I've worked on and implemented both, so I am reasonably well qualified to express an opinion.

I think OAuth 2.0 is very useful and some of the things criticised in the article I would argue could be considered features. Maybe that's my "enterprise" perspective.

It is true that OAuth 2.0 on its own is not really a protocol which provides a prescriptive definition of interoperability boundaries; however, I contend this is not as big a deal as Eran makes out. In my opinion the incredibly simplified client development model in OAuth 2.0 obviates the need for strict interoperability contracts between OAuth 2.0 clients and servers. The popularity of the Facebook graph API is a working example of this. The reason I believe Facebook haven't moved from draft 12 of the spec is that there is no need to. Eran argues that "an updated 2.0 client written to work with Facebook’s implementation is unlikely to be useful with any other provider and vice-versa". I would argue that there is actually very little need to implement a client library at all for OAuth 2.0 - that is one reason why TFIM doesn't ship one. Other emerging protocols such as OpenID Connect are more prescriptive in their use of OAuth 2.0, as they introduce interfaces which do require interoperability of web APIs between partners protected with OAuth 2.0 tokens.

A good example of something Eran criticises which I view as a feature is bearer tokens, and the fact that a client does not have to present [a signature derived from] the client's own credentials along with an access token when accessing a protected resource. Sure - there is an absolute need to protect bearer tokens at rest and in transit, just as there is with a password, key, or any other form of credential maintained by a client or end-user today. Every user of a website requiring authentication and every business API needs this - we need to accept that and get used to it. I understand the argument that signatures do not expose the actual credentials on the wire, versus the perceived lower security of sending real credentials over a protected transport; however, I just don't think that consumers of our solutions believe the risk is worth the complexity. If you get a few things right - protected transport, securing credentials and tokens at the client, token and grant entropy, and so on - then OAuth 2.0 is good. Eran argues that the list of things you have to know about and "get right" is too long and that the specification is not prescriptive enough. As a security professional I provide guidance to clients on this very matter on a daily basis, for not just OAuth 2.0 but SSL/TLS, SAML, OpenID and all sorts of other web security technologies. Prescriptive repeatable patterns will emerge.

Beyond the simpler client development model, the use of bearer tokens means that we can also easily implement things like brokered requests once you are operating within a trusted server environment. An OAuth service provider absolutely CAN decide to require a client to present both its own credentials and the access token in requests for a protected resource if it wishes; however, this is not a requirement of the protocol. What is a requirement is correct and secure use of a protected transport with server authentication, and I can understand that Eran sees this as a risk. The market has clearly decided it's a more acceptable risk than requiring a client library to do signatures.

The "Under the Hood" section of the article indicates that expiry of access tokens was introduced because of self-encoded access tokens. This is not the only reason, and at least for me is not the most significant reason, although it's closely related. TFIM uses opaque (non-self-describing) access tokens that need to be evaluated back at the authorization server. TFIM also supports "remote" enforcement points for resource servers, particularly IBM web access enforcement points like WebSEAL, DataPower, and WebSphere, which can be remote from the TFIM authorization server. To make access token validation perform in high-volume transactions where the same token is presented on more than one occasion, TFIM allows enforcement points to optionally cache successful validation results from the TFIM authorization server for the lifetime of that token. When this is done, revocation is not possible until the access token expires (or you have some bespoke method to purge all caches). Essentially this is the same behaviour after first validation as allowing the enforcement point to validate self-encoded tokens. Eran argues that "whatever is gained from the removal of the signature is lost twice in the introduction of the state management requirement". This is simply an opinion, and one that I do not agree with. Refreshing of tokens can be done with a simple POST from the client to the token endpoint, requiring no client-side crypto or hashing algorithms (beyond protected transport with server authentication), and therefore allows for much simpler client programming models. Yes, the client does have to understand when an access token expires and perform the refresh step, but it's not a difficult step. Further, if you decide not to do caching at enforcement points with opaque tokens, you don't need refresh tokens - but you will need a highly available and performant data store for tokens.
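The caching trade-off described above can be sketched as follows (an illustrative model, not product code; the interface and field names are mine): the enforcement point answers repeat validations locally until the token's expiry, which is exactly why revocation is deferred once a result has been cached.

```java
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

// Illustrative sketch of an enforcement point caching successful opaque-token
// validations. The first validation of a token goes back to the authorization
// server; subsequent presentations are answered from a local cache until the
// token's expiry. Once cached, revocation at the authorization server is not
// observed until the cache entry expires.
public class ValidationCacheSketch {

    public interface AuthorizationServer {
        // Returns the token's expiry time in millis, or -1 if invalid/revoked.
        long validate(String accessToken);
    }

    private final AuthorizationServer server;
    private final Map<String, Long> cache = new ConcurrentHashMap<>();
    public int remoteCalls = 0;  // instrumentation for the example

    public ValidationCacheSketch(AuthorizationServer server) {
        this.server = server;
    }

    public boolean isValid(String accessToken, long nowMillis) {
        Long expiry = cache.get(accessToken);
        if (expiry != null) {
            if (expiry > nowMillis) return true;   // served from the cache
            cache.remove(accessToken);             // cached entry has expired
        }
        remoteCalls++;                             // round trip to the server
        long exp = server.validate(accessToken);
        if (exp <= nowMillis) return false;
        cache.put(accessToken, exp);               // cache until token expiry
        return true;
    }
}
```

Note how the second presentation of the same token costs no remote call at all, which is the high-volume win the caching option is there for.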

In the "Reality" section Eran describes OAuth 2.0 as a blueprint for authorization and "the enterprise way". I tend to agree, but don't think of that as a bad thing. He then says "The WS-* way". As I said at the beginning of this post, OAuth 2.0 is nowhere near as complicated as WS-* and that is an unfair comparison.
<controversial>It's fairly apparent to me that the market has decided that WS-* is too complicated, definitely in the web API world but increasingly also for enterprises.</controversial> Ultimately the market will decide.

My experience has been that our customers want something simple to consume but also demand flexibility in their enterprise software. I believe OAuth 2.0 provides good balance between simplicity, flexibility and capability. It still requires you to have a brain and think about security - I don't think that will ever change.

In the "To Upgrade or Not to Upgrade" section, Eran supports customers using OAuth 2.0 with the caveat "and consider yourself a security expert". I would argue you need to be security conscious for OAuth 1.0 anyway. OAuth 2.0 (with the right patterns) provides distinct advantages over OAuth 1.0 in the areas of support for public clients, descriptive new grant-type flows, token scope and refresh tokens. It is a very good base, and a prescriptive set of popular patterns will emerge. I've even started doing some of this with the mobile security demo documented elsewhere on my blog:

TFIM OAuth Mobile Demonstration - Under the hood

In my previous article I presented a demonstration of a mobile application retrieving a protected resource (a set of user profile attributes) from a website using OAuth 2.0. In this article I'll explain the rationale for using OAuth 2.0, show you the exact message transactions used by that application and present some of the variables you need to consider when making design/deployment decisions for such an environment. An understanding of the OAuth 2.0 Protocol and the Bearer Token Specification is an advantage when reading this article.

Why use OAuth 2.0

I believe the number one reason for using OAuth 2.0 as opposed to alternative security protocols is its relative ease of use for client application developers. By lowering the technical entry point for client developers you have the opportunity to reach more customers. Sure, developers can still get it wrong by not following security best practices such as requiring secure transport with server certificate validation, storing keys insecurely, etc, but these flaws are possible with other protocols as well. Given that those baseline best practices are followed, writing OAuth 2.0 clients is really trivial.

OAuth 2.0 is also an emerging standard with widespread adoption, and this means that client application developers will increasingly see the same deployment patterns time and again, allowing for code re-use, development of client libraries that incorporate security best practices, etc. Being a standard is a good thing.

Messages used for the Mobile Demonstration Application

In this section I will show you exactly how the mobile application communicates with the TFIM demonstration site. You can "be a mobile phone app" with just a few simple curl commands.

For the technically minded comfortable with OAuth 2.0, here's the key information you'd need to develop the mobile application:

The client type is public, with the client_id: mobileClient. This means there is NO client_secret.

The authorization code flow is used with simple bearer tokens. The authorize endpoint is: https://tfim01.demos.ibm.com/FIM/sps/oauth20sp/oauth20/authorize

There is no redirect_uri registered at the service provider, and none should be passed to the authorize endpoint. Instead the authorization code will be displayed on the browser and the user will manually deliver it to the client. There are alternatives to this such as custom scheme registration, back-channel delivery of the authorization code via a notification service, the device code flow, etc. Manual delivery was chosen in this deployment for simplicity. Make no mistake though, secure delivery of the authorization code is fundamental to the security of the system.

Passing scope to the authorize endpoint is optional. If no scope is passed, the user will be asked to authorize all their available profile attributes. If scope is passed, each scope string represents an attribute name the client is requesting permission to access.

When an authorization code is presented to the token endpoint, the JSON response will contain the standard OAuth 2.0 response parameters plus one additional string parameter called appInstance with the value being the friendly name the user chose for their application instance during authorization. The client should use this for display purposes to allow the user to reconcile the application instance with their list of registered instances on the service provider management site.
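As a concrete illustration of handling that response, here is a minimal shell sketch that pulls appInstance out of the token endpoint JSON for display. The field values are invented for the example, and a real client should of course use a proper JSON parser rather than sed:

```shell
# Illustrative token endpoint response (all values invented for this demo)
RESPONSE='{"access_token":"a1b2c3d4e5f6g7h8i9j0","token_type":"bearer","expires_in":3600,"refresh_token":"r0s9t8u7v6w5x4y3z2a1","appInstance":"My Phone"}'

# Naive extraction of the appInstance friendly name for display to the user
APP=$(echo "$RESPONSE" | sed 's/.*"appInstance":"\([^"]*\)".*/\1/')
echo "$APP"
```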

The access token lifetime is 1 hour. This is also reflected using the expires_in response parameter from the token endpoint.

Refresh tokens are used, and are rolled over to a new value on each use. Refresh tokens do not expire, but are single-use as just described.

When a refresh token is used to get a new access token, the old access token becomes invalid immediately. This means there is only one active valid access token at any given time.

There is only one protected resource URL and it takes no request parameters other than the access token. The access token can be delivered via authorization header, post body or query string per the bearer token spec. The protected resource accepts both GET and POST http methods, and the resource URL is: https://tfim01.demos.ibm.com/FIM/demo/oauthprotected/profile.jsp
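For example, once the client has an access token, the protected resource request can be made with a curl command like this (Authorization header delivery shown; the token value is invented, and -k disables certificate checking for demo convenience only - real clients must validate the server certificate):

```shell
curl -k -H "Authorization: Bearer a1b2c3d4e5f6g7h8i9j0" \
  https://tfim01.demos.ibm.com/FIM/demo/oauthprotected/profile.jsp
```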

The successful protected resource response is a 200 OK followed by a JSON string containing two fixed fields, and variable fields for the user profile data. The fixed fields are:

username - The username of the resource owner who issued the grant.

timestamp - The current time as milliseconds since the epoch, i.e. (new java.util.Date()).getTime()

The variable fields are all JSON string arrays as attributes can be multi-valued. An example protected resource response can be seen later in this article.
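To illustrate that shape, a response might look like the following (the attribute names and values here are invented - your deployment's profile attributes will differ):

```json
{
  "username": "testuser",
  "timestamp": 1325376000000,
  "firstName": ["Fred"],
  "mobile": ["555-0100", "555-0199"]
}
```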

Remember scope is optional, and you can explicitly add something like scope=attr1 attr2 to the parameter list. Following authentication and authorization, the resource owner will see the authorization code on the screen of the service provider, and manually provide it to the client.
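Putting the authorize step together, the URL the client directs the user's browser to might look like this (the scope attribute names are examples only):

```
https://tfim01.demos.ibm.com/FIM/sps/oauth20sp/oauth20/authorize?response_type=code&client_id=mobileClient&scope=firstName%20lastName
```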

This screenshot shows the authorize step where the user decides which attributes to grant access to, and assigns a friendly name for the application instance:

This screenshot shows the authorization code displayed with a reminder that it expires in 60 seconds:

Exchanging an authorization code for an access token and refresh token

The client obtains the authorization code from the resource owner, and exchanges it for an access token and refresh token. The appInstance (friendly name chosen by the user for the application instance) is also returned in the response from the token endpoint. This exchange must be completed within 60 seconds of the authorization code being displayed to the resource owner:
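As a curl sketch of that exchange (the token endpoint path is my assumption by analogy with the authorize endpoint - check your federation configuration; the code value is an example):

```shell
# Public client: no client_secret is sent
curl -k \
  -d grant_type=authorization_code \
  -d client_id=mobileClient \
  -d code=qwerty \
  https://tfim01.demos.ibm.com/FIM/sps/oauth20sp/oauth20/token
```

A successful response is a JSON object containing access_token, token_type, expires_in, refresh_token and the appInstance parameter described earlier.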

Using a refresh token to get a new access token and refresh token

When the access token expires, or the application instance has been disabled and then re-enabled at the service provider management interface by the resource owner, the client will get an error when trying to use the access token to get a protected resource. Instead of a 200 OK response with JSON data, a 401 response will be returned, as shown:
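A sketch of what the client sees and does next (the 401 shown is the generic shape from the bearer token spec, so the exact headers in the demo may differ; the refresh token value is an example, and the token endpoint path is again my assumption):

```shell
# Expired/invalidated access token -> the protected resource responds:
#   HTTP/1.1 401 Unauthorized
#   WWW-Authenticate: Bearer realm="oauthprotected", error="invalid_token"
#
# The client then uses its refresh token (public client, no client_secret):
curl -k \
  -d grant_type=refresh_token \
  -d client_id=mobileClient \
  -d refresh_token=OLD40CHARREFRESHTOKENVALUE0000000000ABCD \
  https://tfim01.demos.ibm.com/FIM/sps/oauth20sp/oauth20/token
```

Remember the old refresh token is now spent - the response contains a new access token and a new refresh token that the client must store.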

If the client receives an error from the refresh token request that is not a communications error, a severe warning should be displayed to the user as one of the following has happened:

The resource owner has disabled or deleted the application instance at the service provider.

The refresh token has been compromised and used by an attacker.

The resource owner should be told to log in to the service provider at the Manage Mobile Applications page, and if the application instance is shown as enabled it should be immediately deleted to revoke access from the attacker who has compromised the refresh token. Re-registration of the application is then possible, beginning with a new authorization code.

Deployment Considerations

When designing this demonstration scenario there were a number of deployment variables to take into consideration. These include:

Entropy, lifetime and delivery mechanism of the authorization code. Ultimately we chose manual delivery for simplicity. Therefore we wanted the length to be reasonably short, and chose 6 lower-case characters for easy entry on a mobile phone keyboard. Because the authorization code is short, and no client secret is needed to present it to the token endpoint, a short lifetime was used to reduce the attack exposure window.

Lifetime and entropy of access tokens. These can be anything you like; we chose an hour for the lifetime and 20 alphanumeric characters for the access token.

Lifetime and entropy of refresh tokens. In our scenario we made the lifetime of a refresh token infinite, and the length 40 alphanumeric characters. Again this comes down to personal choice - longer refresh tokens are simply a configuration choice. The reason we chose to make refresh tokens infinite in lifetime is that when a refresh token expires, re-registration of the application instance is the only option. This may be OK depending on your own use case requirements.

Concurrency of tokens. In our scenario only one access token and refresh token may be valid at a time per authorization grant. This is quite suitable since the access token and refresh token represent a single instance of the application stored on a phone.

The mobile application scenario is implemented on the server side using Tivoli Federated Identity Manager with a modified version of the OAuth JDBC plugins that I have previously made available, along with a custom Java mapping rule for the OAuth federation that communicates with the database tables used for token storage. Tivoli Access Manager WebSEAL is used as the point of contact, and the WebSEAL enforcement point for OAuth is used for access token validation.

I hope this article helps to explain the value OAuth offers to companies wishing to expose resources and APIs for client applications to call on behalf of their users. Should you have any questions about this scenario, or about your own business scenario where you are considering Tivoli Federated Identity Manager, please contact me.

In this article I will describe a pattern of custom authentication to WebSphere via a Trust Association Interceptor (TAI) for use with Tivoli Federated Identity Manager acting as an Identity Provider. The article assumes a strong background in WebSphere authentication and Tivoli Federated Identity Manager. The primary goal of this pattern is to be able to authenticate to WebSphere and include custom extended attributes in the user's WebSphere session credential, which can then be consumed by TFIM and manipulated in a mapping rule during a subsequent single sign-on operation. There are several deployment scenarios in which this might be a useful pattern to follow, including:

You have a custom authentication token passed as an HTTP header, cookie or query string variable to WebSphere, and it contains attributes that you want TFIM to consume and include (for example) in a SAML assertion.

You would like to accept client certificates for authentication at the IBM HTTP Server and there are custom attributes embedded in the client certificates that you want TFIM to consume.

You have some kind of authenticating (non-WebSEAL) reverse proxy in front of WebSphere and are looking for a simpler authentication solution than writing your own TFIM Custom Point-of-Contact implementation.

It is worth noting that the above authentication scenarios do not necessarily require a separate WebSphere user registry account for every user that is authenticating to TFIM. The TAI can perform a simple many-to-one user mapping and just include the per-user attributes in the session authentication data, so long as the TFIM mapping rule knows where to read the per-user data from in the STSUniversalUser.

This solution requires TFIM 6.2.0 LA fixpack 4 or later.

The basic premise of this solution is that you have already developed a TAI to authenticate users to WebSphere and now wish to include extended attributes in the session credential that TFIM will consume and make available in the STSUniversalUser during user identity mapping. The secret to the solution is knowing that TFIM builds its internal user credential by taking the WebSphere username as the STSUniversalUser username, along with the attributes from any Private Credentials that implement one of the following interfaces:

com.ibm.wsspi.security.token.AuthorizationToken

com.ibm.wsspi.security.token.SingleSignonToken

Pictorially, the construction of the internal credential (which is actually a TAM credential) that is converted to an STSUniversalUser looks like this:

Essentially this means your TAI has to add a private credential to the WebSphere subject at authentication time that implements one of the above interfaces and include all the attributes you wish to make available to the TFIM STSUniversalUser as part of that token. The example in this article will create a custom AuthorizationToken and add it to the WebSphere subject at authentication time.

Basic TAI example

First we'll look at a trivial TAI implementation which simply looks for an HTTP header called TAI_USER; if that header exists, its value becomes the userid authenticated to WebSphere. Note that this requires the user identified by TAI_USER to exist in the WebSphere registry.
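A minimal sketch of such a TAI follows. The class and method names come from the WebSphere TAI SPI (com.ibm.wsspi.security.tai); this is illustrative rather than the exact code from the original download, needs the WebSphere SPI jars to compile, and trusts the header blindly - in a real deployment the header must only be settable by a trusted proxy:

```java
package com.example.tai;

import java.util.Properties;
import javax.servlet.http.HttpServletRequest;
import javax.servlet.http.HttpServletResponse;
import com.ibm.websphere.security.WebTrustAssociationFailedException;
import com.ibm.wsspi.security.tai.TAIResult;
import com.ibm.wsspi.security.tai.TrustAssociationInterceptor;

public class SimpleHeaderTAI implements TrustAssociationInterceptor {

    // Only intercept requests that actually carry the header
    public boolean isTargetInterceptor(HttpServletRequest req) {
        return req.getHeader("TAI_USER") != null;
    }

    public TAIResult negotiateValidateandEstablishTrust(
            HttpServletRequest req, HttpServletResponse resp)
            throws WebTrustAssociationFailedException {
        // The header value must name a user in the WebSphere registry
        String user = req.getHeader("TAI_USER");
        return TAIResult.create(HttpServletResponse.SC_OK, user);
    }

    public int initialize(Properties props) { return 0; }
    public String getVersion() { return "1.0"; }
    public String getType() { return getClass().getName(); }
    public void cleanup() {}
}
```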

That's about as simple as a TAI can be. The TAI is configured in the WebSphere Application Server using the administration console. There is plenty of information on how to do that in the WebSphere InfoCenter, but basically it requires the jar containing your TAI class to appear in the WebSphere classpath (e.g. install to /opt/IBM/WebSphere/AppServer/lib/ext) and that TAI be enabled and your class included in the list of configured TAI interceptors.

In this next example we'll extend the TAI to include some extra canned attributes in the subject, along with some better tracing.

Note that the above example introduces a new class called the FIMDemoAuthorizationToken. This is actually a simple class that implements the com.ibm.wsspi.security.token.AuthorizationToken interface, as shown here:
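In outline, such a token class looks like the following. This is a sketch only: the AuthorizationToken/Token interfaces declare further methods not shown here, which can return simple defaults - consult the WebSphere security SPI Javadoc for the exact signatures:

```java
package com.example.tai;

import java.util.ArrayList;
import java.util.Enumeration;
import java.util.HashMap;
import java.util.Vector;
import com.ibm.wsspi.security.token.AuthorizationToken;

// Holds a simple attribute map that TFIM will copy into the STSUniversalUser
public class FIMDemoAuthorizationToken implements AuthorizationToken {

    private final HashMap<String, ArrayList<String>> attrs =
            new HashMap<String, ArrayList<String>>();

    public String[] addAttribute(String key, String value) {
        ArrayList<String> values = attrs.get(key);
        if (values == null) {
            values = new ArrayList<String>();
            attrs.put(key, values);
        }
        values.add(value);
        return values.toArray(new String[0]);
    }

    public String[] getAttributes(String key) {
        ArrayList<String> values = attrs.get(key);
        return (values == null) ? null : values.toArray(new String[0]);
    }

    public Enumeration getAttributeNames() {
        return new Vector<String>(attrs.keySet()).elements();
    }

    public boolean isValid() { return true; }

    // The remaining interface methods (getExpiration, isForwardable,
    // getPrincipal, getBytes, getName, getVersion, getUniqueID, clone,
    // etc.) are omitted here - simple defaults suffice for this demo.
}
```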

That shows you all the mechanisms you need to implement your own TAI and add attributes that will be consumed by TFIM and included in the STSUniversalUser. Any of these attributes can then be included in single sign-on tokens as part of a TFIM identity provider SSO flow (including for example as SAML attributes).

This example download jar file contains all of the example code shown above, along with an advanced example that demonstrates a TAI which consumes and parses SSL client certificate data as passed from the IBM HTTP Server. The client certificate example requires that your IBM HTTP Server is configured for client certificates, and that the keystore file used by the HTTP server contains the trusted signer certs of the certificates you are willing to accept.

The httpd.conf of the IBM HTTP Server needs to be configured for SSL client authentication. There are several available options, this is just one example configuration:
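A minimal mod_ibm_ssl configuration of this type might look like the following sketch (the module path and keyfile location are examples for a typical install; the key.kdb keystore must contain the trusted signer certificates):

```apache
LoadModule ibm_ssl_module modules/mod_ibm_ssl.so
Listen 443
<VirtualHost *:443>
  SSLEnable
  # Require the browser to present a client certificate
  SSLClientAuth required
</VirtualHost>
KeyFile /opt/IBM/HTTPServer/conf/key.kdb
SSLDisable
```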

Summary

There are often many ways to solve architectural design issues when deploying federated authentication solutions, and hopefully this example will add to your toolkit of ideas. Please feel free to email me if you have any questions about this type of deployment.

I have had several enquiries about how to configure federated single sign-on integration between Tivoli Federated Identity Manager and salesforce.com. Salesforce.com offers cloud applications for all manner of sales and CRM capabilities, and the typical use case is that an enterprise has already authenticated its employees (application users) via a company portal/website, and then wants them to be able to single-click login to outsourced applications such as those hosted by salesforce.com. Earlier this year I documented a similar integration for Google Apps.

The salesforce.com SSO capability is very simple and easy to use. The underlying technology is SAML 1.1, using browser-post profile. Configuration is required at both the identity provider (in this case Tivoli Federated Identity Manager) and the service provider (i.e. a salesforce.com subscription). I was able to test this integration using a 30-day trial license, and completed the entire integration in 20 minutes. The remainder of this entry dives straight into the configuration tasks and options.

Using the Tivoli Federated Identity Manager management console, create a SAML 1.1 identity provider federation. You MUST select a signing key for the signing of SAML browser-POST messages, and the public key will need to be shared with salesforce.com (so be sure to have that ready as a PEM-encoded certificate file). All the typical SAML federation settings can be left at the default values, with the exception of the identity mapping rule.

The identity mapping rule is responsible for determining the SAML Subject (i.e. the username) and certain other attributes of the SAML assertion. In the integration with salesforce.com you must determine how identities will be mapped between your local website user accounts and salesforce.com user accounts. In the simplest case they are the same, and the SAML subject that you send from the identity provider will be exactly the same username as the salesforce.com username. If this is NOT the case, identity mapping can either be done at the identity provider (with custom logic in the TFIM mapping rule) or, in the case of salesforce.com, at the service provider, where you can store a Federation ID for each salesforce.com user account. This is a nice feature, but can become a management headache for large numbers of users. If starting fresh I would recommend keeping them the same.

The other requirement on the identity mapping rule in TFIM for salesforce.com is to set the correct value for the Audience value in the AudienceRestrictionCondition. In my testing this value needed to be: https://saml.salesforce.com. The mapping rule ip_saml_11_salesforce.xsl can be used for this purpose in your federation.

Partner Configuration

When creating the federation partner in TFIM for salesforce.com, here are the key parameter values to use:

Provider ID
Suggested/default value: https://login.salesforce.com
This value should match the protocol, hostname and port (if non-standard) of the "Recipient URL" that you get when you configure SSO in the salesforce.com administration panel.

Assertion Consumer Service URL
Suggested/default value: https://login.salesforce.com
This value should exactly match the entire "Recipient URL" that you get when you configure SSO in the salesforce.com administration panel.

Partner uses HTTP POST profile for Single Sign-On
Suggested/default value: Checked
This checkbox should be selected, as salesforce.com does use the browser-post profile.

Validate Signatures on Artifact Requests
Suggested/default value: Unchecked
Not needed because salesforce.com does not use the browser-artifact profile.

Sign SAML Assertions
Suggested/default value: Unchecked
Not required, as the entire SAML Response is signed for the browser-post profile.

This completes the configuration for TFIM. If you are using TAM/WebSEAL as your point-of-contact, don't forget to run the tfimcfg utility to establish the correct TAM ACL policy.

Configuration at salesforce.com (Service Provider)

Login to your salesforce.com account as a system administrator, then click on Setup (right at the top). Navigate to Administration Setup->Security Controls->Single Sign-On Settings. Enter configuration settings as follows:

SAML Enabled
Suggested/default value: Checked
This must be enabled to allow SAML authentication.

Identity Provider Certificate
Suggested/default value: Signer cert from the TFIM identity provider
This should be an uploaded PEM-encoded certificate file containing the public key matching the signing certificate at the TFIM identity provider.

SAML User ID Type
Suggested/default value: Assertion contains User's Salesforce username
There are two choices, and this is the selection you would use when the identities on your identity provider website match the usernames in your salesforce.com account. If the names do not match, then you would select "Assertion contains the Federation ID from the User object" and you would need to enter a Federation ID for each user in the Manage Users menu.

SAML User ID Location
Suggested/default value: User ID is in the NameIdentifier element of the Subject statement
Again there are two choices, and this is the most logical. The other choice is for advanced user mapping scenarios and allows the value to be read from a nominated Attribute in the AttributeStatement of the SAML assertion.

Issuer
Suggested/default value: The Provider ID value from your TFIM federation
This value must match the Issuer in the SAML assertion, which comes from the Provider ID value defined in the identity provider SAML federation settings in TFIM.

Testing and Debugging the Configuration

To launch a single sign-on, use the TFIM intersite transfer service URL. This will be part of your TFIM federation configuration, but will look something like:
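For a SAML 1.1 federation the intersite transfer service URL generally has this shape (the hostname and federation name here are placeholders for your own environment):

```
https://webseal.example.com/FIM/sps/mysaml11fed/saml11/login?TARGET=https://login.salesforce.com
```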

You can always check the value of your intersite transfer service URL in the federation properties for your federation in the TFIM console.

If you have any errors on the identity provider side, check the WebSphere Application Server SystemOut.log for details.

If you have any errors on the salesforce.com side, the best place to check is in the administration console under Administration Setup->Manage Users->Login History.

Conclusion

Integration with salesforce.com is simple, easy to set up and manage, and it works. Authentication can be done via either SAML federated single sign-on or local username/password concurrently (unlike GoogleApps, which is all-or-nothing when using federation). Prior planning is required to decide how to manage user identity mapping.

In this article I will describe a technique with Tivoli Federated Identity Manager for including attributes in a single sign-on (SSO) assertion that come from a business application at runtime. This example is in the context of IDP-initiated SAML 1.1 SSO using WebSEAL as the point of contact, but the pattern is not restricted to that protocol. Typically when performing SSO from an identity provider to a service provider, the attributes included in the assertion come from the user's credential (identity attributes) or are looked up from a user registry or database in an identity mapping rule written in Javascript or Java at the IDP. On more than one occasion now I've been asked how to include attributes in an SSO assertion that may only be known to, generated by or discovered by a business app at runtime within the context of the current user session. A trivial example that we will use in this demonstration is to include the session id of the application server that the business app is running on. I know this is fairly useless data, but it is a piece of data known only at runtime to the business app itself.

One way that this could be approached is to create a database, have the business app store the data to be included in the SAML assertion in that database keyed on the userid, and then have the mapping rule used at TFIM during SSO operations look it up by userid (which is available during SSO). There are two issues with this approach. First, the database is an overhead that requires lifecycle management. More importantly, the userid is not unique to the business application session - if the web access system permits a user to have more than one active session, then the userid is not unique enough to use as a key.

To solve the database issue we will instead make use of a TFIM trust chain, and in that chain use an API available to TFIM mapping rules to store data in a distributed cache available to the TFIM WebSphere cluster. To solve the cache key problem we will use the WebSEAL session id which is contained within an attribute of the user's TAM credential and can be made available to both the business application and the TFIM mapping module used in SSO operations. Pictorially the solution looks like this:

The sequence is:

1. The application decides SSO is to be initiated and the SAML assertion must include business attributes that the application knows.

2. The application calls a custom chain at the TFIM STS sending an STSUniversalUser (a simple XML token) which contains the user's current WebSEAL session ID and the attributes to remember.

3. The mapping rule in the custom chain calls the TFIM utility function IDMappingExtUtils.getIDMappingExtCache().put(key, data, lifetime) to store the attributes for a short period (say 20 seconds). The cache entry lifetime need only be short since the business app is now going to redirect directly to TFIM where the entry will be read again and after that it is no longer needed.

4. The business application redirects the user's browser to the standard TFIM IDP-initiated SSO URL for the federation and partner in use.

5. TFIM is invoked to perform SSO which in turns calls the TFIM STS to build the SAML assertion.

6. A TFIM mapping rule configured for the federation or partner has access to the WebSEAL session ID from the STSUU generated as part of the SSO operation and retrieves the attributes from the cache using IDMappingExtUtils.getIDMappingExtCache().get(key). These attributes are used to populate the SAML assertion's AttributeList.
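A sketch of that retrieval step in the federation mapping rule follows. The IDMappingExtUtils cache API is the one named above; the attribute name and type strings, and the target attribute added to the AttributeList, are assumptions you should match against what appears in your own STSUU trace:

```javascript
importPackage(Packages.com.tivoli.am.fim.trustserver.sts.utilities);
importPackage(Packages.com.tivoli.am.fim.trustserver.sts.uuser);

// The WebSEAL session id arrives as a TAM credential attribute in the STSUU
var sessionId = stsuu.getAttributeValueByNameAndType(
    "tagvalue_user_session_id",
    "urn:ibm:names:ITFIM:5.1:accessmanager");

// Fetch the attributes the business app stored earlier via the custom chain
var data = IDMappingExtUtils.getIDMappingExtCache().get(sessionId);
if (data != null) {
    // Add to the AttributeList so the value lands in the SAML assertion
    stsuu.addAttribute(new Attribute(
        "businessAppData",
        "urn:oasis:names:tc:SAML:1.0:assertion",
        data));
}
```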

7. SSO is triggered to the partner in the normal fashion (e.g. browser-POST or browser-artifact).

Solution Components

To help you implement this scenario I have provided a few downloadable source code examples including:

An example business application SAMLBusinessAppEAR.ear written as a J2EE/jsp application, which includes source and a WS-Trust client that calls the TFIM STS. This example app uses a simple JAX-RPC based WS-Trust client that has appeared in some of my previous articles; however, you could use any WS-Trust client, including those referenced in my WS-Trust clients article. In the application as it is currently written, the WS-Trust client assumes TFIM is running on the same host without transport security (it uses http://localhost:9080/TrustServer/SecurityTokenService as the WS-Trust endpoint).

Configuring and Testing the Scenario

Other properties of the custom chain should look like those shown in this picture (they must match the configuration for the STS client built into the demo business app):

Your SAML partner configuration should use the second javascript mapping rule shown in the example above.

The way I tested this scenario was to install the example business app on the same WebSphere server as TFIM - but this is absolutely not a requirement. Be sure to modify the configuration properties at the top of the index.jsp file contained in the application to match your environment including the connection properties to your TFIM server, and the SSO URL for triggering SSO to your SAML partner.

You also need to ensure that the business application (index.jsp in the sample ear) receives the tagvalue_user_session_id attribute from the TAM credential as an HTTP header. To do this I used the following TAM administration command:
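The standard mechanism for this is the WebSEAL HTTP-Tag-Value junction attribute, set with pdadmin along these lines (a sketch - substitute your own object names; the left-hand side of the value names the credential attribute without its tagvalue_ prefix, and the right-hand side is the HTTP header name to insert):

```shell
pdadmin> object modify /WebSEAL/your_webseal_object/your_junction set attribute HTTP-Tag-Value user_session_id=user_session_id
```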

Note that in the above command your_webseal_object should be for the WebSEAL server in your environment. Do "object list /WebSEAL" if you don't know what it is. Also your_junction should be the junction to the WebSphere running the business application (this could be the /FIM junction for example if the EAR is running on the TFIM server like it was when I tested the scenario).

The result of running the scenario is that the SAML assertion generated as part of the SSO to the partner included the business attributes that the index.jsp puts into the STSUU before calling the TFIM STS.

And, as I wrap up this article, I just thought of some other techniques if your business app is not behind WebSEAL and doesn't have access to the user_session_id. You could easily generate your own "lookup key" in the business app instead of using the WebSEAL session id, then include that key in the TARGET parameter of the redirect to the TFIM SSO URL. This will be available to your TFIM IDP mapping rule in the STSUU as well, and you could unmarshal the query string and extract the lookup key from there. There are lots of options - you just have to keep thinking!