As members of the Fusion Middleware Architecture Group (a.k.a. the A-Team), we get exposed to a wide range of challenging technical issues around security and Oracle Fusion Middleware. We're using this blog to answer common questions and provide interesting solutions to the real-world scenarios that our customers encounter every day.
NOTICE: All our posts and much more can now be found at http://www.ateam-oracle.com/category/identity-management/

Monday, November 30, 2009

I've had a few customers recently who have struggled with this scenario - a web-service consumer receives a response message that it cannot validate. We spend so much time focusing on the request, but often don't think about the response. The messages in question look something like this:

The issue is that the message includes a timestamp, but it's not signed. I've seen this issue with both WCF clients as well as Oracle Web Services Manager (OWSM).

Depending on the client stack, there are two ways to fix this issue. The first is to simply sign the response. This is really the best practice, especially for a message that the sender took the time and effort to add WS-Security to in the first place. The second approach is to simply remove the message security (i.e., the WS-Security header and timestamp). For example, below is a modified Wssp1.2-2007-Https.xml that still ensures that the request is over SSL, but removes the "offending" timestamp.

Thursday, November 26, 2009

One of the really interesting things about working in the security area is that you get to learn about a lot of different technologies. In order to integrate/diagnose/optimize security in some container, you have to get a pretty good understanding of how that container works. This has created good opportunities for me to learn about a ton of different technologies over the years. But recently, I have to admit, I became a little uneasy when I started preparing for a customer meeting on securing their cloud.

I know that people use SOA and Cloud and PaaS (Platform as a Service), IaaS (Infrastructure as a Service), and SaaS (Software as a Service) in somewhat overlapping ways, and I am in no way an expert on "the cloud" and all of its different layers, but there have been two recent use cases that I'm bundling in this post as "Cloud Security Use Cases".

PaaS Secured By Oracle Identity Management

So, in this case, the Platform is WLS. Imagine adding WLS instances on-demand, and having them pre-configured with applications, or just an environment suitable for a certain class of applications. When the domain is configured, what should the security be? In this use case, let's assume that it's impractical to have each domain be totally isolated. Isolation in this sense means that all of the security infrastructure is also contained inside of the Node (droplet?) in the Platform cloud. The WebLogic platform needs to make use of a shared Identity Management infrastructure - Web SSO, User Authentication, Audit.

The core security configuration in WebLogic Server for a domain is a realm. Configuring a realm to accept tokens from the Web SSO infrastructure is pretty straightforward - though typically there may be some Web SSO-specific configuration, like the creation of trust between the PEP and the PDP. Authentication is straightforward as well - all instances point to a common LDAP configuration. The audit configuration is simple too - either local file audit or configured to point to a centralized service.

Authorization becomes a little more interesting. I think the type of authorization that needs to be configured depends on the requirements of the application(s) running on the domain. If you need basic JEE authorization, contained in the deployment descriptors of the application, you can use the default XACML provider. If you want centralized authorization, you'll need to deploy the realm with Custom Roles and Policies, and configure Authorization and Role providers that support centralized management - like Oracle Entitlements Server (OES). OES administration is performed centrally from the OES admin server, but the policy enforcement can be done either inside of the container (WLS-SM) or from a central service (WebService/RMI-SM). The WLS-SM makes decisions with lower latency, so it's best for applications which require low-latency decisions. The WebService and RMI SMs use the same engine and are just as fast, but once you add in the latency of SOAP or RMI, the latency goes up and throughput goes down. Essentially, from a cloud perspective, you have a few different flavors of platform (JEE Security, Centralized Admin/Centralized Enforcement, Centralized Administration/Local Enforcement).

Obviously, in this model with applications running in the cloud, it would make sense to have the supporting Identity Management technologies running in a cloud as well, but that is a post for another day ;)

More details on using WebLogic and PaaS can be found here

Multi-tenant SaaS Secured By Oracle Identity Management

In this model, the Identity Management pieces are running as part of the application node. Basically, each domain of WebLogic Server uses its own Authentication and Authorization services. This is because each domain represents a unique customer of a service, and in this example, let's assume that there are rules prohibiting the sharing of data across customers. Cloud makes sense here, since it's easy to isolate the customers from each other. But what if a use case arises where there needs to be sharing of information across customers - for example, if two customers merge or if several customers work in some sort of partnership?

The answer is to create a new domain for the merged company/partnership and then virtualize the information of each individual partner behind the new entity. This actually can be a pretty elegant solution to what is basically an edge case. 99% of the scenarios in multi-tenancy have customers isolated, but these use cases can add a lot of complexity - for example, pushing down the tenant_id filter to the database, or complexity with LDAP schema design. The preferred alternative here is to try to address these use cases with the cloud directly. Specifically, use Oracle Virtual Directory (OVD) to build a unified view of all of the users, and ODSI to virtualize the data of each individual partner.

Fortunately, WebLogic Server supports multiple realms, so you can configure a domain with one realm for each type of model you'll need in the cloud, and then simply switch the active realm for the domain. This means that once deployed, the way that the domain interacts with the Identity Management system can be changed very easily - which is the whole point of putting things in the "cloud" in the first place.

Conclusion

These use cases are the first that I've seen with actual customers, but I suspect, given all of the excitement around the cloud, that there are more out there. I encourage people to share them on this blog, so that we can have a "grounded" discussion about this topic.

Wednesday, November 25, 2009

The below is not officially supported by Oracle yet, but I've been happily running OES under WebLogic 11gR1 and thought it might be useful information for others.

WebLogic Server 10.3 is the most recent version certified for use with OES 10.1.4.3. If you want to run OES with WebLogic 11gR1 (also known as WebLogic 10.3.1), you will need to take a few extra steps.

1) install WebLogic 11gR1

2) run the DBConfig tool to create the database for OES

3) install OES 10.1.4.3 Admin Server

During the installation do not install the OES schema.

4) install the latest OES cumulative patch - currently CP2

To do this, unzip the patch, edit ApplyAdminPatch.bat (or .sh), and then run that script.

5) run install_schema.bat or .sh depending on platform. This script not only creates all of the tables and indexes, but also loads the default set of policies and boots the server.

If you accidentally install the schema in step 3 it's easy enough to recover:

Tuesday, November 24, 2009

At times it is necessary to reconfigure one or more of Oracle Access Manager's directory configurations.

This post describes the recommended process and step-by-step configuration changes to Oracle Access Manager (OAM) required to make OAM utilize a different directory or directory configuration for user, configuration and policy storage.

This post does not address the migration of data between directories themselves. Rather, it focuses on the configuration changes required in OAM.

Also be aware that schema and tree changes can require additional considerations such as requiring changes to attribute access control policies and attribute mapping.

Overview

I recommend that during the process of modifying OAM directory configuration, users turn on directory logging/debugging sufficient to monitor all LDAP requests made from OAM to the directory.

If directory changes are being made to both the user store and the OAM policy/configuration stores, then I recommend that OAM users first re-configure the user store by modifying the directory server profiles. I recommend that this change be tested and verified before moving on to modifying the policy/configuration store.

The process for modifying directory configuration in OAM mostly follows what is outlined in the OAM documentation on the subject. However, I would like to make the following clarifications:

1) Directory profile changes themselves are activated following a restart of the OAM services. Changes to directory configuration for policy and configuration storage require re-configuration of the appropriate components (Identity Server, Policy Server, and Access Server).

2) In the re-configuration of the Access Server, customers should execute the start_configureAAAServer command with the ‘install’ option rather than the ‘reconfig’ option recommended in the doc. The reconfig option does not give you the opportunity to reconfigure directory configurations. The appropriate command on UNIX is as follows: start_configureAAAServer install

Directory Profile Changes To Reflect User Store Changes

Directory profile changes can be made from either the Identity or Access System consoles. Directory profiles appear to be shared across OAM components and so only need to be changed in one place.

From the Identity System Console:

1. From the Identity System Console, click System Configuration.
2. On the System Configuration page, click Directory Profiles. The Configure Profiles page appears. The middle section of the page, under the heading Configure LDAP Directory Server Profiles, contains a list of configured directory server profiles.
3. Click the link for the directory server profile that you want to view. The Modify Directory Server Profile page appears.

From the Access System Console:

1. From the Access System Console, click System Configuration, then click Server Settings. The View Server Settings page appears. This page displays the directory settings for configuration and policy storage, a link to modify those settings, and a listing of directory profiles with links to modify the profiles as well.
2. Click the link for the directory server profile that you want to view. The Modify Directory Server Profile page appears.

From the Modify Directory Server Profile page, simply make the modifications you desire and then restart all OAM services.

At this point I recommend that you do a test login to verify that the change worked and that users can now be successfully authenticated out of the modified Directory Profile (associated with a new or modified directory).

Modifying Policy and Configuration Data Directory Configuration

1. First modify the Directory Server Configuration in the Identity System Console.
a. From the Identity System Console, click System Configuration. On the System Configuration page, click Directory Profiles. The Configure Profiles page appears.
b. The top portion of the Configure Profiles page shows details for the directory server that contains user data and configuration data. Click the Directory Server link to bring up the modification page.
c. Modify the page as you intend. Note that if you change the security, server, or port setting, you will have to rerun the Identity Server setup (see below).

2. Modify the Directory Server Configuration in the Access System Console.
a. From the Access System Console, click System Configuration, then click Server Settings. The View Server Settings page appears.
b. Click the Directory Server link to bring up the modification page.
c. Note that the page is divided into two sections: one to configure the store for 'configuration data' and one to configure the store for 'policy data'. Many OAM users will make these the same, but they can be different.
d. Modify the page as you intend. After making changes you will have to rerun the setup processes for the Policy and Access Servers (see below).

Rerun Setup for All Components

Rerun the setup of the Identity System, Policy Manager, and Access Server, in order, carefully following the instructions in the documentation (with one exception listed below):

After completing each component, I recommend that you restart that component and verify that on startup it is successfully pulling the appropriate data from the modified source.

As mentioned above, you need to execute the start_configureAAAServer command with the 'install' option rather than the 'reconfig' option recommended in the doc. The reconfig option does not give you the opportunity to reconfigure directory configurations. The appropriate command on UNIX is as follows: start_configureAAAServer install

OES-OWSM Integration FAQ

Answer: You'll need an 11g SOA Suite Domain protected by an OES WLS SM. What I did for the OOW demo, which makes things a lot simpler, is use the OESAdjudicator. I talked a little about this approach previously. Basically, follow the steps I outlined before, except that instead of throwing the domain into discovery mode and figuring out the WLS resources, just add the OESAdjudicator to the domain. This basically eliminates the need for OES to protect the WLS resources.

Question: How do I build it?

Answer: You'll need to get the source code from svn. The details are here. The src and the build are contained in the oes-owsm directory. You'll need to modify the build.xml file to point to the SOA home and OES home directories. From there, set up your environment (BEA_HOME/wlserver_10.3/server/bin/setWLSEnv.cmd) and then run ant. You should end up with a file called oes-owsm.jar.

Question: Why do I need to know how to build it?

Answer: Because when you use it to protect a WLS Web Service, all of the configuration is contained in the META-INF/policies/samples/oes file, and this needs to be packed into the jar. I have not tested this custom assertion protecting SOA Composite Web Services.

Question: What can I configure in the META-INF/policies/samples/oes file?

There are only a few properties that you need to really care about. ConfigID is the name of the realm that WLS is running in - I think the configTool makes it the domain name, but look in config.xml just to be sure. Application and Resource are concatenated together to make up the prefix for the resource in OES. Example: //app/policy//owsm/oow2009/model object name/port name
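To make the concatenation concrete, here is a minimal sketch - the class and method names are mine, not part of the integration, and the doubled slash after "policy" simply mirrors the example above:

```java
// Illustrative sketch only: shows how the Application ("owsm") and Resource
// ("oow2009") properties combine into the OES resource prefix, with the web
// service's model object name and port name appended afterwards.
public class OesResourceName {

    static String oesResource(String application, String resource,
                              String modelObject, String portName) {
        // prefix = "//app/policy//" + Application + "/" + Resource
        String prefix = "//app/policy//" + application + "/" + resource;
        return prefix + "/" + modelObject + "/" + portName;
    }

    public static void main(String[] args) {
        System.out.println(oesResource("owsm", "oow2009", "modelObjectName", "portName"));
        // prints //app/policy//owsm/oow2009/modelObjectName/portName
    }
}
```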

Question: How do I install it?

Answer: Copy the oes-owsm.jar to the DOMAIN_HOME/lib directory, and restart the server.

Question: How do I bind it to a webservice?

Answer: You add it to the policy just like any other WLS Web Services policy - either through the WLS console or through the @Policy annotation. Note: You need to configure an additional policy that does the actual authentication, like one of the SAML policies. The assertion assumes that the user is already authenticated. Also, again, I haven't tested attaching it to a SOA composite, but if someone wants to try, I'll support the effort ;)

Question: How do I author an OES policy using it?

Answer: Take a look at the policies from my OOW demo.

I modeled it as a two-step process (I could have just as easily passed down the entire SOAP message to the WLS SM, like I did in the OES-BPEL integration, but for most in-bound authorization cases, I like this model). In the first step, the assertion calls OES with the lookup action. If it succeeds, it processes the responses in an interesting way. If the response has a name of namespace, then it uses the value as the target namespace for the XPathQuery. If it's anything else, then the Assertion assumes that the name of the response is a dynamic attribute, and the value is an XPathQuery that the assertion should run to populate the attribute.

In the example for OOW, there are two responses from the lookup. One gets the value of the CCType in the body and adds it to the variable oow_cc_type. The other pulls the value of the attribute title from the AttributeStatement of the SAML Assertion (basically, the user's title) and sets it in the oow_title attribute.
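The branching logic described above can be sketched roughly like this - a simplification, since real OES responses aren't a plain map, and the names here are just the ones from the OOW example:

```java
import java.util.LinkedHashMap;
import java.util.Map;

// Rough sketch of how the assertion interprets lookup responses: a response
// named "namespace" supplies the target namespace for the XPath queries;
// every other response maps a dynamic attribute name to the XPathQuery that
// should populate it. (Real OES response objects are richer than a Map.)
public class LookupResponseProcessor {

    static Map<String, String> process(Map<String, String> lookupResponses) {
        Map<String, String> dynamicAttrs = new LinkedHashMap<String, String>();
        String targetNamespace = null;
        for (Map.Entry<String, String> r : lookupResponses.entrySet()) {
            if (r.getKey().equals("namespace")) {
                // used as the target namespace for subsequent XPath queries
                targetNamespace = r.getValue();
            } else {
                // name = dynamic attribute, value = XPathQuery to run
                dynamicAttrs.put(r.getKey(), r.getValue());
            }
        }
        System.out.println("target namespace: " + targetNamespace);
        return dynamicAttrs;
    }

    public static void main(String[] args) {
        Map<String, String> responses = new LinkedHashMap<String, String>();
        responses.put("namespace", "http://example.com/orders");            // illustrative URI
        responses.put("oow_cc_type", "//ord:CCType");                       // illustrative query
        responses.put("oow_title", "//saml:Attribute[@AttributeName='title']");
        System.out.println(process(responses));
    }
}
```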

The execute action gets called next and, as you can see from the example, passes in the values it got from the lookup XPathQueries.

Question: What do I do if I want to use a namespace other than tns,env, or saml?

Answer: You need to modify AssertionNamespaceContext.java to handle the namespace. The code is pretty simple.

public String getNamespaceURI(String prefix) {
    System.out.println("Looking for the namespace for prefix: " + prefix);
    if (prefix.equals("env")) {
        return "http://schemas.xmlsoap.org/soap/envelope/";
    } else if (prefix.equals("saml")) {
        return "urn:oasis:names:tc:SAML:1.0:assertion";
    }
    return theNamespace;
}
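For instance, here is a self-contained sketch of that idea - an extended context with a hypothetical "ord" prefix, exercised against the JDK's own XPath engine. The ord namespace URI and the sample payload are made up for illustration; only the env and saml mappings come from the original code:

```java
import java.io.ByteArrayInputStream;
import java.util.Iterator;
import javax.xml.namespace.NamespaceContext;
import javax.xml.parsers.DocumentBuilderFactory;
import javax.xml.xpath.XPath;
import javax.xml.xpath.XPathFactory;
import org.w3c.dom.Document;

// Sketch: a NamespaceContext in the spirit of AssertionNamespaceContext,
// extended with a hypothetical "ord" prefix, then used to run an XPath
// query against a tiny SOAP-like document.
public class NamespaceDemo {

    static final String SOAP =
          "<env:Envelope xmlns:env=\"http://schemas.xmlsoap.org/soap/envelope/\">"
        + "<env:Body><ord:Order xmlns:ord=\"http://example.com/orders\">"
        + "<ord:CCType>AMEX</ord:CCType></ord:Order></env:Body></env:Envelope>";

    static class DemoContext implements NamespaceContext {
        private final String theNamespace; // fallback, like tns in the original
        DemoContext(String defaultNs) { this.theNamespace = defaultNs; }
        public String getNamespaceURI(String prefix) {
            if (prefix.equals("env"))  return "http://schemas.xmlsoap.org/soap/envelope/";
            if (prefix.equals("saml")) return "urn:oasis:names:tc:SAML:1.0:assertion";
            if (prefix.equals("ord"))  return "http://example.com/orders"; // the added mapping
            return theNamespace;
        }
        public String getPrefix(String uri) { return null; }
        public Iterator<String> getPrefixes(String uri) { return null; }
    }

    // Parses the XML and evaluates the XPath expression using DemoContext.
    static String extract(String xml, String expr) {
        try {
            DocumentBuilderFactory dbf = DocumentBuilderFactory.newInstance();
            dbf.setNamespaceAware(true);
            Document doc = dbf.newDocumentBuilder()
                              .parse(new ByteArrayInputStream(xml.getBytes("UTF-8")));
            XPath xp = XPathFactory.newInstance().newXPath();
            xp.setNamespaceContext(new DemoContext("http://example.com/orders"));
            return xp.evaluate(expr, doc);
        } catch (Exception e) {
            throw new RuntimeException(e);
        }
    }

    public static void main(String[] args) {
        System.out.println(extract(SOAP, "/env:Envelope/env:Body/ord:Order/ord:CCType"));
        // prints AMEX
    }
}
```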

So What Next?

I added the project (like most of my stuff from the blog) to http://soa-security.samplecode.oracle.com. Try this out, and let me know what you think, but also, this integration could be enhanced in a number of ways. Examples:

What should OES do when filtering the response?

What could OES do with responses if it knew that the Web Service has a DataControl (i.e. SOAP Message contains a query)?

The source is out there, so I'm hoping people will add their own contributions. Basically, I hope that I've taught you how to fish. If you are starving, just post a comment here, and we'll throw you a fish stick :)

Wednesday, November 4, 2009

In yesterday's post on WLS to OSB to WLS with SAML Sender Vouches, OSB was not really acting as a full "active-intermediary". Basically, OSB was validating the SAML Assertion on the request and sending a new request with a new SAML Assertion to the business-service. The response from the business service (though signed) was ignored by OSB. The signature is validated by the sender. Let's for the sake of discussion call this configuration active/passive security. This model works if:

You are basically interested in using OSB to authorize the use of services.

You are not doing any transformations on the response (you'll break the signature).

The producer and consumer trust each other directly. This is different from each of them trusting the service bus.

If your deployment meets these criteria, then "Bob's Your Uncle" (I'm from Massachusetts, not California, so I'm not sure I used that right - but basically I'm saying "you're good").

If not, then read on. These are the gory details of getting this scenario to work in active/active. I'm not going to cover everything from yesterday, I'll just get down to the brass tacks of the policies.

Configuring the WLS Service Producer

Since this is a use case primarily about processing the response, let's start with the response coming back from the service producer. OSB cannot handle <ProtectTokens> policy assertions because it does not support the STR Transform that WLS uses to generate the signature. So, this assertion has to be removed from the standard Wssp1.2-2007-Saml1.1-SenderVouches-Wss1.0.xml.

Configuring the Business Service

A nuance of the way that OSB enforces policy is that the entire message is processed - signatures validated, messages decrypted - and then the policy is checked to see if it complies. The policy is basically a minimum that has to be met - not a prescription for how to validate the message. The idea of the "Empty" policy is to just have OSB process the signature that the Service Producer generated. There are no other conditions.
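The "minimum, not prescription" idea can be illustrated as a simple superset check - a toy model of my own, since OSB's actual enforcement is of course far richer:

```java
import java.util.Arrays;
import java.util.Collections;
import java.util.HashSet;
import java.util.Set;

// Toy illustration of "policy as a minimum": OSB first processes whatever
// security is on the message (verifying signatures, decrypting), then checks
// that the processing results cover at least what the policy demands.
public class PolicyMinimumCheck {

    static boolean satisfies(Set<String> processed, Set<String> required) {
        // the policy passes if everything it requires was actually found/processed
        return processed.containsAll(required);
    }

    public static void main(String[] args) {
        Set<String> processed = new HashSet<String>(
                Arrays.asList("signature-verified", "timestamp-present"));
        Set<String> emptyPolicy = Collections.emptySet(); // the "Empty" policy requires nothing
        System.out.println(satisfies(processed, emptyPolicy));
        // prints true - the message's signature is processed, no extra conditions
    }
}
```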

Configuring the Proxy Service

The message now needs to be signed by OSB. The pre-configured Sign.xml will work just fine.

Configuring WLS Service Consumer

You need to use two different policies for the JAX-WS client, since the signature generated by OSB does not conform to the standard policy in two ways: it does not protect (sign) the tokens, and it includes the raw X.509 Certificate as opposed to a reference by Issuer Serial Number. So, the policy for the outputMessage (outbound) is a custom policy very similar to the in-bound policy on the service consumer, except that we have to additionally allow for the Recipient Token to be passed. The policy for the inputMessage (inbound) is the standard Wssp1.2-2007-Saml1.1-SenderVouches-Wss1.0.xml. The code looks like this:

Tuesday, November 3, 2009

I'm sure many people were hoping that the next post would be the much anticipated OES-OWSM Custom Assertion, but things have been very busy coming out of Open World. Oddly, I'm working with multiple customers that have been looking for a solution to the following use case:

WLS makes a Web Services call using JAX-WS to Oracle Service Bus (OSB), passing the identity of the caller using SAML Sender Vouches (SV). OSB serves as an active intermediary and processes the SAML Assertion. OSB then calls a Business Service, propagating the user's identity, again using SAML SV.

What makes this use case a little tricky is that OSB at present does not understand WS-Policy 1.2 assertions, and the JAX-WS stack on WLS does not understand the OSB proprietary assertions. This means that local policy needs to be applied in 4 places:

JAX-WS Client using WS-Policy 1.2

OSB Proxy Service using OSB Policy

OSB Business Service using OSB Policy

WLS Web Services service using WS-Policy 1.2

This post will cover the specific policies and how to apply them at each point.

JAX-WS Client using WS-Policy 1.2

You need to use a ClientPolicyFeature, and load the actual policy from an InputStream. In the example below, I'm running inside a web-application and packaged the policy.xml in the WEB-INF directory.

On the client, besides setting up the policy, you'll also need to configure a SAMLCredentialMapper and a PKICredentialMapper. The PKICredentialMapper needs to have access to a keystore that contains the private key used to sign the request.

OSB Proxy Service using OSB Policy

The OSB policy is really just the WLS 9.2 Web Services stack. Basically, you need to create a WS-Policy and attach it to the request in the proxy service. You also need to make sure to configure the proxy service to process the WS-Security header.

Setting up OSB to consume the SAML Assertion and validate the signature requires the creation of a SAML Identity Asserter. NOTE: In the asserting party configuration, OSB uses a relative path. Remember this if you run into trouble getting OSB to understand the assertion.

OSB Business Service using OSB Policy

This was really the only "tricky" part. I had to modify the saml-sv policy to sign the timestamp. The policy on the business service doesn't require it, but it does expect the Timestamp to have a wsu:Id, and making OSB sign it does exactly that. The set-up in OSB is the same as the proxy service, just configure the policy on the request.

When adding a business service that requires signing, the proxy service has to have a Service Key Provider. This is really just a wrapper around the policies stored in the PKI Mapper. Basically, you select an alias to use to sign the message going to that business service.

Inside of the realm, in order to be able to configure a Service Key Provider, the PKI CredMapper needs to exist. In addition, the SAML Credential Mapper is required to generate the SAML Assertion.

Summary

In setting up a scenario as complicated as this, there will undoubtedly be challenges. Use the debug settings inside of WLS; they are your friend. In my set-up I added the following JAVA_OPTIONS to my setDomainEnv.cmd
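As a hedged example (the exact flag names below are commonly cited for WLS 10.3 message-level security debugging, but verify them against your WebLogic version's documentation before relying on them):

```
rem Verbose web services (WSEE) and message-level crypto debugging - confirm
rem these flags against your WLS version before use.
set JAVA_OPTIONS=%JAVA_OPTIONS% -Dweblogic.wsee.verbose=* -Dweblogic.xml.crypto.dsig.verbose=true -Dweblogic.xml.crypto.wss.verbose=true
```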

This gives you good visibility into where the issues are. It also can't hurt to have the weblogic.security.atn and weblogic.security.credmap debug categories on. When trying to figure out what is going on, more information is better. Be patient - this will spit out a lot of information, but look through it and you can normally get to the issue.