As members of the Fusion Middleware Architecture Group (a.k.a the A-Team), we get exposed to a wide range of challenging technical issues around security and Oracle Fusion Middleware. We're using this blog to answer common questions and provide interesting solutions to the real-world scenarios that our customers encounter every day.
NOTICE: All our posts and much more can now be found at http://www.ateam-oracle.com/category/identity-management/

Wednesday, March 28, 2012

In this post I walk you through how to validate an Oracle Identity Management build out containing OID, OVD, OIM, and OAM. This post was motivated by work I have done with Fusion Apps.

It is important to validate the IDM build out for Fusion Apps before you move on to the provisioning of Fusion Apps itself. Problems detected during the IDM build out are much easier to diagnose and fix than problems detected during FA provisioning, FA functional setup or FA operations themselves.

In addition, it is important to have documented validation steps for your Oracle IDM environment to use at other points as well. For instance, you will want to validate your IDM environment when you bring it back online following a backup.

Lastly, you will want to be able to go through validation steps for your IDM environment as a means of debugging IDM-related application issues. For example, say people suddenly report that they can't log in to a Fusion HCM application. You'll want to go through the IDM validation steps to see what, if anything, is wrong with the IDM infrastructure that could be causing the issue.

Tuesday, March 27, 2012

If your organization is like many, you’ve conducted access certification for a handful of applications. But what about the other thousand applications? Organizations are spending up to 40% of their IT budgets on compliance, yet many chief information security officers don’t feel any safer than they were before. With the large volume of systems, applications, users, and entitlements to review, the process is error-prone and difficult.

In this session, Mark Robison of ING shares his learning experiences on how to address these challenges. He will discuss how to:

Tuesday, March 20, 2012

This document describes how to encapsulate OIM API calls in a Web Service for use in a custom SOA composite to be included as an approval process in a request template.

We always recommend that customers follow this approach when invoking OIM's APIs inside SOA composites used as approval processes, for the following reasons:

A web service implementation allows all related APIs to be instantiated once at service startup, as opposed to obtaining a remote reference to each required API interface on every invocation. This improves performance and reduces the memory footprint of the composite compared to instantiating these APIs in embedded Java tasks.

This paradigm also allows an HA implementation for the Web Service encapsulating the API calls, and provides the ability to deploy the web service on a server separate from the SOA and OIM servers if so desired. This increases the robustness and reliability of the solution.

According to the BPEL documentation, Embedded Java Tasks should only be used for quick utility logic; no business logic should be included in these tasks. For details, refer to section 13.2.3, "How to Embed Java Code Snippets into a BPEL Process with the bpelx:exec Tag," at http://docs.oracle.com/cd/E15586_01/integration.1111/e10224/bp_java.htm#BABHJHBG. The reason is that all memory required for objects instantiated within the embedded Java code is added to the memory space of the composite instance itself, and that memory is retained for the life of the instance. Since a composite with an asynchronous BPEL process (which is definitely the case for OIM's approval process composites) can remain active for days or weeks, memory problems may start to arise.

Procedure

The assumption here is that JDeveloper will be used to edit the SOA composite, as there is no other tool suitable for this purpose. JDeveloper is also a good tool for creating the Web Service wrapping the OIM API calls: all that is needed is to create a POJO (Plain Old Java Object), convert it to a Web Service, and deploy it to an application server (WebLogic in this case), all of which can be accomplished with JDeveloper.

Please refer to the JDeveloper 11g documentation for information on how to create a Web Service from a POJO, since that is out of scope for this document. Once the web service is created and deployed, the WSDL can be obtained from the WebLogic Admin Console: access the deployments and drill down to the web service's Test Client. The WSDL is available from the Test Client window or from the table showing the testing points in the console. Simply copy the WSDL URL and paste it into the appropriate text box when configuring the Web Service reference in the composite.
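To make the wrapper idea concrete, here is a minimal sketch of the pattern behind the POJO. The class and method names are hypothetical, and a stub class stands in for the real OIM client (real code would use classes such as oracle.iam.platform.OIMClient and would carry JAX-WS annotations like @WebService/@WebMethod); the key point is that the costly client initialization happens once and is reused across all requests.

```java
// Hypothetical stand-in for the real OIM client; it counts how often it is
// constructed so the "instantiate once" behavior can be observed.
class ExpensiveClient {
    static int instancesCreated = 0;

    ExpensiveClient() {
        instancesCreated++;
        // Real code would connect to OIM and log in here.
    }

    String findUserStatus(String userLogin) {
        // Real code would look up the user through the OIM user management API.
        return "Active";
    }
}

// Sketch of the POJO that would be converted to a Web Service in JDeveloper.
class OimApiWrapper {
    // Shared client: created lazily on first use, then reused by every call.
    private static ExpensiveClient client;

    private static synchronized ExpensiveClient client() {
        if (client == null) {
            client = new ExpensiveClient();
        }
        return client;
    }

    // Would be exposed as a web service operation in the deployed wrapper.
    public String getUserStatus(String userLogin) {
        return client().findUserStatus(userLogin);
    }
}
```

Contrast this with embedded Java tasks, where the equivalent client objects would be instantiated per composite instance and retained for the life of that instance.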

Once the Web Service reference is configured in the composite, it can be linked to the BPEL process inside the composite: simply connect the icon representing the BPEL process to the Web Service reference by dragging an arrow between the two. Consult the SOA Composite Editor documentation in the JDeveloper 11g user's guide. To invoke methods on the newly wired-in Web Service, an Invoke Task must be included for each method to be called. The Invoke Task allows you to define the following elements:

An input variable that will include the input values for the specific method call taken from the WSDL of the Web Service.

An output variable that will receive the returning data from the invocation of the Web Service method formatted as specified by the WSDL of the Web Service.

Before the invocation there is typically an Assign Task that populates the input parameters of the Web Service call, either by copying values from other variables or by assigning literal values to the Input Variable. Inserting the Invoke Task before inserting the Assign Task lets you create the Input and Output Variables up front: the Input Variable is then populated by the Assign Task, and the Output Variable receives the output data from the Web Service method call. The values in the Output Variable can then be used anywhere else in the composite and transferred with other Assign Tasks within the BPEL process flow.
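As a rough illustration of this Assign/Invoke pairing in BPEL source form (all partner link, variable, operation, and namespace names here are hypothetical, imagined for a getUserStatus operation on the wrapper service):

```xml
<!-- Assign Task: copy a value from the process input into the Invoke's
     Input Variable before the call is made -->
<assign name="PrepareGetUserStatusInput">
  <copy>
    <from variable="inputVariable" part="payload"
          query="/client:process/client:userLogin"/>
    <to variable="GetUserStatus_InputVariable" part="parameters"
        query="/ns1:getUserStatus/ns1:userLogin"/>
  </copy>
</assign>

<!-- Invoke Task: call the wrapper Web Service; the response lands in the
     Output Variable, available to later Assign Tasks in the flow -->
<invoke name="InvokeGetUserStatus"
        partnerLink="OimApiWrapperService"
        portType="ns1:OimApiWrapper"
        operation="getUserStatus"
        inputVariable="GetUserStatus_InputVariable"
        outputVariable="GetUserStatus_OutputVariable"/>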

Summary

SOA Suite allows the execution of embedded Java logic within composites, but the OIM Java APIs are not good candidates for Embedded Java Tasks, especially when the composites serve as approval processes whose instances can potentially live for a long time. The recommended approach is to encapsulate the OIM APIs in a Web Service with a SOAP interface, then invoke operations on that wrapper service and simply manipulate the results. This has benefits from the architecture design perspective as well as from a performance and memory footprint standpoint, helping prevent out-of-memory issues.

Friday, March 16, 2012

OIM 11g can be configured to keep its user and role population synchronized with an LDAP directory using the LDAPSync feature. This functionality is based on asynchronous processing through orchestration events from OIM to LDAP, and on scheduled tasks for synchronization from LDAP to OIM. This approach means that at some point in time some of the entries in the two repositories may be out of sync, especially when executing long-running trusted reconciliation scheduled jobs. The differences can be caused by processing errors or by the time lapse between user creation in OIM and user creation in LDAP. This post details some guidelines to minimize and troubleshoot possible errors in OIM LDAPSync.

Tune the environment for better LDAPSync performance when executing large trusted reconciliation jobs, on the order of 30K users or more. Here are some tuning tips, some straight from the documentation and others from existing deployments:

Consider deploying multiple OVD instances for failover and load balancing. Front-end the instances with an LB and use the OID LB virtual host as the LDAP server host.

Consider setting the Operations Timeout parameter for the adapters to 30000 if needed (using ODSM).

Consider increasing maxpoolsize for the adapters to 30-40 if needed (using ODSM).

LDAPSync Monitoring

While the LDAPSync orchestration is running, check the following tables and columns in the OIM schema to verify processing:

Obtain the latest reconciliation job key (RJ_KEY) with the query:

select max(RJ_KEY) from recon_events;

Table RECON_BATCHES: using RJ_KEY and RB_NOTE, verify that the orchestration events are being created. The RB_NOTE column shows the orchestration process ID and the operation; it may also show any errors that occur.

Table ORCH_PROCESS: holds the generated orchestration processes, with ID being the orchestration process ID. If STATUS shows Compensated, an event failed; the details can be seen in the ORCH_EVENTS table.

Table ORCH_EVENTS: linked to the orchestration process through the PROCESSID column. The RESULT column has the error details in case of failure.
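Putting the steps above together, the checks can be run as a small set of queries against the OIM schema. This is only a sketch based on the columns described above; verify column names and the exact case of status values against your OIM schema version before relying on it.

```sql
-- Latest reconciliation job key
select max(RJ_KEY) from recon_events;

-- Batches for that job: RB_NOTE shows the orchestration process ID,
-- the operation, and any errors (substitute the RJ_KEY value from above)
select RB_NOTE
  from RECON_BATCHES
 where RJ_KEY = :rj_key;

-- Orchestration processes where an event failed
select ID, STATUS
  from ORCH_PROCESS
 where upper(STATUS) = 'COMPENSATED';

-- Error details for the events of a given failed process
select PROCESSID, RESULT
  from ORCH_EVENTS
 where PROCESSID = :orch_process_id;
```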

The out-of-the-box reconciliation job "Retry Failed Orchestrations" can be used to retry compensated orchestration processes. Specify a date range ("ddMMyyyy") to retry multiple processes, or an "Orchestration ID" for a single one. OIM BP02 also includes fixes for this task.

Note: when reconciling new users that come in as disabled, an existing bug may create the orchestration disable event before the orchestration create event. These disable events then fail in LDAP because the user doesn't exist yet. As a workaround, such events can be retried with the recon job above (specify OPERATION=DISABLE and a date range) after all users have been created in LDAP.

Tuesday, March 13, 2012

Deployments of Oracle Identity and Access Management products (including the IDM build out for Fusion Apps) can suffer complexity and delay because certain tasks required for the build out can sometimes only be performed by individuals who are not part of the core deployment team.

In many organizations, IT responsibilities are heavily siloed, and some tasks during an IAM deployment may require assistance from individuals who operate in silos different from the team doing the deployment itself.

It is important to identify these tasks up front. When possible, it is a good idea to make them prerequisites to the actual onsite installation/deployment. When that is not possible, it is important to line up the assistance that will be required from role players outside the core install/deployment project team.

The following are examples of such tasks:

Network

1. Provisioning of virtual hosts and VIPs.

2. Configuration of load balancers.

DB

1. Provisioning of DB including install, configuration, and creation of instances.

2. Running the RCU.

3. DB backups

Machine and Storage Provisioning

Provisioning of the shared storage and machines required for the install, including the installation and patching of the OS on the machines themselves. You'd think this would go without saying, but I've seen enough projects get delayed by a lack of machines and storage that I feel I have to mention it.

Root Access

Root access is required during the creation of oraInventory and at several points during the web tier, OID, and OVD install. It is also required to do environment (file system) backups if backup is done as dictated by the EDG. One possible alternative is to do the backup as the install user and then separately backup the few files that are owned by root which do not change from the early stages of the install.

Certificates – PKI Administration

People often forget about the creation of certificates needed for SSL connections and web services security until they are actually needed. The trouble is that in many organizations, the team of people that create certificates for the organization is often small and the process by which certificates are requested and granted can take time. I recommend that when possible certificates be requested and created in advance.

When the request must come from a software component that is being installed as part of the deployment, it is still a good idea to talk to your PKI administrators in advance to make sure that the procedure for issuing the request is clear and to give them a heads up that you’d like the certificate issued as quickly as possible.

Monday, March 12, 2012

In the January/February issue of Oracle Magazine, Frank Nimphius wrote a good article on ADF security and OPSS policies. The article includes a good sample application that utilizes ADF and OPSS security, along with a pretty thorough explanation of how the sample application works and was created. You can find the article, which includes a link to the sample application download, here.