12.2 Defining an Optimal Input File Strategy for Oracle I/PM

The input file is the smallest unit of work that the input agent can schedule and process. Several factors must be taken into account to achieve the best performance, scalability, and high availability in an Oracle I/PM cluster:

All of the machines in an Oracle I/PM cluster share a common input directory.

Input files from this directory are distributed to each machine via a JMS queue.

The frequency with which the directory is polled for new files is configurable.

Each machine has multiple parsing agents that process the input files. The number of parsing agents is configured via the Work Manager created within the Oracle I/PM deployment.

Optimum performance will be achieved when:

Each Oracle I/PM cluster instance has the maximum affordable number of parsing agents configured via the Work Manager without compromising the performance of the other I/PM activities, such as the user interface and Web services.

The inbound flow of documents is partitioned into input files containing the appropriate number of documents. On average there should be two input files queued for every parsing agent within the cluster.

If one or more machines within a cluster fails, active machines will continue processing the input files. Input files from a failed machine will remain in limbo until the server is restarted. Smaller input files ensure that machine failures do not place large numbers of documents into this limbo state.

For example, consider 10,000 inbound documents per hour processed by two servers. A configuration of two parsing agents per server produces acceptable overall performance, with each agent ingesting two documents per second. Four parsing agents at two documents per second is eight documents per second, or 28,800 documents per hour. Note that a single input file of 10,000 documents would not be processed within the hour, because a single parsing agent working at 7,200 documents per hour cannot complete it alone. Dividing the work into eight input files of 1,250 documents each ensures that all four parsing agents are fully utilized, and the 10,000 documents are completed within the one-hour period. Smaller files also mean that if one server fails, the other can continue processing the input files remaining on its parsing agents until the work is successfully completed.
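The partitioning guideline above (two queued input files per parsing agent in the cluster) can be sketched in a few lines of shell arithmetic; all numbers are taken from the example and are illustrative, not product defaults.

```shell
# Sizing sketch: split an inbound batch into input files using the
# guideline of two queued input files per parsing agent in the cluster.
# All numbers are illustrative, taken from the example above.
DOCS_PER_HOUR=10000
SERVERS=2
AGENTS_PER_SERVER=2

AGENTS=$((SERVERS * AGENTS_PER_SERVER))   # 4 parsing agents in the cluster
FILES=$((AGENTS * 2))                     # two input files per agent
DOCS_PER_FILE=$((DOCS_PER_HOUR / FILES))

echo "Split $DOCS_PER_HOUR documents into $FILES input files of $DOCS_PER_FILE documents each"
```

Running this for the example yields eight input files of 1,250 documents each, matching the partitioning described above.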

When deploying SOA composites to the SOA subsystem used by I/PM, deploy to a specific server's address and not to the LBR address (ecm.mycompany.com). Deploying to the LBR address may require direct connection from the deployer nodes to the external LBR address, which may require additional ports to be opened in the firewalls used by the system.

12.4 Managing Space in the SOA Infrastructure Database

Although not all composites may use the database frequently, the service engines generate a considerable amount of data in the CUBE_INSTANCE and MEDIATOR_INSTANCE schemas. Lack of space in the database may prevent SOA composites from functioning. Watch for generic errors, such as "oracle.fabric.common.FabricInvocationException" in the Oracle Enterprise Manager Fusion Middleware Control console (dashboard for instances). Search also in the SOA server's logs for errors, such as:

These messages typically indicate space issues in the database that will likely require adding more data files or more space to the existing files. The SOA database administrator should determine the extension policy and parameters to use when adding space. Additionally, old composite instances can be purged to reduce the size of the SOA infrastructure database. Oracle does not recommend using Oracle Enterprise Manager Fusion Middleware Control for this type of operation, because in most cases it causes a transaction timeout. There are specific packages provided with the Repository Creation Utility to purge instances. For example:

This deletes the first 1,000 instances of the FlatStructure composite (version 10) created between '2010-09-07' and '2010-09-08' that are in the "UNKNOWN" state. Refer to Chapter 8, "Managing SOA Composite Applications" in the Oracle Fusion Middleware Administrator's Guide for Oracle SOA Suite for more details on the operations included in the SQL packages provided. Always use the provided scripts for a correct purge; deleting rows in just the composite_dn table may leave dangling references in other tables used by the Oracle Fusion Middleware SOA Infrastructure.

12.5 Configuring UMS Drivers

Note:

This step is required only if the SOA system used by Oracle I/PM uses the User Messaging Service (UMS).

UMS driver configuration is not automatically propagated in a SOA cluster. When the SOA system that I/PM invokes uses UMS, you must do the following:

Apply the UMS driver configuration on every server in the EDG topology that uses the driver.

When server migration is used, servers are moved to a different node's domain directory. It is necessary to pre-create the UMS driver configuration in the failover node. The UMS driver configuration file location is:

(where '*' represents a directory whose name is randomly generated by Oracle WebLogic Server during deployment, for example, "3682yq").

You must restart the driver for these changes to take effect (that is, for the driver to consume the modified configuration). Perform these steps to restart the driver:

Log in to the Oracle WebLogic Administration console.

Expand the environment node on the navigation tree.

Click Deployments.

Select the driver.

Click Stop->When work completes and confirm the operation.

Wait for the driver to transition to the "Prepared" state (refresh the administration console page, if required).

Select the driver again, and click Start->Servicing all requests and confirm the operation.

Verify in Oracle Enterprise Manager Fusion Middleware Control that the driver properties have been preserved.

12.6 Scaling the Topology

You can scale up or scale out the enterprise topology. When you scale up the topology, you add new managed servers to nodes that are already running one or more managed servers. When you scale out the topology, you add new managed servers to new nodes.

When scaling up the topology, you already have a node that runs a managed server that is configured with the necessary components. The node contains a WebLogic Server home and an Oracle Fusion Middleware home in shared storage. Use these existing installations (such as WebLogic Server home, Oracle Fusion Middleware home, and domain directories) when you create the new managed servers. You do not need to install WebLogic Server binaries at a new location or to run pack and unpack.

12.6.1.1 Scale-up Procedure for Oracle I/PM

Perform these steps to scale up the topology for Oracle I/PM:

Using the Oracle WebLogic Server Administration Console, clone WLS_IPM1 to a new managed server. The source managed server to clone should be one that already exists on the node where you want to run the new managed server.

Perform these steps to clone a managed server:

In the Domain Structure window of the Oracle WebLogic Server Administration Console, expand the Environment node and then Servers. The Summary of Servers page opens.

Click Lock & Edit and then select the managed server that you want to clone (WLS_IPM1).

Click Clone.

Name the new managed server WLS_IPMn, where n is a number that identifies the new managed server.

Note:

The remainder of the steps assume that you are adding a new server to ECMHOST1, which is already running WLS_IPM1.

For the listen address, assign the host name or IP to use for this new managed server. If you are planning to use server migration for this server (which Oracle recommends), this should be the virtual host name for the server. This virtual host name should be different from the one used for the existing managed server.

Configure the location for the JMS persistence stores as a directory that is visible from both nodes. By default, the JMS servers used by Oracle I/PM are configured with no persistent store and use WebLogic Server's store (ORACLE_BASE/admin/domain_name/mserver/domain_name/servers/server_name/data/store/default). You must change Oracle I/PM's JMS server persistent store to use a shared base directory as follows:

Log in to the Oracle WebLogic Server Administration Console.

In the Domain Structure window, expand the Services node and then click the Persistence Stores node. The Summary of Persistence Stores page opens.

Click Lock & Edit.

Click New, and then Create FileStore. The Create a New File Store page opens.

Enter the following information:

- Name: IPMJMSServernStore (for example, IPMJMSServer3Store, which allows you to identify the service it is created for)

- Target: WLS_IPMn (for example, WLS_IPM3).

- Directory: Specify a directory that is located in shared storage so that it is accessible from both ECMHOST1 and ECMHOST2 (ORACLE_BASE/admin/domain_name/ipm_cluster/jms).

Note:

This directory must exist before the managed server is started or the start operation will fail.
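Because the start operation fails when the store directory is missing, it helps to pre-create the shared directory from the shell before starting the managed server. A minimal sketch, with an illustrative ORACLE_BASE and the domain and cluster names used as examples:

```shell
# Pre-create the shared JMS persistent store directory so the managed
# server can start. ORACLE_BASE, domain, and cluster names are examples.
ORACLE_BASE=/tmp/oracle_demo
JMS_DIR="$ORACLE_BASE/admin/ecm_domain/ipm_cluster/jms"
mkdir -p "$JMS_DIR"
```

Run this on shared storage so the directory is visible from both ECMHOST1 and ECMHOST2.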

Click OK and activate the changes.

In the Domain Structure window, expand the Services node, then the Messaging node, and then click JMS Servers. The Summary of JMS Servers page opens.

Click Lock & Edit, and then click New. Name the new JMS server (for example, IpmJmsServer3). Click Next and then specify WLS_IPMn (for example, WLS_IPM3) as the target. Click Finish and activate the changes.

Click on the IpmJmsServer3 JMS server (represented as a hyperlink) in the Name column of the table. The settings page for the JMS server opens.

Click Lock & Edit.

In the Persistent Store drop-down list, select IPMJMSServernStore.

Click Save and activate the changes.

Configure a default persistence store for WLS_IPMn for transaction recovery:

Log in to the Oracle WebLogic Server Administration Console.

In the Domain Structure window, expand the Environment node and then click the Servers node. The Summary of Servers page opens.

Click WLS_IPMn (represented as a hyperlink) in the Name column of the table. The settings page for the WLS_IPMn server opens with the Configuration tab active.

Open the Services tab.

Click Lock & Edit.

In the Default Store section of the page, enter the path to the folder where the default persistent store will store its data files. The directory structure of the path is as follows:

ORACLE_BASE/admin/domain_name/ipm_cluster_name/tlogs

Note:

This directory must exist before the managed server is started or the start operation will fail.

Click Save and activate the changes.

Disable host name verification for the new managed server. Before you can start and verify the WLS_IPMn managed server, you must disable host name verification. You can re-enable it after you have configured server certificates for the communication between the Oracle WebLogic Administration Server and Node Manager in ECMHOSTn. If the source server from which the new one was cloned already had host name verification disabled, these steps are not required (the host name verification setting is propagated to the cloned server).

In the Domain Structure window, expand the Environment node and click Servers. The Summary of Servers page opens.

Click WLS_IPMn (represented as a hyperlink) in the Name column of the table. The settings page for the WLS_IPMn server opens with the Configuration tab active.

Open the SSL tab.

Expand the Advanced section of the page.

Click Lock & Edit.

Set host name verification to 'None'.

Click Save.

Start the newly created managed server (WLS_IPMn):

Log in to the Oracle WebLogic Server Administration Console.

In the Domain Structure window, expand the Environment node and then click Servers. The Summary of Servers page opens.

Open the Control tab, and shut down all existing WLS_IPMn managed servers in the cluster.

Ensure that the newly created managed server, WLS_IPMn, is running.

Configure server migration for the new managed server.

Note:

Since this is a scale-up operation, the node should already contain a Node Manager and environment configured for server migration. The floating IP for the new managed Oracle I/PM server should also be already present.

Perform these steps to configure server migration:

Log in to the Oracle WebLogic Server Administration Console.

In the Domain Structure window, expand the Environment node and then click Servers. The Summary of Servers page opens.

Click the name of the new managed server (represented as a hyperlink) in the Name column of the table for which you want to configure migration. The settings page for the selected server opens.

Open the Migration subtab.

In the Migration Configuration section, select the servers that participate in migration in the Available window and click the right arrow. Select the same migration targets as for the servers that already exist on the node.

For example, for new managed servers on ECMHOST1, which is already running WLS_IPM1, select ECMHOST2. For new managed servers on ECMHOST2, which is already running WLS_IPM2, select ECMHOST1.

Note:

The appropriate resources must be available to run the managed servers concurrently during migration.

Test server migration for the new server. To test migration, perform the following steps from the node where you added the new server:

Abruptly stop the WLS_IPMn managed server. To do this, run "kill -9 pid" on the PID of the managed server. You can identify the PID of the managed server using the following command:

ps -ef | grep WLS_IPMn
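The lookup-and-kill sequence can be sketched as below; the helper function and the server name WLS_IPM3 are illustrative, not part of the product. The demonstration at the end runs the lookup against a process started for the purpose rather than a live managed server.

```shell
# Find the PID of a process by a name fragment (for example, WLS_IPM3)
# and kill it abruptly to simulate a failure. The helper function is
# illustrative, not part of the product.
find_server_pid() {
  ps -ef | grep "$1" | grep -v grep | awk '{print $2}'
}

# On a live node you would run, for example:
#   kill -9 "$(find_server_pid WLS_IPM3)"
# Demonstrated here against a process we start ourselves:
sleep 60 &
DEMO_PID=$!
FOUND_PID=$(find_server_pid "sleep 60")
kill -9 "$DEMO_PID"
```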

Watch the Node Manager console for a message indicating that WLS_IPMn's floating IP has been disabled.

Wait for Node Manager to attempt a second restart of WLS_IPMn. Node Manager waits for a fence period of 30 seconds before trying this restart.

Once Node Manager restarts the server, stop it again. Node Manager should log a message indicating that the server will not be restarted again locally.

Note:

After a server is migrated, to fail it back to its original node or machine, stop the managed server from the Oracle WebLogic Administration Console and then start it again. The appropriate Node Manager will start the managed server on the machine to which it was originally assigned.

12.6.1.3 Scale-up Procedure for SOA

Perform these steps for scaling up the SOA servers in the topology:

Using the Oracle WebLogic Server Administration Console, clone WLS_SOA1 to a new managed server. The source managed server to clone should be one that already exists on the node where you want to run the new managed server.

Perform these steps to clone a managed server:

In the Domain Structure window of the Oracle WebLogic Server Administration Console, expand the Environment node and then Servers. The Summary of Servers page opens.

Select the managed server that you want to clone (WLS_SOA1).

Click Clone.

Name the new managed server WLS_SOAn, where n is a number that identifies the new managed server.

Note:

The remainder of the steps assume that you are adding a new server to SOAHOST1, which is already running WLS_SOA1.

For the listen address, assign the host name or IP to use for this new managed server. If you are planning to use server migration for this server (which Oracle recommends), this should be the VIP (also called a floating IP) to enable it to move to another node. The VIP should be different from the one used by the managed server that is already running.

Create a new persistent store for the new UMS JMS server (for example, UMSJMSFileStore_N) and specify a directory located in shared storage for it. This directory must exist before the managed server is started or the start operation fails. You can also assign SOAJMSFileStore_N as the store for the new UMS JMS servers; however, for the purpose of clarity and isolation, individual persistent stores are used in the following steps.

Create a new JMS server for UMS (for example, UMSJMSServer_N). Use the UMSJMSFileStore_N for this JMS server. Target the UMSJMSServer_N server to the recently created managed server (WLS_SOAn).

Update the subdeployment targets for the SOA JMS Module to include the recently created SOA JMS server. To do this, expand the Services node in the Oracle WebLogic Server Administration Console and then expand the Messaging node. Choose JMS Modules in the Domain Structure window. The JMS Modules page appears. Click SOAJMSModule (represented as a hyperlink in the Names column of the table). The Settings page for SOAJMSModule appears. Click the SubDeployments tab. The subdeployment module for SOAJMS appears.

Note:

This subdeployment module name is a random name in the form of 'SOAJMSServerXXXXXX' resulting from the Configuration Wizard JMS configuration for the first two servers (WLS_SOA1 and WLS_SOA2).

Click the SOAJMSServerXXXXXX subdeployment. Add the new JMS server for SOA called SOAJMSServer_N to this subdeployment. Click Save.

Update the subdeployment targets for the UMSJMSSystemResource to include the recently created UMS JMS server. To do this, expand the Services node in the Oracle WebLogic Server Administration Console and then expand the Messaging node. Choose JMS Modules in the Domain Structure window. The JMS Modules page appears. Click UMSJMSSystemResource (represented as a hyperlink in the Names column of the table). The Settings page for UMSJMSSystemResource appears. Click the SubDeployments tab. The subdeployment module for UMSJMS appears.

Note:

This subdeployment module name is a random name in the form of 'UMSJMSServerXXXXXX' resulting from the Configuration Wizard JMS configuration for the first two servers (WLS_SOA1 and WLS_SOA2).

Click the UMSJMSServerXXXXXX subdeployment. Add the new JMS server for UMS called UMSJMSServer_N to this subdeployment. Click Save.

From the Administration Console, select the server name in the Services tab. Under Default Store, in Directory, enter the path to the folder where you want the default persistent store to store its data files.

Disable host name verification for the new managed server. Before you can start and verify the WLS_SOAn managed server, you must disable host name verification. You can re-enable it after you have configured server certificates for the communication between the Oracle WebLogic Administration Server and Node Manager in SOAHOSTn. If the source server from which the new one was cloned already had host name verification disabled, these steps are not required (the host name verification setting is propagated to the cloned server).

Select WLS_SOAn in the Names column of the table. The settings page for the server opens.

Open the SSL tab.

Expand the Advanced section of the page.

Click Lock & Edit.

Set host name verification to 'None'.

Click Save.

Start and test the new managed server from the Oracle WebLogic Server Administration Console:

Ensure that the newly created managed server, WLS_SOAn, is running.

Access the application on the LBR (https://ecm.mycompany.com/soa-infra). The application should be functional.

Note:

The HTTP servers in the topology should round-robin requests to the newly added server (depending on the number of servers in the cluster, a few requests may be needed before one reaches the new server). It is not necessary to add every server in a cluster to the WebLogicCluster directive in Oracle HTTP Server's mod_wl_ohs.conf file. However, routing to new servers in the cluster takes place only if at least one of the servers listed in the WebLogicCluster directive is running.
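As an illustration of the directive described above, a minimal mod_wl_ohs.conf entry might look as follows; the host names, ports, and mount point are examples only, not values from this deployment.

```apache
# Example only: requests to /soa-infra are routed to the SOA cluster.
# Servers not listed here (such as a newly added WLS_SOAn) still receive
# traffic through the dynamic cluster list returned by the listed members.
<Location /soa-infra>
    SetHandler weblogic-handler
    WebLogicCluster SOAHOST1VHN1:8001,SOAHOST2VHN1:8001
</Location>
```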

Configure server migration for the new managed server.

Note:

Since this is a scale-up operation, the node should already contain a Node Manager and environment configured for server migration. The floating IP for the new managed SOA server should also be already present.

Perform these steps to configure server migration:

Log in to the Oracle WebLogic Server Administration Console.

In the Domain Structure window, expand the Environment node and then click Servers. The Summary of Servers page opens.

Click the name of the new managed server (represented as a hyperlink) in the Name column of the table for which you want to configure migration. The settings page for the selected server opens.

Open the Migration subtab.

In the Migration Configuration section, select the servers that participate in migration in the Available window and click the right arrow. Select the same migration targets as for the servers that already exist on the node.

For example, for new managed servers on SOAHOST1, which is already running WLS_SOA1, select SOAHOST2. For new managed servers on SOAHOST2, which is already running WLS_SOA2, select SOAHOST1.

Note:

The appropriate resources must be available to run the managed servers concurrently during migration.

Test server migration for the new server. To test migration, perform the following steps from the node where you added the new server:

Abruptly stop the WLS_SOAn managed server. To do this, run "kill -9 pid" on the PID of the managed server. You can identify the PID of the managed server using the following command:

ps -ef | grep WLS_SOAn

Watch the Node Manager console for a message indicating that WLS_SOAn's floating IP has been disabled.

Wait for Node Manager to attempt a second restart of WLS_SOAn. Node Manager waits for a fence period of 30 seconds before trying this restart.

Once Node Manager restarts the server, stop it again. Node Manager should log a message indicating that the server will not be restarted again locally.

Note:

After a server is migrated, to fail it back to its original node or machine, stop the managed server from the Oracle WebLogic Administration Console and then start it again. The appropriate Node Manager will start the managed server on the machine to which it was originally assigned.

12.6.2 Scaling Out the Topology (Adding Managed Servers to New Nodes)

When scaling out the topology, you add new managed servers configured to new nodes.

Prerequisites

Before performing the steps in this section, check that you meet these requirements:

There must be existing nodes running managed servers configured with Oracle Fusion Middleware within the topology.

The new node can access the existing home directories for WebLogic Server and Fusion Middleware. (Use the existing installations in shared storage for creating a new managed server. You do not need to install WebLogic Server or Fusion Middleware binaries in a new location, but you do need to run pack and unpack to bootstrap the domain configuration in the new node.)

When an ORACLE_HOME or WL_HOME is shared by multiple servers on different nodes, Oracle recommends keeping the Oracle Inventory and Middleware home list on those nodes updated, for consistency in the installations and in the application of patches. To update the oraInventory on a node and "attach" an installation in shared storage to it, use ORACLE_HOME/oui/bin/attachHome.sh. To update the Middleware home list to add or remove a WL_HOME, edit the User_Home/bea/beahomelist file. See the steps below.

The new server can use a new individual domain directory or, if the other managed servers' domain directories reside on shared storage, reuse the domain directories of those servers.

12.6.2.1 Scale-out Procedure for Oracle I/PM

Perform these steps to scale out the Oracle I/PM servers in the topology:

On the new node, mount the existing Middleware home, which should include the ECM installation and (optionally, if the domain directory for managed servers in other nodes resides on shared storage) the domain directory, and ensure that the new node has access to this directory, just like the rest of the nodes in the domain.

To attach ORACLE_HOME in shared storage to the local Oracle Inventory, execute the following command:

To update the Middleware home list, create (or edit, if another WebLogic installation exists in the node) the MW_HOME/bea/beahomelist file and add ORACLE_BASE/product/fmw to it.
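A sketch of this edit from the shell, with illustrative paths (the Middleware home location and the entry to register are assumptions to adapt); the entry is appended only if it is not already present:

```shell
# Record the Middleware home in beahomelist (paths are illustrative).
MW_HOME=/tmp/mw_home_demo                 # use your real Middleware home
FMW_ENTRY=/u01/app/oracle/product/fmw     # the home to register
mkdir -p "$MW_HOME/bea"
# Append the entry only if it is not already listed.
grep -qx "$FMW_ENTRY" "$MW_HOME/bea/beahomelist" 2>/dev/null || \
  echo "$FMW_ENTRY" >> "$MW_HOME/bea/beahomelist"
```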

Log in to the Oracle WebLogic Administration Console.

Create a new machine for the new node that will be used, and add the machine to the domain.

Update the machine's Node Manager address to map the IP of the node that is being used for scale-out.

Use the Oracle WebLogic Server Administration Console to clone WLS_IPM1 into a new managed server. Name it WLS_IPMn, where n is a number.

Note:

These steps assume that you are adding a new server to node n, where no managed server was running previously.

Assign the host name or IP to use for the new managed server for the listen address of the managed server. If you are planning to use server migration for this server (which Oracle recommends), this should be the VIP (also called a floating IP) for the server. This VIP should be different from the one used for the existing managed server.

Also, assign the newly created server to the machine you added in step 4. Otherwise, the new server retains the machine assignment of the cloned server.

Create a JMS server for I/PM on the new managed server:

Use the Oracle WebLogic Server Administration Console to first create a new persistent store for the new IPMJMSServer (which will be created in a later step) and name it, for example, IPMJMSFileStore_N. Specify the path for the store as recommended in Section 2.3, "Shared Storage and Recommended Directory Structure" as the directory for the JMS persistent stores:

ORACLE_BASE/admin/domain_name/cluster_name/jms/

Note:

This directory must exist before the managed server is started or the start operation will fail.

Create a new JMS server for I/PM; for example, IPMJMSServer_N. Use the IPMJMSFileStore_N created above for this JMS server. Target the IPMJMSServer_N server to the recently created managed server (WLS_IPMn).

Run the pack command on SOAHOST1 to create a template pack:
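A sketch of the pack step and the matching unpack on the new node, under assumed paths, domain name, and template name (adapt all of them to your installation):

```shell
# On SOAHOST1: pack only the managed-server configuration of the domain.
# Paths, domain name, and template name are illustrative.
ORACLE_COMMON_HOME/common/bin/pack.sh -managed=true \
  -domain=ORACLE_BASE/admin/ecm_domain/aserver/ecm_domain \
  -template=/tmp/ecm_domain_template.jar \
  -template_name=ecm_domain_template

# On the new node: unpack the template into the local mserver directory.
ORACLE_COMMON_HOME/common/bin/unpack.sh \
  -domain=ORACLE_BASE/admin/ecm_domain/mserver/ecm_domain \
  -template=/tmp/ecm_domain_template.jar \
  -app_dir=ORACLE_BASE/admin/ecm_domain/mserver/apps
```

The -managed=true flag packs only the managed-server portion of the configuration, which is all the new node needs to bootstrap its domain directory.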

Note:

If the domain directory for other managed servers resides on a shared directory, this step is not required. Instead, the new nodes mount the already existing domain directory and use it for the new added managed server.

Disable host name verification for the new managed server. Before you can start and verify the WLS_IPMn managed server, you must disable host name verification. You can re-enable it after you have configured server certificates for the communication between the Oracle WebLogic Administration Server and Node Manager in ECMHOSTn. If the source server from which the new one was cloned already had host name verification disabled, these steps are not required (the host name verification setting is propagated to the cloned server).

Select WLS_IPMn in the Names column of the table. The settings page for the server opens.

Open the SSL tab.

Expand the Advanced section of the page.

Click Lock & Edit.

Set host name verification to 'None'.

Click Save.

Start Node Manager on the new node. Use the Node Manager installation in shared storage from the existing nodes, and start it by passing the host name of the new node as a parameter, as follows:
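A sketch of the invocation under an assumed shared-storage WL_HOME path; the path and host name are illustrative and must be replaced with your own:

```shell
# Start Node Manager from the shared installation, listening on the new
# node's address. WL_HOME and the host name are illustrative.
WL_HOME=/u01/app/oracle/product/fmw/wlserver_10.3
cd $WL_HOME/server/bin
./startNodeManager.sh ECMHOSTn.mycompany.com
```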

Start and test the new managed server from the Oracle WebLogic Server Administration Console:

Ensure that the newly created managed server, WLS_IPMn, is running.

Access the application on the LBR (https://ecm.mycompany.com/imaging). The application should be functional.

Note:

The HTTP servers in the topology should round-robin requests to the newly added server (depending on the number of servers in the cluster, a few requests may be needed before one reaches the new server). It is not necessary to add every server in a cluster to the WebLogicCluster directive in Oracle HTTP Server's mod_wl_ohs.conf file. However, routing to new servers in the cluster takes place only if at least one of the servers listed in the WebLogicCluster directive is running.

Configure server migration for the newly added server.

Note:

Since this new node uses an existing shared storage installation, it already has a Node Manager and an environment configured for server migration, including netmask, interface, and wlsifconfig script superuser privileges. Verify the privileges defined on the new node to make sure server migration will work. Refer to Chapter 10, "Configuring Server Migration" for more details on privilege requirements.

Perform these steps to configure server migration:

Log in to the Oracle WebLogic Server Administration Console.

In the Domain Structure window, expand the Environment node and then click Servers. The Summary of Servers page opens.

Click the name of the server (represented as a hyperlink) in the Name column of the table for which you want to configure migration. The settings page for the selected server opens.

Open the Migration subtab.

In the Available field of the Migration Configuration section, select the machines to which to allow migration and click the right arrow.

Note:

Specify the least-loaded machine as the migration target for the new server. The required capacity planning must be completed so that this node has enough available resources to sustain an additional managed server.

Test server migration for the new server. To test migration, perform the following steps from the node where you added the new server:

Abruptly stop the WLS_IPMn managed server. To do this, run "kill -9 pid" on the PID of the managed server. You can identify the PID of the managed server using the following command:

ps -ef | grep WLS_IPMn

Watch the Node Manager console for a message indicating that WLS_IPMn's floating IP has been disabled.

Wait for Node Manager to attempt a second restart of WLS_IPMn. Node Manager waits for a fence period of 30 seconds before trying this restart.

Once Node Manager restarts the server, stop it again. Node Manager should log a message indicating that the server will not be restarted again locally.

Note:

After a server is migrated, to fail it back to its original node or machine, stop the managed server from the Oracle WebLogic Administration Console and then start it again. The appropriate Node Manager will start the managed server on the machine to which it was originally assigned.

12.6.2.2 Scale-out Procedure for Oracle UCM

Perform these steps to scale out the Oracle UCM servers in the topology:

Note:

These steps assume that you are adding a new UCM server to node n, where no managed server was running previously.

On the new node, mount the existing Middleware home, which should include the ECM installation and domain directory, and ensure that the new node has access to this directory, just like the rest of the nodes in the domain.

To attach ORACLE_HOME in shared storage to the local Oracle Inventory, execute the following command:

To update the Middleware home list, create (or edit, if another WebLogic installation exists in the node) the MW_HOME/bea/beahomelist file and add ORACLE_BASE/product/fmw to it.

Log in to the Oracle WebLogic Administration Console.

Create a new machine for the new node that will be used, and add the machine to the domain.

Update the machine's Node Manager address to map the IP of the node that is being used for scale-out.

Use the Oracle WebLogic Server Administration Console to clone WLS_UCM1 into a new managed server. Name it WLS_UCMn, where n is a number.

For the listen address of the managed server, assign the host name or IP of ECMHOSTn.

Run the pack command on SOAHOST1 to create a template pack:

Note:

If the domain directory for other managed servers resides on a shared directory, this step is not required. Instead, the new nodes mount the already existing domain directory and use it for the new added managed server.

Start Node Manager on the new node. Use the Node Manager installation in shared storage from the existing nodes, and start it by passing the host name of the new node as a parameter, as follows:

Disable host name verification for the new managed server. Before you can start and verify the WLS_UCMn managed server, you must disable host name verification. You can re-enable it after you have configured server certificates for the communication between the Oracle WebLogic Administration Server and Node Manager in ECMHOSTn. If the source server from which the new one was cloned already had host name verification disabled, these steps are not required (the host name verification setting is propagated to the cloned server).

Select WLS_UCMn in the Names column of the table. The settings page for the server opens.

Open the SSL tab.

Expand the Advanced section of the page.

Click Lock & Edit.

Set host name verification to 'None'.

Click Save.

Start and test the new managed server from the Oracle WebLogic Server Administration Console:

Ensure that the newly created managed server, WLS_UCMn, is running.

Access the application on the LBR (https://ecm.mycompany.com/cs). The application should be functional.

Note:

The HTTP Servers in the topology should round-robin requests to the newly added server (a few requests, depending on the number of servers in the cluster, may be required to hit the new server). It is not required to add all servers in a cluster to the WebLogicCluster directive in Oracle HTTP Server's mod_wl_ohs.conf file. However, routing to new servers in the cluster takes place only if at least one of the servers listed in the WebLogicCluster directive is running.

12.6.2.3 Scale-out Procedure for SOA

Perform these steps to scale out the SOA servers in the topology:

On the new node, mount the existing Middleware home, which should include the SOA installation and domain directory, and ensure that the new node has access to this directory, just like the rest of the nodes in the domain.

To attach ORACLE_HOME in shared storage to the local Oracle Inventory, execute the following command:

To update the Middleware home list, create (or edit, if another WebLogic installation exists in the node) the MW_HOME/bea/beahomelist file and add ORACLE_BASE/product/fmw to it.

Log in to the Oracle WebLogic Administration Console.

Create a new machine for the new node that will be used, and add the machine to the domain.

Update the machine's Node Manager address to map the IP of the node that is being used for scale-out.

Use the Oracle WebLogic Server Administration Console to clone WLS_SOA1 into a new managed server. Name it WLS_SOAn, where n is a number.

Note:

These steps assume that you are adding a new server to node n, where no managed server was running previously.

Assign the host name or IP to use for the new managed server as the listen address of the managed server.

If you are planning to use server migration for this server (which Oracle recommends), this should be the VIP (also called a floating IP) for the server. This VIP should be different from the one used for the existing managed server.

Run the pack command on SOAHOST1 to create a template pack:

Note:

If the domain directory for other managed servers resides on a shared directory, this step is not required. Instead, the new nodes mount the already existing domain directory and use it for the newly added managed server.

This directory must exist before the managed server is started or the start operation fails. You can also assign SOAJMSFileStore_N as the store for the new UMS JMS servers. For the purpose of clarity and isolation, individual persistent stores are used in the following steps.

Create a new JMS server for UMS (for example, UMSJMSServer_N). Use the UMSJMSFileStore_N for this JMS server. Target the UMSJMSServer_N server to the recently created managed server (WLS_SOAn).

Update the subdeployment targets for the SOA JMS Module to include the recently created SOA JMS server. To do this, expand the Services node in the Oracle WebLogic Server Administration Console and then expand the Messaging node. Choose JMS Modules in the Domain Structure window. The JMS Modules page appears. Click SOAJMSModule (represented as a hyperlink in the Names column of the table). The Settings page for SOAJMSModule appears. Click the SubDeployments tab. The subdeployment module for SOAJMS appears.

Note:

This subdeployment module name is a random name in the form of 'SOAJMSServerXXXXXX' resulting from the Configuration Wizard JMS configuration for the first two servers (WLS_SOA1 and WLS_SOA2).

Click the SOAJMSServerXXXXXX subdeployment. Add the new JMS server for SOA called SOAJMSServer_N to this subdeployment. Click Save.

Update the subdeployment targets for the UMSJMSSystemResource to include the recently created UMS JMS server. To do this, expand the Services node in the Oracle WebLogic Server Administration Console and then expand the Messaging node. Choose JMS Modules in the Domain Structure window. The JMS Modules page appears. Click UMSJMSSystemResource (represented as a hyperlink in the Names column of the table). The Settings page for UMSJMSSystemResource appears. Click the SubDeployments tab. The subdeployment module for UMSJMS appears.

Note:

This subdeployment module name is a random name in the form of 'UMSJMSServerXXXXXX' resulting from the Configuration Wizard JMS configuration for the first two servers (WLS_SOA1 and WLS_SOA2).

Click the UMSJMSServerXXXXXX subdeployment. Add the new JMS server for UMS called UMSJMSServer_N to this subdeployment. Click Save.

From the Administration Console, select the server name in the Services tab. Under Default Store, in Directory, enter the path to the folder where you want the default persistent store to store its data files.

Disable host name verification for the new managed server. Before you can start and verify the WLS_SOAn managed server, you must disable host name verification. You can re-enable it after you have configured server certificates for the communication between the Oracle WebLogic Administration Server and Node Manager in SOAHOSTn. If the source server from which the new one has been cloned had already disabled host name verification, these steps are not required (the host name verification setting is propagated to the cloned server).

Select WLS_SOAn in the Names column of the table. The settings page for the server opens.

Open the SSL tab.

Expand the Advanced section of the page.

Click Lock & Edit.

Set host name verification to 'None'.

Click Save.

Start Node Manager on the new node. To start Node Manager, use the installation in shared storage from the already existing nodes and then start Node Manager by passing the host name of the new node as a parameter as follows:

Start and test the new managed server from the Oracle WebLogic Server Administration Console:

Ensure that the newly created managed server, WLS_SOAn, is running.

Access the application on the LBR (https://ecm.mycompany.com/soa-infra). The application should be functional.

Note:

The HTTP Servers in the topology should round-robin requests to the new added server (a few requests, depending on the number of servers in the cluster, may be required to hit the new server). Its is not required to add all servers in a cluster to the WebLogicCluster directive in Oracle HTTP Server's mod_wl_ohs.conf file. However routing to new servers in the cluster will take place only if at least one of the servers listed in the WebLogicCluster directive is running.

Configure server migration for the newly added server.

Note:

Since this new node uses an existing shared storage installation, the node is already using a Node Manager and an environment configured for server migration that includes netmask, interface, wlsifconfig script superuser privileges, and so on. The floating IP for the new managed SOA server is already present in the new node.

Perform these steps to configure server migration:

Log in to the Oracle WebLogic Server Administration Console.

In the Domain Structure window, expand the Environment node and then click Servers. The Summary of Servers page opens.

Click the name of the server (represented as a hyperlink) in Name column of the table for which you want to configure migration. The settings page for the selected server opens.

Open the Migration subtab.

In the Available field of the Migration Configuration section, select the machines to which to allow migration and click the right arrow.

Note:

Specify the least-loaded machine as the migration target for the new server. The required capacity planning must be completed so that this node has enough available resources to sustain an additional managed server.

Test server migration for the new server. To test migration, perform the following steps from the node where you added the new server:

Abruptly stop the WLS_SOAn managed server. To do this, run "kill -9 pid" on the PID of the managed server. You can identify the PID of the node using the following command:

ps -ef | grep WLS_SOAn
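The two steps can be combined into a single command (the pattern WLS_SOAn is illustrative; verify it matches only the intended process before running):

```shell
# Find the managed server's PID and kill it abruptly to simulate a crash.
kill -9 $(ps -ef | grep WLS_SOAn | grep -v grep | awk '{print $2}')
```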

Watch the Node Manager console for a message indicating that WLS_SOAn's floating IP has been disabled.

Wait for Node Manager to attempt a second restart of WLS_SOAn. Node Manager waits for a fence period of 30 seconds before trying this restart.

Once Node Manager restarts the server, stop it again. Node Manager should log a message indicating that the server will not be restarted again locally.

Note:

After a server is migrated, to fail it back to its original node or machine, stop the managed server from the Oracle WebLogic Administration Console and then start it again. The appropriate Node Manager will start the managed server on the machine to which it was originally assigned.

12.7 Performing Backups and Recoveries

Table 12-1 lists the static artifacts to back up in the 11g Oracle ECM enterprise deployment.

Table 12-1 Static Artifacts to Back Up in the 11g ECM Enterprise Deployment

ORACLE HOME (DB)
    Host: CUSTDBHOST1 and CUSTDBHOST2
    Location: The location is user-defined.
    Tier: Data tier

MW HOME (OHS)
    Host: WEBHOST1 and WEBHOST2
    Location: ORACLE_BASE/product/fmw
    Tier: Web tier

MW HOME (this includes the SOA home as well)
    Host: SOAHOST1 and SOAHOST2*
    Location: MW_HOME. The SOA home is also under MW_HOME: ORACLE_HOME
    Tier: Application tier

Installation-related files
    Location: OraInventory, User_Home/bea/beahomelist, oraInst.loc, oratab
    Tier: N/A

* ECMHOST1 and ECMHOST2 use the binaries installed from SOAHOST1 and SOAHOST2. Backup is centralized in SOAHOST1 and SOAHOST2.

Table 12-2 lists the run-time artifacts to back up in the 11g ECM enterprise deployment.

Table 12-2 Run-Time Artifacts to Back Up in the 11g ECM Enterprise Deployment

Application artifacts (EAR and WAR files)
    Host: SOAHOST1, SOAHOST2, ECMHOST1, and ECMHOST2
    Location: Find the application artifacts by viewing all of the deployments through the administration console.

12.8.1 Page Not Found When Accessing soa-infra Application Through Load Balancer

Problem: A 404 "page not found" message is displayed in the web browser when you try to access the soa-infra application using the load balancer address. The error is intermittent and SOA servers appear as "Running" in the WLS Administration Console.

Solution: Even when the SOA managed servers are up and running, some of the applications they contain may be in Admin, Prepared, or other states different from Active. The soa-infra application may be unavailable while the SOA server is running. Check the Deployments page in the Administration Console to verify the status of the soa-infra application; it should be in the Active state. Check the SOA server's output log for errors pertaining to the soa-infra application and try to start it from the Deployments page in the Administration Console.

Problem: The soa-infra application fails to start after changes to the Coherence configuration for deployment have been applied. The SOA server output log reports the following:

Cluster communication initialization failed. If you are using multicast, Please make sure multicast is enabled on your network and that there is no interference on the address in use. Please see the documentation for more details.

Solutions:

When using multicast instead of unicast for cluster deployments of SOA composites, a message similar to the above may appear if a multicast conflict arises when starting the soa-infra application (that is, starting the managed server on which SOA runs). These messages, which occur when Oracle Coherence throws a run-time exception, also include the details of the exception itself. If such a message appears, check the multicast configuration in your network. Verify that you can ping multicast addresses. In addition, check for other clusters that may have the same multicast address but have a different cluster name in your network, as this may cause a conflict that prevents soa-infra from starting. If multicast is not enabled in your network, you can change the deployment framework to use unicast as described in Oracle Coherence Developer's Guide for Oracle Coherence.

When entering the well-known address list for unicast (in server start parameters), make sure that the addresses entered for the local node and the clustered servers are correct. Error messages like the following are reported in the server's output log if any of the addresses is not resolved correctly:

12.8.3 Incomplete Policy Migration After Failed Restart of SOA Server

Problem: The SOA server fails to start through the administration console before setting Node Manager property startScriptEnabled=true. The server does not come up after the property is set either. The SOA Server output log reports the following:

Solution: Incomplete policy migration results from an unsuccessful start of the first SOA server in a cluster. To enable full migration, edit the <jazn-policy> element in the system-jazn-data.xml file to grant permission to bpm-services.jar:

12.8.4 SOA, I/PM, or UCM Servers Fail to Start Due to Maximum Number of Processes Available in Database

Problem: A SOA, I/PM, or UCM server fails to start. The domain has been extended for new types of managed server (for example, UCM extended for I/PM) or the system has been scaled up (added new servers of the same type). The SOA, I/PM, or UCM server output log reports the following:

<Warning> <JDBC> <BEA-001129> <Received exception while creating connection for pool "SOADataSource-rac0": Listener refused the connection with the following error:
ORA-12516, TNS:listener could not find available handler with matching protocol stack >

Solution: Verify the number of processes in the database and adjust accordingly. As the SYS user, issue the SHOW PARAMETER command:

SQL> SHOW PARAMETER processes

Set the initialization parameter using the following command:

SQL> ALTER SYSTEM SET processes=300 SCOPE=SPFILE;

Restart the database.

Note:

The method that you use to change a parameter's value depends on whether the parameter is static or dynamic, and on whether your database uses a parameter file or a server parameter file. See the Oracle Database Administrator's Guide for details on parameter files, server parameter files, and how to change parameter values.

12.8.5 Administration Server Fails to Start After a Manual Failover

Problem: The Administration Server fails to start after the Administration Server node failed and a manual failover to another node was performed. The Administration Server output log reports the following:

Solution: When restoring a node after a node crash and using shared storage for the domain directory, you may see this error in the log for the Administration Server due to unsuccessful lock cleanup. To resolve this error, remove the file ORACLE_BASE/admin/domain_name/aserver/domain_name/servers/AdminServer/data/ldap/ldapfiles/EmbeddedLDAP.lok.
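For example (domain_name is a placeholder; the path mirrors the one given above):

```shell
# Remove the stale embedded LDAP lock file left behind by the crashed node.
rm $ORACLE_BASE/admin/domain_name/aserver/domain_name/servers/AdminServer/data/ldap/ldapfiles/EmbeddedLDAP.lok
```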

12.8.6 Error While Activating Changes in Administration Console

Problem: Activation of changes in the Administration Console fails after changes to a server's start configuration have been made. The Administration Console reports the following when you click "Activate Changes":

An error occurred during activation of changes, please see the log for details.
[Management:141190]The commit phase of the configuration update failed with an exception:
In production mode, it's not allowed to set a clear text value to the property: PasswordEncrypted of ServerStartMBean

Solution: This may happen when start parameters are changed for a server in the Administration Console. In this case, provide username/password information in the server start configuration in the Administration Console for the specific server whose configuration was being changed.

12.8.7 SOA or I/PM Server Not Failed Over After Server Migration

Problem: After the local Node Manager reaches the maximum number of restart attempts, Node Manager in the failover node tries to restart the server, but the server does not come up. Node Manager's output reports that the server has been failed over, but the VIP used by the SOA or I/PM server is not enabled in the failover node (ifconfig in the failover node does not report the VIP on any interface). Executing the command "sudo ifconfig $INTERFACE $ADDRESS $NETMASK" does not enable the IP in the failover node.

Solution: The rights and configuration for sudo execution should not prompt for a password. Verify the configuration of sudo with your system administrator so that sudo works without a password prompt.

12.8.8 SOA or I/PM Server Not Reachable From Browser After Server Migration

Problem: Server migration is working (SOA or I/PM server is restarted in the failed over node), but the Virtual_Hostname:8001/soa-infra URL cannot be accessed in the web browser. The server has been "killed" in its original host and Node Manager in the failover node reports that the VIP has been migrated and the server started. The VIP used by the SOA or I/PM server cannot be pinged from the client's node (that is, the node where the browser is being used).

Solution: The arping command executed by Node Manager to update ARP caches did not broadcast the update properly. In this case, the node is not reachable from external nodes. Either update the nodemanager.properties file to include the MACBroadcast or execute a manual arping:

/sbin/arping -b -q -c 3 -A -I INTERFACE ADDRESS > $NullDevice 2>&1

Where INTERFACE is the network interface where the virtual IP is enabled and ADDRESS is the virtual IP address.

12.8.9 OAM Configuration Tool Does Not Remove URLs

Problem: The OAM Configuration Tool has been used and a set of URLs was added to the policies in Oracle Access Manager. One of the URLs had a typo. Executing the OAM Configuration Tool again with the correct URLs completes successfully; however, when accessing Policy Manager, the incorrect URL is still there.

Solution: The OAM Configuration Tool only adds new URLs to existing policies when executed with the same app_domain name. To remove a URL, use the Policy Manager Console in OAM. Log on to the Access Administration site for OAM, click on My Policy Domains, click on the created policy domain (SOA_EDG), then on the Resources tab, and remove the incorrect URLs.

12.8.10 Redirecting of Users to Login Screen After Activating Changes in Administration Console

Problem: After configuring OHS and LBR to access the Oracle WebLogic Administration Console, some activation changes cause the redirection to the login screen for the Administration Console.

Solution: This is the result of the console attempting to follow changes to port, channel, and security settings as a user makes these changes. For certain changes, the console may redirect to the Administration Server's listen address. Activation is completed regardless of the redirection. It is not required to log in again; users can simply update the URL to ecm.mycompany.com/console/console.portal and directly access the home page for the Administration Console.

Note:

This problem will not occur if you have disabled tracking of the changes described in this section.

12.8.11 Redirecting of Users to Administration Console's Home Page After Activating Changes to OAM

Problem: After configuring OAM, some activation changes cause the redirection to the Administration Console's home page (instead of the context menu where the activation was performed).

Solution: This is expected when OAM SSO is configured and the Administration Console is set to follow configuration changes (redirections are performed by the Administration Server when activating some changes). Activations should complete regardless of this redirection. For successive changes not to redirect, access the Administration Console, choose Preferences, then Shared Preferences, and unselect the "Follow Configuration Changes" check box.

12.8.12 Configured JOC Port Already in Use

Problem: Attempts to start a managed server that uses the Java Object Cache, such as OWSM managed servers, fail. The following errors appear in the logs:

Solution: Another process is using the same port that JOC is attempting to obtain. Either stop that process, or reconfigure JOC for this cluster to use another port in the recommended port range.

12.8.13 Using CredentialAccessPermissions to Allow Oracle I/PM to Read Credentials From the Credential Store

Problem: Oracle I/PM creates the credential access permissions during startup and updates its local domain directory copy of the system-jazn-data.xml file. While testing the environment without an LDAP policy store configured, the Administration Server may push manual updates to the system-jazn-data.xml file to the domain directories where the Oracle I/PM servers reside. This can cause the copy of the file to be overwritten, giving rise to a variety of exceptions and errors on restarts or when accessing the Oracle I/PM console.

Solution: To re-create the credential access permissions and update the Administration Server's domain directory copy of the system-jazn-data.xml file, use the grantIPMCredAccess command from the Oracle WebLogic Scripting Tool. To do this, start wlst.sh from the ORACLE_HOME associated with Oracle ECM, connect to the Administration Server, and execute the grantIPMCredAccess() command:

When connecting, provide the credentials and address for the Administration Server.
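A sketch of the session (the URL, user name, and password are placeholders for your Administration Server's values):

```shell
cd $ORACLE_HOME/common/bin
./wlst.sh
```

Then, at the WLST prompt:

```
wls:/offline> connect('weblogic', 'password', 't3://ADMINVHN:7001')
wls:/ecm_domain/serverConfig> grantIPMCredAccess()
wls:/ecm_domain/serverConfig> exit()
```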

12.8.14 Improving Performance with Very Intensive Document Uploads from Oracle I/PM to Oracle UCM

Problem: If a host name-based security filter is used in Oracle UCM (config.cfg file), a high latency and performance impact may be observed in the system in the event of very intensive document uploads from Oracle I/PM to Oracle UCM. This is caused by the reverse DNS lookup which is required in Oracle UCM to allow the connections from the Oracle I/PM servers.

Solution: Using a host name-based security filter is recommended in preparation for configuring the system for disaster protection and to restore to a different host (since the configuration used is IP-agnostic when using a host name-based security filter). However, if the performance of the uploads needs to be improved, you can use an IP-based security filter instead of a host name-based filter.

Perform these steps to change the host name-based security filter in Oracle UCM to an IP-based filter:

Open the file ORACLE_BASE/admin/domain_name/ucm_cluster/cs/config/config.cfg in a text editor.
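The change itself is a sketch along these lines (SocketHostNameSecurityFilter and SocketHostAddressSecurityFilter are the standard Oracle UCM security filter settings; the host names and IP addresses shown are placeholders):

```
# Comment out the host name-based filter...
#SocketHostNameSecurityFilter=localhost|ecmhost1|ecmhost2
# ...and list the I/PM server IPs (plus localhost) explicitly instead.
SocketHostAddressSecurityFilter=127.0.0.1|10.0.0.11|10.0.0.12
```

A restart of the Oracle UCM managed servers is typically required for config.cfg changes to take effect.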

12.8.16 Regenerating the Master Password for Oracle UCM Servers

If the cwallet.sso file of the Oracle UCM managed servers' domain home becomes inconsistent across the cluster, is deleted, or is accidentally overwritten by an invalid copy in the ORACLE_BASE/admin/domain_name/aserver/domain_name/config/fmwconfig directory, perform these steps to regenerate the file:

Stop all Oracle UCM managed servers.

Remove the cwallet.sso file from ORACLE_BASE/admin/domain_name/mserver/domain_name/config/fmwconfig.

Remove the password.hda file from ORACLE_BASE/admin/domain_name/aserver/ucm_cluster/cs/config/private.

Start the WLS_UCM1 server in ECMHOST1.

Verify the creation or update of the cwallet.sso file in ORACLE_BASE/admin/domain_name/mserver/domain_name/config/fmwconfig as well as the creation of the password.hda file in ORACLE_BASE/admin/domain_name/aserver/ucm_cluster/cs/config/private.

Use Oracle UCM's System Properties command-line tool to update the passwords for the database.

Verify that the standalone Oracle UCM applications (Batchloader, System Properties, and so on) are working correctly.

Copy the cwallet.sso file from ORACLE_BASE/admin/domain_name/mserver/domain_name/config/fmwconfig to the Administration Server's domain directory at ORACLE_BASE/admin/domain_name/aserver/domain_name/config/fmwconfig.

Start the second Oracle UCM server, and verify that the Administration Server pushes the updated cwallet.sso file to ORACLE_BASE/admin/domain_name/mserver/domain_name/config/fmwconfig in ECMHOST2 and that the file is the same as created or updated by the Oracle UCM server in ECMHOST1.

Verify that the standalone Oracle UCM applications (Batchloader, System Properties, and so on) are working correctly.

Verify that the standalone Oracle UCM applications work correctly on both nodes at the same time.
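The file-level portion of the procedure above can be sketched as follows (domain_name is a placeholder; stopping and starting the managed servers is done through the Administration Console or your usual scripts):

```shell
DOMAIN_BASE=$ORACLE_BASE/admin/domain_name

# With all UCM managed servers stopped, remove the wallet and password files:
rm $DOMAIN_BASE/mserver/domain_name/config/fmwconfig/cwallet.sso
rm $DOMAIN_BASE/aserver/ucm_cluster/cs/config/private/password.hda

# After WLS_UCM1 has been started and has regenerated the files,
# copy the wallet to the Administration Server's domain directory:
cp $DOMAIN_BASE/mserver/domain_name/config/fmwconfig/cwallet.sso \
   $DOMAIN_BASE/aserver/domain_name/config/fmwconfig/cwallet.sso
```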

12.8.17 Logging Out From the WebLogic Server Administration Console Does Not End the User Session

When you log in to the WebLogic Server Administration Console using Oracle Access Manager single sign-on (SSO), clicking the logout button does not end the user session. You are not redirected to the OAM login page (which is in accordance with the SSO logout guidelines); instead, the home page is reloaded. To truly log out, you may need to manually clear the cookies in your web browser.

12.9.1 Preventing Timeouts for SQLNet Connections

Much of the EDG production deployment involves firewalls. Because database connections are made across firewalls, Oracle recommends that the firewall be configured so that the database connection is not timed out. For Oracle Real Application Clusters (RAC), the database connections are made on Oracle RAC VIPs and the database listener port. You must configure the firewall to not time out such connections. If such a configuration is not possible, set the SQLNET.EXPIRE_TIME=n parameter in the ORACLE_HOME/network/admin/sqlnet.ora file on the database server, where n is the time in minutes. Set this value to less than the known value of the timeout for the network device (that is, a firewall). For Oracle RAC, set this parameter in all of the Oracle home directories.
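For example, to have the database send a keepalive probe every 10 minutes (the value must be smaller than the firewall's idle timeout):

```
# ORACLE_HOME/network/admin/sqlnet.ora on each database node
SQLNET.EXPIRE_TIME=10
```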

12.9.2 Auditing

Oracle Fusion Middleware Audit Framework is a new service in Oracle Fusion Middleware 11g, designed to provide a centralized audit framework for the middleware family of products. The framework provides audit service for platform components such as Oracle Platform Security Services (OPSS) and Oracle Web Services. It also provides a framework for JavaEE applications, starting with Oracle's own JavaEE components. JavaEE applications will be able to create application-specific audit events. For non-JavaEE Oracle components in the middleware, such as C or JavaSE components, the audit framework also provides an end-to-end structure similar to that for JavaEE applications.

The Oracle Fusion Middleware Audit Framework consists of the following key components:

Audit APIs: These are APIs provided by the audit framework for any audit-aware components integrating with the Oracle Fusion Middleware Audit Framework. During run time, applications may call these APIs, where appropriate, to audit the necessary information about a particular event happening in the application code. The interface allows applications to specify event details such as username and other attributes needed to provide the context of the event being audited.

Audit Events and Configuration: The Oracle Fusion Middleware Audit Framework provides a set of generic events for convenient mapping to application audit events. Some of these include common events such as authentication. The framework also allows applications to define application-specific events.

These event definitions and configurations are implemented as part of the audit service in Oracle Platform Security Services. Configurations can be updated through Enterprise Manager (UI) and WLST (command-line tool).

Audit Bus-stop: Bus-stops are local files containing audit data before it is pushed to the audit repository. In the event that no database repository is configured, these bus-stop files can be used as a file-based audit repository. The bus-stop files are simple text files that can be queried easily to look up specific audit events. When a DB-based repository is in place, the bus-stop acts as an intermediary between the component and the audit repository. The local files are periodically uploaded to the audit repository based on a configurable time interval.

Audit Loader: As the name implies, the audit loader loads the files from the audit bus-stop into the audit repository. In the case of platform and JavaEE application audit, the audit loader is started as part of the JavaEE container start-up. In the case of system components, the audit loader is a periodically spawned process.

Audit Repository: The audit repository contains a predefined Oracle Fusion Middleware Audit Framework schema, created by Repository Creation Utility (RCU). Once configured, all the audit loaders are aware of the repository and upload data to it periodically. The audit data in the audit repository is expected to be cumulative and will grow over time. Ideally, this should not be an operational database used by any other applications; rather, it should be a standalone RDBMS used for audit purposes only. In a highly available configuration, Oracle recommends that you use an Oracle Real Application Clusters (RAC) database as the audit data store.

Oracle Business Intelligence Publisher: The data in the audit repository is exposed through predefined reports in Oracle Business Intelligence Publisher. The reports allow users to drill down into the audit data based on various criteria. For example:

The EDG topology does not include Oracle Fusion Middleware Audit Framework configuration. The ability to generate audit data to the bus-stop files and the configuration of the audit loader will be available once the products are installed. The main consideration is the audit database repository where the audit data is stored. Because of the volume and the historical nature of the audit data, it is strongly recommended that customers use a separate database from the operational store or stores being used for other middleware components.