5.1 Overview

Oracle Enterprise Repository uses a server-side cache on each application server. Cached data is used whenever it is available; otherwise, the database delivers the data to the cache and then to the application.

When Oracle Enterprise Repository runs in a cluster, the cluster members must communicate with each other using HTTP. An edit on one cluster member invalidates the cached element on that member and communicates the edit to the other cluster members. This is accomplished through a system property called cachesyncurl, which accepts a URL to the application as its value.

On startup, the system writes the cachesyncurl value to the database and fetches the list of the other servers' URLs from the database. A message announcing the presence of the new cluster member is sent to all discovered URLs. Each server then refreshes its server list from the database. On a clean server shutdown, the member's value is removed from the list and a cache-refresh notification is broadcast to the servers on the list.

When an edit invalidates an element in the local cache, a message is sent to all of the other servers identifying which cached elements must be invalidated. Upon receipt of the message, each server removes the designated element from its cache. On the next data request, the cache contains no data for that element, so the database delivers the data to the cache first and then to the application.
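The registration and invalidation flow described above can be sketched as follows. This is a minimal illustration only, not Oracle Enterprise Repository's actual implementation; all class and method names here are invented for the example, and the Registry class stands in for the database table of registered cachesyncurl values.

```python
class Registry:
    """Stands in for the database table of registered cluster members."""
    def __init__(self):
        self.members = []
    def add(self, member):
        self.members.append(member)
    def peers(self, member):
        return [m for m in self.members if m is not member]

class ClusterMember:
    def __init__(self, url, registry):
        self.url = url          # this member's cachesyncurl
        self.registry = registry
        self.cache = {}
        self.server_list = []

    def start(self):
        # On startup: register, announce to existing members, and
        # have every member refresh its server list.
        self.registry.add(self)
        for peer in self.registry.peers(self):
            peer.refresh_server_list()
        self.refresh_server_list()

    def refresh_server_list(self):
        self.server_list = [m.url for m in self.registry.members]

    def edit(self, key):
        # A local edit invalidates the entry here and on every peer.
        self.cache.pop(key, None)
        for peer in self.registry.peers(self):
            peer.invalidate(key)

    def invalidate(self, key):
        self.cache.pop(key, None)

    def get(self, key, database):
        # Cache-aside read: a miss is filled from the database first,
        # then the cached value is returned to the application.
        if key not in self.cache:
            self.cache[key] = database[key]
        return self.cache[key]
```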

Clustering and Installation Requirements

Requirements:

Session affinity

Server-side HTTP cache communication

Options:

Failover (requires session management and persistent sessions)

Load Balancing

Supported Application Servers

The application servers listed here are currently supported for use with clustering for Oracle Enterprise Repository:

Oracle WebLogic Server

IBM WebSphere Application Server

For information about the supported versions of these application servers, see the Supported Configurations documentation, available on the Oracle Enterprise Repository index page.

5.4 Step 3: Create the Clustered Environment

For information about clustering on WebLogic or WebSphere, see the application server documentation and your organization's standards.

For WebLogic:

See Using WebLogic Server Clusters, available from Oracle.

For WebSphere Application Server:

See WebSphere Software Information Center. Locate the documentation for the specific appserver version and navigate to: All topics by feature -> Servers -> Clusters -> Balanced workloads with clusters.

5.5 Step 4: Move the Application Properties to the Database

Property files take precedence when properties are read into the Oracle Enterprise Repository application. The application looks for properties and their corresponding values first within the database, and then within the property files; any properties read from the database are overwritten by corresponding properties in the files. If there are no property files, the properties within the database are the only ones referenced, so properties that exist solely within the database are never overwritten.
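The precedence rule above can be illustrated with a small sketch. This is not the application's actual loading code, and the property names in the test are invented; it only demonstrates the merge order: database values are read first, then file values overwrite them, and database-only properties survive untouched.

```python
def effective_properties(db_props, file_props):
    """Merge properties: files take precedence over the database."""
    merged = dict(db_props)      # read database values first
    merged.update(file_props)    # file values overwrite database values
    return merged
```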

This procedure begins with deploying one application.

In the Admin screen, click System Settings in the left pane. The System Settings section is displayed in the main pane, as shown in Figure 5-1.

Figure 5-1 System Settings Section

Scroll to the bottom and click the Move settings to database button.

A confirmation message appears.

Remove the properties files from the classpath, and then restart the appserver:

Locate the configuration files folder (usually located within the ./WEB-INF/classes/ folder or oer_home) on the application server.

Remove the property files listed below from the configuration folder:

enterprise.properties

ldap.properties

containerauth.properties

eventing.properties

juddi.properties

openapiserverlog.properties

These properties are written to the entSettings table within the database.

Modify the cmee.properties file. Remove all property values except those containing URL values. Update the URL references to point to the proxy server path being used to load balance access to the cluster members.
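The cmee.properties cleanup above can be sketched as follows. This is an illustration only, not an Oracle-provided utility; the proxy host in the test is a placeholder, and cmee.server.paths.servlet is used merely as one example of a URL-valued property.

```python
from urllib.parse import urlsplit, urlunsplit

def rewrite_to_proxy(props, proxy_netloc):
    """Keep only URL-valued properties and point them at the
    load-balancing proxy server (illustrative sketch)."""
    out = {}
    for name, value in props.items():
        if value.startswith(("http://", "https://")):  # keep URL values only
            parts = urlsplit(value)
            out[name] = urlunsplit(parts._replace(netloc=proxy_netloc))
    return out
```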

Any properties enabled after this procedure are written to the database, not to the properties files.

5.6 Step 5: Configure the cluster.properties File on Each Cluster Member

To configure the cluster.properties file on each cluster member:

Stop each cluster member.

On each cluster member create a file called cluster.properties, which resides in the same place as all other .properties files.

For exploded directory deployments this location is the WEB-INF/classes directory beneath the webapp.

For ear file deployments, this location is the oer_home directory.

The contents of cluster.properties are based on the property cmee.server.paths.servlet in the cmee.properties file. However, the host name in the path must refer to the host name of the individual cluster member, not the proxy host name of the entire cluster.

cluster.properties
#cluster.properties
cachesyncurl=http://<SERVLET-PATH>/<APP_PATH>
Example:
#cluster.properties
cachesyncurl=http://node1.example.com:7101/oer
# oer is the name of the Oracle Enterprise Repository application during
# deployment
The following properties are optional:
# alias is used as an alternate/convenient name to refer
# to the server
# example: server1
# default: same value as cachesyncurl
alias=EclipseServer
# registrationIntervalSeconds is the number of seconds between
# attempts to update the server's registration record in the database
# default: 120
registrationIntervalSeconds=120
# registrationTimeoutSeconds is the number of seconds before a server
# is considered to be inactive/not running
# make sure this value is higher than the registrationIntervalSeconds
# default: 240
registrationTimeoutSeconds=240
# maxFailures is the number of consecutive attempts that are made
# to deliver a message to another server after which it is determined
# to be unreachable
# default: 20
maxFailures=20
# maxQueueLength is the number of messages that may be queued for
# delivery to another server, after which that server is determined
# to be unreachable
# default: 4000
maxQueueLength=5000
# email.to is the address of the email recipient for clustering status
# messages
email.to=jsmith@company.com
# email.from is the address of the sender for clustering status messages
email.from=jsmith@company.com
# email.subject is the subject line of the message for clustering status
# messages
email.subject=Oracle Enterprise Repository Clustering communication failure
# email.message is the body of the message for clustering status messages
email.message=This is an automated message from the Oracle Enterprise Repository informing you of a cluster member communication failure.
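The maxFailures and maxQueueLength thresholds above can be sketched as follows. This is a simplified illustration of the described behavior, not the actual messaging code; the class and method names are invented for the example.

```python
class PeerLink:
    """Tracks delivery to one peer server. The peer is marked
    unreachable after maxFailures consecutive delivery failures,
    or when its outbound queue exceeds maxQueueLength."""

    def __init__(self, max_failures=20, max_queue_length=4000):
        self.max_failures = max_failures
        self.max_queue_length = max_queue_length
        self.failures = 0
        self.queue = []
        self.unreachable = False

    def enqueue(self, message):
        self.queue.append(message)
        if len(self.queue) > self.max_queue_length:
            self.unreachable = True

    def record_delivery(self, ok):
        if ok:
            self.failures = 0      # consecutive count resets on success
        else:
            self.failures += 1
            if self.failures >= self.max_failures:
                self.unreachable = True
```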

The clock difference between the application server and the database server should not exceed 120 seconds. Network Time Protocol is recommended to keep these servers in sync. The clustering process uses this time difference when calculating message timing between the nodes of the cluster.

Before restarting the server, if JMS Clustering is enabled, you must add an eventing.properties file that contains the cmee.eventframework.jms.producers.client.id property with a unique value on each cluster member. For example: cmee.eventframework.jms.producers.client.id=OER_JmsProducer1

Restart each cluster member.

Note:

After a cluster member is deactivated because maxFailures is exceeded, the only way to reactivate it is to restart the server.

5.7 Step 6: Validate the Installation

Messages are sent to the standard out log of each cluster member.

"running in single server mode"

Indicates that Oracle Enterprise Repository clustering is not configured and the application is running in single server mode.

"running in multi server mode with a sync-url of..."

Indicates that Oracle Enterprise Repository clustering is functioning and the application is running in clustered mode.

Variables

cachesyncurl

The value of cachesyncurl in the cluster.properties file, which references the individual node's own instance URL with the path /cachesync appended. Most cluster configurations use a proxy server to load-balance across the nodes of the cluster.

Example:

Node1: cachesyncurl=http://node1.example.com:7101/oer

Node2: cachesyncurl=http://node2.example.com:7101/oer

It is also possible to validate the clustering installation by viewing the clustering diagnostic page from the Oracle Enterprise Repository Diagnostics screen. Click Cluster Info on the Diagnostics screen to view the Cluster Diagnostic page. This page lists information about all servers registered in the cluster, as well as information about inter-server communications.

Oracle HTTP Server Config File

If Oracle HTTP Server is used to route from the load balancer to the nodes, the HTTP Server Config (httpd.conf) file should include two entries: /oer and /oer-web. A sample httpd.conf file is shown below:

In this example, host 10.0.0.1 is the HTTP server proxying for the cluster machines, host 10.0.0.2 is the Admin Server, and hosts 10.0.0.3 and 10.0.0.4 are the Oracle Enterprise Repository managed servers.
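Since the sample file is not reproduced here, the following is a sketch of what the two entries might look like when using the WebLogic proxy plug-in with Oracle HTTP Server. The port numbers are placeholders, and the directive values are assumptions based on the host descriptions above; consult the plug-in documentation for your environment.

```
# Hypothetical httpd.conf fragment routing /oer and /oer-web to the
# managed servers (10.0.0.3 and 10.0.0.4); ports are placeholders.
<Location /oer>
    SetHandler weblogic-handler
    WebLogicCluster 10.0.0.3:7101,10.0.0.4:7101
</Location>
<Location /oer-web>
    SetHandler weblogic-handler
    WebLogicCluster 10.0.0.3:7101,10.0.0.4:7101
</Location>
```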

5.8 Clustering JVM Parameter for WebLogic Server

If cluster nodes are deployed using a centralized administration console, it may be necessary to apply a JVM parameter to enable proper Oracle Enterprise Repository clustering operation in the absence of the cluster.properties file.

This JVM parameter should be applied statically for each member of the cluster or within the managed server startup command file. This JVM parameter can be set within the JAVA_OPTIONS environment variable for WebLogic Application servers or within CATALINA_OPTS or JAVA_OPTS for Tomcat servers. The JVM Parameter is as follows:

-Dcmee.cachesyncurl=http://<member host name>:<port>/<APP_PATH>
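For example, in a managed server startup script this might look like the following. The host name, port, and application path shown are placeholders; substitute the individual cluster member's own values.

```shell
# Hypothetical startup-script fragment: append the clustering JVM
# parameter to the WebLogic JAVA_OPTIONS environment variable.
# Replace host, port, and application path with the member's own.
JAVA_OPTIONS="${JAVA_OPTIONS} -Dcmee.cachesyncurl=http://node1.example.com:7101/oer"
export JAVA_OPTIONS
```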

5.9 Clustered JMS Servers for Advanced Registration Flows

Note:

This feature is only available when using the "Advanced Registration Flows" subsystem for automating the asset registration process. Also, "JMS Clustering" applies only to the embedded ActiveMQ JMS servers in Oracle Enterprise Repository, not to external JMS servers. If you are using ActiveMQ, you must use JDBC persistence.

In a clustered Oracle Enterprise Repository environment using the Advanced Registration Flows subsystem, each member Oracle Enterprise Repository server in the cluster has one embedded ActiveMQ JMS server for increased reliability and scalability. For example, for a two-node cluster, there would be two Oracle Enterprise Repository servers, such as server01 and server02, with each having one embedded JMS server. JMS server clustering is enabled using the Oracle Enterprise Repository "Eventing" System Settings, as described in External Integrations: Eventing. After clustering is enabled for the embedded JMS servers, you then must specify the connection URL information for the embedded JMS servers on server01 and server02.