Production server setups are commonly configured using Microsoft Cluster Services (MSCS), which provides the high availability that many enterprises demand for their operations.

This document explains how to implement CA Access Control (AC) in such an environment. The aim is a reliable AC installation whose security policies can be administered with a minimum of effort.

Solution:

Valid Microsoft Cluster modes are:

Active/Active: In Active/Active mode, both nodes are allowed to have varying workloads and applications residing on them. Each performs work independently of the other, with instances of those applications set up for Failover/Failback residing on both nodes.

Active/Standby: In Active/Standby mode, the active node is online and doing work, while the inactive node sits in a "hot standby" mode waiting for any type of failure to occur on the primary node.

Partial Cluster Solution: In this mode there are a mixture of applications/resources that can Failover/Failback, along with those that cannot (non-cluster-aware). The cluster-aware applications are set up in a normal manner under MSCS using shared storage resources, and those that aren't utilize local storage resources found on the node where they permanently reside.

Virtual Server Only: This model utilizes MSCS virtual server mode, without having formed an actual cluster. It can be deployed on any node that has cluster server running at the time.

Hybrid Solution: The hybrid model is a combination of all the others previously described.

Access Control services run actively on each cluster node, but a single PMDB is stored in a central location of the cluster and served by only one active cluster node at a time. One of the other nodes stands in if the PMDB server fails.

Compatibility of CA Access Control and Microsoft Cluster

CA Access Control is compatible with MSCS but is not cluster-aware. That means it operates as if no cluster software were installed on the server and treats each node of a cluster as a standalone server.

To use CA AC in a cluster environment, install CA AC on each node of the cluster and use a common PMDB infrastructure, so that the very same set of rules is implemented synchronously on each of the cluster nodes. It is possible to protect the quorum disk, as well as other cluster-enabled or local file systems on the nodes. AC can also intercept network traffic coming in via the virtual IP addresses.

If AC detects that it is running in a cluster environment and the cluster has its own network with separate network adapters used for cluster internal communications only, network interception is disabled for these network adapters. For network interfaces that connect the cluster to the rest of the enterprise, network interception works as usual.

Note: This feature is not enabled if the cluster uses the same network interface for cluster internal communications and communication to the rest of the network.

Example

Suppose you have two nodes:

NODE1 has two IP addresses:
- 10.0.0.1 is an internal cluster network IP address.
- 192.168.0.1 is an outside network connection.

NODE2 also has two IP addresses:
- 10.0.0.2 is an internal cluster network IP address.
- 192.168.0.2 is an outside network connection.

The cluster itself has an additional virtual IP address of 192.168.0.3.

Network interception does not prevent NODE1 from connecting to NODE2 and vice versa as long as they do their communications using the internal cluster network IP addresses (10.0.0.1 and 10.0.0.2).

Network interception acts as defined by AC rules if NODE1 or NODE2 is contacted using the outside network IP addresses (192.168.0.1 and 192.168.0.2).

In addition, network interception acts as defined by AC rules if the cluster is contacted at its 192.168.0.3 IP address.

Using the PMDB feature of AC to maintain identical security policies on all nodes of the cluster

As the cluster comprises several nodes, each with AC installed, you need a method to administer the common security policies efficiently and reliably. The PMDB feature of AC lets you do this. The PMDB can be implemented during the initial installation of AC.

The architectural overview of the setup looks like figure 1:

Figure 1

Set up Access Control the very same way on each cluster node

Initially, set up AC in the "normal" way on each node so that it hosts both a seosdb and a pmdb database locally.

During installation of AC, allow the Local Administrators of each node as well as the Domain Administrators to administer the local AC security policies of the nodes.

To do this, add the user accounts to the list of Access Control Administrators. Also specify the physical and virtual hostnames of all the cluster nodes to allow administration.

In addition, add any other machines that have the Policy Manager installed and any additional AC admin users who will administer AC on the cluster nodes. You can specify these on the corresponding screen while installing the product.
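These installer settings correspond to ordinary selang objects and can also be adjusted after installation. A minimal sketch, assuming the hypothetical hostnames node1/node2 and domain MYDOM; the TERMINAL class and the admin attribute follow standard selang usage, but verify the exact syntax against your AC version:

```shell
rem At the selang prompt: grant the ADMIN attribute to an administrator
rem and allow administration from both physical node terminals.
selang
chusr MYDOM\Administrator admin
newres TERMINAL node1.example.com defaccess(none)
authorize TERMINAL node1.example.com uid(MYDOM\Administrator) access(all)
newres TERMINAL node2.example.com defaccess(none)
authorize TERMINAL node2.example.com uid(MYDOM\Administrator) access(all)
```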

Figure 2 below demonstrates a typical AC setup on one of the two cluster nodes:

Figure 3 shows the specification of the parent PMDBs that are allowed to push updates to the localhost. Add the pmdb as hosted on each of the cluster nodes, since the local seosdb needs to accept updates from each of them. Alternatively, it is possible to specify the keyword _NO_MASTER_, which disables this security check and allows any PMDB with a relevant terminal record definition to send updates to the localhost.

Figure 3

Figure 4 below shows that both nodes of the cluster are subscribers of the virtual PMDB running on each of the nodes.

Figure 4
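If the subscribers were not defined during installation, they can also be added later with the sepmd utility. A sketch, assuming the hypothetical hostnames node1/node2 (sepmd -n adds a subscriber, sepmd -L lists them; verify the options for your AC version):

```shell
rem Add both cluster nodes as subscribers of the local pmdb,
rem then list the subscribers to confirm:
sepmd -n pmdb node1.example.com
sepmd -n pmdb node2.example.com
sepmd -L pmdb
```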

After installation you can confirm with Windows Explorer that your PMDB and local database were created properly:

Figure 5

The PMDB is called pmdb and the local database is called seosdb.

At this stage, AC is set up on each cluster node, with each node hosting a local PMDB and each node subscribed to it:

Figure 6

Put the common PMDB on the Cluster Disk and setup a cluster resource to allow failover

Now consolidate each node's PMDB to a single database stored on a filesystem hosted on one of the Cluster Disks.

In the Cluster Administrator create a new Resource in the default Cluster Group of Type "Physical Disk" pointing to a filesystem on the shared cluster disks.

Make the resource dependent on the cluster's IP address to allow failover when any node's network fails.

Figure 7

After the shared disk has been defined select "Bring Online" to make it accessible.

On this shared disk create a folder R:\eTrustAccessControl\Data\pmdb which will hold the pmdb's files.

Stop Access Control on both nodes with secons -s

Copy all files from the default pmdb folder C:\Program Files\CA\eTrustAccessControl\Data\pmdb to the above cluster folder (do this just once on the node which is the current owner of the shared disk).
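The stop-and-copy steps above can be sketched as follows; run the copy once, on the node that currently owns the shared disk (the R: drive letter is an example):

```shell
rem Stop the AC services on this node:
secons -s
rem Copy the pmdb files to the shared cluster disk:
xcopy "C:\Program Files\CA\eTrustAccessControl\Data\pmdb" "R:\eTrustAccessControl\Data\pmdb" /E /I /H
```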

In the Cluster Administrator create a new resource for Access Control's PMDB of type "Generic Script" in the same cluster group as the shared disk created before.

Give the script resource the following properties to assign the failover script to it, and make it dependent on the shared disk, the virtual cluster name, and the cluster's virtual IP address:
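The same resource can alternatively be created from the command line with cluster.exe. A sketch, assuming the hypothetical resource name "AC PMDB", the default group "Cluster Group", and an example script path; verify the cluster.exe option syntax for your Windows Server version:

```shell
rem Create the Generic Script resource, point it at the failover script,
rem add its dependencies, and bring it online:
cluster resource "AC PMDB" /create /group:"Cluster Group" /type:"Generic Script"
cluster resource "AC PMDB" /priv ScriptFilepath="R:\eTrustAccessControl\pmdb.vbs"
cluster resource "AC PMDB" /adddep:"Disk R:"
cluster resource "AC PMDB" /adddep:"Cluster Name"
cluster resource "AC PMDB" /adddep:"Cluster IP Address"
cluster resource "AC PMDB" /on
```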

Figure 9

Figure 10

Figure 11

Figure 12

After Access Control's PMDB resource has been defined, select "Bring Online" to start the pmdb service on the active cluster node.

Verify the failover of the clustered AC implementation

1. Start Access Control on each node.

2. Bring the cluster resources online.

3. Notice that the PMDB is started on only a single cluster node.

4. Start the Policy Manager and connect to the clustered pmdb.

5. Add new Access Control resources, e.g. add a new user.

6. Verify with selang on each node that the new resource was added to the local seosdb.

7. Shut down the active cluster node.

8. See the PMDB service starting up on the other cluster node.

9. Remaining in the previous Policy Manager session, delete the previously created resource.

10. Verify with selang on each node that the resource was removed from the local seosdb again.

11. Create another resource in the Policy Manager.

12. Start the stopped cluster node again.

13. See in selang that the clustered pmdb catches up the transactions for this node even though it was down.
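The selang verification on each node can be sketched as follows, assuming a hypothetical user testuser was created through the clustered PMDB (showusr follows the standard selang newusr/chusr/showusr naming; verify against your AC version):

```shell
rem Run locally on each cluster node (AC must be running).
rem At the selang prompt, showusr displays the replicated user:
selang
showusr testuser
```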

Configure the Policy Manager to connect to the PMDB on the virtual cluster host

To ensure that the Policy Manager and selang connect by default to the clustered PMDB served by the virtual node, modify the following registry value on any machine that has the GUI installed (stop Access Control first in case its engine is also installed on that system):

HKEY_LOCAL_MACHINE\SOFTWARE\ComputerAssociates\eTrustAccessControl\Client\ConnectTo

Set its value to pmdb@<virtual cluster name>.
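For example, with the hypothetical virtual cluster name ACCLUSTER, the value can be set from the command line:

```shell
rem Point the Policy Manager / selang client at the clustered PMDB:
reg add "HKLM\SOFTWARE\ComputerAssociates\eTrustAccessControl\Client" /v ConnectTo /t REG_SZ /d "pmdb@ACCLUSTER" /f
```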

When you write rules in the PMDB it pushes those same rules to the subscribers. This ensures you have identical security policies on the local database (seosdb) of each cluster node.

Verify that AC functions as normal although it is now running on the cluster

One command to confirm that the PMDB is running correctly is 'sepmd -L pmdb', submitted at the command line. All hostnames of the cluster nodes should appear in the Subscriber list as available, without errors.

Figure 13

To verify that replication is working correctly, create a new resource or edit an existing one (e.g. a FILE rule) using selang or the Policy Manager.
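A throwaway FILE rule works well as a replication probe. A sketch, assuming the hypothetical virtual cluster name ACCLUSTER and a hypothetical test path; the hosts command to target the clustered PMDB follows standard selang usage, but verify the syntax for your AC version:

```shell
rem At the selang prompt: direct the update to the clustered PMDB,
rem then create a harmless test rule that should replicate to both nodes.
selang
hosts pmdb@ACCLUSTER
newres FILE C:\tmp\repltest.txt defaccess(read)
```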

If you now rerun sepmd -L pmdb on one of the AC cluster nodes, notice that the offsets of both subscribers have changed and no errors have occurred. This means that the local database (seosdb) of each cluster node has received the same rule.

Figure 14 below shows this:

Figure 14

Now you can write your policies for local resources (e.g. the C: drives of the cluster nodes) and for the shared resources of the cluster (e.g. the Q: drive). As long as you write your policies to the PMDB, the rules will be propagated to the cluster nodes.
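A sketch of such policies, assuming hypothetical paths and the hypothetical virtual cluster name ACCLUSTER:

```shell
rem At the selang prompt: direct updates to the clustered PMDB, then
rem protect a local path (each node's C: drive) and a shared path (Q:).
selang
hosts pmdb@ACCLUSTER
newres FILE C:\payroll\* defaccess(none)
newres FILE Q:\clusterdata\* defaccess(read)
```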

Limitations

When setting up rules for any cluster resource, ensure that the Cluster Service's service account has been set up as an AC account and has been granted full access to all AC-defined resources, to avoid malfunction of the cluster.
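A sketch of such a grant in selang, assuming the hypothetical account name MYDOM\clustersvc and a hypothetical protected path; substitute your actual Cluster Service account and repeat the authorize step for every AC-protected resource:

```shell
rem At the selang prompt: define the service account as an AC account
rem and authorize it on the protected cluster resources.
selang
newusr MYDOM\clustersvc
authorize FILE Q:\clusterdata\* uid(MYDOM\clustersvc) access(all)
```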

AC rules of the classes PROGRAM, LOGINAPPL, and SPECIALPGM that are applied to the shared resource will fail to be created on the cluster nodes that do not currently hold the shared resources.

PROGRAM rules applied to binaries on the shared resource become untrusted after the shared resource moves over to another cluster node.

Possible Solutions

As usual, first set up any new rule in Warning mode and verify overall functionality while closely monitoring the AC audit logs with the seaudit utility.

To allow PROGRAM, LOGINAPPL, or SPECIALPGM rules for binaries on the shared resource, connect with the Policy Manager to the localhost that currently controls the shared resource and create the rule; then move the shared resource to each of the other nodes and create the rule on them as well.

Trusted programs can be re-trusted as part of the script that controls shared-resource cluster turnover (the seretrust utility can be used, with the base_path option set to the shared resource).
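As a sketch, such a script step might look like the following, assuming that seretrust accepts the shared drive as its base path and that selang can replay the generated command file via -f; both details are assumptions to verify against your AC documentation:

```shell
rem Regenerate trust for all PROGRAM rules under the shared drive
rem after it has moved to this node:
seretrust R:\ > retrust.cmds
selang -f retrust.cmds
```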