Enable/Disable For
All VMs in Cluster

The Controlled Startup
functionality is enabled by the presence of the
/etc/broadhop/cluster_state file.

To enable this feature
on all CPS VMs in the cluster, execute the following commands on the Cluster
Manager VM to create this file and to use the syncconfig.sh script to push
those changes out to the other VMs.

touch /etc/broadhop/cluster_state

syncconfig.sh

To disable this
feature on all VMs in the cluster, remove the cluster_state file on the Cluster
Manager VM and sync the configuration:

rm /etc/broadhop/cluster_state

syncconfig.sh
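Because the feature is keyed entirely to the presence of this file, its state can be checked with a few lines of shell. The helper below is a sketch, not a CPS utility:

```shell
# Hypothetical helper: report Controlled Startup state from the marker file.
controlled_startup_state() {
    # $1: path to the cluster_state marker file
    if [ -f "$1" ]; then
        echo "enabled"
    else
        echo "disabled"
    fi
}

# On a CPS VM you would check the real marker file:
controlled_startup_state /etc/broadhop/cluster_state
```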

Enable/Disable For
Specific VM

To enable this feature on a specific VM, create a
/etc/broadhop/cluster_state file on the VM:

touch /etc/broadhop/cluster_state

To disable this feature again on a specific VM, delete the
/etc/broadhop/cluster_state file on the VM:

rm /etc/broadhop/cluster_state

Note

This is a temporary measure and should only be used for diagnostic
purposes. Local modifications to a VM can be overwritten under various
circumstances, such as running
syncconfig.sh.

Switching Active and
Standby Policy Directors

In CPS, the active and
standby strategy applies only to the Policy Directors (lb). The following are
the two Policy Directors in the system:

This command
will force the failover of the VIP from the active Policy Director to the
standby Policy Director.

Step 3

To confirm the
switchover, SSH to the other Policy Director VM and run the following command
to determine if the VIP is now associated with this VM:

ifconfig -a

If the eth0:0 or eth1:0 interface appears in the list and is marked as
“UP”, then this VM is the active Policy Director.
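This check can also be scripted. The fragment below is a sketch that greps the ifconfig output for a VIP alias interface; the exact output format of ifconfig varies by platform, so the parsing is an assumption:

```shell
# Sketch: decide active/standby from `ifconfig -a` output.
# The VM holding the VIP shows an eth0:0 or eth1:0 alias interface.
vip_role() {
    # $1: output of `ifconfig -a`
    if printf '%s\n' "$1" | grep -qE '^(eth0:0|eth1:0)\b'; then
        echo "active"
    else
        echo "standby"
    fi
}

vip_role "$(ifconfig -a 2>/dev/null)"
```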

Multi-user Policy
Builder

Multiple users can be
logged into Policy Builder at the same time.

In the event that two
users attempt to make changes on the same screen and one user saves their
changes to the client repository, the other user may receive errors. In such
cases, the user must return to the login page, revert the configuration, and
reapply their changes.

Revert
Configuration

The user can revert
the configuration if changes since the last publish/save to client repository
are not wanted.

This can also be
necessary in the case of an ‘svn conflict’ error, which occurs when
pcrfclient01 and pcrfclient02 are in use at the same time by different users
who publish/save changes to the same file in the client repository. The effect
of reverting is that all changes since the last publish/save to the client
repository are undone.

Step 1

On the Policy
Builder login screen, verify the user for which changes need to be reverted is
correct. This can be done by clicking
Edit and verifying that the Username and Password
fields are correct.
Figure 2. Verifying the User

Step 2

Click
Revert.

The following
confirmation dialog opens.

Figure 3. Revert Confirmation Message

Step 3

Click
OK to revert to the earlier configuration. The
following dialog confirms that the changes are reverted successfully.
Figure 4. Success Confirmation Message

Control Center
Access

After the installation is complete, you need to configure the Control
Center access. This is designed to give the customer a customized Control
Center username.

Restart the CPS
system, so that the changes done above are reflected in the VMs:

restartall.sh

To add a new
user to Control Center and assign the group you specified in the
configuration file above, refer to
Add a Control Center User.

Multiple Concurrent
User Sessions

CPS Control Center
supports session limits per user. If the user exceeds the configured session
limit, they are not allowed to log in. CPS also provides notifications to the
user when other users are already logged in.

When a user logs in to
Control Center, a Welcome message displays at the top of the screen. A session
counter is shown next to the username. This represents the number of login
sessions for this user. In the following example, this user is logged in only
once ( [1] ).

Figure 5. Welcome
Message

The user can click the
session counter ([1]) link to view details for the session(s), as shown below.

Figure 6. Viewing Session
Details

When another user is
already logged in with the same username, a notification displays for the
second user in the bottom right corner of the screen, as shown below.

Figure 7. Login
Notification for a Second User

The first user also
receives a notification, as shown, and the session counter is updated to [2].

Figure 8. Login
Notification for First User

Figure 9. Indication of
Two Users with Same Username

These notifications
are not displayed in real time; CPS updates this status every 30 seconds.

Configure Session
Limit

The session limit is controlled by a runtime argument, which can be set in the
qns.conf file:

-Dcc.user.session.limit=3 (default value is 5)
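As a sketch of how this might be applied (the file path and workflow are assumptions; verify them for your deployment), the argument is appended to qns.conf on the Cluster Manager and pushed out with the scripts used elsewhere in this guide:

```shell
# Hypothetical application of the session-limit argument.
echo "-Dcc.user.session.limit=3" >> /etc/broadhop/qns.conf

# Propagate the change to the other VMs and restart CPS
# (commands taken from this guide).
syncconfig.sh
restartall.sh
```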

Configure Session
Timeout

The default session
timeout can be changed by editing the following file on the Policy Server (QNS)
instance:

The same timeout
value must be entered on all Policy Server (QNS) instances.

When the number of
sessions of the user exceeds the session limit, the user is not allowed to log
in and receives the message “Max session limit per user exceed!”

Important
Notes

If a user does not log
out and then closes their browser, the session remains alive on the server
until the session times out. When the session timeout occurs, the session is
deleted from the memcached server. The default session timeout is 15 minutes.
This is the idle time after which the session is automatically deleted.

When a Policy Server
(QNS) instance is restarted, all user/session details are cleared.

When the memcached
server is restarted without also restarting the Policy Server (QNS) instance,
all HTTP sessions on the Policy Server (QNS) instance are invalidated. In this
case, the user is asked to log in again, after which a new session is created.

Backing Up and
Restoring

As a part of routine
operations, it is important to make backups so that in the event of failures,
the system can be restored. Do not store backups on system nodes.

For detailed
information about backup and restore procedures, see the
CPS Backup and Restore Guide.

Adding or Replacing
Hardware

Hardware replacement
is usually performed by the hardware vendor with whom your company holds a
support contract.

Hardware support is
not provided by Cisco. Your company arranges the contact persons and
scheduling for hardware replacement.

Before replacing
hardware, always make a backup. See the
CPS Backup and Restore Guide.

Unless you have a
readily available backup solution, use VMware Data Recovery. This solution,
provided by VMware under a separate license, is easily integrated into your CPS
environment.

The templates you
download from the Cisco repository are partially pre-configured but require
further configuration. Your Cisco technical representative can provide you with
detailed instructions.

Note

You can download the
VMware software and documentation from the following location:

Target VM
Configuration

Execute the
df
command to examine the current disks that are mounted and accessible.

Step 4

Create an ext4
file system on the new disk:

mkfs -t ext4 /dev/sdb

Note

The b in
/dev/sdb denotes the second SCSI disk. The command warns that you are
performing this operation on an entire device, not a partition. This is
correct, because you created a single virtual disk of the intended size,
assuming you have specified the correct device. Make sure you have selected
the right device; there is no undo.

Step 5

Execute the
following command to verify the existence of the disk you created:

# fdisk -l

Step 6

Execute the
following command to create a mount point for the new disk:

# mkdir /<NewDirectoryName>

Step 7

Execute the
following command to display the current
/etc/fstab:

# cat /etc/fstab

Step 8

Execute the
following command to add the disk to
/etc/fstab so that it is available across
reboots:

/dev/sdb /<NewDirectoryName> ext4 defaults 1 3
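For example, the entry can be appended non-interactively (keep the placeholder mount point from Step 6; this is a sketch, and an incorrect /etc/fstab entry can prevent the VM from booting):

```shell
# Append the new disk to /etc/fstab (mount point from Step 6).
echo '/dev/sdb /<NewDirectoryName> ext4 defaults 1 3' >> /etc/fstab
```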

Step 9

Reboot the VM.

shutdown -r now

Step 10

Execute the
df
command to verify that the file system is mounted and the new directory is
available.

Update the collectd
process to use the new file system to store KPIs

After the disk is
added successfully,
collectd can use the new disk to store the KPIs.

Step 1

SSH into
pcrfclient01/pcrfclient02.

Step 2

Execute the
following command to open the logback.xml file for editing:

vi /etc/collectd.d/logback.xml

Step 3

Update the <file> element with the new directory that was added to
/etc/fstab.

Step 4

Execute the
following command to restart
collectd:

monit restart collectd

Note

The content of
logback.xml is overwritten with the default path after an upgrade. Make
sure to update it again after each upgrade.
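Steps 2 and 3 can be scripted so they are easy to re-run after an upgrade. The helper below is a sketch: the log file name and the exact layout of the <file> element in logback.xml are assumptions, so review the file before using it.

```shell
# Hypothetical helper: point the logback <file> element at a new directory.
update_kpi_path() {
    # $1: path to logback.xml, $2: new KPI directory
    sed -i "s|<file>[^<]*</file>|<file>$2/collectd.log</file>|" "$1"
}

# On pcrfclient01/02 you would run (then restart collectd via monit):
#   update_kpi_path /etc/collectd.d/logback.xml /<NewDirectoryName>
```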

The Client
Repository stores data captured from the Policy Builder GUI in Subversion. This
is a place where trial configurations can be developed and saved without
affecting the operation of the Cisco Policy Builder server data.

The default URL is
http://pcrfclient01/repos/configuration.

The Server
Repository is where a copy of the client repository is created/updated and
where the CPS picks up changes. This is done on Publish from Policy Builder.

Note

Publishing will
also do a Save to Client Repository to ensure the Policy Builder and Server
configurations are not out of sync.

The default URL is
http://pcrfclient01/repos/run.

Export and Import
Service Configurations

You can export and
import service configurations for the migration and replication of data. You
can use the export/import functions to back up both configuration and
environmental data or system-specific information from the configuration for
lab-to-production migration.

You can import the
binary in the following two ways:

Import the binary produced by export: All exported configuration is removed
first. (If the environment is included, only the environment is removed; if
the environment is excluded, the environment is not removed.) The file passed
is created by the export API.

Additive Import: Import a package created manually by adding configuration.
The new configurations get added into the server without impacting the
existing configurations. The import is allowed only if the running CPS version
is greater than or equal to the imported package version specified in the
configuration.

Step 1

In a browser,
navigate to the export/import page, available at the following URLs:

HA/GR:
https://<lbvip01>:7443/doc/import.html

All-In-One (AIO):
http://<ip>:7070/doc/import.html

Step 2

Enter the API
credentials.

Step 3

Select the file
to be imported/exported.

The following
table describes the export/import options:

Table 2. Export and Import Options

Export

All data: Exports service configuration with environment data, which acts as a
complete backup of both service configurations and environmental data.

Exclude environment: Exports without environment data, which allows exporting
configuration from a lab into another environment without destroying the new
system's environment-specific data.

Only environment: Exports only environment data, which provides a way to back
up the system-specific environmental information.

Export URL: Found in Policy Builder or viewed directly in Subversion.

Export File Prefix: Provide a name (prefix) for the export file.

Note: The exported filename automatically includes the date and time when the
export was performed, for example:
prefix_2016-01-12_11-03-56_3882276668.cps

Note: The file extension .cps is used so that the file is not opened or
modified by mistake by another application. The file should be used for
export/import purposes only.

Import

Import URL: The URL is updated/created. We recommend importing to a new URL
and using Policy Builder to verify/publish.

Commit Message: The message recorded with the import. Provide details that are
useful to record.

After you
select the file, the file's information is displayed.

Step 4

Select
Import or
Export.
CPS
displays response messages that indicate the status of the export/import.

HAProxy

HAProxy is an
open-source load balancer used in High Availability (HA) and Geographic
Redundancy (GR) CPS deployments. The CPS Policy Directors (lbs) use it to
forward IP traffic from lb01/lb02 to other CPS nodes. HAProxy runs on the
active Policy Director VM.

Enable SSL

CPS uses encryption on
all appropriate communication channels in HA deployments. No additional
configuration is required.

Default SSL
certificates are provided with CPS but it is recommended that you replace these
with your own SSL certificates. Refer to Replace SSL Certificates in the
CPS Installation Guide for VMware
for more information.

Audit History

The Audit History tracks usage of the various GUIs and APIs that CPS provides to the customer.

If enabled, each request is submitted to the Audit History database for historical and security purposes. The database records the user who made the request and the entire contents of the request. If the request is subscriber-related (meaning that there is a networkId value), all networkIds are also stored in a searchable field.

Capped
Collection

By default, the Audit
History uses a 1 GB capped collection in MongoDB. The capped collection
automatically removes documents when the size restriction threshold is hit. The
oldest document is removed as each new document is added. For customers who
want more than 1 GB of audit data, contact the assigned Cisco Advanced Services
Engineer to get more information.

Configuration in
Policy Builder is done in GB increments. It is possible to enter decimals;
for example, 9.5 sets the capped collection to 9.5 GB.
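MongoDB itself sizes capped collections in bytes, so the GB value entered in Policy Builder maps to bytes roughly as in this illustrative conversion (the 1024^3 factor is an assumption about how the value is interpreted):

```shell
# Illustrative: converting a Policy Builder GB setting to bytes.
gb_to_bytes() {
    awk -v g="$1" 'BEGIN { printf "%.0f", g * 1024 * 1024 * 1024 }'
}

gb_to_bytes 9.5   # prints 10200547328
```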

PurgeAuditHistoryRequests

When using a capped
collection, MongoDB places a restriction on the database and does not allow the
deletion of data from the collection. Therefore, the entire collection must be
dropped and re-created. This means that the PurgeAuditHistory queries have no
impact on capped collections.

AuditRequests

As a consequence of the XSS defense changes to the API standard
operation, any XML data sent in an AuditRequest must be properly escaped even
if inside CDATA tags.

Add and
configure the appropriate plug-in configurations for Audit History and Unified
API.

Step 3

Publish the
Policy Builder configuration.

Step 4

Start the CPS
servers.

Step 5

Restart the
Policy Builder with the following properties:

-Dua.client.submit.audit=true

-Dua.client.server.url=https://lbvip02:8443/ua/soap

or

-Dua.client.server.url=http://lbvip02:8080/ua/soap

Read
Requests

The Audit History does not log read
requests by default. The following are considered read requests:

GetRefDataBalance

GetRefDataServices

GetSubscriber

GetSubscriberCount

QueryAuditHistory

QueryBalance

QuerySession

QueryVoucher

SearchSubscribers

The Unified API also has a Policy Builder configuration option to log
read requests which is set to false by default.

APIs

All APIs are automatically logged into the Audit Logging History
database, except for QueryAuditHistory and KeepAlive. All Unified API requests
have an added Audit element that should be populated to provide proper audit
history.

Querying

The query is very
flexible: it automatically applies regex to the id and dataid, and only one of
the following is required: id, dataid, or request. The dataid element will
typically be the networkId (Credential) value of a subscriber.

Note

Disable Regex. The
use of regular expressions for queries can be turned off in the Policy Builder
configuration.

The id element is the
person or application who made the API request. For example, if a CSR logs into
Control Center and queries a subscriber balance, the id will be that CSR's
username.

The dataid element is
typically the subscriber's username. For example, if a CSR logs into Control
Center and queries a subscriber, the id will be that CSR's username, and the
dataid will be the subscriber's credential (networkId value). For queries, the
dataid value is checked for spaces and then tokenized and each word is used as
a search parameter. For example, “networkId1 networkId2” is interpreted as two
values to check.
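The tokenization described above can be pictured with a few lines of shell (illustrative only; the server-side implementation is not shown in this guide):

```shell
# Illustrative: a dataid containing spaces becomes multiple search values.
dataid="networkId1 networkId2"
for token in $dataid; do      # word-splitting mirrors the tokenization
    echo "search value: $token"
done
```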

The fromDate
represents the date in the past from which to start the purge or query. If the
date is null, the API starts at the oldest entry in the history.

The toDate represents
the date in the past up to which the purge or query includes data. If the date
is null, the API includes the most recent entry in the purge or query.

Purging

By default, the Audit History database is capped at 1 GB. Mongo provides
a mechanism to do this and then the oldest data is purged as new data is added
to the repository. There is also a PurgeAuditHistory request which can purge
data from the repository. It uses the same search parameters as the
QueryAuditHistory and therefore is very flexible in how much or how little data
is matched for the purge.

Note

Regex Queries! Be very careful when purging records from the Audit
History database. If a value is given for dataid, the server uses regex to
match on the dataid value and therefore will match many more records than
expected. Use the QueryAuditHistory API to test the query.
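The over-matching risk is easy to demonstrate: used as a regex, a bare credential also matches any longer credential that contains it. Illustrative only:

```shell
# Illustrative: "networkId1" used as a regex also matches "networkId10".
match_count=0
for credential in networkId1 networkId10 networkId2; do
    if printf '%s' "$credential" | grep -qE 'networkId1'; then
        match_count=$((match_count + 1))
    fi
done
echo "$match_count"   # prints 2: both networkId1 and networkId10 matched
```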

Purge
History

Each purge request is logged after the purge operation completes. This
ensures that if the entire repo is destroyed, the purge action that destroyed
the repo will be logged.

Control
Center

The Control Center version 2.0
automatically logs all requests.

PurgeAuditHistoryRequest

This API purges the Audit History.

The query is very
flexible: it automatically applies regex to the id and dataid, and only one of
the following is required: id, dataid, or request. The dataid element will
typically be the networkId (Credential) value of a subscriber.

The id element is the
person or application who made the API request. For example, if a CSR logs into
Control Center and queries a subscriber balance, the id will be that CSR's
username.

The dataid element is
typically the subscriber's username. For example, if a CSR logs into Control
Center and queries a subscriber, the id will be that CSR's username, and the
dataid will be the subscriber's credential (networkId value). For queries, the
dataid value is checked for spaces and then tokenized and each word is used as
a search parameter. For example, “networkId1 networkId2” is interpreted as two
values to check.

The fromDate
represents the date in the past from which to start the purge or query. If the
date is null, the API starts at the oldest entry in the history.

The toDate represents
the date in the past up to which the purge or query includes data. If the date
is null, the API includes the most recent entry in the purge or query.

Note

Size-Capped Database

If the database is
capped by size, then the purge request ignores the request key values and drops
the entire database due to restrictions of the database software.

QueryAuditHistoryRequest

This API queries the Audit History.

The query is very
flexible: it automatically applies regex to the id and dataid, and only one of
the following is required: id, dataid, or request. The dataid element will
typically be the networkId (Credential) value of a subscriber.

The id element is the
person or application who made the API request. For example, if a CSR logs into
Control Center and queries a subscriber balance, the id will be that CSR's
username.

The dataid element is
typically the subscriber's username. For example, if a CSR logs into Control
Center and queries a subscriber, the id will be that CSR's username, and the
dataid will be the subscriber's credential (networkId value). For queries, the
dataid value is checked for spaces and then tokenized and each word is used as
a search parameter. For example, "networkId1 networkId2" is interpreted as two
values to check.

The fromDate
represents the date in the past from which to start the purge or query. If the
date is null, the API starts at the oldest entry in the history.

The toDate represents
the date in the past up to which the purge or query includes data. If the date
is null, the API includes the most recent entry in the purge or query.

The username
of the person who performed the action. In the above example, the CSR who
issued the debit request.

comment_key

A short
description of the audit action.

data_id_key

The credential
of the subscriber. This is a list, so if the subscriber has multiple
credentials, they all appear in the list. Note that the list is derived from
the request data: a CreateSubscriber request, for example, may carry multiple
credentials, and each one is saved in the data_id_key list. In the
DebitRequest case, only one credential is listed because the request has a
single networkId field.

timestamp_key

The time the
request was logged. If the timestamp value is null in the request, the Audit
module automatically populates this value.

request_key

The name of
the request. This provides a way to search on type of API request.
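Putting the fields together, an audit document might look like the following sketch (the field names come from the descriptions above; the values and the JSON layout are hypothetical):

```shell
# Hypothetical audit document built from the fields described above.
audit_doc=$(cat <<'EOF'
{
  "comment_key":   "Debit request issued from Control Center",
  "data_id_key":   ["networkId1"],
  "timestamp_key": "2016-01-12T11:03:56Z",
  "request_key":   "DebitRequest"
}
EOF
)
printf '%s\n' "$audit_doc"
```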

Audit
Configuration

Click
Audit
Configuration in the right pane to open the
Audit
Configuration dialog box.
Figure 11. Audit
Configuration dialog box

Step 3

Under
Audit
Configuration there are different panes:
General Configuration,
Queue
Submission Configuration,
Database Configuration, and
Shard
Configuration. An example configuration is provided in the
following figures:
Figure 12. Queue
Submission Configuration pane

Figure 13. Database
Configuration pane

Figure 14. Shard
Configuration pane

The following
parameters are used to size and manage the internal queue that aids in the
processing of Audit messages.

The application
offloads message processing to a queue to speed up the response time from the
API.

Table 4. Audit Configuration Parameters

General Configuration

Capped Collection: Select this check box to activate the capped collection
function.

Capped Collection Size: By default, the Audit History uses a 1 GB capped
collection in MongoDB. The capped collection automatically removes documents
when the size restriction threshold is hit. Configuration in Policy Builder is
done in GB increments. It is possible to enter decimals; for example, 9.5 sets
the capped collection to 9.5 GB.

Log Read Requests: Select this check box if you want read requests to be
logged.

Include Read Requests in Query Results: Select this check box only if you want
read requests to be displayed in query results.

Disable Regex Search: If you select this check box, the use of regular
expressions for queries is turned off in the Policy Builder configuration.

Search Query Results Limit: This parameter limits the search results.

Queue Submission Configuration

Message Queue Size: The total number of messages the queue can hold at any
given time.

Message Queue Sleep: The amount of time, in milliseconds, for the runnable to
sleep between batch processing.

Message Queue Batch Size: The number of messages to process in a given wake
cycle.

Message Queue Pool Size: The number of threads in the execution pool to handle
message processing.

Database Configuration

Db Write Concern: Controls the write behavior of sessionMgr and for what
errors exceptions are raised. The default option is OneInstanceSafe.

Db Read Preference: Read preference describes how sessionMgr clients route
read operations to members of a replica set. The recommended option is
typically Secondary Preferred.

This parameter is used to enter the amount of time, in milliseconds, to wait
before starting failover database handling.

Max Replication Wait Time Ms: This option specifies a time limit, in
milliseconds, for the write concern. This parameter is applicable only if you
select TwoInstanceSafe in Db Write Concern. It causes write operations to
return with an error after the specified limit, even if the required write
concern eventually succeeds. When these write operations return, MongoDB does
not undo successful data modifications performed before the write concern
exceeded the replication wait time limit.

Shard Configuration

Primary Ip Address: The IP address of the sessionmgr node hosting the Audit
database.

Secondary Ip Address: The IP address of the sessionmgr node that provides
failover support for the primary database. This is the mirror of the database
specified in the Primary Ip Address field. Use this only for replication or
replica-pairs architecture. This field is present but deprecated to maintain
backward compatibility.

Port: Enter the port number of the Audit database as defined in
/etc/broadhop/mongoConfig.cfg. The default value in Policy Builder is 27017.
For All-In-One deployments, the default Audit database port is 27017 (no
update to this field is needed). For HA or GR deployments, the default Audit
database port is 27725; update this field to match the Audit database port
(27725) or as defined in /etc/broadhop/mongoConfig.cfg.

According to
your network requirements, configure the parameters in Audit Configuration and
save the configuration.

Pre-configured
auditd

In the
/usr/share/doc/audit-version/ directory, the audit
package provides a set of pre-configured rules files.

The Linux Audit system provides a way to track security-relevant
information on your system. Based on pre-configured rules, Audit generates log
entries to record as much information about the events that are happening on
your system as possible.


To use these
pre-configured rule files, create a backup of your original
/etc/audit/audit.rules file and copy the
configuration file of your choice over the
/etc/audit/audit.rules file:
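The copy itself is a two-command operation. A sketch (the rule-file name stig.rules is only an example; pick any of the files shipped in the directory, and restarting auditd afterward is an assumption about your service setup):

```shell
# Back up the original rules, then copy a pre-configured rule file over it.
cp /etc/audit/audit.rules /etc/audit/audit.rules.bak
cp /usr/share/doc/audit-version/stig.rules /etc/audit/audit.rules

# Restart auditd so the new rules take effect.
service auditd restart
```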

Policy Tracing and
Execution Analyzer

Cisco Policy Server
comes with a set of utilities to actively monitor and trace policy execution.
These utilities interact with the core policy server and the mongo database to
trigger and store traces for specific conditions.

Administering Policy
Traces

All commands are
located on the Control Center virtual machine within
/var/qps/bin/control directory. There are two main
scripts which can be used for tracing:
trace_ids.sh and
trace.sh.

The
trace_ids.sh script maintains all rules for activating
and deactivating traces within the system.

The
trace.sh script allows for the real time or historical
retrieval of traces.

Before running
trace_ids.sh and
trace.sh, confirm which database you are using for
traces. For more information, refer to
Policy Trace Database.
If no database has been configured, the scripts connect by default to the
primary database member of SPR-SET1.

By default, if the
-d option is not provided, the script connects to the
primary database member of SPR-SET1. If you are not using the SPR database,
you need to find out which database you are using; to do so, refer to
Policy Trace Database.
Make sure to update the commands mentioned in
Step 1
to
Step 4
accordingly.

This script starts a
selective trace and outputs it to standard out.

Step 1

Specific audit
ID tracing:

/var/qps/bin/control/trace_ids.sh -i <specific id>

Step 2

Remove trace for
specific audit ID:

/var/qps/bin/control/trace_ids.sh -r <specific id>

Step 3

Remove trace for
all IDs:

/var/qps/bin/control/trace_ids.sh -x

Step 4

List all IDs
under trace:

/var/qps/bin/control/trace_ids.sh -l

Adding a
specific audit ID for tracing requires running the command with the
-i argument and passing in a specific ID. The policy
server matches the incoming session with the ID provided and compares this
against the following network session attributes:

Credential
ID

Framed IPv6
Prefix

IMSI

MAC Address

MSISDN

User ID

If an exact
match is found, the transactions are traced. Spaces and special characters
are not supported in audit IDs.
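A pre-check along the following lines can catch unsupported IDs before they are passed to trace_ids.sh. The accepted character set here is an assumption; the guide only states that spaces and special characters are not supported:

```shell
# Sketch: reject audit IDs containing spaces or special characters.
valid_audit_id() {
    case "$1" in
        '' | *[!A-Za-z0-9._:-]*) return 1 ;;   # empty, or has a bad character
        *) return 0 ;;
    esac
}
```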

By default, if the
-d option is not provided, the script connects to the
primary database member of SPR-SET1. If you are not using the SPR database,
you need to find out which database you are using; to do so, refer to
Policy Trace Database.
Make sure to update the commands mentioned in
Step 1
to
Step 4
accordingly.

This script starts a
selective trace and outputs it to standard out.

Step 1

Specific audit
ID tracing:

/var/qps/bin/control/trace.sh -i <specific id>

Specifying the
-i argument for a specific ID causes a real time
policy trace to be generated while the script is running. Users can redirect
this to a specific output file using standard Linux commands.

Step 2

Dump all traces
for specific audit ID:

/var/qps/bin/control/trace.sh -x <specific id>

Specifying the
-x argument with a specific ID dumps all historical
traces for that ID. Users can redirect this to a specific output file using
standard Linux commands.

Step 3

Trace all:

/var/qps/bin/control/trace.sh -a

Specifying the
-a argument causes all traces to be output in real time
while the script is running. Users can redirect this to a specific
output file using standard Linux commands.

Step 4

Trace all
errors:

/var/qps/bin/control/trace.sh -e

Specifying the
-e argument causes all traces triggered by an error to be
output in real time while the script is running. Users can redirect this to a
specific output file using standard Linux commands.

Policy Trace
Database

By default, the policy trace database is the administrative database; a
different database can optionally be specified in the trace database fields.
These fields are defined at the cluster level in the system configurations.

Note

Make sure to run all
trace utility scripts from
/var/qps/bin/control directory only.

Configure Traces
Database in Policy Builder

In the left pane,
open the name of
your system and select the required cluster.

Step 3

In the right pane,
select the
Trace
Database check box.

The following
table describes the parameters under the
Trace Database check box:

Table 5. Trace Database Parameters

Primary Database IP Address: The IP address of the sessionmgr node that holds
trace information, which allows for debugging of specific sessions and
subscribers based on unique primary keys.

Secondary Database IP Address: The IP address of the database that provides
failover support for the primary database. This is the mirror of the database
specified in the Primary Database IP Address field. Use this only for
replication or replica-pairs architecture. This field is present but
deprecated to maintain backward compatibility.

Database Port: The port number of the database for Session data. The default
value is 27717.

TACACS+

Overview

Cisco Policy Suite (CPS) is built around a distributed system that runs
on a large number of virtualized nodes. Previous versions of the CPS software
allowed operators to add custom accounts to each of these virtual machines
(VM), but management of these disparate systems introduced a large amount of
administrative overhead.

CPS has been designed to leverage the Terminal Access Controller Access
Control System Plus (TACACS+) to facilitate centralized management of users.
Leveraging TACACS+, the system is able to provide system-wide authentication,
authorization, and accounting (AAA) for the CPS system.

Further, the system allows users to gain different entitlements based on
user role. These can be centrally managed based on the attribute-value pairs
(AVPs) returned by TACACS+ authorization queries.

TACACS+ Service
Requirements

To provide
sufficient information for the Linux-based operating system running on the VM
nodes, several attribute-value pairs (AVPs) must be associated
with the user on the ACS server used by the deployment. User records on
Unix-like systems need a valid “passwd” record for the system to
operate correctly. Several of these fields can be inferred at the time of
user authentication, but the remaining fields must be provided by the ACS
server.

A standard “passwd”
entry on a Unix-like system takes the following form:

<username>:<password>:<uid>:<gid>:<gecos>:<home>:<shell>
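For instance, a user in the qns-admin role (gid 504, described below) might resolve to an entry like the following; the username, uid, and gecos text here are hypothetical:

```
jsmith:x:1001:504:TACACS+ user:/home/qns-admin:/usr/bin/sudosh
```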

When authenticating
the user via TACACS+, the software can assume values for the username,
password, and gecos fields, but the others must be provided by the ACS server.
To meet this need, the system depends on the ACS server providing these
AVPs when responding to a TACACS+ authorization query for a given username:

uid

A unique integer
value greater than or equal to 501 that serves as the numeric user identifier
for the TACACS+ authenticated user on the VM nodes. It is outside the scope of
the CPS software to ensure uniqueness of these values.

gid

The group
identifier of the TACACS+ authenticated user on the VM nodes. This value should
reflect the role assigned to a given user, based on the following values:

gid=501 (qns-su)

This group
identifier should be used for users that are entitled to attain super-user (or
'root') access on the CPS VM nodes.

gid=504 (qns-admin)

This group
identifier should be used for users that are entitled to perform administrative
maintenance on the CPS VM nodes.

Note

To stop or
start the Policy Server (QNS) process on a node, the qns-admin user
should use
monit:

For example,

sudo monit stop qns-1
sudo monit start qns-1

gid=505 (qns-ro)

This group
identifier should be used for users that are entitled to read-only access to
the CPS VM nodes.

home

The user's home
directory on the CPS VM nodes. To enable simpler management of these systems,
the users should be configured with a pre-deployed shared home directory based
on the role they are assigned with the gid.

home=/home/qns-su should be used for users in the
qns-su group (gid=501)

home=/home/qns-admin should be used for users in the
qns-admin group (gid=504)

home=/home/qns-ro should be used for users in the
qns-ro group (gid=505)

shell

The system-level
login shell of the user. This can be any of the installed shells on the CPS VM
nodes, which can be determined by reviewing the contents of
/etc/shells on one of the CPS VM nodes. Typically,
this set of shells is available in a CPS deployment:

/bin/sh

/bin/bash

/sbin/nologin

/bin/dash

/usr/bin/sudosh

The
/usr/bin/sudosh shell can be used to audit a user's
activity on the system.
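As an illustration, on a TACACS+ daemon with a tac_plus-style configuration the AVPs above could be returned for a user as shown below. This is a hedged sketch: the username, uid, and service stanza layout are assumptions, and the exact syntax varies by ACS product, so consult its documentation for the equivalent settings:

```
user = jsmith {
    login = PAM
    service = exec {
        # These values map directly to the passwd fields described above.
        uid = 1001                  # unique id >= 501 (hypothetical value)
        gid = 504                   # qns-admin role
        home = /home/qns-admin      # shared home for qns-admin users
        shell = /usr/bin/sudosh     # audited shell
    }
}
```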

Caching of TACACS+
Users

The user environment
of the Linux-based VMs needs to be able to look up a user's
passwd entry via different columns in that record at
different times. However, the TACACS+ NSS module provided as part of the CPS
solution is only able to query the Access Control Server (ACS) for this data
using the
username. For this reason, the system relies on the
Name Service Cache Daemon (NSCD) to provide this facility locally after a user
has been authorized to use a service of the ACS server.

More details on the
operation of NSCD can be found in its online help (nscd --help) or in its
man page (nscd(8)). Within the CPS solution, it provides
the capability for the system to look up a user's
passwd entry via their
uid as well as by their
username.

To avoid cache
coherence issues with the data provided by the ACS server, the NSCD package has
a mechanism for expiring cached information.

The default NSCD
package configuration on the CPS VM nodes has the following characteristics:

Valid responses
from the ACS server are cached for 600 seconds (10 minutes)

Invalid responses
from the ACS server (user unknown) are cached for 20 seconds

Cached valid
responses are reloaded from the ACS server 5 times before the entry is
completely removed from the running set -- approximately 3000 seconds (50
minutes)

The cache is
persisted locally, so it survives a restart of the NSCD process or of the
server.
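These defaults correspond to nscd.conf settings like the following. This is an illustrative fragment only; the actual file shipped on the VMs may differ in layout and in which databases are configured:

```
# /etc/nscd.conf (excerpt)
reload-count            5             # reload a cached entry up to 5 times
positive-time-to-live   passwd  600   # valid responses cached for 10 minutes
negative-time-to-live   passwd  20    # unknown-user responses cached for 20 seconds
persistent              passwd  yes   # cache survives nscd or server restarts
```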

An operator can
explicitly expire the cache from the command line. To do so, the
administrator needs shell access to the target VM and must execute the
following command as the root user:

# nscd -i passwd

This command
invalidates all entries in the passwd cache and forces the VM to consult the
ACS server for future queries.

TACACS+ authenticated users
connected to the system may see unexpected behavior in their user environment
when their cache entries are removed from NSCD. The user can correct this by
logging out of the system and logging back in, or by issuing the following
command, which forces the system to query the
ACS server:

# id -a "$USER"
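After invalidating the cache, you can also confirm what the VM now resolves for an account with getent, which performs a lookup through NSS (and therefore through NSCD). This is a generic sketch that works for any account, not only TACACS+ users:

```shell
# Look up the passwd entry the VM currently resolves for the logged-in user.
# getent consults NSS, so the result reflects the NSCD cache when one exists.
user="$(id -un)"
getent passwd "$user"

# The output is a colon-separated record in the usual form:
# <username>:<password>:<uid>:<gid>:<gecos>:<home>:<shell>
```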

Porting All-In-One Policy Builder Configuration to HA

This section describes how to port the Policy Builder configuration from
an All-In-One (AIO) environment to a High Availability (HA) environment.

Prerequisites

This procedure
assumes that the datastore that will hold the virtual disk has sufficient
space to add the virtual disk.

This procedure
assumes that the datastore has been mounted to the VMware ESX server,
regardless of the backend NAS device (SAN, iSCSI, etc.).

Porting the Policy
Builder Configuration

Policy Builder
configuration can be reused between environments; however, the configuration
for Systems and Policy Enforcement Points is environment-specific and should
not be moved from one environment to another.

The following
instructions do not overwrite the configuration specific to the environment.
Note that because the Systems tab and Policy Enforcement Points data is not
moved, the HA system should already have these items configured and running
properly (as stated above).

The following steps
describe the process to port a configuration from an AIO environment to an HA
environment.

Step 1

If the HA
environment is currently in use, ensure that SVN backups are up to date.

Step 2

Find the URL
that Policy Builder is using to load the configuration that you want to use.
You can find this by clicking
Edit on the initial page in Policy Builder.

The URL is
listed in the URL field. For the purpose of these instructions, the following
URL will be used for exporting the configuration from the AIO environment and
importing the configuration to the HA environment:

http://pcrfclient01/repos/configuration

Figure 15. Repository configuration

Step 3

On the AIO,
export the Policy Builder configuration by entering the following commands:

The following
steps assume you will replace the existing default Policy Builder configuration
located at http://pcrfclient01/repos/configuration on your HA environment. If
you would like to access your old configuration, copy it to a new location. For
example:

If you are
already logged into Policy Builder, reload the Policy Builder URL in your
browser to access the new configuration.

Step 12

Check for errors
in Policy Builder. Errors here often indicate a software version mismatch.

Errors are shown
with an (x) next to the navigation icons in the left pane of Policy Builder.
For example:

Figure 16. Error in Policy Builder

Step 13

Publish the
configuration. Refer to the
CPS Mobile Configuration Guide for detailed steps.

Network Cutter
Utility

CPS supports a
network cutter utility that monitors the Policy Server (QNS) VMs for
failures. When any Policy Server VM goes down, the utility cuts the stale
connections to it, which avoids sending traffic to Policy Server VMs that are
down and thereby avoids timeouts.

This utility is
started by
monit
on the Policy Director (lb) VMs and continuously monitors the Policy Server
VMs for failures.

The utility writes its log to the
/var/log/broadhop/network-cutter.log file.

You can verify the
status of the network cutter utility on the lb01/lb02 VMs using the
monit
summary and
service network-cutter status commands:

monit summary | grep cutter
Process 'cutter' Running

service network-cutter status
network-cutter (pid 3735) is running

You can verify whether the
network cutter utility has been started using the
ps -ef | grep
cutter command: