About the Cisco Application Control Engine

ACE Overview

The ACE provides access control, load balancing, and high availability functionality for the Cisco TelePresence Exchange System server cluster.

Clients gain access to the server cluster through the ACE. The ACE provides a virtual IP address (VIP) that acts as a proxy for the servers. The ACE distributes client requests to the servers based on the service requested, the load-balancing algorithm, the health of the servers, and session persistence requirements.

The ACE distributes the following types of incoming Cisco TelePresence Exchange System traffic:

•SIP traffic to the call engines

•HTTP traffic to the IVR application on the call engines

•HTTP traffic to the administration servers

ACE Topology

You can configure up to four interfaces on the ACE appliance.

•You must configure one interface to serve as the outside interface.

The outside interface connects to the users of the Cisco TelePresence Exchange System cluster.

If you deploy a redundant ACE pair, you must configure the outside interface as a trunk that supports both a native VLAN for untagged traffic and a fault-tolerant (FT) VLAN that provides a communication path between the two ACE appliances. The two ACE appliances operate in an active/standby configuration. The ACE in standby is known as the peer.

•You must configure one interface to serve as the inside interface to provide access to the Cisco TelePresence Exchange System.

Note The inside and outside interfaces must belong to different VLANs.

Configuration Overview

The ACE appliance provides server load balancing for three types of message traffic:

•SIP call control

•HTTP messages for the IVR service

•HTTP messages for the administration console

To configure the ACE for the Cisco TelePresence Exchange System, complete the following procedures:

3. Configure real servers.

Create a real server for each server in the Cisco TelePresence Exchange System cluster.

4. Configure access control lists.

Create access control lists (ACLs) to filter incoming or outgoing traffic on an interface based on configurable criteria (such as protocol type or IP address ranges).

5. Configure health probes.

Create a health probe for each traffic type supported by Cisco TelePresence Exchange System. A health probe defines the type of message that the ACE will periodically send to the servers, and the expected responses.

6. Configure the server farms.

Create a server farm for each Cisco TelePresence Exchange System traffic type. A server farm is a virtual server that provides a specific service. The ACE load-balances the incoming requests among the real servers that are associated with the server farm. The ACE also monitors server health (by sending periodic probes) and distributes work only to the operational real servers.

7. Configure session persistence.

Create a sticky group for each server farm. A sticky group defines how to identify the session that is associated with each incoming message.

8. Configure a management class map and a policy map.

Create these policies to allow remote management access to the Cisco TelePresence Exchange System cluster.

9. Configure Layer 7 load balancing policy maps and class maps.

Define Layer 7 policy maps and class maps for each of the three traffic types. Layer 7 class maps and policy maps define the classification and policy for traffic based on upper-layer message parameters such as HTTP header fields and SIP header fields.

10. Configure Layer 3 and Layer 4 policy maps and class maps.

Define Layer 3 and Layer 4 policy maps and class maps for each of the three traffic types. These class maps and policy maps define the classification and policy for traffic based on Layer 3 and Layer 4 message parameters such as source IP address, port, and protocol.

Each Layer 7 policy must be included in a Layer 3 and Layer 4 policy.

11. Configure VLAN interfaces.

Activate the management and load-balancing policies by associating the policy maps with the VLAN interfaces.

12. Configure miscellaneous ACE parameters and logging options.

Configure various parameters and settings that are important for correct operation of the Cisco TelePresence Exchange System.

Note All IP addresses shown in the configurations are for example purposes only.

Configuring the Hostname

By default the hostname of the ACE is switch. You can assign a specific name to the ACE. For configurations in which a redundant pair of ACEs is in use, you need to define both a hostname for the primary ACE (active system) and a peer hostname for the standby system.

All configuration for the ACE is done on the primary ACE. All configuration and changes in status are regularly communicated to the standby ACE through the fault-tolerant VLAN.

To configure the hostname for the ACE, do the following task:

Step 1: switch/Admin# configure terminal
Enters configuration mode.

Step 2: switch/Admin(config)# peer hostname name
Configures the hostname for the peer (standby) ACE. The active ACE regularly communicates its configuration to the peer ACE. (Required only for redundant ACE configurations.)

The hostname is a case-sensitive text string from 1 to 32 alphanumeric characters in length. The default value of hostname is switch.

Step 3: switch/Admin(config)# hostname name
Configures the hostname for the active ACE.

Step 4: hostname/Admin(config)# exit
Exits configuration mode.

The following example shows how to set the hostname for an ACE in a non-redundant configuration to ACE_1:

switch/Admin# configure terminal

switch/Admin(config)# hostname ACE_1

ACE_1/Admin(config)# exit

The following example shows how to set hostnames for two ACEs in a redundant configuration where ACE_1 is the active ACE and ACE_2 is the peer ACE that is in standby:

switch/Admin# configure terminal

switch/Admin(config)# peer hostname ACE_2

switch/Admin(config)# hostname ACE_1

ACE_1/Admin(config)# exit

Configuring Interfaces

You can configure up to four interfaces on the ACE. You must configure at least one outside interface and one inside interface. The outside interface connects to the users of the Cisco TelePresence Exchange System server cluster and the inside interface connects to the server cluster.

Step 8: ACE_1/Admin(config-if)# carrier-delay down value up value
Delays the processing of hardware link down and link up notifications. Delay values are in ms. (Required only for redundant ACE configurations.)

Step 9: ACE_1/Admin(config-if)# qos trust cos
Sets the trusted state of an interface by defining which packet classifications the interface can carry. Definable classifications are CoS, ToS, and DSCP. (Required only for redundant ACE configurations.)

Step 10: ACE_1/Admin(config-if)# switchport trunk native vlan vlan_ID
Assigns a native trunk VLAN to the interface for untagged traffic. (Required only for redundant ACE configurations.)

Step 11: ACE_1/Admin(config-if)# switchport trunk allowed vlan vlan_ID
Assigns a VLAN to the interface that can receive and transmit traffic on the trunk. You can define multiple VLANs on this trunk. In redundant ACE configurations, in addition to a native VLAN, you define a fault-tolerant VLAN to provide a communication path for the heartbeat between the redundant ACE pair. (Required only for redundant ACE configurations.)

Non-Redundant Configuration

The following example shows how to configure and enable port 1 as the inside interface and port 2 as the outside interface for a non-redundant ACE configuration:

Interfaces 3 and 4 are not configured or enabled in this configuration and instead are shut down.

ACE_1/Admin# config

ACE_1/Admin(config)# interface gigabitEthernet 1/1

ACE_1/Admin(config-if)# switchport access vlan 350

ACE_1/Admin(config-if)# no shutdown

ACE_1/Admin(config)# interface gigabitEthernet 1/2

ACE_1/Admin(config-if)# switchport access vlan 340

ACE_1/Admin(config-if)# no shutdown

ACE_1/Admin(config)# interface gigabitEthernet 1/3

ACE_1/Admin(config-if)# shutdown

ACE_1/Admin(config)# interface gigabitEthernet 1/4

ACE_1/Admin(config-if)# shutdown

ACE_1/Admin(config-if)# exit

Redundant Configuration

The following example shows how to configure port 1 as the inside interface and port 2 as the outside trunk interface and ports 3 and 4 as access interfaces in a redundant ACE configuration:

ACE_1/Admin# config

ACE_1/Admin(config)# interface gigabitEthernet 1/1

ACE_1/Admin(config-if)# switchport access vlan 350

ACE_1/Admin(config-if)# no shutdown

ACE_1/Admin(config)# interface gigabitEthernet 1/2

ACE_1/Admin(config-if)# speed 1000

ACE_1/Admin(config-if)# duplex full

ACE_1/Admin(config-if)# carrier-delay down 30 up 30

ACE_1/Admin(config-if)# switchport trunk native vlan 340

ACE_1/Admin(config-if)# switchport trunk allowed vlan 340,999

ACE_1/Admin(config-if)# no shutdown

ACE_1/Admin(config)# interface gigabitEthernet 1/3

ACE_1/Admin(config-if)# switchport access vlan 390

ACE_1/Admin(config-if)# no shutdown

ACE_1/Admin(config)# interface gigabitEthernet 1/4

ACE_1/Admin(config-if)# switchport access vlan 410

ACE_1/Admin(config-if)# no shutdown

ACE_1/Admin(config-if)# exit

Configuring Real Servers

Configure a real server for each physical administration and call engine server in the cluster.

To configure a real server, do the following task:

Step 1: ACE_1/Admin(config)# rserver name
Enters real server configuration mode for the specified real server.

Step 2: ACE_1/Admin(config-rserver-host)# ip address ip_address
Configures the IP address for the real server.

Step 3: ACE_1/Admin(config-rserver-host)# inservice
Places the real server in service.

Step 4: ACE_1/Admin(config-rserver-host)# exit
Exits real server configuration mode.

The following example shows how to configure the administration real servers:

ACE_1/Admin(config)# rserver CTX-ADMIN-1

ACE_1/Admin(config-rserver-host)# ip address 10.22.139.123

ACE_1/Admin(config-rserver-host)# inservice

ACE_1/Admin(config-rserver-host)# exit

ACE_1/Admin(config)# rserver CTX-ADMIN-2

ACE_1/Admin(config-rserver-host)# ip address 10.22.139.124

ACE_1/Admin(config-rserver-host)# inservice

ACE_1/Admin(config-rserver-host)# exit

The following example shows how to configure the call engine real servers:

ACE_1/Admin(config)# rserver SIPE-1

ACE_1/Admin(config-rserver-host)# ip address 10.22.139.125

ACE_1/Admin(config-rserver-host)# inservice

ACE_1/Admin(config-rserver-host)# exit

ACE_1/Admin(config)# rserver SIPE-2

ACE_1/Admin(config-rserver-host)# ip address 10.22.139.126

ACE_1/Admin(config-rserver-host)# inservice

ACE_1/Admin(config-rserver-host)# exit

ACE_1/Admin(config)#

Configuring Access Control Lists

Access control lists (ACLs) allow you to filter incoming or outgoing traffic on an interface based on configurable criteria (such as protocol type or IP address ranges).

For the Cisco TelePresence Exchange System, configure an ACL to permit all IP traffic from any source address to any destination address. To create the ACL, enter the following command in configuration mode:

ACE_1/Admin(config)# access-list ALL line 8 extended permit ip any any

Configuring Health Probes

You can configure health probes to monitor the health of the Cisco TelePresence Exchange System server cluster. The ACE appliance periodically sends a probe message to each server and evaluates the response to determine the state of the server.

The following sections describe the health probes that you can configure for the server cluster:

Configuring a SIP Health Probe

You can define SIP (UDP and TCP) probes to monitor the health of the call processing service.

To configure a SIP health probe, do the following task:

Step 1: ACE_1/Admin(config)# probe sip {udp | tcp} name
Specifies the type of SIP probe (UDP or TCP) and the name of the probe, and enters probe configuration mode.

Step 2: ACE_1/Admin(config-probe-sip)# interval seconds
Configures the time interval between probes, in seconds. The default value is 15 seconds.

Step 3: ACE_1/Admin(config-probe-sip)# faildetect retry-count
Configures the number of consecutive failed probes before the server state is marked as failed. The default value is 2.

Step 4: ACE_1/Admin(config-probe-sip)# passdetect interval seconds
Configures the time interval, in seconds, between probes that are sent to a failed server.

Step 5: ACE_1/Admin(config-probe-sip)# passdetect count number
Configures the number of consecutive successful probe responses before marking the server state as active.

Step 6: ACE_1/Admin(config-probe-sip)# expect status min_number max_number
Configures the range (minimum and maximum values) of status codes that the ACE expects in the probe response. To configure a single status code, enter the same number for min_number and max_number.

Step 7: ACE_1/Admin(config-probe-sip)# open timeout
Configures the time interval, in seconds, to wait for a TCP connection to be established. By default, the ACE waits 10 seconds to open and establish the connection with the server.

The following example shows how to configure a SIP UDP probe:

ACE_1/Admin(config)# probe sip udp SIP-OPTION

ACE_1/Admin(config-probe-sip)# interval 2

ACE_1/Admin(config-probe-sip)# faildetect 1

ACE_1/Admin(config-probe-sip)# passdetect interval 4

ACE_1/Admin(config-probe-sip)# passdetect count 2

ACE_1/Admin(config-probe-sip)# expect status 200 200

ACE_1/Admin(config-probe-sip)# open 1

The following example shows how to configure a SIP TCP probe:

ACE_1/Admin(config)# probe sip tcp SIP-TCP-OPTION

ACE_1/Admin(config-probe-sip)# interval 2

ACE_1/Admin(config-probe-sip)# faildetect 1

ACE_1/Admin(config-probe-sip)# passdetect interval 4

ACE_1/Admin(config-probe-sip)# passdetect count 2

ACE_1/Admin(config-probe-sip)# expect status 200 200

ACE_1/Admin(config-probe-sip)# open 1

Creating Server Farms

A server farm is a connected group of real servers that perform the same function. You must define at least two real servers to include in a server farm.

To create a server farm and define real server membership for those server farms, do the following task:

Step 1: ACE_1/Admin(config)# serverfarm host name
Creates the server farm and enters server farm configuration mode for the specified server farm.

Step 2: ACE_1/Admin(config-sfarm-host)# failaction purge
Configures the action that the ACE takes if a real server in the server farm goes down. The purge keyword causes the ACE to remove the connection to the real server and send a reset (RST) to the server.

Step 3: ACE_1/Admin(config-sfarm-host)# probe name
Specifies the probe to use for monitoring the health of the real servers in this server farm.

Step 4: ACE_1/Admin(config-sfarm-host)# rserver name
Associates the specified real server as a member of this server farm.

Step 5: ACE_1/Admin(config-sfarm-host-rs)# inservice
Places the real server in service.

Step 6: ACE_1/Admin(config-sfarm-host-rs)# exit
Exits server farm real-server configuration mode.

Step 7: ACE_1/Admin(config-sfarm-host)# exit
Exits server farm configuration mode.

For the Cisco TelePresence Exchange System:

•Create a server farm for the administration console service and associate at least two administration servers (on which the administration console runs) to the server farm.

•Create a server farm for the IVR application and associate at least two call engine servers (on which the IVR application runs) to the server farm.

•Create a server farm for the SIP (call processing) service and associate at least two call engine servers (on which the SIP service runs) to the server farm.

Real servers can belong to multiple server farms. Although the SIP service and IVR application both run on the call engine (real server), you define a separate server farm for each service because the health probes and the session persistence criteria are different for the two services.

The following example shows how to configure a server farm for the administration console on the administration servers:

ACE_1/Admin(config)# serverfarm host CTX-ADMIN

ACE_1/Admin(config-sfarm-host)# failaction purge

ACE_1/Admin(config-sfarm-host)# probe ctx-admin

ACE_1/Admin(config-sfarm-host)# rserver CTX-ADMIN-1

ACE_1/Admin(config-sfarm-host-rs)# inservice

ACE_1/Admin(config-sfarm-host-rs)# exit

ACE_1/Admin(config-sfarm-host)# rserver CTX-ADMIN-2

ACE_1/Admin(config-sfarm-host-rs)# inservice

ACE_1/Admin(config-sfarm-host-rs)# exit

The following example shows how to configure a server farm for the IVR application on the call engine servers:

ACE_1/Admin(config)# serverfarm host IVR_SERVERS

ACE_1/Admin(config-sfarm-host)# failaction purge

ACE_1/Admin(config-sfarm-host)# probe IVR

ACE_1/Admin(config-sfarm-host)# rserver SIPE-1

ACE_1/Admin(config-sfarm-host-rs)# inservice

ACE_1/Admin(config-sfarm-host-rs)# exit

ACE_1/Admin(config-sfarm-host)# rserver SIPE-2

ACE_1/Admin(config-sfarm-host-rs)# inservice

ACE_1/Admin(config-sfarm-host-rs)# exit

The following example shows how to create a server farm for the SIP service on the call engine servers:

ACE_1/Admin(config)# serverfarm host SIP_FARM

ACE_1/Admin(config-sfarm-host)# failaction reassign

ACE_1/Admin(config-sfarm-host)# probe SIP-OPTION

ACE_1/Admin(config-sfarm-host)# rserver SIPE-1

ACE_1/Admin(config-sfarm-host-rs)# inservice

ACE_1/Admin(config-sfarm-host-rs)# exit

ACE_1/Admin(config-sfarm-host)# rserver SIPE-2

ACE_1/Admin(config-sfarm-host-rs)# inservice

ACE_1/Admin(config-sfarm-host-rs)# exit

Configuring Session Persistence

Session persistence ensures that the system directs all messages for a session to the same real server. Session persistence is also known as stickiness.

On the ACE, you configure session persistence by defining sticky groups. The sticky group defines how to identify sessions based on the value of specific fields within the incoming messages.

For the Cisco TelePresence Exchange System, configure a sticky group for each of the server farms.

This section addresses sticky group configuration and includes the following topics:

Creating SIP Header Sticky Groups

The SIP header sticky group identifies sessions based on fields in the SIP message header.

For the call processing service, create a sticky group based on the SIP Call ID field. All messages with the same call ID will be directed to the same real server.

To create a SIP header sticky group, do the following task:

Step 1: ACE_1/Admin(config)# sticky sip-header Call-ID name
Creates a SIP header sticky group, which recognizes sessions based on the Call-ID field in the SIP header. Name is the name of the sticky group.

Step 2: ACE_1/Admin(config-sticky-header)# timeout minutes
Configures a timeout value for the sticky group. The value is the number of minutes that the ACE retains the sticky information for each client session. The default value is 1440 minutes.

Step 3: ACE_1/Admin(config-sticky-header)# serverfarm name
Associates a server farm with this sticky group.

The following example shows how to create a sticky group that uses the SIP call ID field to identify sessions:

ACE_1/Admin(config)# sticky sip-header Call-ID SIP_FARM

ACE_1/Admin(config-sticky-header)# timeout 5

ACE_1/Admin(config-sticky-header)# serverfarm SIP_FARM

Creating HTTP Cookie Sticky Groups

The HTTP cookie sticky group identifies sessions based on the cookie value in the HTTP header. The system directs all messages with the same cookie value to the same server. The ACE can insert a cookie into the server response for the first client message. The ACE uses this cookie value to identify the session, and then forwards this same cookie value in all subsequent client messages.

To create the HTTP cookie sticky group, do the following task:

Step 1: ACE_1/Admin(config)# sticky http-cookie name1 name2
Creates an HTTP cookie sticky group, which recognizes sessions based on the value of the cookie (name1) in the HTTP header. Name2 is the name of the sticky group.

Step 2: ACE_1/Admin(config-sticky-cookie)# cookie insert browser-expire
Enables cookie insertion. The ACE inserts a session cookie in the server response to the client, to ensure stickiness to the same server. The browser-expire keyword allows the client browser to expire the cookie when the session ends.

Step 3: ACE_1/Admin(config-sticky-cookie)# serverfarm name
Associates the sticky group with the specified server farm.

The following example shows how to configure an HTTP cookie sticky group for the administration console:

ACE_1/Admin(config)# sticky http-cookie ctx_1 WEB_STICKY

ACE_1/Admin(config-sticky-cookie)# cookie insert browser-expire

ACE_1/Admin(config-sticky-cookie)# serverfarm CTX-ADMIN

Creating HTTP Header Sticky Groups

The HTTP header sticky group identifies sessions based on the value of fields in the HTTP header. You can configure the sticky group to use a specific portion of the header.

To create an HTTP header sticky group, do the following task:

Step 1: ACE_1/Admin(config)# sticky http-header name1 name2
Creates an HTTP header sticky group. Name1 is the HTTP header name; name2 is the name of the sticky group.

Creates a Layer 7 class map for HTTP server load balancing. The match-any keyword indicates that a message matches this class map if any of the configured match statements are true. The name has a maximum of 64 alphanumeric characters and must not contain spaces.

Configuring Layer 7 Load Balancing Policy Maps

A Layer 7 load balancing policy map specifies the traffic (based on a class map) to send to each server farm for load balancing. The order of classes in the policy map is significant, as traffic is sent to the server farm that is associated with the first matching traffic class in the policy.

Step 2: ACE_1/Admin(config-pmap-lb)# class name
Associates a class map with this policy map. You can associate multiple class maps with a policy map.

Step 3: ACE_1/Admin(config-pmap-lb-c)# sticky-serverfarm name
Specifies that the traffic that matches this class is load balanced to the specified sticky server farm.

Step 4: ACE_1/Admin(config-pmap-lb-c)# exit
Exits load-balancing class configuration mode.

The following example shows how to create a Layer 7 policy map that load balances IVR traffic by using the IVR_STICKY sticky group. The system load balances all other traffic by using the WEB_STICKY sticky group:

ACE_1/Admin(config)# policy-map type loadbalance first-match VXML-LB

ACE_1/Admin(config-pmap-lb)# class IVR

ACE_1/Admin(config-pmap-lb-c)# sticky-serverfarm IVR_STICKY

ACE_1/Admin(config-pmap-lb-c)# class class-default

ACE_1/Admin(config-pmap-lb-c)# sticky-serverfarm WEB_STICKY

Note Class-default is a pre-configured class map that matches all traffic.

The following example shows how to create a policy map to load balance SIP traffic across the SIP_FARM server farm:
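
The SIP example itself is not reproduced above, so the following is a minimal sketch assembled from the preceding steps. The policy name SIP-LB is a hypothetical placeholder; the sketch assumes the SIP_FARM sticky group created earlier:

```
ACE_1/Admin(config)# policy-map type loadbalance sip first-match SIP-LB
ACE_1/Admin(config-pmap-lb)# class class-default
ACE_1/Admin(config-pmap-lb-c)# sticky-serverfarm SIP_FARM
```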

The following example shows how to create a Layer 4 policy map to enable traffic inspection for all SIP traffic:

ACE_1/Admin(config)# policy-map multi-match SIP_INSPECT

ACE_1/Admin(config-pmap)# class SIP_TRAFFIC

ACE_1/Admin(config-pmap-c)# inspect sip

The following example shows how to apply UDP connection timeout settings for all SIP UDP traffic:

ACE_1/Admin(config)# policy-map multi-match UDP_TIMEOUT

ACE_1/Admin(config-pmap)# class SIP_UDP_CLASS

ACE_1/Admin(config-pmap-c)# connection advanced-options UDP-Timeout

Configuring VLAN Interfaces

Each Gigabit Ethernet port must be associated with a VLAN. For redundant configurations of the Cisco TelePresence Exchange System using the ACE, you must also define a fault-tolerant (FT) VLAN. The redundant ACE pair constantly communicate over the dedicated FT VLAN to determine the operating status of each appliance. The standby member uses the heartbeat packet to monitor the health of the active member. The active member uses the heartbeat packet to monitor the health of the standby member. Each ACE peer can also contain one or more FT groups. Each FT group consists of two members: one active context and one standby context. An FT group has a unique group ID that you assign.

You also must configure a different IP address within the same subnet on each appliance for the FT VLAN.

Note Do not use this dedicated VLAN for any other network traffic, including HSRP and data.

For multiple contexts, the FT VLAN resides in the system configuration file. Each FT VLAN on the ACE has one unique MAC address that is associated with it. The ACE uses these device MAC addresses as the source or destination MACs for sending or receiving redundancy protocol state and configuration replication packets.

Note An ACE appliance and an ACE module operating as peers cannot operate as redundant pairs for the Cisco TelePresence Exchange System. System redundancy must employ the same ACE device type and software release.

To configure a VLAN interface, do the following task:

Step 1: ACE_1/Admin(config)# interface vlan vlan_number
Enters configuration mode for the specified VLAN interface.

Step 2: ACE_1/Admin(config-if)# ip address ip-address mask
Configures the IP address and mask for the VLAN interface.

Step 3: ACE_1/Admin(config-if)# alias ip-address mask
Configures an alias IP address that the redundant ACE pair shares on this VLAN interface; the alias address floats to whichever ACE is active. (Required only for redundant ACE configurations.)

Step 4: ACE_1/Admin(config-if)# peer ip address ip-address mask
Configures the IP address and mask that the peer (standby) ACE uses on this VLAN interface. (Required only for redundant ACE configurations.)

Step 5: ACE_1/Admin(config-if)# access-group {input | output} name
Associates the specified access control list (ACL) with the VLAN. The ACE applies the ACL to all incoming traffic (input) or outgoing traffic (output).

Step 6: ACE_1/Admin(config-if)# service-policy {input | output} name
Associates the specified service policy with the VLAN. The ACE applies the service policy to all incoming traffic (input) or outgoing traffic (output). (Not configured on fault-tolerant VLANs.)

Step 7: ACE_1/Admin(config)# ft interface vlan vlan_id
Creates a fault-tolerant VLAN to provide a communication path for updates from the active ACE to its peer (standby). (Required only for redundant ACE configurations.)

Step 8: ACE_1/Admin(config-ft-intf)# ip address ip-address mask
Configures the IP address and mask for the fault-tolerant VLAN interface. (Required only for redundant ACE configurations.)

Step 9: ACE_1/Admin(config-ft-intf)# peer ip address ip-address mask
Specifies the IP address and mask of the ACE peer on the fault-tolerant VLAN. (Required only for redundant ACE configurations.)

Step 10: ACE_1/Admin(config-ft-intf)# no shutdown
Enables the VLAN interface.

Step 11: ACE_1/Admin(config-ft-intf)# exit
Exits fault-tolerant interface configuration mode.

Step 12: ACE_1/Admin(config)# ft peer peer_id
Configures an ACE local redundancy peer.

Step 13: ACE_1/Admin(config-ft-peer)# ft-interface vlan vlan_id
Associates the fault-tolerant (FT) VLAN with the peer.

Note This VLAN ID must also be configured on the switch. Only a Layer 2 definition is required.

Step 14: ACE_1/Admin(config-ft-peer)# heartbeat interval frequency
ACE_1/Admin(config-ft-peer)# heartbeat count number
Configures the heartbeat interval and count for the fault-tolerant peer. Interval values are in milliseconds (ms).

Step 15: ACE_1/Admin(config-ft-peer)# query-interface vlan vlan_id
Defines the actual (routable) VLAN and interface that the fault-tolerant peer uses to send health-check and replication messages. A query interface allows the standby ACE to determine whether the active ACE is down or whether there is a connectivity problem with the FT VLAN, and helps prevent two redundant contexts from becoming active at the same time for the same FT group.

Step 16: ACE_1/Admin(config-ft-peer)# no shutdown
Enables the query interface.

Step 17: ACE_1/Admin(config-ft-peer)# exit
Exits fault-tolerant peer configuration mode.

Step 18: ACE_1/Admin(config)# ft group group_id
Creates a fault-tolerant group for redundancy.

Step 19: ACE_1/Admin(config-ft-group)# peer peer_id
Associates the peer with the fault-tolerant group.

Step 20: ACE_1/Admin(config-ft-group)# no preempt
Disables preemption on the fault-tolerant group. Preemption ensures that the group member with the higher priority always asserts itself and becomes the active member.

Step 21: ACE_1/Admin(config-ft-group)# priority number
Configures the priority of the active group member. Values are 1 to 255. Configure a higher priority for the group on the appliance on which you want the active member to initially reside.

Step 22: ACE_1/Admin(config-ft-group)# associate-context name
Associates a context with the fault-tolerant group. You must associate the local ACE context with the fault-tolerant group. You can assign multiple contexts.

Step 23: ACE_1/Admin(config-ft-group)# inservice
Places the fault-tolerant group in service.

Non-Redundant Configuration

The following example shows how to configure VLAN 340 as the outside interface. The service-policy commands activate the Layer 3 and Layer 4 policies on this VLAN. The Layer 7 load-balancing policies become active because they are encapsulated in the Layer 3 and Layer 4 policies:

ACE_1/Admin(config)# interface vlan 340

ACE_1/Admin(config-if)# ip address 10.22.139.102 255.255.255.240

ACE_1/Admin(config-if)# access-group input ALL

ACE_1/Admin(config-if)# service-policy input remote_mgmt_allow_policy

ACE_1/Admin(config-if)# service-policy input L4-POLICY

ACE_1/Admin(config-if)# service-policy input UDP_TIMEOUT

ACE_1/Admin(config-if)# service-policy input IVR_LB

ACE_1/Admin(config-if)# no shutdown

ACE_1/Admin(config-if)# exit

The following example shows how to configure the VLAN 350 interface as the inside interface:

ACE_1/Admin(config)# interface vlan 350

ACE_1/Admin(config-if)# ip address 10.22.139.113 255.255.255.240

ACE_1/Admin(config-if)# access-group input ALL

ACE_1/Admin(config-if)# service-policy input remote_mgmt_allow_policy

ACE_1/Admin(config-if)# service-policy input UDP_TIMEOUT

ACE_1/Admin(config-if)# service-policy input SIP_INSPECT

ACE_1/Admin(config-if)# no shutdown

ACE_1/Admin(config-if)# exit

Redundant Configuration

The following example shows how to configure VLAN 340 as the outside interface to support redundancy. The service-policy commands activate the Layer 3 and Layer 4 policies on this VLAN. The Layer 7 load-balancing policies become active because they are encapsulated in the Layer 3 and Layer 4 policies:
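
A minimal sketch of such a configuration, following the steps above; the alias address (10.22.139.101) and peer address (10.22.139.103) are hypothetical placeholders for addresses in your deployment:

```
ACE_1/Admin(config)# interface vlan 340
ACE_1/Admin(config-if)# ip address 10.22.139.102 255.255.255.240
ACE_1/Admin(config-if)# alias 10.22.139.101 255.255.255.240
ACE_1/Admin(config-if)# peer ip address 10.22.139.103 255.255.255.240
ACE_1/Admin(config-if)# access-group input ALL
ACE_1/Admin(config-if)# service-policy input remote_mgmt_allow_policy
ACE_1/Admin(config-if)# service-policy input L4-POLICY
ACE_1/Admin(config-if)# service-policy input UDP_TIMEOUT
ACE_1/Admin(config-if)# service-policy input IVR_LB
ACE_1/Admin(config-if)# no shutdown
ACE_1/Admin(config-if)# exit
```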

Configuring the IP Default Route

Configure the default IP route for the inside VLAN to be the ACE inside interface. This configuration ensures that all traffic originating from the Cisco TelePresence Exchange System cluster transits through the ACE.

To define the default IP route (gateway), enter the following command:
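
The command itself is not reproduced above. The following sketch assumes the ACE ip route form, with a hypothetical next-hop gateway address that you must replace with the address from your deployment:

```
ACE_1/Admin(config)# ip route 0.0.0.0 0.0.0.0 10.22.139.97
```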

Configuring the Sticky Resource Class

Create a sticky resource class to reserve the required system resources.

You define the resource requirement as a percentage of the total available resources.

For example, you can create a sticky resource class that allows access to the ACE for no less than 20 percent of the total number of sticky connections that the ACE appliance supports. You must configure a minimum value for sticky to allocate resources for sticky entries, because the sticky software receives no resources under the unlimited (no limit) setting. The maximum value is either the same as the minimum value (equal-to-min) or has no limit.

To configure a sticky resource class and the number of sticky entries supported, do the following task:

Step 1 To define a resource class that allows call stickiness, enter the following command:

ACE_1/Admin(config)# resource-class sticky

ACE_1/Admin(config-resource)#

Step 2 To define the minimum and maximum entries allowed in the sticky resource class table, enter the following commands:
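
The commands themselves are not reproduced above. Based on the 20 percent example earlier in this section, a sketch using the ACE limit-resource command (the percentage shown is illustrative):

```
ACE_1/Admin(config-resource)# limit-resource sticky minimum 20.00 maximum equal-to-min
```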

Assigning the Admin Context to the Sticky Resource Class

You can operate the ACE in a single context or in multiple contexts. Multiple contexts use virtualization to partition the ACE into multiple virtual devices. Each context can contain its own set of policies, interfaces, resources, and administrators.

By default, the system enables a single virtual context known as the Admin context.

Use the member command to associate the sticky resource class to the Admin context.

The following example shows how to assign the sticky resource class to the default Admin context:

ACE_1/Admin(config)# context Admin

ACE_1/Admin(config-context)# member sticky

Configuring ACE Logging Options

You can configure the logging severity level, which specifies the severity of system messages that the ACE logs. The ACE supports eight logging levels. Severity level values are 0 to 7; the lower the level number, the more severe the error.

The ACE logs messages at the specified level and at all numerically lower (more severe) levels. For example, if the logging severity level is 3, the ACE logs messages with a severity level of 0, 1, 2, and 3.

To enable logging of syslog messages on the ACE, do the following task:

Step 1 To enable logging to all output locations, enter the following commands:

ACE_1/Admin# configure

ACE_1/Admin(config)# logging enable

To stop message logging to all output locations, enter the no logging enable command in configuration mode.

Step 2 To enable logging of syslog messages and to assign a security level to specify which syslog messages the system logs, do this task:

a. To enable logging of syslog messages during a console session by using the logging console severity_level configuration mode command, enter the following command:

ACE_1/Admin(config)# logging console 2

By default, the ACE does not display syslog messages during console sessions. To disable console logging, enter the no logging console command in configuration mode.

b. To identify the date and time of a syslog message by using the logging timestamp configuration mode command, enter the following command:

ACE_1/Admin(config)# logging timestamp

By default, the ACE does not generate a timestamp for syslog messages.

c. To identify the severity level of messages that are sent to the syslog server by using the logging trap severity_level configuration mode command, enter the following command:

ACE_1/Admin(config)# logging trap 3

To disable logging of traps, enter the no logging trap command in configuration mode.

d. To enable logging of Simple Network Management Protocol (SNMP) messages and to set the severity level for log messages that are sent to a network management system (NMS) by using the logging history severity_level configuration mode command, enter the following command:

ACE_1/Admin(config)# logging history 7

To disable logging of SNMP messages, enter the no logging history command in configuration mode.

e. To enable system logging to a local buffer and to limit the messages sent to the buffer based on severity level by using the logging buffered severity_level configuration mode command, enter the following command:

ACE_1/Admin(config)# logging buffered 7

f. To change the logging facility to a value other than the default of 20 (LOCAL4) by using the logging facility number configuration mode command, enter the following command:

ACE_1/Admin(config)# logging facility 23

The number can be a value from 16 (LOCAL0) to 23 (LOCAL7).

Most UNIX systems expect messages to use facility 20. The ACE allows you to change the syslog facility type to identify the behavior of the syslog daemon (syslogd) on the host.

To reset the logging facility to the default value of 20, enter the no logging facility command in configuration mode.

g. To specify that the ACE hostname serves as the device ID within the syslog message, enter the following command:

ACE_1/Admin(config)# logging device-id hostname

To disable use of the hostname as the device ID in the syslog message, enter the no logging device-id command.

h. To specify the syslog server (host) that receives the ACE syslog messages, enter the following command:

ACE_1/Admin(config)# logging host ip_address

For the ip_address variable, enter the IP address of the host that serves as the syslog server.

You do not need to specify a port for the syslog server, because the ACE sends syslog messages to UDP port 514 by default.

You can use multiple logging host commands to specify additional servers to receive the syslog messages.

To disable logging of ACE syslog messages to a syslog server, enter the no logging host ip_address command.

i. To control the display of a specific system logging message or to change the severity level that is associated with the specified system logging message by using the logging message syslog_id [level severity_level] configuration mode command, enter the following commands:

ACE_1/Admin(config)# logging message 111088 level 3

ACE_1/Admin(config)# logging message 607002 level 3

ACE_1/Admin(config)# logging message 607004 level 3

ACE_1/Admin(config)# logging message 607005 level 3

To disable logging of the specified syslog message, use the no logging message syslog_id command in configuration mode.