Introduction

The Cisco wireless solution provides the framework to integrate and extend wireless networks efficiently and economically. The solution extends wireless into important elements of the network infrastructure, providing the same level of security, scalability, reliability, ease of deployment, and management for wireless LANs. This document provides information about configuring the Cisco Catalyst 6500 series WLSM in a typical wireless network.

The WLSM is one component in the larger wireless LAN solution. The following are additional required components:

Understanding Wireless LAN Services

The WLSM provides the following features for 802.11 wireless clients on Catalyst 6500 series switches:

•Fast, uninterrupted, secure Layer 2 and Layer 3 wireless roaming

•Radio-management aggregation

•WLSM scalability (support for up to 600 access points)

•Graceful tunnel resiliency and redundancy

•RADIUS assigned mobility group

•Improved multicast support

•Support for 240 mobility groups

•Support for WDS information MIB

Figure 1 shows the system view for the WLSM. Traffic between the access point and the Catalyst 6500 series switch is IP directed. The two devices may be separated by bridges or routers.

Figure 1 WLSM System View

Wireless LAN Context Control Protocol (WLCCP) messages carry authentication message exchanges between the access point and the wireless domain services (WDS) running on the Catalyst 6500 series switch. The Catalyst 6500 series switch acts as an authenticator by learning the location of every associated wireless client node.

The switch learns the MAC-to-IP bindings of the wireless clients either by snooping on the DHCP exchanges or by snooping ARP or IP packets from the wireless nodes. These two learning mechanisms enable the switch to provide uninterrupted Layer 3 mobility to roaming wireless nodes.
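The two learning paths can be modeled as follows. This is an illustrative sketch only (not Cisco source code); the class and method names are hypothetical:

```python
# Sketch: how a switch could learn wireless-client MAC-to-IP bindings
# by snooping DHCP exchanges and ARP/IP packets, as described above.
class BindingTable:
    def __init__(self):
        self.bindings = {}  # client MAC -> client IP

    def learn_dhcp_ack(self, client_mac, assigned_ip):
        # DHCP snooping: the server's ACK carries the authoritative
        # lease, so it overwrites any earlier binding.
        self.bindings[client_mac] = assigned_ip

    def learn_arp(self, sender_mac, sender_ip):
        # ARP/IP snooping covers clients with statically assigned
        # addresses; it does not override a DHCP-learned binding.
        self.bindings.setdefault(sender_mac, sender_ip)

    def lookup(self, mac):
        return self.bindings.get(mac)
```

Either path alone is sufficient for the switch to track a roaming node's Layer 3 identity.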

You configure a multipoint generic routing encapsulation (mGRE) tunnel between the Catalyst 6500 series switch and each access point so that mobile users can roam between access points and maintain Layer 3 connectivity. The multipoint GRE tunnels simulate logical Layer 3 networks between access points, providing an easier and faster solution for Layer 3 roaming.

Understanding WDS

WDS is a feature for access points in Cisco IOS software and the basis of the Catalyst 6500 series WLSM. WDS is a core function that enables other features such as these:

•Fast Secure Roaming

•Wireless LAN Solution Engine (WLSE) interaction

•Radio Management

You must establish relationships between the access points that participate in WDS and the Wireless LAN Services Module before any other WDS-based features work. One purpose of WDS is to reduce the time required for client authentication: the WDS caches user credentials so that the authentication server does not have to validate them on every reauthentication.

In order to use WDS, you must designate one access point or the Wireless LAN Services Module as the WDS. A WDS access point must establish a relationship to an authentication server by authenticating to it with a WDS username and password. The authentication server can be either an external RADIUS server or the Local RADIUS Server feature in the WDS access point. The Wireless LAN Services Module must have a relationship with the authentication server, even though it does not need to authenticate to the server.

Other access points, called infrastructure access points, communicate with the WDS. Before registration occurs, the infrastructure access points must authenticate themselves to the WDS. An infrastructure server group on the WDS defines this infrastructure authentication.

Client authentication is defined by one or more client server groups on the WDS.

When a client attempts to associate to an infrastructure access point, the infrastructure access point passes the credentials of the user to the WDS for validation. If it is the first time that the WDS sees the credentials, it turns to the authentication server to validate the credentials. The WDS then caches the credentials so that it does not have to return to the authentication server when that user attempts authentication again. Reauthentication can occur under any of the following conditions:

•When the access points rekey

•When the client roams between access points

•When the user starts up the client device
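The caching behavior described above can be sketched as follows. This is a hypothetical model for illustration; a real WDS validates full EAP exchanges, not plain username/password pairs:

```python
# Sketch: a WDS-style credential cache that consults the AAA server
# only on a cache miss, so rekeys, roams, and restarts are fast.
class WdsCredentialCache:
    def __init__(self, radius_validate):
        self.radius_validate = radius_validate  # callback to the AAA server
        self.cache = {}
        self.server_queries = 0

    def authenticate(self, user, credential):
        if self.cache.get(user) == credential:
            return True  # fast path: reauth without an AAA round trip
        self.server_queries += 1
        if self.radius_validate(user, credential):
            self.cache[user] = credential  # cached for future reauthentications
            return True
        return False
```

The first authentication pays the full round trip to the authentication server; later reauthentications are served from the cache.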

Any RADIUS-based Extensible Authentication Protocol (EAP) can be tunneled through WDS, such as these protocols:

•Lightweight EAP (LEAP)

•Protected EAP (PEAP)

•EAP-Transport Layer Security (EAP-TLS)

•EAP-Flexible Authentication through Secure Tunneling (EAP-FAST)

The WDS and the infrastructure access points communicate over WLCCP. These multicast messages cannot be routed, so a WDS and its associated infrastructure access points must be in the same IP subnet and on the same LAN segment. Between the WDS and the WLSE, WLCCP uses TCP and User Datagram Protocol (UDP) on port 2887. When the WDS and WLSE are on different subnets, the packets cannot be translated with a protocol like Network Address Translation (NAT).
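A quick way to check the same-subnet requirement is to compare the networks the two addresses fall in. This sketch uses Python's standard `ipaddress` module; the addresses are examples only:

```python
import ipaddress

# Sketch: verify that a WDS and an infrastructure access point share
# an IP subnet, as WLCCP's non-routable multicast messages require.
def same_subnet(ip_a: str, ip_b: str, prefix_len: int) -> bool:
    net_a = ipaddress.ip_interface(f"{ip_a}/{prefix_len}").network
    net_b = ipaddress.ip_interface(f"{ip_b}/{prefix_len}").network
    return net_a == net_b
```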

Current design recommendations specify one WDS access point per thirty infrastructure access points. The Wireless LAN Services Module can handle up to 600 infrastructure access points.

Layer 2 and Layer 3 Mobility

Layer 2 mobility occurs when a wireless LAN client moves between wireless access points that are within the same IP subnet. Layer 3 mobility occurs when a wireless LAN client moves between wireless access points that are in different IP subnets. (See Figure 2.)

Fast secure roaming enables a client to change its connection between access points in the same subnet (Layer 2 mobility) or between subnets (Layer 3 mobility) to support time-sensitive applications such as VoIP, video on demand, VPN over wireless, and client/server-based applications.

Figure 2 Examples of Layer 2 and Layer 3 Mobility

Layer 2 Mobility

Layer 2 mobility occurs when a wireless LAN device physically moves enough so that its radio associates to a different access point. The original and the updated access points offer coverage for the same IP subnet, so that the wireless LAN client is still valid after the roam.

Layer 3 Mobility

Mobility in a wireless LAN environment can present a challenge as the physical reach of the network grows. Applications such as voice require roam times below 150 ms and require IP address continuity regardless of the Layer 3 boundaries that are crossed. Deploying a sprawling Layer 2 network can subject user traffic to delays and loss of service due to issues such as broadcast storms and Spanning Tree Protocol (STP) reconvergence times.

Layer 3 mobility provides a better performing and more scalable approach. Access points may be deployed in any location in a large Layer 3 network without requiring a single VLAN to be carried throughout the wired switch infrastructure. An overlay of multipoint GRE (mGRE) tunnels allows clients to roam to other access points residing on different Layer 3 subnets without loss of connectivity or a change in IP addressing.

The Cisco Layer 3 mobility solution consists of various hardware and software components. For more information about the Cisco wireless solution, go to Cisco.com.

Wireless Domain Services (WDS) coordinates these devices and the mobile nodes. The WDS runs on the WLSM. These components must be configured to work together as a unified system.

Configuring Layer 3 mobility requires linkage between different hardware and software components. Linkage is best accomplished by separating the functional components into modules, configuring each module individually, and verifying that each module works properly before proceeding to the next.

New Features in Release 2.1.1

The following sections describe the new features supported in Release 2.1.1:

Increased Access Point Scalability

Memory and software improvements have increased scalability from 300 to 600 access points.

Multiple WLSMs per Catalyst 6500 Chassis

In Release 2.1.1, the Supervisor 720 now supports two WLSMs in a chassis. In this configuration, only one WLSM can be active; the other is operating in a standby state. If the active WLSM fails, the standby WLSM becomes active in a matter of seconds, and combined with graceful tunnel resiliency, the WLSM switchover is seamless and transparent to the user. New clients and roaming clients are minimally affected because of the short time it takes to bring the standby WLSM to the active state.

Running Hot Standby Router Protocol (HSRP) on all WLSMs achieves intra-switch and inter-switch hot standby WLSM redundancy. In order to avoid unnecessary failovers and make use of the graceful recovery feature, disable preemption for HSRP.

Graceful Tunnel Resiliency

Graceful tunnel resiliency is a high availability feature that provides near Stateful Switchover (SSO) capability. In the event of a WLSM failure, graceful tunnel resiliency maintains data traffic forwarding for all existing authenticated Mobile Nodes (MNs) for a configurable grace period. MN authentication and session states are refreshed without disruption to data traffic after the WLSM reboots or a backup WLSM takes over. Only new authentications and roams are affected while the WLSM is down or in a recovery state.

Support for 240 Mobility Groups

This feature provides increased scalability and flexibility by supporting up to 240 mobility groups. A larger number of mobility groups allows for multiple policies based on user posture validation. Each mobility domain can also be divided into smaller groups to avoid large, flat IP subnets.

No additional WLSM configuration is required for this feature.

Improved Multicast Support

Release 2.1.1 provides an IGMP snooping-based multicast solution. IGMP snooping is performed on the access point to allow forwarding of downstream multicast traffic from the native network infrastructure to clients of dynamic RADIUS-assigned mobility groups. Multicast traffic forwarding for any mobility group can be turned on or off with the CLI on the Supervisor 720.

The Catalyst 6500 series wireless LAN handles multicast traffic differently from unicast IP traffic. When a wireless user sends upstream IP multicast traffic, the access point encapsulates the packet with a GRE header and forwards the packet over the tunnel. The only exception in this scenario (upstream IP multicast traffic flow) is Internet Group Management Protocol (IGMP) join messages, which are locally bridged by the access point to the local infrastructure.

Downstream IP multicast traffic from the Supervisor 720 to the access point is not sent through the fast secure roaming tunnel. Instead, IP multicast traffic sent to the access point is forwarded over the underlying network infrastructure. From the locally bridged IGMP messages, the access point dynamically builds a wireless client-to-multicast group association table. This IGMP snooping operation lets the access point use bandwidth efficiently by forwarding multicast traffic only when an associated client has requested that group. However, because the multicast traffic flow is asymmetric, all network nodes between the supervisor engine and the access point must be configured to allow downstream multicast traffic to reach its destination.
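The access point's group-association table can be sketched like this. The names are hypothetical and this is a simplified model of the IGMP snooping behavior described above:

```python
# Sketch: an access point's IGMP snooping table that forwards
# downstream multicast only to wireless clients that joined the group.
class IgmpSnoopTable:
    def __init__(self):
        self.members = {}  # multicast group IP -> set of client MACs

    def on_igmp_join(self, client_mac, group_ip):
        # Learned from the IGMP join messages the AP bridges locally.
        self.members.setdefault(group_ip, set()).add(client_mac)

    def on_igmp_leave(self, client_mac, group_ip):
        self.members.get(group_ip, set()).discard(client_mac)

    def forward_targets(self, group_ip):
        # An empty set means the AP drops the frame instead of wasting
        # radio bandwidth on clients that never requested the group.
        return self.members.get(group_ip, set())
```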

RADIUS Assigned Mobility Groups

The fast secure roaming tunnels used with the Catalyst 6500 series WLSM are the components of the solution that permit Layer 3 mobility and fast secure roaming. The tunnels can be assigned statically, by associating a network-ID with each SSID at the access point, or dynamically per user through RADIUS authentication. The primary advantage of RADIUS-based mobility group (tunnel) assignment is that it dramatically simplifies access point configuration: the access point needs to be configured with only a single SSID, and users are dynamically assigned the necessary mobility groups. This permits segmenting different user groups on the access point (such as employees, contractors, and guests) into different mobility groups with different network access policies enforced from the Catalyst 6500 series switch.

It is also possible to combine the following deployment models to assign the desired mobility group or fast secure roaming tunnel for clients that use RADIUS authentication:

•Creation of static tunnels for clients that do not support RADIUS authentication

•RADIUS vendor-specific attributes

No extra configuration on the WLSM or Supervisor 720 is required to enable dynamic mobility group assignment. The configuration of the access point and the RADIUS server controls whether mobility groups are dynamically assigned at the access point through the WLSM's authentication transactions. Mobility group/tunnel IDs must be configured on the Supervisor 720 for either static or dynamic mobility group operation.
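The assignment precedence described above can be sketched as a small lookup. The attribute key "mobility-group" is illustrative only, not the actual Cisco RADIUS vendor-specific attribute name:

```python
# Sketch: resolve a client's mobility group, preferring a per-user
# RADIUS attribute over the static SSID-to-network-ID mapping.
def resolve_mobility_group(radius_attrs, ssid, static_ssid_map):
    dynamic = radius_attrs.get("mobility-group")
    if dynamic is not None:
        return dynamic                 # per-user RADIUS assignment wins
    return static_ssid_map.get(ssid)   # fall back to the static mapping
```

This is also how the two deployment models combine: clients that cannot use RADIUS authentication simply fall through to the static tunnel for their SSID.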

Support for WDS Information MIB

Release 2.1.1 greatly improves MIB support for the WLSM by adding the CISCO-WDS-INFO-MIB, which introduces the capability of querying the WLSM for client, access point, and WLSE status and statistics. Network management or custom SNMP applications can use this MIB to retrieve client association, roaming, and performance data.

Configuring the Wireless LAN Services Module

The initial Wireless LAN Services Module configuration consists of the following tasks:

Adding the Wireless LAN Services Module to the Corresponding VLAN

Note By default, the Wireless LAN Services Module is in trunking mode with native VLAN 1.

To add the Wireless LAN Services Module to the corresponding VLAN, perform this task:

Command

Purpose

Router(config)# wlan module mod
allowed-vlan vlan_ID

Configures the VLANs allowed over the trunk to the Wireless LAN Services Module.

Note One of the allowed VLANs must be the admin VLAN.

This example shows how to add a Wireless LAN Services Module that is installed in slot 5 to a specific VLAN:

Router(config)# wlan module 5 allowed-vlan 100

Router(config)# end

Configuring the Loopback Interface

The loopback interface is a software-only virtual interface that emulates an interface.

To configure the loopback interface, perform this task:

Command

Purpose

Step 1

Router(config)# interface loopback number

Configures a loopback interface and enters interface configuration mode. The number argument specifies the number of the loopback interface that you want to create or configure. There is no limit on the number of loopback interfaces that you can create.

Step 2

Router(config-if)# ip address ip_addr [subnet]

Assigns an IP network address and network mask to the interface.

Step 3

Router(config-if)# exit

Exits configuration mode.

The following example shows how to configure a loopback interface:

Router(config)# interface loopback 0

Router(config-if)# ip address 10.1.1.2 255.255.255.0

Router(config-if)# exit

Configuring the Wireless mGRE Tunnel

The infrastructure that enables Layer 3 mobility consists of multipoint Generic Routing Encapsulation (mGRE) tunnels. Each tunnel has a single termination point on the Supervisor 720 module of the Catalyst 6500 that hosts the WLSM. The other logical endpoint of the tunnel exists on all access points participating in the Layer 3 mobility network. A client that associates to a participating access point does so through a particular SSID, which is mapped (either statically or dynamically through RADIUS) to a mobility network that tunnels all client traffic to the Catalyst 6500. The Supervisor 720 maintains a database of the clients (mobile nodes) and the access points to which they are associated. Roaming from one access point to another simply requires updating the database and changing the forwarding information for that mobile node.
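The roam-as-database-update idea can be sketched as follows. This is a hypothetical model, not the Supervisor 720 implementation:

```python
# Sketch: the Supervisor's mobile-node database. A roam only updates
# the node's current AP (the tunnel forwarding target); the client's
# IP address and network-ID are unchanged, so sessions survive.
class MobilityDb:
    def __init__(self):
        self.nodes = {}  # MN MAC -> {"ip": ..., "ap": ..., "network_id": ...}

    def register(self, mac, ip, ap, network_id):
        self.nodes[mac] = {"ip": ip, "ap": ap, "network_id": network_id}

    def roam(self, mac, new_ap):
        # Layer 3 identity persists; only the forwarding entry changes.
        self.nodes[mac]["ap"] = new_ap

    def forwarding_ap(self, mac):
        return self.nodes[mac]["ap"]
```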

To configure wireless mGRE tunnels, perform this task:

Command

Purpose

Step 1

Router(config)# ip dhcp snooping

(Optional) Enables DHCP snooping.

Note This command is required if you enable DHCP snooping on the tunnel interface for untrusted wireless networks.

Sets the authentication and encryption key for all RADIUS communications between the module and the RADIUS server. The radius-server key command has no default value; however, the key must match the encryption key used on the RADIUS server.

Step 6

wlan(config)# wlccp authentication-server
infrastructure leap-devices

Defines a method that authenticates the other access points.

Step 7

wlan(config)# wlccp authentication-server
client any leap-devices

Defines a method that authenticates the client devices (a client server group) and what EAP types those clients use.

This example shows how to configure the Wireless LAN Services Module as the WDS device:

Configuring Local Authentication

To configure the WLSM as a local authenticator, refer to Chapter 8, "Configuring an Access Point as a Local Authenticator," in the Cisco IOS Software Configuration Guide for Cisco Aironet Access Points at this URL:

Configuring the Access Points

To configure the access points to use the WDS, refer to Chapter 11, "Configuring WDS, Fast Secure Roaming, and Radio Management," in the Cisco IOS Software Configuration Guide for Cisco Aironet Access Points at this URL:

Configuring the DHCP Snooping Database

Wireless clients, or mobile nodes, assigned to an untrusted wireless network must be configured to use DHCP to obtain IP addresses from a DHCP server. The switch should have DHCP snooping enabled on the tunnel corresponding to the wireless network. Because the DHCP snooping database is not synchronized between the active and standby Supervisor 720, Cisco recommends that you store the DHCP snooping database on an external server. Storing the database on an external server allows the standby Supervisor to retrieve the accumulated states if a switchover occurs.

To configure DHCP snooping database options, perform these tasks:

Command

Purpose

Router(config)# ip dhcp snooping database
{url}

Specifies the URL that stores the DHCP snooping database entries.

Router(config)# ip dhcp snooping database
write-delay seconds

Specifies (in seconds) how long the database transfer is delayed after the database changes. The default is 300 seconds. The range is from 15 to 86400 seconds.

1. Due to issues with storing the DHCP snooping database on the bootflash device, as documented in caveat CSCee23185, and the limited storage capacity on the bootflash device, we recommend that you store the database on an external server. When a file is stored in a remote location that is accessible through FTP, TFTP, or RCP, a redundant supervisor engine configured with RPR or SSO takes over the database when a switchover occurs.

This example shows how to specify the amount of time before writing DHCP snooping entries:

Router(config)# ip dhcp snooping database write-delay 15

Note When you configure RPR and RPR+ redundancy, you must store the DHCP snooping database to an external server. Otherwise, mobile nodes in an untrusted network will lose connectivity after the supervisor engine switchover.

When you configure SSO redundancy, tunnel endpoints for mobile nodes are always synchronized to the standby supervisor engine. As a result, mobile nodes do not lose connectivity after a supervisor engine switchover, even if DHCP snooping database entries are not stored externally. However, after the switchover, the DHCP snooping database is emptied. Therefore, it is advisable to store the DHCP snooping database externally for all redundancy modes so that the new active supervisor engine retrieves it automatically.

Configuring Graceful Tunnel Resiliency

To configure graceful tunnel resiliency, you need to configure the wireless LAN recovery time on the Supervisor 720. This parameter is set to 0 by default. Setting the recovery time to a value establishes the period of time that the Supervisor 720 maintains data communications with authenticated mobile nodes. If a WLSM failure occurs, the graceful recovery begins and the recovery timer starts.

When the WLSM comes back online, it reauthenticates the mobile nodes at a specific rate determined by the wlccp wds recovery rate value, which is the number of mobile nodes the WLSM reauthenticates per second. The default value is 40 authentications per second.
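The recovery rate directly determines how long full reauthentication takes. A back-of-the-envelope sketch, using the default rate of 40 mobile nodes per second (the node count below is an example value):

```python
import math

# Sketch: estimate post-failover reauthentication time given the
# wlccp wds recovery rate (mobile-node reauthentications per second).
def recovery_seconds(mobile_nodes: int, rate_per_sec: int = 40) -> int:
    return math.ceil(mobile_nodes / rate_per_sec)
```

For example, 600 mobile nodes at the default rate take about 15 seconds to reauthenticate; the configured wireless LAN recovery time should comfortably exceed this.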

No configuration is required on the access points.

To enable and set the wireless LAN recovery time on the Supervisor 720, begin from the Privileged EXEC mode and perform this task:

Command

Purpose

Step 1

Router# configure terminal

Enters configuration mode.

Step 2

Router(config)# wlan recovery time seconds

Specifies the recovery time or grace period in seconds for client operation without refreshing wireless LAN session context after a WLSM failure occurs. The default is 0 (which disables the feature) and the range is 0 to 65535 seconds.

Step 3

Router(config)# end

Exits configuration mode.

Step 4

Router# write mem

Saves the configuration to NVRAM.

To verify or change the WLSM recovery rate setting, open the WLSM console, begin from Privileged EXEC mode, and perform this task:

Command

Purpose

Step 1

WLSM# configure terminal

Enters configuration mode.

Step 2

WLSM(config)# wlccp wds recovery rate rate

Specifies the number of MN reauthentications per second that the AAA server processes after a WLSM comes back online. The recovery rate throttles the load on the AAA server in the event of a WLSM failover. The default is 40 authentications per second, and the range is 0 to 1000.

Step 3

WLSM(config)# end

Exits configuration mode.

Step 4

WLSM# write mem

Saves configuration to NVRAM.

Use the show mobility mn command to check the output on the Supervisor 720 during a recovery period, as shown in the following example:

Router# show mobility mn

MN Mac Address MN IP Address AP IP Address Wireless Network-ID Flags

-------------- ------------- ------------- ------------------- -----

0007.0eb9.3d78 172.16.3.26 10.10.0.67 102 G

Flags: D=Dynamic network ID, F=Fresh, G=Grace Period

You can check the status of a mobile node using the show dot11 associations command on the access point. This mobile node would be shown in a rediscover state, as shown in the following example:

ap# show dot11 associations

802.11 Client Stations on Dot11Radio0:

SSID: [test]

MAC Address IP Address Device Name Parent State

0007.0eb9.3d78 10.10.0.67 350-client testap1 self Rediscover

Configuring Two WLSMs on One Chassis

To configure two WLSMs on the same chassis, use the standby ip command to activate HSRP on each WDS. Beginning in the Privileged EXEC mode, perform this task:

Command

Purpose

Step 1

WLSM# config terminal

Enters configuration mode.

Step 2

WLSM(config)# wlan vlan vlan_ID

Accesses the VLAN used for Supervisor 720 and WLSM communications.

Step 3

WLSM(config-vlan)# standby group_number ip ip_address

Configures the standby HSRP group and virtual IP address.

Step 4

WLSM(config-vlan)# end

Exits configuration mode.

Step 5

WLSM# write mem

Saves the configuration to NVRAM.

WLSM Graceful Tunnel Resiliency Performance Limitations

Performance is limited during the graceful recovery process. During the period that the WLSM is down, you can expect the following limitations:

•No new authentications are allowed.

•If a client attempts to roam, it is deauthenticated.

•When the WLSM is back up, fast roaming (CCKM) is not available and client roaming requires a full reauthentication until the WLSM mobile node session context is refreshed.

Previous versions of wireless LAN software supported only one WLSM per chassis. Release 2.1.1 supports two WLSMs per chassis, which, combined with graceful tunnel resiliency, provides near-seamless intra-chassis WLSM switchover. In a two-WLSM-per-chassis configuration, only one WLSM can be active; the other is designated the standby WLSM. If the active WLSM fails, the standby WLSM takes over. Because the switchover takes place almost instantaneously, you should experience no traffic loss.

Configuration Examples

Figure 3 shows the configuration for a Supervisor 720 and two WLSMs in a single chassis. The Supervisor 720 configuration is a selected portion of a complete configuration; however, the WLSM configuration is complete.

Figure 4 shows an interswitch redundancy configuration. The two switches are connected in a back-to-back configuration using f1/38 on Switch 1 and f2/38 on Switch 2. The access points communicate with the Wireless LAN Services Module through IP address 100.0.0.25, which is the HSRP IP address configured on both Wireless LAN Services Modules.

Figure 4 Sample Interswitch HSRP Topology (One WLSM per Switch)

Switch 1 Configuration

This example shows the configuration of the Wireless LAN Services Module configured with HSRP:

wlan vlan 100

ipaddr 100.0.0.200 255.0.0.0

gateway 100.0.0.100

admin

standby 1 ip 100.0.0.25

!

This example shows the configuration of the tunnel interface on the Supervisor Engine 720:

interface Tunnel252

ip address 113.0.0.1 255.0.0.0

ip helper-address 90.90.90.90

no ip redirects

ip dhcp snooping packets

tunnel source Loopback62

tunnel mode gre multipoint

mobility network-id 252

end

This example shows the configuration of the loopback interface. The loopback interface is configured as the source IP address for the tunnel between the Supervisor Engine 720 and the access point:

interface Loopback62

ip address 62.0.0.1 255.255.255.255

end

This example shows the configuration of VLAN 100. The IP address assigned to VLAN 100 is used as the default gateway on the Wireless LAN Services Module. The Wireless LAN Services Module sends packets destined for the ACS server to the default gateway IP address:

interface Vlan100

ip address 100.0.0.100 255.0.0.0

end

This example shows the configuration of the interface between the Supervisor Engine 720 in Switch 1 and the Supervisor Engine 720 in Switch 2. This interface can be a trunk or access port. This port carries the VLAN that is used for HSRP. In this example, the two Wireless LAN Services Modules use VLAN 100 and HSRP IP address 100.0.0.25.

interface FastEthernet1/38

no ip address

switchport

switchport trunk encapsulation dot1q

switchport trunk allowed vlan 1,6,100

switchport mode trunk

end

Switch 2 Configuration

This example shows the configuration of the Wireless LAN Services Module configured with HSRP:

wlan vlan 100

ipaddr 100.0.0.250 255.0.0.0

gateway 100.0.0.150

admin

standby 1 ip 100.0.0.25

This example shows the configuration of the tunnel interface on the Supervisor Engine 720:

interface Tunnel252

ip address 113.0.0.2 255.0.0.0

ip helper-address 90.90.90.90

no ip redirects

ip dhcp snooping packets

tunnel source Loopback62

tunnel mode gre multipoint

mobility network-id 252

mobility trust

end

This example shows the configuration of the loopback interface. The loopback interface is configured as the source IP address for the tunnel between the Supervisor Engine 720 and the access point:

interface Loopback62

ip address 62.0.0.2 255.255.255.255

end

This example shows the configuration of VLAN 100. The IP address assigned to VLAN 100 is used as the default gateway on the Wireless LAN Services Module. The Wireless LAN Services Module sends packets destined for the ACS server to the default gateway IP address:

interface Vlan100

ip address 100.0.0.150 255.0.0.0

end

This example shows the configuration of the interface between the Supervisor Engine 720 in Switch 2 and the Supervisor Engine 720 in Switch 1. This interface can be a trunk or access port. This port carries the VLAN that is used for HSRP. In this example, the two Wireless LAN Services Modules use VLAN 100 and HSRP IP address 100.0.0.25.

interface FastEthernet2/38

no ip address

switchport

switchport trunk encapsulation dot1q

switchport trunk allowed vlan 1,6,100

switchport mode trunk

end

Use the show wlccp wds mobility command to verify HSRP status:

WLSM> show wlccp wds mobility

LCP link status: up

HSRP state: Active

Total # of registered AP: 3

Total # of registered MN: 2

Tunnel Bindings:

Network ID Tunnel IP MTU EPOC ID FLAGS

========== =============== ========= ========= =======

100 10.80.0.1 1476 0 TB M

101 10.80.0.2 1476 0 TB M

102 10.80.0.3 1476 0 TB M

103 10.80.0.4 1476 0 M

Flags:T=Trusted, B=IP Broadcast enabled, S=TCP MSS Adjust,

M=IP Multicast enabled, I=MN IP Discovery, N=Nonexistent

Use the show mobility status command to check the redundancy status of each WLSM on the Supervisor 720:

Sup720...#show mobility status

Primary WLAN Module is located in Slot: 1 (HSRP State: Active)

LCP Communication status : up

Secondary WLAN Module is located in Slot: 2(HSRP State: Standby)

LCP Communication status : up

WLSM recovery period remaining: 0 seconds

MAC address used for Proxy ARP: 0005.5f54.5800

Number of Wireless Tunnels : 4

Number of Access Points : 3

Number of Mobile Nodes : 1

Wireless Tunnel Bindings:

Tunnel Src IP Address Wireless Network-ID Flags

--------------- --------------- ------------------- -------

Tunnel100 10.80.0.1 100 TB M

Tunnel101 10.80.0.2 101 TB M

Tunnel102 10.80.0.3 102 TB M

Tunnel103 10.80.0.4 103 M

Flags: T=Trusted, B=IP Broadcast enabled, M=IP Multicast enabled

A=TCP Adjust-mss enabled, D=Discover passive MN's IP address

Use the show redundancy states command to check the redundancy status on the Supervisor 720:

HSRP Configuration Guidelines for Interswitch Topology

•NAT tables are not synchronized between the switches; therefore, NAT tables are lost after an interswitch failover.

•In this example, an external DHCP server is mandatory so that the mobile nodes receive the same IP address after an interswitch failover.

•Configure the DHCP server so that it sends both tunnel IP addresses as the default gateways. Although you can specify either of the IP addresses as the default gateway, it is beneficial to the mobile client to see both gateways when they display their IP configuration.

•The Wireless LAN Services Module communicates with the ACS server, the DHCP server, and the Wireless LAN Solution Engine by using the VLAN IP address of the wireless LAN and not the HSRP IP address. Since Router 1 might have equal-cost routes to the VLAN IP subnet of the wireless LAN (100.0.0.0/8), you should configure static routes on Router 1 to reach the VLAN IP addresses of the wireless LAN. For example, Router 1 should point to Switch 1 to reach the Wireless LAN Services Module wireless LAN VLAN IP address in Switch 1, and Router 1 should also point to Switch 2 to reach the Wireless LAN Services Module wireless LAN VLAN IP address in Switch 2.

Note If you do not configure the static routes, Router 1 can still use dynamic routing to send packets to the active Wireless LAN Services Module. However, Router1 sees equal-cost routes for the Wireless LAN Services Module VLAN subnet and uses both switches to send packets to the active Wireless LAN Services Module. As a result, some packets travel an extra hop through the switch with the standby Wireless LAN Services Module. Also, if one of the switches crashes, Router 1 will not know about it immediately, and there is a chance that some packets may be lost during this period.

•The loopback62 interface on both switches is configured with a host route IP address. This IP address is used as the destination IP address for the GRE packets for mobile nodes in tunnel 252. As a result, Router 2 should know the host-specific routes to reach these IP addresses. If OSPF is used, then there will not be any issues because OSPF by default advertises loopback addresses as host routes, and Router 2 can send the tunnel packets to the correct switch.

For example, if Switch 1 has the active Wireless LAN Services Module, the access point sends packets to 62.0.0.1, and if Switch 2 has the active Wireless LAN Services Module, the access point sends packets to 62.0.0.2. Router 2 must know that to reach 62.0.0.1 it should send packets to Switch 1, and to reach 62.0.0.2 it should send packets to Switch 2.

Another option is to configure the IP address for the loopback62 interface for each switch in a different subnet, so that Router 2 sees the different subnets from only one switch.
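With OSPF advertising the loopbacks as host routes, the loopback62 configuration on the two switches might look like this sketch (interface name and addresses are taken from this section):

! Switch 1
interface Loopback62
 ip address 62.0.0.1 255.255.255.255
! Switch 2
interface Loopback62
 ip address 62.0.0.2 255.255.255.255

For the different-subnet option described above, addresses such as 62.0.1.1 and 62.0.2.1 (assumed values) could be used instead, so that Router 2 learns each subnet from only one switch even without host-route advertisement.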

•When using route processor redundancy (RPR) or stateful switchover (SSO), the standby ip configuration in the examples is adequate; there is no need to configure other HSRP options.

•When using route processor redundancy plus (RPR+), you should change the default HSRP timer configuration to avoid unnecessary transitions between the Wireless LAN Services Modules after an RPR+ switchover.

For example, Wireless LAN Services Module 2 (with IP address 100.0.0.250) is the active module and Wireless LAN Services Module 1 (with IP address 100.0.0.200) is the standby module. The HSRP timers are set to the defaults (hello timer of 3 seconds and holdtime timer of 10 seconds). If an RPR+ switchover occurs on Switch 2, Wireless LAN Services Module 1 becomes active. However, from Wireless LAN Services Module 2's point of view, it is still active and keeps sending HSRP hellos, but the hellos do not reach Wireless LAN Services Module 1. Once the system has stabilized after the RPR+ switchover, Wireless LAN Services Module 2 starts seeing the hellos from Wireless LAN Services Module 1. Because Wireless LAN Services Module 2 is already in the active state and its IP address is higher than that of Wireless LAN Services Module 1, Wireless LAN Services Module 2 sends a coup message to Wireless LAN Services Module 1, which returns to the standby state.

To avoid this unnecessary transition of states, enter the standby group_number timers hellotime holdtime command under the wireless LAN VLAN configuration on both Wireless LAN Services Modules to increase the HSRP timers. (For example, set the hello timer to 60 seconds and the holdtime timer to 180 seconds.)
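Entered on both Wireless LAN Services Modules, the resulting configuration might look like this sketch (the wireless LAN VLAN number 100 and the HSRP group number 1 are assumptions for illustration):

wlan(config)# interface vlan 100
wlan(config-if)# standby 1 timers 60 180

With a 60-second hello and 180-second holdtime, the standby module does not declare the active module down during the interval in which an RPR+ switchover suppresses hellos, so no coup exchange occurs afterward.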

Recovering a Lost Password

Note You can download the password recovery script from the Cisco.com software center.

Note You must have access to the supervisor engine to perform the WLSM password recovery procedures. To recover the enable password on the supervisor engine, refer to the software configuration guide for your software platform.

Note To run the password recovery script, the WLSM must be in the application partition (AP).

To recover a lost password on the WLSM, perform this task:

Step 1  Router> enable
        Enters enable mode.

Step 2  Router# copy tftp: pclc#mod-fs:
        Downloads the password recovery script to the specified module.

        Note You can locate this special image on the Cisco.com software center. The image name ends with passwd.recovery.x.x.x.bin, where x.x.x is the image version number.

Step 3  wlan(config)# enable password password
        Specifies a local enable password.

Step 4  wlan(config)# line vty starting-line-number ending-line-number
        Identifies a range of vty lines for configuration and enters line configuration mode.

Step 5  wlan(config-line)# login
        Enables password checking at login.

Step 6  wlan(config-line)# password password
        Specifies a password on the line.

Step 7  wlan(config-line)# end
        Exits line configuration mode.

Step 8  wlan# copy system:running-config nvram:startup-config
        Saves the configuration to NVRAM.

Step 9  Router# hw-module module mod reset cf:4
        Resets the module.

This example shows how to recover a lost password on the WLSM that is installed in slot 5:
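Following the steps above, a session for a module installed in slot 5 might look like this sketch (the vty range 0 4 and the password lab are placeholder values, not values from this document):

Router> enable
Router# copy tftp: pclc#5-fs:
wlan(config)# enable password lab
wlan(config)# line vty 0 4
wlan(config-line)# login
wlan(config-line)# password lab
wlan(config-line)# end
wlan# copy system:running-config nvram:startup-config
Router# hw-module module 5 reset cf:4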

Upgrading the Images

The compact Flash on the Wireless LAN Services Module has two bootable partitions: application partition (AP) and maintenance partition (MP). By default, the application partition boots every time. The application partition contains the binaries necessary to run the wireless LAN image. The maintenance partition is booted if you need to upgrade the application partition.

You can upgrade both the application software and the maintenance software. However, you are not required to upgrade both images at the same time. Refer to the release notes for the Wireless LAN Services Module for the latest application partition and maintenance partition software versions.

Complete application partition and maintenance partition images are stored on the FTP or TFTP server. The images are downloaded and extracted to the application partition or the maintenance partition, depending on which image is being upgraded.

To upgrade the application partition, change the boot sequence to boot the module from the maintenance partition. To upgrade the maintenance partition, change the boot sequence to boot the module from the application partition. Set the boot sequence for the module using the supervisor engine CLI commands. The maintenance partition downloads and installs the application image. The supervisor engine must be executing the run-time image to provide network access to the maintenance partition.

Before starting the upgrade process, you will need to download the application partition image or maintenance partition image to the TFTP server.

A TFTP or FTP server is required to copy the images. The TFTP server must be connected to the switch, and the port connecting to the TFTP server must be assigned to a VLAN on the switch.
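For example, upgrading the application partition of a module in slot 5 might follow this sketch (the slot number, TFTP server address, and image name are assumptions for illustration; cf:4 selects the application partition as in the password recovery procedure above, and cf:1 is assumed to select the maintenance partition):

! Boot the module from the maintenance partition (cf:1 assumed)
Router# hw-module module 5 reset cf:1
! Download the new application image to the module
Router# copy tftp://10.1.1.100/wlsm-app-image.bin pclc#5-fs:
! Boot the module from the upgraded application partition
Router# hw-module module 5 reset cf:4

Upgrading the maintenance partition follows the mirror-image sequence: boot from the application partition, then download the maintenance image.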

Note The SUP_OSBOOTSTATUS system message shows that the application partition (AP) has booted.

Step 9

Console (enable) show module [mod]

Displays that the application partition for the module has booted.

1. To access the MSFC from the switch CLI directly connected to the supervisor engine console port, enter the switch console mod command. To exit from the MSFC CLI and return to the switch CLI, press Ctrl-C three times at the Router> prompt.

Note The SUP_OSBOOTSTATUS system message shows that the maintenance partition (MP) has booted.

Step 9

Console (enable) show module [mod]

Displays that the maintenance partition for the module has booted.

1. To access the MSFC from the switch CLI directly connected to the supervisor engine console port, enter the switch console mod command. To exit from the MSFC CLI and return to the switch CLI, press Ctrl-C three times at the Router> prompt.

