This document describes causes of high CPU utilization on the Cisco
Catalyst 3550 Series Switches. This document also lists common network or
configuration scenarios that can cause high CPU utilization on the Catalyst
3550 Series Switches.

On Cisco Catalyst switches, use the show processes
cpu command in order to identify the causes of high CPU
utilization. The show processes cpu command shows
CPU utilization averaged over the past five seconds, one minute, and five
minutes. CPU utilization numbers do not provide a true linear indication of the
utilization with respect to the offered load. These are some of the major
reasons:

In a real world network, the CPU has to handle various system
maintenance functions, such as network management.

The CPU has to process periodic and event-triggered routing
updates.

There are other internal system overhead operations, such as polling
for resource availability, that are not proportional to traffic
load.
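For illustration, the top of typical show processes cpu output looks like this. The utilization figures are representative placeholders, not output from a specific device:

```
Switch# show processes cpu
CPU utilization for five seconds: 8%/2%; one minute: 7%; five minutes: 8%
 PID Runtime(ms)   Invoked      uSecs   5Sec   1Min   5Min TTY Process
   1        1140      2130        535  0.00%  0.00%  0.00%   0 Chunk Manager
   2        9220     18430        500  0.15%  0.10%  0.09%   0 Load Meter
```

The first figure on the utilization line (8% here) is total CPU utilization; the figure after the slash (2%) is the portion spent at interrupt level, that is, on packet handling.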

The information in this document is based on Catalyst 3550 Series
Switches.

The information in this document was created from the devices in a
specific lab environment. All of the devices used in this document started with
a cleared (default) configuration. If your network is live, make sure that you
understand the potential impact of any command.

Before you look at the CPU packet-handling architecture and
troubleshoot high CPU utilization, you must understand the different ways in
which hardware-based forwarding switches and Cisco IOS® Software-based routers
use the CPU. The common misconception is that high CPU utilization indicates
the depletion of resources on a device and the threat of a crash. A capacity
issue is one of the symptoms of high CPU utilization on Cisco IOS routers.
However, a capacity issue is almost never a symptom of high CPU utilization
with hardware-based forwarding switches.

CPU utilization of 20% to 50% is normal on a Catalyst 3550 Switch,
even under minimal load. CPU utilization does not reflect the total number of
packets being switched or the total load on the switch. The CPU is responsible
for the processing of IP traffic (broadcast, Telnet, SNMP) on the management
VLAN; for the processing of control packets for Spanning Tree Protocol (STP),
Cisco Discovery Protocol (CDP), Dynamic Trunking Protocol (DTP), Port
Aggregation Protocol (PAgP), Link Aggregation Control Protocol (LACP), and
Unidirectional Link Detection (UDLD); and for address learning, routing
protocols, port status, and LED operations. Even extremely high CPU utilization
(around 90% to 99%) does not directly affect the switching of data. However,
high CPU utilization might start to affect protocols such as STP. The CPU on
the Catalyst 3550 is used for management purposes only. The CPU is not used to
forward packets; packet forwarding is handled by ASICs. An increase in CPU
utilization does not affect traffic forwarding.

The first step to troubleshoot high CPU utilization is to check the
Cisco IOS version release notes of your Catalyst 3550 Switch for known IOS
bugs. This way you can eliminate known IOS bugs from your troubleshooting
steps. Refer to Cisco Catalyst 3550 Series Switches Release Notes for a
description of new features, system requirements, limitations, restrictions,
caveats, and troubleshooting information for a particular software release for
Catalyst 3550 Switches.

Generic Routing Encapsulation (GRE) tunnels are not supported on the
Cisco Catalyst 3550 Switch. Even though the CLI commands to configure GRE are
present, GRE is not officially supported. Refer to the Unsupported
VPN Configuration Commands section of Unsupported
CLI Commands for Catalyst 3550 for this information. The reason is
that the Cisco Catalyst 3550 Switch uses hardware-based Cisco Express
Forwarding (CEF) switching, and there is no method to CEF-switch GRE packets.
GRE packets must be encapsulated by the software because the hardware does not
have the capability to encapsulate the packets. Consequently, this traffic is
process switched (software switched), and process-switched traffic can quickly
cause the CPU to spike.
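For reference, these are the kinds of GRE tunnel commands that appear in the CLI but are not supported on this platform. The interface number and addresses are placeholders:

```
Switch(config)# interface Tunnel0
Switch(config-if)# ip address 192.168.1.1 255.255.255.252
Switch(config-if)# tunnel source Loopback0
Switch(config-if)# tunnel destination 10.10.10.2
```

If such a configuration is present on a Catalyst 3550 that shows high CPU utilization, removing the tunnel is the appropriate remedy.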

An extended ping from one interface to another interface on the same
switch can cause high CPU utilization. This can occur when a large number of
ping packets are sent and received. This is expected behavior. The
workaround is to not perform a ping from one interface to another on the same
switch. Refer to Cisco bug ID CSCea19301 for more information.

The VUR_MGR bg process is the Vegas Unicast Routing Manager process,
a platform-specific module that interfaces with Cisco IOS. This process
implements the hardware-independent functionality required in the platform for
unicast routing. Each time an Address Resolution Protocol (ARP) entry is
resolved for a destination, the corresponding entry needs to be programmed in
hardware.

The VUR_MGR bg process is responsible for unicast routing, and its CPU
usage is high if the switch is learning routing information. It is also high if
you see frequent routing changes. Issue the clear ip route
command in EXEC mode to clear the condition. However, this does not prevent the
condition from recurring.
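For example, to clear the entire IP routing table from privileged EXEC mode (the asterisk clears all routes; a specific network and mask can be given instead):

```
Switch# clear ip route *
```

Because this forces the routing table to be rebuilt, it can itself cause a brief CPU spike; use it during a maintenance window if possible.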

The Cisco IOS software process called IP input takes care of
process-switching IP packets. If the IP input process uses unusually high CPU
resources, the switch is process-switching a lot of IP traffic. Refer to
Troubleshooting
High CPU Utilization in IP Input Process for information on how to
troubleshoot high CPU utilization due to the IP input process.
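To see whether the IP Input process is the top CPU consumer, you can filter the command output. The percentages shown here are illustrative placeholders:

```
Switch# show processes cpu | include IP Input
  27     3520000   1234567    285  45.00% 40.00% 38.00%   0 IP Input
```

Sustained high percentages in the 5Sec, 1Min, and 5Min columns for IP Input indicate that a significant amount of IP traffic is being process switched.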

When you insert a GigaStack GBIC in a GBIC module slot, the CPU
utilization increases by six percent. This increase occurs for each GigaStack
GBIC added to the switch. The VegasPM process in the show
processes cpu command output shows this CPU utilization. The
VegasPM process manages the GigaStack GBIC operation on the switch.

As a workaround, if the network design permits, use other types of
GBICs such as fiber. These do not cause additional CPU utilization. Refer to
Cisco bug ID CSCdx90515 for more information.

The TTY Background process is a generic process used by all terminal
lines (console, aux, async, and so on). Normally there should not be any impact
on the performance of the switch, because this process has a lower priority
compared to the other processes that need to be scheduled by the Cisco IOS
software.

If this process shows high CPU utilization, check whether logging
synchronous is configured under line con 0. Refer to Cisco bug ID CSCdy01705
for more information.
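A sketch of how to check for, and then remove, logging synchronous on the console line. Remove it only if you have confirmed it is the cause of the high CPU condition:

```
Switch# show running-config | begin line con 0
Switch# configure terminal
Switch(config)# line con 0
Switch(config-line)# no logging synchronous
Switch(config-line)# end
```

Note that without logging synchronous, console messages can interleave with your typed input, which is a usability trade-off rather than a functional one.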

On the Catalyst 3550 Switch, Layer 3 forwarding of IPv4 in the
Subnetwork Access Protocol (SNAP) can only be done in the software.
SNAP-encapsulated IPv4 packets that are directed to the router MAC address or
the Hot Standby Router Protocol (HSRP) group MAC address (if this is the active
router in the HSRP group) are forwarded to the switch CPU. This action can
potentially cause high CPU utilization levels.

Packets received from media types that require SNAP encapsulation of
IPv4 packets require the switch to forward SNAP-encapsulated packets. In
general, Layer 2 forwarding of IPv4 in SNAP encapsulation takes place in
hardware, unless a VLAN map or port Access Control List (ACL) contains an IP
ACL. However, this hardware-based forwarding cannot take place on the Cisco
Catalyst 3550 Switch.

This is a hardware limitation, and there is no workaround. Refer to
Cisco bug ID CSCed59864 for more information.

ICMP redirect messages are used by routers and switches to notify the
hosts on the data link that a better route is available for a particular
destination. By default, Cisco routers and switches send ICMP redirects.

You can expect the sourcing device to act on the ICMP redirect that
the Catalyst 3550 sends, and to change the next hop for the destination.
However, not all devices respond to an ICMP redirect. If the device does not
respond, the Catalyst 3550 must send redirects for every packet that the switch
receives from the sending device. These redirects can consume a great deal of
CPU resources. The high CPU utilization is caused by the high volume of
ICMP redirect traffic that hits the CPU. The workaround is to disable ICMP
redirects with the no ip redirects interface configuration
command.
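For example, to disable ICMP redirects on the Layer 3 interface that receives the offending traffic (VLAN 10 here is a placeholder for your interface):

```
Switch(config)# interface Vlan10
Switch(config-if)# no ip redirects
Switch(config-if)# end
```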

This scenario can also occur when you have configured secondary IP
addresses. When you configure secondary IP addresses, IP redirects are
automatically disabled. Make sure that you do not manually re-enable IP
redirects when you have configured secondary IP addresses.

Broadcast storms can also cause high CPU utilization. If broadcast
storms are frequent, you might have to look into the design of the network. If
the broadcast storms are occasional, you can configure the Storm Control
feature in order to protect the device against the storm. Refer to
Configuring Port-Based Traffic Control for more information.
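As a sketch, broadcast storm control is enabled per interface with a suppression threshold. The interface and the 20 percent level here are arbitrary examples; choose a threshold appropriate for your traffic profile:

```
Switch(config)# interface FastEthernet0/1
Switch(config-if)# storm-control broadcast level 20
Switch(config-if)# end
```

With this configuration, the port drops broadcast traffic that exceeds 20 percent of the available bandwidth until the traffic rate falls back below the threshold.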