Wednesday, April 18, 2012

Cisco Catalyst Switch Operation and Configuration

As a CCNA, you should have a solid understanding of the operations of an L2 switch.
Terminology such as CSMA/CD, collision domains, transparent bridging, MAC address learning, unknown unicast flooding, and bridging loops shouldn’t be new to you.

The figure below shows the typical operations within a L2 Catalyst switch and the decision processes that take place to forward each frame through it.

The figure below shows the operations and decision processes within a multilayer Catalyst switch.

Operations within a Multilayer Catalyst Switch

Since the TTL value in the L3 packet header is decremented during routing, the IP header checksum must be recalculated.
And since the change to the L3 packet alters the L2 payload, the L2 FCS checksum must also be recalculated.
The packet and frame rewrites are accomplished efficiently in hardware through ASICs.
Catalyst switches use the following tables for their switching operations:

CAM

Content-Addressable Memory
As frames arrive upon switch ports, the source MAC addresses are learned and recorded in the CAM table, along with the port of arrival, the VLAN, and a timestamp.
If a MAC address learned upon a port has moved to another port, the MAC address and timestamp are recorded for the most recent port, and the previous entry is deleted. When a frame from the same source address enters the switch on the same port, its timestamp or aging timer is simply refreshed. MAC addresses are unique, and a host should never be seen on multiple switch ports unless problems exist in the network (MAC address table thrashing / instability). If a switch notices that a MAC address is being learned on alternating switch ports, it generates an error message stating that the MAC address is flapping between interfaces.
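The CAM table can be inspected with the show mac address-table dynamic EXEC command (show mac-address-table on older IOS versions). The output below is only an illustrative sketch – the addresses, VLANs, and ports are made up:

SW1#show mac address-table dynamic
          Mac Address Table
-------------------------------------------
Vlan    Mac Address       Type        Ports
----    -----------       --------    -----
  10    0011.2233.4455    DYNAMIC     Fa0/1
  10    0011.2233.6677    DYNAMIC     Fa0/2
SW1#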

TCAM

Ternary Content-Addressable Memory
In traditional routers, access lists are made up of matching statements that are evaluated in sequential order, which adds time and latency to packet forwarding. In multilayer switches, all the matching processes are implemented in hardware.
TCAM allows a packet to be evaluated against an entire access list in a single table lookup. Most switches have multiple TCAMs so that both the inbound and outbound security and QoS ACLs can be evaluated simultaneously with a L2 or L3 forwarding decision.

Switching and routing traffic through hardware-switching using ASICs is considerably faster than the traditional software-switching of frames via a CPU.
Many ASICs, especially ASICs for L3 routing, use specialized memory referred to as Ternary Content Addressable Memory (TCAM) along with packet-matching algorithms to achieve high performance, whereas CPUs simply use higher processing rates to achieve greater degrees of performance.
ASICs can generally reach throughput at wire speed without performance degradation for advanced features such as QoS marking, ACL processing, and IP packet rewriting.
The L3 switching performance on current-generation Catalyst switches is equal to L2 switching performance in terms of throughput.

Some Catalyst switches support stacking, where a group of switches can be managed as a single entity. Traditionally, the Catalyst 29xx and 35xx switches have supported stacking; however, inter-switch performance is limited and the ability to manage the stack as a single entity has had some restrictions. The Catalyst 3750 Series switches include a stacking technology called StackWise that includes a 32Gbps high-speed backplane and allows up to 9 switches to be completely managed as a single switch. The Catalyst 2960-S Series switches include a stacking technology called FlexStack that includes a 20Gbps high-speed backplane and allows up to 4 switches to be completely managed as a single switch.

The Nexus family of switches is relatively new and targeted for deployment in the data center.
Nexus switches offer lossless delivery at line rate, Fibre Channel over Ethernet (FCoE), and advanced high-availability features, eg: Virtual Port Channel (vPC).
Unfortunately, since Nexus switches are targeted for data centers, they lack some features found in Catalyst switches, eg: inline power support for IP phones.

The current trend is to deploy Nexus switches in the data center and Catalyst switches in the campus.
This is a market transition from earlier designs that used Catalyst switches throughout the enterprise.

The show interfaces EXEC command with an output filter that shows only the interfaces that are in use, along with their input and output data rates, is very useful when assessing the utilization of various connections in a campus network.
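For example, a filter along the following lines displays only the interface status lines and the 5-minute input / output rates (the interface name and rates shown are illustrative):

SW1#show interfaces | include (is up|input rate|output rate)
GigabitEthernet0/1 is up, line protocol is up (connected)
  5 minute input rate 2816000 bits/sec, 312 packets/sec
  5 minute output rate 489000 bits/sec, 346 packets/sec
SW1#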

Gigabit Ethernet interfaces use port negotiation to negotiate various parameters that are related to the physical operation of the interface. Port negotiation is used to exchange the following information:

Flow control capabilities – Allows an interface to advertise whether it supports flow control features.

Remote fault information – Allows an interface to signal faults detected on its side of the link (eg: a link failure) to the remote end.

Note that speed is not included in the list of auto-negotiated parameters, as Gigabit Ethernet interfaces operate only at 1000Mbps (1Gbps); hence there is no need to negotiate the speed.
Gigabit Ethernet interfaces have port negotiation enabled by default, and it does not require modification for most situations. The only time negotiation needs to be disabled is when connecting to a device that does not support port negotiation.
The [no] speed nonegotiate interface subcommand disables / re-enables port negotiation for an interface, even though Gigabit Ethernet port negotiation has nothing to do with speed.
Note: The speed nonegotiate interface subcommand is only available on fiber-based Gigabit Ethernet interfaces.
Caution: When connecting 2 Gigabit Ethernet interfaces with only one end configured with this command, the end configured with this command will be up (connected) while the other end will be down (notconnect)! This is only applicable to fiber connections, for the reason above.
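A minimal sketch of disabling port negotiation on a fiber Gigabit Ethernet interface (the interface number is illustrative):

SW1#configure terminal
SW1(config)#interface GigabitEthernet0/1
SW1(config-if)#speed nonegotiate
SW1(config-if)#end

The no speed nonegotiate interface subcommand re-enables port negotiation on the interface.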

Flow Control on Gigabit Ethernet Interfaces

Flow control is a feature defined in the IEEE 802.3x specification.
It enables a receiving device to signal congestion to a sending device, which allows the sending device to temporarily halt transmission in order to alleviate congestion at the receiving device.

IEEE 802.3x Flow Control

The figure above demonstrates how flow control works. The following events occur:

The transmitter is sending data (frames) to the receiver.
Note that the figure represents only one direction of the connection.
The transmitter and receiver roles are swapped for frames sent in the reverse direction.

The receive buffer on the interface connected to the transmitter is full, causing congestion. This is common when traffic is received by a switch on a high-speed interface and is then forwarded out a lower-speed interface. If the receiver receives any more frames from the transmitter, they are discarded until the congestion is alleviated as the receive buffer is emptied. To avoid this situation, the receiver sends a pause frame with a wait time value to the transmitter, instructing the transmitter to stop sending frames for the specified wait time.

Assuming the transmitter supports flow control, it stops sending frames for the wait time period. After the wait time period is over, the transmitter starts transmitting frames again.

If the receiver clears its receive buffer before the wait time period is over, it sends a pause frame with a wait time value of 0 to inform the transmitter that it can start transmitting again. This ensures that the connection is not left idle if congestion clears before the transmitter is allowed to restart transmission.

The transmitter restarts transmission, either because the wait time has expired or because a pause frame with a wait time of 0 has been received.

If congestion occurs at the receiver interface again, the process above starts again.
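For reference, the pause frames exchanged above have the following general layout as defined in IEEE 802.3x; they are addressed to a reserved multicast address that is never forwarded by bridges, so pause frames only ever travel across a single link:

Destination MAC : 01-80-C2-00-00-01 (reserved multicast)
Source MAC      : MAC address of the sending interface
EtherType       : 0x8808 (MAC Control)
Opcode          : 0x0001 (PAUSE)
Pause time      : 2 bytes – the wait time, in units of 512 bit times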

All Cisco Catalyst switches that include Gigabit Ethernet capabilities include support for flow control. However, the support for flow control might be limited depending upon the type of Gigabit Ethernet port. All Gigabit Ethernet interfaces on all Cisco Catalyst switches support the ability to receive and respond to pause frames – acting like the transmitter in the figure. However, some Gigabit Ethernet ports do not support the ability to send pause frames – acting like the receiver in the figure.
The ability to receive and respond to pause frames is referred to as input flow control;
and the ability to send pause frames is referred to as output flow control.

The table below describes the flow control capabilities of the various Cisco Catalyst switch Ethernet ports:

Oversubscribed ports are ports whose combined bandwidth exceeds (oversubscribes) their connection to the switch backplane.
Uplink ports are non-oversubscribed ports – they have at least a 1Gbps full-duplex connection to the switch backplane.

The show interface {intf-type intf-num} capabilities EXEC command can be used to determine whether a port supports the send and/or receive flow control features.
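A sketch of the relevant output is shown below; the model and capability values are illustrative and vary by platform and module. The Flowcontrol line indicates the supported receive (rx) and send (tx) settings:

SW1#show interface GigabitEthernet0/1 capabilities
GigabitEthernet0/1
  Model:          WS-C3550-24
  Type:           10/100/1000BaseTX
  Speed:          10,100,1000,auto
  Duplex:         half,full,auto
  Flowcontrol:    rx-(off,on,desired),tx-(off,on,desired)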

An example of a module that includes oversubscribed ports is the 18-Port 1000BASE-X module – WS-X4418-GB for the Catalyst 4000 / 4500 Series, which includes 2 uplink 1000BASE-X ports and 16 oversubscribed 1000BASE-X ports.

Internal Bandwidth Allocation for WS-X4418-GB

On the Catalyst 4000 / 4500 Series switch, each module is provided with 3 x 2Gbps full-duplex connections to the switching fabric.
The figure above shows how bandwidth is allocated internally for the WS-X4418-GB module.
Port 1 and Port 2 are uplink ports – they are each allocated 1Gbps full-duplex bandwidth (non-blocking) to the switching backplane. The receive buffers on these ports never experience congestion, as the ingress bandwidth is the same as the egress bandwidth.
Even if frames received on an uplink port are eventually switched out a lower-speed Fast Ethernet port, the congestion occurs at the egress port, as the received frames have already been emptied from the receive buffer of the uplink port and placed into the transmit buffer of the egress port.
Hence, the uplink ports have no requirement for the ability to send flow control pause frames.
Ports 3 – 10 and ports 11 – 18 are oversubscribed ports – the total bandwidth of each group of ports (8Gbps) is shared across a single 2Gbps connection to the backplane (blocking is possible).
The oversubscription rate is 4:1 – if all ports in a group are connected and receiving traffic at the maximum possible rate of 1Gbps, the bandwidth allocated to each port is only 250Mbps, causing congestion on the receive buffers of each port. Therefore, the oversubscribed ports must be able to send flow control pause frames in order to signal congestion to transmitting devices.

Note that flow control should not be configured in conjunction with QoS features on the Catalyst 2950 and Catalyst 3550 switches – all QoS features must be disabled globally before configuring flow control. This restriction does not apply to the Catalyst 4000 / 4500 and Catalyst 6000 / 6500 switches.

QoS features are disabled globally on Catalyst 2950 / 3550 switches by default.
However, it is a good practice to verify that QoS is disabled and disable QoS globally if required.
The no mls qos global configuration command is used to disable QoS globally.
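A quick sketch of verifying and disabling QoS globally on a Catalyst 2950 / 3550:

SW1#show mls qos
QoS is disabled
SW1#configure terminal
SW1(config)#no mls qos
SW1(config)#end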

The flow control capabilities are exchanged during the port negotiation process.
An interface can be configured to process pause frames received and/or send pause frames based on the flow control capabilities indicated by the remote device.
The flowcontrol {receive | send} {on | off | desired} interface subcommand configures whether an interface responds to received pause frames and/or generates pause frames when congestion occurs (if output flow control is supported).
The receive keyword is used to enable or disable a port responding to pause frames.
The send keyword is used to enable or disable the sending of pause frames when congestion occurs.
The on and off keywords enable or disable the specified send or receive feature.
The desired keyword configures an interface to enable the send or receive feature only if the remote device indicates the support of the input or output flow control capability during port negotiation.

Note: On the Catalyst 2950 / 3550 switches, the receive feature is set to off and the send feature is set to desired for Gigabit Ethernet ports by default. The receive feature is set to off as the Catalyst 3550 is a non-blocking switch; if congestion does occur, it normally happens at the transmit queue of a lower-speed port, not at the receive queue (buffer) of a Gigabit Ethernet port.
For Fast Ethernet ports, the receive and send features are set to off by default.

Below demonstrates configuring a Gigabit Ethernet interface to enable the send and receive features only if the remote switch indicates the support of flow control capability during port negotiation.
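A minimal sketch of such a configuration (the interface number is illustrative):

SW1#configure terminal
SW1(config)#interface GigabitEthernet0/1
SW1(config-if)#flowcontrol receive desired
SW1(config-if)#flowcontrol send desired
SW1(config-if)#end
SW1#show interfaces GigabitEthernet0/1 flowcontrol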

Note that the shaded line indicates that input (receive) flow control is off; this is because the interface is connected to a Gigabit Ethernet interface that either does not support output (send) flow control (eg: an uplink port of a Catalyst 4000 / 4500 supervisor or module) or has output flow control disabled. Output (send) flow control is enabled because the remote switch port indicated support for input flow control during the port negotiation process.

Jumbo Frames

Storage technologies such as storage area network (SAN) and network attached storage (NAS) have gained much popularity in recent years. They allow organizations to consolidate storage and provide greater scalability and performance for data center environments.

NAS allows storage devices, eg: disk farms and tape libraries, to be accessible via the data network, reducing the total cost of ownership compared to implementing SANs – out-of-band networks dedicated to storage. Protocols such as SCSI over IP (iSCSI) and Fibre Channel over IP (FCIP) allow servers to mount volumes located on a NAS as virtual file systems.
The amount of traffic generated by a NAS is significant, as file operations are typically data intensive, reading and/or writing sometimes gigabytes of information in a single file request.
The relatively small default Ethernet MTU – a maximum payload of 1500 bytes – means that 6 frames are required to execute even the most basic I/O operation, eg: reading or writing an 8KB block from / to disk.

A NAS must be able to burst large amounts of data with very low latency in order to effectively provide disk access for other storage devices.
Generating large numbers of 1500-byte Ethernet frames increases system utilization as well as latency.
Jumbo frames extend the maximum Ethernet frame size from 1518 bytes up to 9216 [1] bytes and therefore improve data throughput for data-intensive, high-transfer-rate applications, eg: storage (NAS) and video.
[1] The actual value varies upon different Cisco Catalyst switch platforms.

Below describes the 2 types of oversized Ethernet frames:

Baby giants – These frames are up to 1600 bytes in size (MTU of 1548 or 1552 bytes) and accommodate applications such as MPLS, where one or more MPLS labels are inserted between the Ethernet and IP headers.

Jumbo frames – These frames are up to 9216 bytes in size and are designed for applications such as storage that require a large MTU to burst large amounts of information efficiently and with low latency.

Below demonstrates configuring the support for baby giants on a system-wide basis on the Catalyst 2950 / Catalyst 3550 using the system mtu {mtu-size} global configuration command.

SW1#sh system mtu
System MTU size is 1500 bytes
SW1#
SW1#conf t
Enter configuration commands, one per line. End with CNTL/Z.
SW1(config)#system mtu ?
<1500-1546> MTU size in bytes
SW1(config)#system mtu 1546
Changes to the System MTU will not take effect until the next reload is done.
SW1(config)#
SW1(config)#do sh system mtu
System MTU size is 1500 bytes
On next reload, system MTU will be 1546 bytes
SW1(config)#

The change to the system MTU is not applied until the switch is reloaded.
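Enabling jumbo frames requires platform-specific commands; for instance, on platforms such as the Catalyst 3750, the system mtu jumbo {mtu-size} global configuration command sets the MTU for Gigabit Ethernet interfaces. A sketch (the value is illustrative, and this change likewise requires a reload):

SW1(config)#system mtu jumbo 9000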

A Catalyst switch detects error conditions on every switch port for every possible cause by default. If an error condition is detected, the switch port is disabled and placed into the error-disabled state. This behavior can be tuned so that only certain causes can trigger a switch port being disabled.
The [no] errdisable detect cause {all | cause-name} global configuration command enables or disables the specified cause. Repeat this command to enable or disable multiple causes.
For the BPDU Guard and 802.1X security features, the [no] errdisable detect cause {bpduguard | security-violation} shutdown vlan global configuration command can be used to configure the switch to shut down just the offending VLAN on the port instead of shutting down the entire port upon a violation.
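As a sketch, detection can be limited to only the causes of interest by first disabling detection for all causes and then re-enabling specific ones (the chosen causes are illustrative):

SW1#configure terminal
SW1(config)#no errdisable detect cause all
SW1(config)#errdisable detect cause bpduguard
SW1(config)#errdisable detect cause link-flap
SW1(config)#end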

Below lists the causes that can trigger the error disable state:

all – detects every possible cause.

arp-inspection – detects errors with dynamic ARP inspection.

bpduguard – detects when an STP BPDU is received upon a port configured for STP PortFast.

channel-misconfig – detects errors with EtherChannel bundle.

dhcp-rate-limit – detects errors with DHCP snooping.

dtp-flap – detects when trunking encapsulation is changing from one type to another.

gbic-invalid – detects the presence of an invalid GBIC or SFP module.

inline-power – detects errors with inline power.

l2ptguard – detects errors with Layer 2 Protocol Tunneling.

link-flap – detects when the link-state of a port is flapping between the up and down states.

loopback – detects when an interface has been looped back.

mac-limit – detects when the maximum number of secure MAC addresses (Port Security) allowed on an interface is exceeded.

pagp-flap – detects when the ports of an EtherChannel bundle no longer have consistent configurations.

vmps – detects errors when dynamically assigning a port to a VLAN through the VMPS – VLAN Management Policy Server.

By default, error-disabled ports have to be manually re-enabled with the shutdown and no shutdown interface subcommands. The errdisable recovery interval {sec} global configuration command configures a Catalyst switch to recover its ports from the error-disabled state after a specified period of time. The interval can be set from 30 to 86,400 seconds (24 hours).
The causes from which the error-disabled port can recover automatically must be defined using the errdisable recovery cause {all | cause-name} global configuration command.
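A sketch of enabling automatic recovery for BPDU Guard violations after 5 minutes, then verifying the configuration with the show errdisable recovery EXEC command:

SW1#configure terminal
SW1(config)#errdisable recovery cause bpduguard
SW1(config)#errdisable recovery interval 300
SW1(config)#end
SW1#show errdisable recovery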

Note: Although this mechanism protects networks based on Cisco Catalyst and Nexus switches, occurrences of the error-disabled state are highly undesirable and often indicate a serious problem, and therefore should be avoided at all times.