Overview

The case for Quality of Service (QoS) in WANs/VPNs is largely self-evident because of the relatively low-speed links at these Places-in-the-Network (PINs), as compared to Gigabit/Ten Gigabit campus networks, where the need for QoS is sometimes overlooked or even challenged. This is sometimes due to network administrators equating QoS with queuing policies only, whereas the QoS toolset extends considerably beyond just queuing tools. Classification, marking, and policing are all important QoS functions that are optimally performed within the campus network, particularly at the access layer ingress edge (access edge).

Always perform QoS in hardware rather than software when a choice exists. Cisco IOS routers perform QoS in software. This places additional demands on the CPU, depending on the complexity and functionality of the policy. Cisco Catalyst switches, on the other hand, perform QoS in dedicated hardware Application-Specific Integrated Circuits (ASICs) and as such do not tax their main CPUs to administer QoS policies. You can therefore apply complex QoS policies at Gigabit/Ten Gigabit line rates in these switches.

Classify and mark applications as close to their sources as technically and administratively feasible. This principle promotes end-to-end Differentiated Services/Per-Hop Behaviors. Sometimes endpoints can be trusted to set Class of Service (CoS) or Differentiated Services Code Point (DSCP) markings correctly, but this is not always recommended, as users could easily abuse provisioned QoS policies if permitted to mark their own traffic. For example, if DSCP Expedited Forwarding (EF) received priority services throughout the enterprise, a user could easily configure the NIC on a PC to mark all traffic to DSCP EF, thus hijacking network priority queues to service their non-realtime traffic. Such abuse could easily ruin the service quality of realtime applications (like VoIP) throughout the enterprise.

Police unwanted traffic flows as close to their sources as possible. There is little sense in forwarding unwanted traffic only to police and drop it at a subsequent node. This is especially the case when the unwanted traffic is the result of Denial of Service (DoS) or worm attacks. Such attacks can cause network outages by overwhelming network device processors with traffic.

Enable queuing policies at every node where the potential for congestion exists, regardless of how rarely this in fact may occur. This principle applies to campus edge and interswitch links, where oversubscription ratios create the potential for congestion. There is simply no other way to guarantee service levels than by enabling queuing wherever a potential speed mismatch exists.

Protect the control plane and data plane by enabling control plane policing (on platforms supporting this feature) as well as data plane policing (scavenger class QoS) on campus network switches to mitigate and constrain network attacks.

However, before these strategic QoS design principles can be translated into platform-specific configuration recommendations, a few additional campus-specific considerations need to be taken into account and are discussed below.

Medianet Campus QoS Design Considerations

There are several considerations unique to the campus that factor into QoS designs, including:

Internal DSCP

Trust states and operation

Trust boundaries

Port-based, VLAN-based, and per-port/per-VLAN-based QoS

EtherChannel QoS

Internal DSCP

For the most part, Cisco Catalyst switches perform QoS operations by assigning each packet (where “packet” is being used loosely in this chapter to describe Layer 2 frames as well as Layer 3 packets) an internal DSCP value (which is sometimes referred to as a “QoS label”, but is not to be confused with an MPLS label). This internal DSCP value is used to determine if a packet is to be remarked or policed or to which queue it is to be assigned or whether it should be dropped. The internal DSCP value may—or may not be—the same as the actual DSCP value of an IP (IPv4 or IPv6) packet; furthermore, an internal DSCP value is generated even for non-IP protocols (such as Layer 2 protocols like Spanning Tree as well as non-IP Layer 3 protocols like IPX).

The manner in which the internal DSCP value is generated for a packet depends on the trust state of the port on which the packet was received, which is described next.

Trust States and Operation

There are four (static) trust states with which a switch port can be configured:

Untrusted—A port in this trust state disregards any and all Layer 2 or Layer 3 markings that a packet may have and generates an internal DSCP value of 0 (by default, unless explicitly overridden by the [mls] qos cos interface configuration command) for all received packets. This port trust state can be enabled with the interface configuration command no [mls] qos trust.

Note Cisco switches—with the exception of the 4500/4900 family—use the mls prefix for these QoS commands, whereas the 4500/4900 family omits this prefix. Otherwise, these commands are compatible across Cisco Catalyst 2960, 2975, 3560, 3750, 4500, 4900, and 6500 series platforms.

Trust CoS—A port in this trust state accepts the 802.1p CoS marking of an 802.1Q tagged packet and uses this value—in conjunction with the CoS-to-DSCP mapping table—to calculate an internal DSCP value for the packet. The default CoS-to-DSCP mapping table multiplies each CoS value by a factor of 8 to calculate the default internal DSCP (for example, CoS 1 maps to DSCP 8, CoS 2 maps to DSCP 16, and so on); however, the default CoS-to-DSCP mapping table can be modified with the [mls] qos map cos-dscp global configuration command (for example, to map CoS 5 to the non-default DSCP value of EF [46]). In the case of an untagged packet (such as a packet received from the native VLAN) the default internal DSCP value of 0 is applied. This port trust state can be enabled with the interface configuration command [mls] qos trust cos.

Trust IP Precedence—A port in this trust state accepts the IP Precedence (IPP) marking of a packet (that is, the first three bits of the IPv4 or IPv6 Type of Service byte) and uses this value—in conjunction with the IP Precedence-to-DSCP mapping table—to calculate an internal DSCP value for the packet. The default IPP-to-DSCP mapping table multiplies each IPP value by a factor of 8 to calculate the default internal DSCP (for example, IPP 1 maps to DSCP 8, IPP 2 maps to DSCP 16, and so on); however, the default IPP-to-DSCP mapping table can be modified with the [mls] qos map ip-prec-dscp global configuration command (for example, to map IPP 5 to the non-default DSCP value of EF [46]). In the case of a non-IP packet (such as an IPX packet) the default internal DSCP value of 0 is applied. This port trust state can be enabled with the interface configuration command [mls] qos trust ip-precedence; however, it should be noted that this trust state is a legacy state, having been superseded by the trust DSCP state.

Trust DSCP—A port in this trust state accepts the DSCP marking of a packet and sets the internal DSCP value to match. In the case of a non-IP packet (such as an IPX packet), the default internal DSCP value of 0 is applied. This port trust state can be enabled with the interface configuration command [mls] qos trust dscp.
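As a rough illustration, the four static trust states might be configured as follows. This is a Catalyst 3750-style sketch; the interface names are placeholders, and the 4500/4900 family would omit the mls prefix:

```
mls qos
! Untrusted (also the default state once QoS is globally enabled)
interface GigabitEthernet1/0/1
 no mls qos trust
! Trust CoS (802.1p)
interface GigabitEthernet1/0/2
 mls qos trust cos
! Trust IP Precedence (legacy)
interface GigabitEthernet1/0/3
 mls qos trust ip-precedence
! Trust DSCP
interface GigabitEthernet1/0/4
 mls qos trust dscp
!
! Modify the default CoS-to-DSCP map so that CoS 5 maps to EF (46)
! instead of the default 40
mls qos map cos-dscp 0 8 16 24 32 46 48 56
```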

Note While the preceding serves to summarize these port trust states and operations, more complex options and scenarios also exist, as illustrated in Figure 2-15.

In addition to the four static trust states described above, Cisco Catalyst switches also support a dynamic trust state, where the applied trust state for a port can dynamically toggle, depending on a successful endpoint identification exchange and the configured endpoint trust policy. This feature is referred to as conditional trust and automates user mobility for Cisco IP telephony deployments. Conditional trust operation is illustrated in Figure 2-1.

1. The switch and the Cisco IP phone exchange Cisco Discovery Protocol (CDP) messages; once the endpoint is identified as a Cisco IP phone, the switch dynamically applies the configured trust policy to the port.

2. The Cisco IP phone sets CoS to 5 for VoIP and to 3 for call signaling traffic.

3. The Cisco IP phone rewrites CoS from PC to 0.

4. The switch trusts CoS from phone and maps CoS-to-DSCP to generate internal DSCP values for all incoming packets.

Note CDP is a lightweight, proprietary protocol engineered to perform neighbor discovery and as such was never engineered to be used as a security authentication protocol. Therefore, CDP should not be viewed or relied on as secure, as it can easily be spoofed.

The dynamic conditional trust state for Cisco Unified IP phones can be enabled with the interface configuration command [mls] qos trust device cisco-phone. Additionally, newer medianet devices, such as Cisco TelePresence Systems and IP Video Surveillance cameras, can also support conditional trust (on certain platforms with the latest versions of software); these devices use the cts and cisco-camera keywords, respectively.
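A conditional trust configuration might be sketched as follows (the interface name and VLAN numbers are hypothetical values):

```
interface GigabitEthernet1/0/5
 switchport access vlan 10         ! data VLAN (example value)
 switchport voice vlan 110         ! voice VLAN (example value)
 mls qos trust device cisco-phone  ! extend trust only if a Cisco IP phone is detected
 mls qos trust cos                 ! trust state applied upon successful detection
```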

Regardless of how the internal DSCP is generated—either by one of the four static port trust states or by the dynamic conditional trust state—it is important to note that as the packet exits the switch (unless explicitly overridden, as discussed in the following paragraph) the Catalyst switch sets the exiting IP packet’s DSCP value to the final computed internal DSCP value. If trunking is enabled on the exiting switch port, the exiting packet’s CoS value is similarly set to the first three bits of the final computed internal DSCP value.

If an administrator does not want the internal DSCP to overwrite the packet’s ingress DSCP value, they can utilize the DSCP transparency feature, which is enabled by the no mls qos rewrite ip dscp global configuration command. When the DSCP transparency feature is enabled, the packet always has the same DSCP value on egress as it had on ingress, regardless of any internal QoS operations performed on the packet.
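On platforms supporting the feature, DSCP transparency is a single global command:

```
! Global command: the egress DSCP value always equals the ingress DSCP value,
! regardless of any internal QoS operations performed on the packet
no mls qos rewrite ip dscp
```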

Note The DSCP transparency feature is supported on all switching platforms discussed in this chapter, with the exception of the Catalyst 4500/4900 family.

Trust Boundaries

Having reviewed the internal DSCP concept and trust state operations, the administrator needs to consider where to enforce the trust boundary, i.e., the network edge at which packets are trusted (or not).

In line with the strategic QoS classification principle mentioned at the outset of this chapter, the trust boundary should be set as close to the endpoints as technically and administratively feasible.

The reason for the “administratively feasible” caveat within this design recommendation is that, while many endpoints (including user PCs) technically support the ability to mark traffic on their NICs, blanket trust of such markings could easily facilitate network abuse. Users could simply mark all their traffic with Expedited Forwarding, hijacking network priority services for their non-realtime traffic and ruining the service quality of realtime applications throughout the enterprise.

Thus, for many years the recommendation was not to trust traffic from user PCs. More recently, however, various secure endpoint software has been released, such as Cisco Security Agent, that allows PC markings to be centrally administered and enforced. Such centrally-administered software—along with quarantine VLANs for PCs that do not have such software installed—gives network administrators the option to trust such secure endpoint PCs.

Therefore, from a trust perspective, there are three main categories of endpoints:

Untrusted endpoints

Trusted endpoints

Conditionally-trusted endpoints

The optimal trust boundaries and configuration commands for each of these categories of endpoints are illustrated in Figure 2-2.

Figure 2-2 Optimal Trust Boundaries

Port-Based, VLAN-Based, and Per-Port/Per-VLAN-Based QoS

QoS classification (including trust), marking, and policing policies on Cisco Catalyst switches can be applied in one of three ways:

Port-based QoS—When a QoS policy is applied on a per-port basis, it is attached to a specific physical switch port and is active on all traffic received on that specific port (only). QoS policies are applied on a per-port basis, by default. Figure 2-3 illustrates port-based QoS.

Figure 2-3 Port-Based QoS

VLAN-based QoS—When a QoS policy is applied on a per-VLAN basis, it is attached to a logical VLAN interface and is active on all traffic received on all ports that are currently assigned to the VLAN. Applying QoS polices on a per-VLAN basis requires the [mls] qos vlan-based interface command. Figure 2-4 illustrates VLAN-based QoS.

Figure 2-4 VLAN-Based QoS

Per-port/per-VLAN-based QoS—When a QoS policy is applied on a per-port/per-VLAN basis, it is attached to specific VLAN on a trunked port and is active on all traffic received from that specific VLAN from that specific trunked port (only). Per-port/per-VLAN QoS is not supported on all platforms and the configuration commands are platform-specific, and as such is discussed on a per-platform basis later in this chapter. Figure 2-5 illustrates per-port/per-VLAN-based QoS.

Figure 2-5 Per-Port/Per-VLAN-Based QoS

These application options allow for efficiency and granularity. For example, marking policies may be more efficiently scaled when applied on a per-VLAN basis. On the other hand, policies requiring policing granularity are best performed on a per-port/per-VLAN basis. Specifically, if an administrator wanted to police VoIP traffic from IP phones to a maximum of 128 kbps from each IP phone, this could best be achieved by deploying a per-port/per-VLAN policing policy applied to the VVLAN on a given port. A per-port policer would not be sufficient, unless additional classification criteria were provided to specifically identify traffic from the IP phone only; neither could a per-VLAN policy be used, as this would police the aggregate traffic from all ports belonging to the VVLAN to 128 kbps.
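On platforms that support per-port/per-VLAN QoS via hierarchical policy maps (such as the Catalyst 3750 family), the per-phone 128 kbps VoIP policer described above might be sketched as follows; all class, policy, and VLAN names/values are illustrative:

```
! Child class: VoIP traffic (assumed pre-marked EF by the phone)
class-map match-all VOIP
 match ip dscp ef
! Parent class: the voice VLAN on this trunked port
class-map match-all VVLAN
 match vlan 110
!
! Child policy: police VoIP to 128 kbps per port/VLAN
policy-map POLICE-VOIP
 class VOIP
  police 128000 8000 exceed-action drop
!
! Parent policy: apply the child policy only to traffic from VLAN 110
policy-map PER-PORT-PER-VLAN
 class VVLAN
  service-policy POLICE-VOIP
!
interface GigabitEthernet1/0/1
 service-policy input PER-PORT-PER-VLAN
```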

EtherChannel QoS

Another case where logical versus physical interfaces has a bearing on QoS design is when provisioning QoS over EtherChannel interfaces. Multiple Gigabit Ethernet or 10-Gigabit Ethernet ports can be logically bundled into a single EtherChannel interface (which is also known as a PortChannel interface, as this is how it appears in the configuration syntax). From a Layer 2/Layer 3 standpoint, these bundled interfaces are represented—and function—as a single interface.

Two important considerations should be kept in mind when deploying QoS policies on EtherChannel interfaces:

The first EtherChannel QoS design consideration relates to load-balancing. Depending upon the platform, load balancing on the port-channel group can be done in various ways—by source IP address, by destination IP address, by source MAC address, by destination MAC address, by source and destination IP address, or by source and destination MAC address. It should be noted that EtherChannel technology does not take into account the bandwidth of each flow. Instead, it relies on the statistical probability that, given a large number of flows of relatively equal bandwidths, the load is equally distributed across the links of the port-channel group. However, this may not always be true. In general, it is recommended to load-balance based on the source-and-destination IP address, as this allows for statistically-superior load-distribution. And when loads are balanced in this manner, packets belonging to a single flow will retain their packet order.
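On many Catalyst platforms, this recommended load-balancing method can be selected with a single global command:

```
! Hash EtherChannel member selection on both source and destination IP address
port-channel load-balance src-dst-ip
```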

The second EtherChannel QoS design consideration is that EtherChannel technology does not take into account any QoS configuration on the individual Gigabit Ethernet links. Again, it relies on the statistical probability that, given a large number of flows with different QoS markings, the load of those individual flows is equally distributed across the links of the port-channel group. Given a failover situation in which one of the links of an EtherChannel group fails, the sessions crossing that link would be re-allocated across the remaining links. Since EtherChannel technology has no awareness of QoS markings, it could easily re-allocate more real-time flows across any one of the links than the link is configured to accommodate. This could result in degraded real-time services. Incidentally, this scenario could also occur in a non-failover situation. Therefore, caution should be used when deciding to utilize EtherChannel technology versus a single higher-speed uplink port.

When configuring QoS policies over EtherChannel interfaces, the policies must often (but not always) be split two ways:

Ingress policies, such as trust or marking and/or policing policies, are attached to the (logical) PortChannel interface. For example, [mls] qos trust dscp or service-policy input commands would be applied to the PortChannel interface; this is the case for the Catalyst 4500/4500-E and 6500/6500-E series switches. An exception to this is the Catalyst 2960-G/S, 2975-GS, 3560-G/E/X, and 3750-G/E/X family of switches, which require ingress trust/classification/marking/policing policies to be identically configured on each and every EtherChannel physical port-member interface.

Egress queuing policies are applied directly on the (physical) interfaces that compose the EtherChannel bundle. As queuing policies and commands vary by platform and/or linecard, these must be configured according to the platform-specific queuing sections outlined later in this design chapter. This requirement applies to all Catalyst switches discussed in this design chapter.
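A rough sketch of this two-way split, using a Catalyst 6500-style configuration (the queuing command shown is illustrative only; actual queuing commands vary by platform and linecard):

```
! Ingress policy attached to the logical interface
! (Catalyst 4500/4500-E and 6500/6500-E behavior)
interface Port-channel1
 mls qos trust dscp
!
! Egress queuing applied to each physical member interface
interface TenGigabitEthernet1/1
 channel-group 1 mode active      ! LACP bundling into Port-channel1
 wrr-queue bandwidth 30 70        ! illustrative per-member queuing command
```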

Therefore, as there is some slight per-platform variation in EtherChannel QoS configuration, a design example is included within each platform-family.

Campus QoS Models

Generally speaking, there are four main steps to deploying QoS models in the campus:

1. Enable QoS.

2. Apply an ingress QoS model.

3. Apply an egress QoS model.

4. Enable control plane policing (on platforms that support this feature).

These campus QoS deployment steps are illustrated in Figure 2-6 and are discussed in additional detail in the following sections.

Figure 2-6 Campus QoS Deployment Steps

Ingress QoS Models

The ingress QoS model applies either a port trust state or an explicit classification and marking policy to the switch ports (or VLANs, in the case of VLAN-based QoS), as well as optional ingress policers and ingress queuing (as required and supported).

To begin with, the administrator needs to consider what application classes are present at the campus access edge (in the ingress direction) and whether these application classes are sourced from trusted or untrusted endpoints. As previously discussed, if PC endpoint markings are secured and centrally administered, then endpoint PCs can also be considered trusted endpoints; however, in most deployment scenarios this is not the case, and as such PCs are considered as untrusted endpoints for the remainder of this chapter.

Not every application class in the Cisco-modified RFC 4594-based model, shown in Figure 1-9 in Chapter 1, “Enterprise Medianet Quality of Service Design 4.0—Overview”, is present in the ingress direction at the campus access edge, and as such, absent classes do not need to be provisioned for at this node. Specifically, network control traffic should never be received from endpoints, and as such, this class is not needed at the campus access edge. A similar case can be made for OAM traffic, as this traffic is primarily generated by network devices and is collected by management stations, which are typically in a data center or a network control center (and not the campus in general). Also, broadcast video and multimedia streaming traffic would originate from data center servers and would be unidirectional to campus endpoints (and should not be sourced from campus endpoints); therefore, these classes also would not need to be provisioned at the campus access edge.

That being said, of the remaining classes, consideration has to be given to which are sourced from trusted versus untrusted endpoints. Voice traffic is primarily sourced from Cisco IP telephony devices residing in the voice VLAN (VVLAN), and as such can be trusted (optimally, by conditional trust polices to facilitate user mobility, as illustrated in Figure 2-1). On the other hand, voice traffic may also be sourced from PC soft-phone applications, like Cisco Unified Personal Communicator (CUPC). However, because such applications share the same UDP port range as multimedia conferencing traffic (specifically, UDP/RTP ports 16384-32767), from a campus access edge classification standpoint, this renders soft-phone VoIP streams virtually indistinguishable from multimedia conferencing streams (unless NBAR technologies are used at the campus access edge). Unless soft-phone VoIP can be definitively distinguished from multimedia conferencing flows, it is simpler and safer to classify and mark the UDP port range of 16384-32767 as multimedia conferencing flows (AF4), as the alternative could allow multimedia conferencing flows to be admitted into strict priority queues intended (and capacity planned) for VoIP-only.

Realtime interactive flows may be sourced from Cisco TelePresence systems, which—like other Cisco IP telephony products—reside in the VVLAN and can be trusted to mark their own traffic, as shown in Figure 2-7. Cisco TelePresence systems can be configured with either static or conditional trust policies.

Figure 2-7 Cisco TelePresence Conditional Trust Operation

At the campus edge, signaling traffic may be sourced from trusted endpoints (such as Cisco IP phones or Cisco TelePresence systems) as well as from untrusted endpoints (in the case of soft-phone applications running on PC endpoints, like CUPC). Therefore, both cases need to be accounted for with access edge policies.

Data applications, whether transactional, bulk, or best effort, are typically sourced from untrusted PC endpoints, as are scavenger applications.

Traffic sourced from untrusted endpoints requires explicit classification and marking policies. While the number of applications assigned to the five (non-default) untrusted campus access edge application classes shown in Figure 2-9 is virtually limitless—and is a function of the business objectives of the enterprise, as well as the technical proficiency of the network administrators—only a relatively few applications are used in this design chapter to illustrate these design concepts. Specifically, multimedia conferencing applications are sourced from the DVLAN to/from UDP ports 16384-32767. Signaling applications are limited to Skinny Call Control Protocol (SCCP) on TCP ports 2000-2002 and Session Initiation Protocol (SIP) on TCP/UDP ports 5060 and 5061. HTTPS is classified as a transactional data application (as the use of a secure transport implies a transaction). Additionally, a sample Enterprise Resource Planning (ERP) application, namely Oracle, is likewise classified as a transactional data application. FTP and email applications are classified as bulk data, as are PC-backup applications, such as Connected Backup for PC. Various peer-to-peer media sharing applications, such as iTunes, BitTorrent, and Kazaa, are classified as scavenger, as are gaming applications like Microsoft and Yahoo online gaming services. These application classes, along with their classification criteria, are summarized in Figure 2-9.

Figure 2-9 Untrusted Application Classification Examples

Note It is important to note that the list of TCP/UDP ports for applications shown in Figure 2-9 is merely an example list and is not to be taken as an application port list reference. Some application ports are not included in the list above (to simplify the examples that follow); additionally, many applications add or change ports with incremental software revisions (and this list will not be maintained or updated to reflect such revisions).
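An explicit classification and marking policy for a few of the example applications above might be sketched in MQC syntax as follows; the ACL entries and names are illustrative and intentionally incomplete:

```
ip access-list extended MULTIMEDIA-CONFERENCING
 permit udp any any range 16384 32767
ip access-list extended SIGNALING
 permit tcp any any range 2000 2002    ! SCCP
 permit tcp any any range 5060 5061    ! SIP
 permit udp any any range 5060 5061    ! SIP
ip access-list extended TRANSACTIONAL-DATA
 permit tcp any any eq 443             ! HTTPS
!
class-map match-all MULTIMEDIA-CONFERENCING
 match access-group name MULTIMEDIA-CONFERENCING
class-map match-all SIGNALING
 match access-group name SIGNALING
class-map match-all TRANSACTIONAL-DATA
 match access-group name TRANSACTIONAL-DATA
!
policy-map MARK-UNTRUSTED
 class MULTIMEDIA-CONFERENCING
  set dscp af41
 class SIGNALING
  set dscp cs3
 class TRANSACTIONAL-DATA
  set dscp af21
 class class-default
  set dscp default
```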

In addition to explicit marking policies, optional policing policies may also be implemented on the campus access ingress edges to meter and manage flows. For example, voice flows could be policed to 128 kbps, while remaining traffic from the VVLAN (which would for the most part be signaling traffic, with a negligible amount of management traffic) could be policed to 32 kbps. Both VVLAN policers could be configured to drop violating flows, as VoIP and signaling traffic is well defined and well behaved, and traffic bursts in excess of these rates would indicate a network violation or abuse.

Note Policing VoIP to 128 kbps is adequate to support G.711, G.722, and G.729 VoIP codecs. However, other VoIP codecs may require additional bandwidth, such as the Cisco Wideband (L16) codec, which requires 256 kbps + network overhead (for a 320 kbps total). In such cases, the VoIP policers need to be provisioned accordingly.
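These VVLAN policers might be sketched as follows; the VOIP and SIGNALING class maps are assumed to be defined elsewhere, and the rates follow the example above:

```
policy-map VVLAN-POLICE
 class VOIP                                ! e.g., matches RTP from the phone
  set dscp ef
  police 128000 8000 exceed-action drop    ! drop violations; VoIP is well behaved
 class SIGNALING                           ! e.g., matches SCCP/SIP ports
  set dscp cs3
  police 32000 8000 exceed-action drop
```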

In the DVLAN, multimedia conferencing flows come in various resolutions and quality. For example, 384 kbps or 768 kbps H.323 video conferencing streams can be policed at 500 kbps and 1 Mbps, respectively. Higher quality streams, such as 720p or 1080p H.264 streams, can be policed at (approximately) 2 Mbps and 5 Mbps, respectively (depending on motion-handling algorithms and other factors).

Data plane policing policies (discussed in QoS for Security Best Practices in Chapter 1, “Enterprise Medianet Quality of Service Design 4.0—Overview”) can be applied to monitor transactional data, bulk data, and best effort flows, such that these flows are metered, with violations being remarked either to an increased Drop Precedence within a given AF class (such as AF12, AF22, AF32, or AF42, or even to AF13, AF23, AF33, or AF43 in the case of dual-rate policers) or to CS1. What is important is that these packets are not dropped on ingress. For example, each of these classes can be policed to remark at 10 Mbps.

Note This data plane policing rate (of 10 Mbps) is an example value. Such values could vary from enterprise to enterprise, even from department to department within an enterprise. The key is to set data plane policing rates such that approximately 95% of traffic flows for a given application class fall below the metered rate. For the sake of simplicity, a data plane policing rate of 10 Mbps is used for these application classes within this chapter.

Finally, a scavenger class can also be implemented to meter “less than best effort” flows—such as peer-to-peer media sharing applications or gaming applications. Such flows could also be policed to 10 Mbps (which is still only 1% of a GE link’s capacity), but with a more severe penalty for violations, namely dropping rather than remarking.
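A combined sketch of these data plane and scavenger policing policies, in Catalyst 3750-style syntax; the class maps are assumed to be defined elsewhere, and only the transactional data markdown mapping is shown:

```
! Remark violating transactional data from AF21 (18) to CS1 (8)
! rather than dropping it on ingress
mls qos map policed-dscp 18 to 8
!
policy-map DATA-PLANE-POLICING
 class TRANSACTIONAL-DATA
  set dscp af21
  police 10000000 8000 exceed-action policed-dscp-transmit  ! remark at 10 Mbps
 class SCAVENGER
  set dscp cs1
  police 10000000 8000 exceed-action drop                   ! drop, not remark
```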

Once all ports have been set to trust or classify and mark (and optionally police) traffic, then ingress queuing policies may be applied (on platforms that require and support this feature). Ingress queuing details are discussed in the relevant platform-specific sections of this chapter.

It bears repeating that not every application class described here needs to be provisioned for at the access edge. For example, if multimedia conferencing applications are not widely deployed or utilized, then this class (along with the DVLAN signaling class) need not be provisioned at the access edge. Similarly, administrators may choose to simplify their data plane provisioning models, such that rather than explicitly provisioning transactional data, bulk data, and best effort classes, these could be provisioned as an aggregate best effort class (and marked as DF and optionally policed at an aggregate policing level). Likewise, explicitly provisioning a scavenger class is completely optional. Nonetheless, full examples, as described, are shown in this design chapter to provide template configurations which may be simplified as needed (or alternatively, expanded on).

Once ingress traffic has been trusted, classified, and (optionally) policed at the campus access edge, then the ingress QoS model for all campus inter-switch links can be set to trust the DSCP markings of all incoming packets.
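The inter-switch link ingress model then reduces to a single trust statement per uplink (the interface name and description are placeholders):

```
interface TenGigabitEthernet1/0/1
 description Uplink to distribution switch
 mls qos trust dscp   ! DSCP is authoritative once set/verified at the access edge
```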

Egress QoS Models

Egress QoS models primarily deal with queuing and dropping policies (although additional egress QoS features—such as egress policing—are supported on some platforms). As discussed in the previous chapter, critical media applications require service guarantees regardless of network conditions. The only way to provide service guarantees is to enable queuing at any node that has the potential for congestion, regardless of how rarely this may actually occur. This principle applies not only to campus-to-WAN/VPN edges, where speed mismatches are most pronounced, but also to campus inter-switch links, where oversubscription ratios create the potential for instantaneous congestion. There is simply no other way to guarantee service levels than by enabling queuing wherever a speed mismatch exists.

Additionally, because each medianet application class has unique service level requirements, each should optimally be assigned a dedicated queue. However, on platforms bounded by a limited number of hardware queues, no fewer than four queues would be required to support medianet QoS policies in the campus; specifically the following queues would be considered a minimum:

Realtime queue (to support a RFC 3246 EF PHB service)

Guaranteed bandwidth queue (to support RFC 2597 AF PHB services)

Default queue (to support a RFC 2474 DF service)

Bandwidth constrained queue (to support a RFC 3662 scavenger service)

Additionally, given the queuing best practice guidelines outlined in the previous chapter, the following bandwidth allocations are recommended for these queues:

Realtime queue: no more than 33% of the link bandwidth

Default (best effort) queue: at least 25% of the link bandwidth

Bandwidth constrained queue: no more than 5% of the link bandwidth

On some platforms, not only bandwidth allocations may be tuned, but also buffer allocations. Per-queue buffer allocations can be directly proportional to per-queue bandwidth allocations (for example, the buffer allocation for the best effort queue may be set to 25% to match the bandwidth allocation for this queue) or they can be inversely proportional (for example, a strict priority queue which is being serviced in real time would likely not need a corresponding 33% buffer allocation, whereas a bandwidth-constrained queue would benefit from deeper buffers to offset its minimal bandwidth allocation). Tuning buffer allocations has less impact than tuning bandwidth allocations, but serves to complement the scheduling policies. Thus, in this design chapter—wherever possible—the strict-priority and less-than-best-effort queues are tuned to be inversely proportional to their bandwidth allocations, while all other non-priority preferential queues are tuned to be directly proportional to their bandwidth allocations.

Given these minimum queuing requirements and bandwidth and buffer allocation recommendations, the following application classes can be mapped to the respective queues:

Voice, broadcast video, and realtime interactive may be mapped to the realtime queue (per RFC 4594).

Network/internetwork control, signaling, network management, multimedia conferencing, multimedia streaming, and transactional data can be mapped to the guaranteed bandwidth queue. Congestion avoidance mechanisms (i.e., selective dropping tools), such as WRED, can be enabled on this class; furthermore, if configurable drop thresholds are supported on the platform, these may be enabled to provide intra-queue QoS to these application classes, in the respective order they are listed (such that control plane protocols receive the highest level of QoS within a given queue).

Bulk data and scavenger traffic can be mapped to the bandwidth-constrained queue and congestion avoidance mechanisms can be enabled on this class. If configurable drop thresholds are supported on the platform, these may be enabled to provide intra-queue QoS to drop scavenger traffic ahead of bulk data.

Best effort traffic can be mapped to the default queue; congestion avoidance mechanisms can be enabled on this class.

Obviously, if more queues are supported these should be leveraged to give more granular bandwidth guarantees to these respective application classes. Nonetheless, the general application class hierarchy is to provision realtime applications (such as voice, broadcast video and realtime interactive) in a strict priority queue, followed by control plane protocols (including network/internetwork control, signaling [which is control plane traffic for the voice/video infrastructure] and network management), followed by guaranteed bandwidth, non-realtime applications (including multimedia conferencing, multimedia streaming, and transactional data), followed by the default best effort class, followed by bulk data and scavenger applications. Maintaining such an application class hierarchy serves to ensure consistent per-hop behaviors (PHBs).
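As a sketch of such a minimal 1P3Q model, using MQC-style queuing syntax (as supported on, for example, the Catalyst 4500 with Supervisor 6-E); the class names, DSCP groupings, and percentages are illustrative:

```
class-map match-any REALTIME-QUEUE
 match dscp ef cs5 cs4                 ! voice, broadcast video, realtime interactive
class-map match-any GUARANTEED-BW-QUEUE
 match dscp cs7 cs6 cs3 cs2            ! control, signaling, management
 match dscp af41 af42 af43             ! multimedia conferencing
 match dscp af31 af32 af33             ! multimedia streaming
 match dscp af21 af22 af23             ! transactional data
class-map match-any SCAVENGER-BULK-QUEUE
 match dscp cs1 af11 af12 af13         ! scavenger and bulk data
!
policy-map 1P3Q-QUEUING
 class REALTIME-QUEUE
  priority                             ! RFC 3246 EF PHB
 class GUARANTEED-BW-QUEUE
  bandwidth remaining percent 40       ! RFC 2597 AF PHB services
 class SCAVENGER-BULK-QUEUE
  bandwidth remaining percent 5        ! RFC 3662 scavenger service
 class class-default
  bandwidth remaining percent 25       ! RFC 2474 DF service
!
interface TenGigabitEthernet1/1
 service-policy output 1P3Q-QUEUING
```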

Some platforms provide DSCP-to-queue mapping functionality, whereas others (such as some Catalyst 6500 linecards) are limited to CoS-to-queue mapping functionality only. In both cases, it is the value of the internal DSCP that decides the transmit queue to which the packet is assigned; but in the case of CoS-to-queue mapping, internal DSCP values are assigned to queues in blocks of eight. For example, if CoS 1 is mapped to queue 1 (Q1), this means that internal DSCP values 8 through 15 are assigned to Q1; if CoS 2 is assigned to queue 2, this means that internal DSCP values 16-23 are assigned to Q2; if CoS 3 is mapped to queue 3, this means that internal DSCP values 24-31 are assigned to Q3, and so on. Essentially, CoS-to-queue mapping assigns the internal DSCP value that corresponds to the (CoS value * 8), along with the following seven internal DSCP values, to a given queue.
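On platforms in the Catalyst 3750 family, for example, CoS-to-queue assignments are configured with the mls qos srr-queue output cos-map command. The following sketch is illustrative only (the queue and threshold numbers are assumptions, not recommendations):

C3750-E(config)#mls qos srr-queue output cos-map queue 1 threshold 3 5
! CoS 5 (and thus internal DSCP values 40-47) is assigned to queue 1
C3750-E(config)#mls qos srr-queue output cos-map queue 2 threshold 3 2 3
! CoS 2 and 3 (internal DSCP values 16-31) are assigned to queue 2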

In some CoS-to-queue mapping scenarios, certain application classes may not be distinguishable from one another (due to the limited marking granularity of the 3-bit 802.1Q/p CoS model) and as such need to be assigned to the same queues. For example, since realtime interactive traffic (CS4/32) and multimedia conferencing traffic (AF41/34) share the same CoS value (of 4), these could not be mapped to different queues within a CoS-to-queue mapping model. Such considerations are discussed in more detail in the platform-specific sections of this chapter.

In contrast, with DSCP-to-queue mapping, discrete DSCP values can be mapped to specific queues, allowing for better queuing-policy granularity.
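For example, on the Catalyst 3750 family, discrete DSCP values can be assigned to queues with the mls qos srr-queue output dscp-map command; the following sketch uses illustrative queue and threshold numbers:

C3750-E(config)#mls qos srr-queue output dscp-map queue 1 threshold 3 46
! DSCP EF (voice) is assigned to queue 1
C3750-E(config)#mls qos srr-queue output dscp-map queue 2 threshold 2 34 36 38
! AF41/AF42/AF43 (multimedia conferencing) are assigned to queue 2

Note that with this approach CS4 (32) and AF41 (34) could be assigned to different queues, which is not possible with CoS-to-queue mapping.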

A campus egress QoS model example for a platform that supports DSCP-to-queue mapping with a 1P3Q8T queuing structure is shown in Figure 2-12.

Figure 2-12 Campus Egress QoS Model Example

Medianet Campus Port QoS Roles

The policy elements discussed thus far can be grouped into roles that various switch ports serve within the medianet campus architecture, such as:

– Static trust policies should be configured on these ports, preferably DSCP-trust for maximum classification and marking granularity.

– Optional ingress marking or policing policies (such as data plane policing policies) may be configured on these ports.

– Egress queuing policies that support (at a minimum) 1P3QyT queuing should be configured on these ports, preferably with DSCP-to-queue mapping. However, switch platforms/linecards that support 1P7QyT queuing are preferred at the distribution and core layers for increased queuing granularity at these aggregation layers.

– Distribution downlinks (to the access layer) may be configured with microflow policing or User-Based Rate Limiting (UBRL) to provide a potential second line of policing defense for the medianet campus network.

AutoQoS

Due to the complexity of some QoS policies, coupled with the large number of ports on typical Catalyst switches, QoS deployment can often become unwieldy. One option is to make liberal use of the interface range configuration command to deploy policies to multiple interfaces at once. Another option is to use Automatic QoS (AutoQoS), if applicable. Yet another option is to use Smartport macros (which are discussed in a following section).

To address customer demands for simplification of QoS deployment, Cisco developed the AutoQoS feature. AutoQoS is an intelligent macro that allows an administrator to enter one or two simple AutoQoS commands to enable all the recommended QoS settings for one (or more) applications. In its first release, AutoQoS-VoIP, AutoQoS provisioned all the recommended QoS settings for IP telephony deployments for a specific switch port interface.

AutoQoS-VoIP for Catalyst switches supports three modes of operation, all of which are preceded by the auto qos voip interface configuration command:

cisco-phone—This mode is intended for switch ports that may be connected to PCs or Cisco IP phones and sets the port to a conditional trust state, as well as configures mapping and queuing policies for QoS for VoIP.

cisco-softphone—This mode is intended for switch ports that may be connected to PCs running Cisco IP Communicator or similar soft-phone software, and polices VoIP and signaling traffic, as well as configures mapping and queuing policies for QoS for VoIP (note that this feature is not supported on the Catalyst 4500 series of switches).

trust—This mode is intended for switch ports that are within the trusted-boundary (such as inter-switch links, including uplinks and downlinks) or switch ports that are connecting to trusted endpoints, and sets the port to a static trust-dscp state, as well as configures mapping and queuing policies for QoS for VoIP.
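For illustration, the three modes might be applied as follows (the interface numbers are assumptions):

C3750-E(config)#interface GigabitEthernet 1/0/1
C3750-E(config-if)# auto qos voip cisco-phone
! Conditional trust for a port that may connect to a Cisco IP phone
C3750-E(config-if)# exit
C3750-E(config)#interface GigabitEthernet 1/0/2
C3750-E(config-if)# auto qos voip cisco-softphone
! Policing and marking for a port connecting to a PC running soft-phone software
C3750-E(config-if)# exit
C3750-E(config)#interface GigabitEthernet 1/0/49
C3750-E(config-if)# auto qos voip trust
! Static trust for an uplink to another switch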

Note AutoQoS-VoIP is not supported on the Catalyst 4500-E/4900M series switches.

For additional details on AutoQoS-VoIP and the platform-specific commands and settings that it generates, refer to the respective platform’s AutoQoS-VoIP documentation.

Some may naturally ask: why read this lengthy and complex QoS design document when AutoQoS-VoIP exists? AutoQoS-VoIP is an excellent tool for customers who want to enable QoS for VoIP only, who have basic QoS needs, or who do not have the time or desire to do more with QoS.

However, it is important to remember how AutoQoS developed. AutoQoS features are the result of Cisco QoS feature development coupled with Cisco QoS design guides based on large-scale lab testing. AutoQoS-VoIP is the product of the first QoS Solution Reference Network Design (SRND) guide (published in 1999) and the AutoQoS-VoIP feature has not been significantly updated since. Therefore, if the business requirement for QoS is for IP Telephony only, then AutoQoS would be an excellent tool to expedite the QoS deployment.

However, as of August 2010, an updated version of AutoQoS was released for the Catalyst 2960-G/S, 2975-GS, 3560-G/E/X, and 3750-G/E/X family of switches (with IOS release 12.2(55)SE). This release was directly based on the recommendations put forward in this design chapter to support medianet applications; in fact, the new keyword and name for this version of AutoQoS is AutoQoS-SRND4 (taken from Solution Reference Network Design guide version 4, which is the Cisco name for this design chapter). AutoQoS-SRND4 is the fastest and most accurate method to deploy the recommended QoS designs to support rich media applications across this family of switches. Details on this feature, as well as the complete configurations produced, are presented in Cisco Catalyst 2960-G/S, 2975-GS, 3560-G/E/X, and 3750-G/E/X QoS Design.

Note It should be mentioned that—at the time of writing—there are initiatives to update AutoQoS to also support medianet applications on other switching platforms. As these become available, details will be added to this design chapter.

Smartport Macros

Smartport macros apply static (and on some platforms, dynamic) configurations to port or VLAN interfaces. With Smartport macros, longer configuration snippets can be deployed with a single command, with some configuration parameters (such as VLAN IDs and IP addresses) modified dynamically.

Certain Smartport macros are pre-defined, or built in, within Catalyst IOS switch software, such as macros that configure ports to connect to Cisco IP phones (which includes the configuration and execution of AutoQoS-VoIP on the switchport), Cisco Catalyst switches, Cisco routers, and Cisco wireless access points (among other devices).

Additionally, Smartport macros can be deployed on event triggers. The most common event triggers are based on CDP messages received from connected devices. The detection of a device such as a Cisco IP phone, Cisco switch, Cisco router, or Cisco wireless access point invokes a CDP event trigger.

Finally, Smartport macros can be custom defined, such that an administrator can assign a Smartport macro name to a custom configuration snippet and apply the macro statically or have it triggered dynamically by an event.

Control Plane Policing

Control plane policing (CoPP) is a security infrastructure feature available on Catalyst 4500 and 6500 Series switches running Cisco IOS that allows the configuration of QoS policies that rate limit the traffic handled by the main CPU of the switch. This protects the control plane of the switch from direct DoS attacks and reconnaissance activity.

CoPP protects Catalyst 4500 and 6500 switches by allowing the definition and enforcement of QoS policies that regulate the traffic processed by the main switch CPU (route or switch processor). With CoPP, these QoS policies are configured to permit, block, or rate limit the packets handled by the main CPU.

Packets handled by the main CPU, referred to as control plane traffic, typically include:

Routing protocols

Packets destined to the local IP address of the router

Packets from network management protocols, such as SNMP

Interactive access protocols, such as SSH and Telnet

Other protocols, such as ICMP or IP options, that might also require handling by the switch CPU

Layer 2 packets such as BPDU, CDP, DOT1X, etc.

CoPP leverages the modular QoS command line interface (MQC) for its QoS policy configuration. MQC allows the classification of traffic into classes and lets you define and apply distinct QoS policies to separately rate limit the traffic in each class. MQC lets you divide the traffic destined to the CPU into multiple classes based on different criteria. For example, four traffic classes could be defined based on relative importance: critical, normal, undesirable, and default. After the traffic classes are defined, a QoS policy can be defined and enforced for each class according to importance. The QoS policies in each class can be configured to permit all packets, drop all packets, or drop only those packets exceeding a specific rate limit.
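As a hedged sketch of this approach (class names, ACL contents, and rates are purely illustrative, not validated recommendations), an MQC-based CoPP policy has this general form:

ip access-list extended COPP-CRITICAL-ACL
 permit tcp any any eq bgp
! Classifies BGP as critical traffic (example only)
class-map match-all COPP-CRITICAL
 match access-group name COPP-CRITICAL-ACL
policy-map COPP-POLICY
 class COPP-CRITICAL
  police 4000000 conform-action transmit exceed-action drop
 class class-default
  police 500000 conform-action transmit exceed-action drop
control-plane
 service-policy input COPP-POLICY
! Attaches the policy to the control plane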

Note The number of control plane classes is not limited to four, but should be chosen based on local network requirements, security policies, and a thorough analysis of the baseline traffic.

CoPP comes into play right after the switching or routing decision and before traffic is forwarded to the control plane. When CoPP is enabled, the sequence of events (at a high level) is as follows:

1. A packet enters the switch configured with CoPP on the ingress port.

2. The port performs any applicable input port and QoS services.

3. The packet gets forwarded to the switch CPU.

4. The switch CPU makes a routing or a switching decision, determining whether or not the packet is destined for the control plane.

5. Packets destined for the control plane are processed by CoPP and are dropped or delivered to the control plane according to each traffic class policy. Packets that have other destinations are forwarded normally.

The Catalyst 4500 and Catalyst 6500 Series switches implement CoPP similarly; however, CoPP has been enhanced on both platforms to leverage the benefits of their hardware architectures, and as a result each platform provides unique features. Therefore, the CoPP implementations on Catalyst 4500 and Catalyst 6500 Series switches are discussed in platform-specific detail in their respective sections within this chapter. Nonetheless, some general guidelines to deploying CoPP are common to both platforms.

Defining CoPP Traffic Classes

Developing a CoPP policy starts with the classification of the control plane traffic. To that end, the control plane traffic needs to be first identified and separated into different class maps.

The Catalyst 4500 Series switches provide a macro which automatically generates a collection of class maps for common Layer 3 and Layer 2 control plane traffic. While very useful, these predefined class maps might not include all the necessary traffic classes reaching the control plane and as a result might need to be complemented with other user-defined class maps. The Catalyst 6500 Series switches do not provide such a configuration macro; therefore, all class maps need to be defined by the user.

This section presents a classification template that can be used as a model when implementing CoPP on Catalyst 4500 and Catalyst 6500 Series switches. This template presents a realistic classification, where traffic is grouped based on its relative importance and protocol type. The template uses nine different classes, which provide great granularity and make it suitable for real-world environments. It is important to note that, even though you can use this template as a reference, the actual number and type of classes needed for a given network can differ and should be selected based on local requirements, security policies, and a thorough analysis of baseline traffic.

This CoPP template defines these nine traffic classes:

Border Gateway Protocol (BGP)—This class defines traffic that is crucial to maintaining neighbor relationships for the BGP routing protocol, such as BGP keepalives and routing updates. Maintaining BGP is crucial to maintaining connectivity within a network or to an ISP. Sites that are not running BGP would not use this class.

Reporting—This class defines traffic used for generating network performance statistics for reporting. This class would include traffic required for using Cisco IOS IP Service Level Agreements (SLAs) to generate ICMP probes with different DSCP settings in order to report on response times within different QoS data classes.

Monitoring—This class defines traffic used for monitoring a router. This kind of traffic should be permitted but should never be allowed to pose a risk to the router. With CoPP, this traffic can be permitted but limited to a low rate. Examples would include packets generated by ICMP echo requests (ping and traceroute).

Undesirable—This explicitly identifies unwanted or malicious traffic that should be dropped and denied access to the RP. For example, this class could contain packets from a well-known worm. This class is particularly useful when specific traffic destined to the router should always be denied rather than be placed into a default category. Explicitly denying traffic allows you to collect rough statistics on this traffic using show commands and thereby offers some insight into the rate of denied traffic.

Default—This class defines all remaining traffic destined to the route processor (RP) that does not match any other class. MQC provides the default class so you can specify how to treat traffic that is not explicitly associated with any other user-defined classes. It is desirable to give such traffic access to the RP, but at a highly reduced rate. With a default classification in place, statistics can be monitored to determine the rate of otherwise unidentified traffic destined to the control plane. After this traffic is identified, further analysis can be performed to classify it. If needed, the other CoPP policy entries can be updated to account for this traffic.

Note On Catalyst 6500 Supervisors 32 and 720 the default class (class-default) is the only traffic class that matches both IP and non-IP packets.

Deploying CoPP Policies

Because CoPP filters traffic, it is critical to gain an adequate level of understanding about the legitimate traffic destined to the RP prior to deployment. CoPP policies built without proper understanding of the protocols, devices, or required traffic rates involved can block critical traffic, which has the potential of creating a DoS condition. Determining the exact traffic profile needed to build the CoPP policies might be difficult in some networks.

The following steps employ a conservative methodology that facilitates the process of designing and deploying CoPP. This methodology uses iterative ACL configurations to help identify and to incrementally filter traffic.

To deploy CoPP, it is recommended that you perform these steps:

Step 1 Determine the classification scheme for your network.

Identify the known protocols that access the RP and divide them into categories using the most useful criteria for your specific network. In the case of the Catalyst 4500 Series switch, you can take advantage of the system predefined classes and choose to combine them with your own classes. In the case of the Catalyst 6500 there are no predefined classes, so you need to define all the classes. As an example of classification, the nine-class template presented earlier in this section (BGP, IGP, interactive management, file management, reporting, monitoring, critical applications, undesirable, and default) uses a combination of relative importance and traffic type. Select a scheme suited to your specific network, which might require a larger or smaller number of classes.

Step 2 Define classification access lists.

Configure each ACL to permit all known protocols in its class that require access to the RP. At this point, each ACL entry should have both source and destination addresses set to any. In addition, the ACL for the default class should be configured with a single entry, permit ip any any. This matches traffic not explicitly permitted by entries in the other ACLs. After the ACLs have been configured, create a class map for each class defined in Step 1, including one for the default class. Then assign each ACL to its corresponding class map.

Note In this step you should create a separate class map for the default class, rather than using the class default available in some platforms. Creating a separate class map and assigning a permit ip any any ACL allows you to identify traffic not yet classified as part of another class.

Each class map should then be associated with a policy map that permits all traffic, regardless of classification. The policy for each class should be set as conform-action transmit exceed-action transmit.
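Steps 1 and 2 might be sketched as follows (class and ACL names are illustrative); note that during this discovery phase every class, including the explicit default, is set to transmit for both conform and exceed actions:

ip access-list extended COPP-MONITORING-ACL
 permit icmp any any echo
 permit icmp any any echo-reply
ip access-list extended COPP-DEFAULT-ACL
 permit ip any any
! Catches traffic not explicitly permitted by the other ACLs
class-map match-all COPP-MONITORING
 match access-group name COPP-MONITORING-ACL
class-map match-all COPP-DEFAULT
 match access-group name COPP-DEFAULT-ACL
policy-map COPP-DISCOVERY
 class COPP-MONITORING
  police 1000000 conform-action transmit exceed-action transmit
 class COPP-DEFAULT
  police 1000000 conform-action transmit exceed-action transmit
! All traffic is transmitted at this stage; the counters are used for discovery only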

Step 3 Review the identified traffic and adjust the classification.

Ideally, the classification performed in Step 1 identified all required traffic destined to the router. However, realistically, not all required traffic is identified prior to deployment, and the permit ip any any entry in the default class ACL logs a number of packet matches. Some form of analysis is required to determine the exact nature of the unclassified packets. For example, you can use the show access-lists command to see the entries in the ACLs that are in use and to identify any additional traffic sent to the RP. Analyzing the unclassified traffic itself, however, may require additional techniques, such as ACL entry logging or a packet analyzer.

When traffic has been properly identified, adjust the class configuration accordingly: remove the ACL entries for protocols that are not used, and add entries for each newly identified protocol.

Step 4 Restrict a macro range of source addresses.

Refine the classification ACLs by only allowing the full range of the allocated CIDR block to be permitted as the source address. For example, if the network has been allocated 172.68.0.0/16, then permit source addresses from 172.68.0.0/16 where applicable.

This step provides data points for devices or users from outside the CIDR block that might be accessing the equipment. An external BGP (eBGP) peer requires an exception, because the permitted source addresses for the session lie outside the CIDR block. This phase might be left on for a few days to collect data for the next phase of narrowing the ACL entries.

Step 5 Narrow the source addresses in the classification ACLs.

Increasingly limit the source addresses in the classification ACLs to permit only sources that communicate with the RP. For example, only known network management stations should be permitted to access the SNMP ports on a router.

Step 6 Refine CoPP policies by implementing rate limiting.

Use the show policy-map control-plane command to collect data about the actual policies in place. Analyze the packet count and rate information and develop a rate limiting policy accordingly. At this point, you might decide to remove the class map and ACL used for the classification of default traffic. If so, you should also replace the previously defined policy for the default class by the class default policy.
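For example (the rates shown are illustrative; Table 2-1 provides a validated template):

show policy-map control-plane
! Displays per-class packet counts and rates for the attached CoPP policy
policy-map COPP-POLICY
 class COPP-UNDESIRABLE
  police 32000 conform-action drop exceed-action drop
! Undesirable traffic is dropped regardless of rate
 class class-default
  police 500000 conform-action transmit exceed-action drop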

A tested and validated set of CoPP rates is presented in Table 2-1. It is important to note that the values presented here are solely for illustration purposes, as every environment has different baselines.

Table 2-1 Example Control Plane Policing Rate Limits and Actions

Traffic Class                 Rate (bps)    Conform Action    Exceed Action
Border Gateway Protocol       4,000,000     Transmit          Drop
Interior Gateway Protocol     300,000       Transmit          Drop
Interactive Management        500,000       Transmit          Drop
File Management               6,000,000     Transmit          Drop
Monitoring                    900,000       Transmit          Drop
Critical Applications         900,000       Transmit          Drop
Undesirable                   32,000        Drop              Drop
Default                       500,000       Transmit          Drop

This CoPP classification template, deployment model, and rate limits are used in the Catalyst 4500 and 6500 CoPP configuration examples later in this chapter.

The Cisco Catalyst 2960-G/S, 2975-GS, 3560-G/E/X, and 3750-G/E/X family of switches all support the (previously discussed) minimum requirements for medianet switches, including Gigabit Ethernet support, as well as supporting a strict priority hardware queue with at least three additional hardware queues.

The specific switch hardware configurations that meet these requirements are shown below, by switch family.

Note These are the current shipping hardware configurations for these switching families at the time of writing. Additional configuration options may be added over time. As long as future hardware configuration options include the minimum requirements for medianet campus switches (namely, the support of Gigabit interfaces, along with a strict priority hardware queue and at least three additional non-priority hardware queues), these can also be deployed across medianet campus network infrastructures according to the guidelines presented in this chapter.

At a high level, the major differences between these switch product families are as follows. The Catalyst 2960-G, 2960-S, and 2975-GS are Layer 2-only switches, while the 3560-G, 3560-E, 3560-X, 3750-G, 3750-E, and 3750-X support Layer 2/Layer 3 multilayer switch feature sets. Additionally, the Catalyst 2960-G, 3560-G, 3560-E, and 3560-X are standalone switches, while (some models of) the Catalyst 2960-S and the Catalyst 2975-GS, 3750-G, 3750-E, and 3750-X are stackable switches. The Catalyst 2975-GS and 3750-G support stacking with Cisco StackWise technology, the 3750-E and 3750-X use StackWise Plus technology, and the models of the 2960-S family that support stacking do so using FlexStack technology. All of these Catalyst switches support a dual counter-rotating ring, which effectively serves as the switching backplane; these rings are internal for non-stackable switches, but external (via special cables) for stackable switches. These rings operate at 16 Gbps each (for a total switching capacity of 32 Gbps) on the Catalyst 2960-G and 2975-GS series, at 20 Gbps each (for a total switching capacity of 40 Gbps) on the 2960-S series, and at 32 Gbps each (for a total switching capacity of 64 Gbps) on the 3750-G, 3750-E, and 3750-X series switches.

Note For additional product-specific details, refer to the product data sheets for each switch product family.

The major feature and functionality differences between these switch product families are summarized in Table 2-2.

While these switches have some major feature and functionality differences, their QoS feature set and command syntax are virtually identical, with a few minor differences, as is discussed in the following section.

Platform-Specific QoS Considerations

The Cisco Catalyst 2960-G/S, 2975-GS, 3560-G/E/X, and 3750-G/E/X switches have virtually identical QoS feature sets and as such are discussed collectively; additionally, for brevity, these switches are collectively referred to as the Catalyst 3750-E, except when discussing switch-specific differences. The complete QoS model for the Catalyst 3750-E is shown in Figure 2-14.

Figure 2-14 Catalyst 3750-E QoS Model

Traffic is classified on ingress, based on trust-states, access-lists, or class-maps. Marking or policing policies can be applied to physical switch ports or—on multilayer switch platforms—to Switch Virtual Interfaces (SVIs), which allows for per-VLAN or per-port/per-VLAN policies.

Because the total inbound bandwidth of all ports can exceed the bandwidth of the stack or internal ring, ingress queues are located after the packet is classified, policed, and marked and before packets are forwarded into the switch fabric (i.e., the internal or stack rings). Because multiple ingress ports can simultaneously send packets to an egress port (such as an uplink port) and cause congestion, outbound queues are located after the stack or internal rings. The queuing scheduler is Shared Round Robin (SRR), and the dropping algorithm is Weighted Tail Drop (WTD), both of which are discussed in more detail in Queuing Models.

Relating to QoS, these key switch-specific differences exist:

The Catalyst 2960-G/S and 2975-GS do not support multilayer switching and as such do not correspondingly support per-VLAN or per-port/per-VLAN policies.

The Catalyst 2960-G and 2975-GS can only police to a minimum rate of 1 Mbps; all other platforms within this switch product family can police to a minimum rate of 8 kbps (with the exception of the 2960-S, which, although it can be configured to police at 8 kbps, can only police at a minimum rate of 16 kbps).

The Catalyst 2960-S does not support ingress queuing.

The Catalyst 2960-S does not support a “class-default” class-map.

Only the Catalyst 3560-E/X and 3750-E/X support IPv6 QoS.

Only the Catalyst 3560-E/X and 3750-E/X support policing on 10 Gigabit Ethernet interfaces.

Only the Catalyst 3560-E/X and 3750-E/X support SRR shaping weights on 10 Gigabit Ethernet interfaces (SRR shaping weights are discussed in more detail in Queuing Models).

Other than these key exceptions, the following commands and configurations work across these switch platforms (unless explicitly noted otherwise).

Enabling QoS

On all the switching platforms discussed in this chapter (with the exception of the Catalyst 4500-E/4900M) QoS needs to be explicitly enabled, as it is disabled by default. This is a critical first step to deploying QoS on these platforms. If this small—but important—step is overlooked, this can lead to frustration in troubleshooting QoS problems; this is because the switch software accepts QoS commands and even displays these within the switch configuration, but none of the QoS commands are active until the mls qos global command is enabled, as shown in Example 2-1.
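A minimal sketch of this first step:

C3750-E(config)#mls qos
! Globally enables QoS; QoS commands in the configuration now take effect
C3750-E#show mls qos
! Verifies the global QoS state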

Trust Models

The Catalyst 3750-E switch ports can be configured to statically trust CoS, DSCP, or IP Precedence (although IP Precedence trust is a legacy option that has been superseded by DSCP trust), or to dynamically and conditionally trust Cisco IP phones. By default, with QoS enabled, all ports are set to an untrusted state. The complete port trust classification flowchart for the Catalyst 3750-E switch product family is shown in Figure 2-15.

Figure 2-15 Catalyst 3750-E Port Trust Classification Flowchart

Trust-CoS Model

A Catalyst 3750-E switch port can be configured to trust CoS by configuring the interface with the mls qos trust cos command. However, if an interface is set to trust CoS, then by default it calculates a packet’s internal DSCP as the incoming packet’s (CoS value * 8). While this may be suitable for most markings, this default mapping is not suitable for VoIP, as VoIP is usually marked CoS 5, which would map by default to DSCP 40 (and not 46, which is the EF PHB as defined by RFC 3246). Therefore, if an interface is set to trust CoS, the default CoS-to-DSCP mapping table should be modified such that CoS 5 maps to DSCP 46, as shown in Example 2-3.

In Example 2-4, the CoS-to-DSCP mapping value for CoS 5 has been modified from the default mapping of 40 (CoS 5 * 8) to 46 (to match the recommendation from RFC 3246 that realtime applications be marked DSCP 46/EF).
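The gist of this remapping (Examples 2-3 and 2-4 are not reproduced here) can be sketched as:

C3750-E(config)#mls qos map cos-dscp 0 8 16 24 32 46 48 56
! Maps CoS 0-7 to DSCP values; CoS 5 now maps to DSCP 46 (EF) rather than the default 40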

In Example 2-5, the port trust mode is set to trust CoS and the current (static) state of the interface is likewise set to trust CoS.

Trust-DSCP Model

Because of the additional granularity of DSCP versus CoS markings, it is generally recommended to trust DSCP rather than CoS (everything else being held equal). A Catalyst 3750-E switch port can be configured to trust DSCP with the mls qos trust dscp interface command, as shown in Example 2-6.

In Example 2-7, the port trust mode is set to trust DSCP and the current (static) state of the interface is likewise set to trust DSCP.
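A sketch of the static DSCP-trust configuration and its verification (the interface number is an assumption):

C3750-E(config)#interface GigabitEthernet 1/0/49
C3750-E(config-if)# mls qos trust dscp
! Statically trusts DSCP on an uplink port
C3750-E#show mls qos interface GigabitEthernet 1/0/49
! Verifies the port trust state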

Conditional-Trust Model

In addition to configuring switch ports to statically trust endpoints, the Catalyst 3750-E family supports dynamic, conditional trust with the mls qos trust device interface command, which can be configured with the cisco-phone keyword to extend trust to Cisco IP phones, after these have been verified via a CDP-negotiation. Additionally, the type of trust to be extended must be specified (either CoS or DSCP). When configuring conditional trust to Cisco IP Phones, it is recommended to dynamically extend CoS-Trust, as Cisco IP Phones can only remark PC QoS markings at Layer 2 (CoS) and not at Layer 3 (DSCP). For other endpoints that do not have this remarking limitation, it is recommended to dynamically extend DSCP-trust (over CoS-trust), not only because DSCP has greater marking granularity, but also because the type of trust configured on the ingress switch port on a Catalyst 3750-E family of switches ultimately determines the type of queuing policies that are applied on the egress switch port. Specifically, if an ingress switch port is configured to trust-CoS—whether this is configured statically or dynamically (in conjunction with the mls qos trust device interface command)—a CoS-to-queue mapping determines the (ingress and) egress queuing policy. Conversely, if an ingress switch port is configured to trust-DSCP—whether this is configured statically or dynamically—a DSCP-to-queue mapping determines the (ingress and) egress queuing policy. Since DSCP-to-queue mapping has more granular policy options, it is the preferred way to assign packets to queues and as such depends on the ingress switch port being set to trust DSCP.

An example of a dynamic, conditional trust policy that is set to extend CoS-trust to CDP-verified Cisco IP phones is shown in Example 2-8.

In Example 2-9, the trust device feature has been enabled, with the trusted device being specified as a cisco-phone. The port trust mode—that is, the mode of trust (CoS | DSCP | IP Precedence) that is extended dynamically to the IP phone—is set to trust CoS. Similarly, the current (dynamic) trust state of the interface is likewise set to trust CoS. This is because there is a Cisco IP phone currently connected to the switch port; if this IP phone is removed from the switch port, the trust state of the interface toggles to “not trusted”.
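Such a conditional trust policy might be sketched as follows (the interface number is an assumption):

C3750-E(config)#interface GigabitEthernet 1/0/1
C3750-E(config-if)# mls qos trust device cisco-phone
! Trust is extended only after CDP verifies a Cisco IP phone
C3750-E(config-if)# mls qos trust cos
! The mode of trust dynamically extended to the verified phone is CoS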

Marking Models

The Catalyst 3750-E family of switches supports two main marking models:

Per-port marking model—This is the only option on Catalyst 2960 and 2975 series switches, as these do not support multilayer switching (and therefore do not support SVI interfaces and per-VLAN policies).

Per-VLAN marking model—This model is supported on the Catalyst 3560-G/E/X and 3750-G/E/X series switches.

Each model is detailed in the following sections.

Per-Port Marking Model

The per-port marking model (based on Figure 2-10) matches VoIP and signaling traffic from the VVLAN by matching on DSCP EF and CS3, respectively. Multimedia conferencing traffic from the DVLAN is matched by UDP/RTP ports 16384-32767. Signaling traffic is matched on SCCP ports (TCP 2000-2002), as well as on SIP ports (TCP/UDP 5060-5061). Other transactional data traffic, bulk data, and scavenger traffic are matched on various ports (outlined in Figure 2-9). The service policy is applied to an interface range, along with (DSCP-mode) conditional trust, as shown in Example 2-10.
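The classification and marking portion of this policy map is not reproduced here; based on the description above, it might be sketched as follows (class and ACL names are assumptions, and the transactional data, bulk data, and scavenger classes are omitted for brevity):

ip access-list extended MULTIMEDIA-CONFERENCING-ACL
 permit udp any any range 16384 32767
! Matches the RTP audio/video ports described above
ip access-list extended SIGNALING-ACL
 permit tcp any any range 2000 2002
 permit tcp any any range 5060 5061
 permit udp any any range 5060 5061
! Matches the SCCP and SIP signaling ports described above
ip access-list extended DEFAULT-ACL
 permit ip any any
class-map match-all VVLAN-VOIP
 match dscp ef
class-map match-all VVLAN-SIGNALING
 match dscp cs3
class-map match-all MULTIMEDIA-CONFERENCING
 match access-group name MULTIMEDIA-CONFERENCING-ACL
class-map match-all SIGNALING
 match access-group name SIGNALING-ACL
class-map match-all DEFAULT
 match access-group name DEFAULT-ACL
policy-map PER-PORT-MARKING
 class VVLAN-VOIP
  set dscp ef
 class VVLAN-SIGNALING
  set dscp cs3
 class MULTIMEDIA-CONFERENCING
  set dscp af41
 class SIGNALING
  set dscp cs3
 class DEFAULT
  set dscp default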

! An explicit class-default marks all other IP traffic to 0 (see note)

! This section attaches the service-policy to the interface(s)

C3750-E(config)#interface range GigabitEthernet 1/0/1-48

C3750-E(config-if-range)# switchport access vlan 10

C3750-E(config-if-range)# switchport voice vlan 110

C3750-E(config-if-range)# spanning-tree portfast

C3750-E(config-if-range)# mls qos trust device cisco-phone

! The interface is set to conditionally-trust Cisco IP Phones

C3750-E(config-if-range)# mls qos trust cos

! CoS-trust will be dynamically extended to Cisco IP Phones

C3750-E(config-if-range)# service-policy input PER-PORT-MARKING

! Attaches the Per-Port Marking policy to the interface(s)

Note While the Catalyst 3750-E MQC syntax includes an implicit class-default, any policy actions assigned to this class are not enforced. Therefore, an explicit class DEFAULT is configured in Example 2-10 to enforce a marking/remarking policy to DSCP 0 for all other IP traffic.

Note An explicit marking command (set dscp) is used even for trusted application classes (like VVLAN-VOIP and VVLAN-SIGNALING) rather than a trust policy-map action. This is because a trust statement in a policy map requires multiple hardware entries and, as such, might be too large to fit into the available QoS hardware memory, triggering an error when the policy map is applied to a port. The use of a seemingly redundant explicit marking command actually improves policy efficiency from a hardware perspective.

In Example 2-11, CoS-mode conditional trust has been applied to the interface (which allows the port to dynamically extend CoS-trust to the Cisco IP phone, such that VVLAN-VoIP and VVLAN-Signaling traffic can be matched on CoS 5 and 3, respectively). Additionally, the PER-PORT-MARKING service policy has been attached to the interface to classify both VVLAN and DVLAN traffic.

As shown in Example 2-14, unlike the show policy-map interface outputs on IOS routers, the corresponding command on the Catalyst 3750-E series of switches does not dynamically increment packet, byte, drop, and bps counters.

Per-VLAN Marking Model

An alternative approach for deploying marking policies on the Catalyst 3560/3750 platforms is to deploy these on a per-VLAN basis. To do so, the interfaces belonging to the VLANs need to be configured with the mls qos vlan-based interface command. Additionally, the policy map can be simplified and broken apart, as applicable to each VLAN. Adapting the previous example to a VLAN-based marking policy allows the VVLAN policy map to be reduced to only three explicit classes: VoIP, signaling, and the explicit default class. Similarly, the DVLAN policy map is reduced to six explicit classes: multimedia conferencing, signaling, transactional data, bulk data, scavenger, and the explicit default class. A per-VLAN marking model is shown in Example 2-15.

Note As the access lists and class maps are identical to Example 2-14, these are omitted for brevity in this—and in following—examples for this switch platform family.

! This section configures the ingress marking policy-map for the VVLAN

C3750-E(config)#policy-map VVLAN-MARKING

C3750-E(config-pmap)# class VVLAN-VOIP

C3750-E(config-pmap-c)# set dscp ef

! VoIP is trusted (from the VVLAN)

C3750-E(config-pmap-c)# class VVLAN-SIGNALING

C3750-E(config-pmap-c)# set dscp cs3

! Signaling is trusted (from the VVLAN)

C3750-E(config-pmap-c)# class DEFAULT

C3750-E(config-pmap-c)# set dscp default

! An explicit DEFAULT class marks all other VVLAN IP traffic to DF

! This section configures the ingress marking policy-map for the DVLAN

C3750-E(config)#policy-map DVLAN-MARKING

C3750-E(config-pmap)# class MULTIMEDIA-CONFERENCING

C3750-E(config-pmap-c)# set dscp af41

! Multimedia-conferencing is marked AF41

C3750-E(config-pmap-c)# class SIGNALING

C3750-E(config-pmap-c)# set dscp cs3

! Signaling (from the DVLAN) is marked CS3

C3750-E(config-pmap-c)# class TRANSACTIONAL-DATA

C3750-E(config-pmap-c)# set dscp af21

! Transactional Data is marked AF21

C3750-E(config-pmap-c)# class BULK-DATA

C3750-E(config-pmap-c)# set dscp af11

! Bulk Data is marked AF11

C3750-E(config-pmap-c)# class SCAVENGER

C3750-E(config-pmap-c)# set dscp cs1

! Scavenger traffic is marked CS1

C3750-E(config-pmap-c)# class DEFAULT

C3750-E(config-pmap-c)# set dscp default

! An explicit DEFAULT class marks all other DVLAN IP traffic to DF

! This section configures the interface(s) for conditional trust

! and enables VLAN-based QoS

C3750-E(config)#interface range GigabitEthernet 1/0/1-48

C3750-E(config-if-range)# switchport access vlan 10

C3750-E(config-if-range)# switchport voice vlan 110

C3750-E(config-if-range)# spanning-tree portfast

C3750-E(config-if-range)# mls qos trust device cisco-phone

! The interface is set to conditionally-trust Cisco IP Phones

C3750-E(config-if-range)# mls qos vlan-based

! Enables VLAN-based QoS on the interface(s)

! This section attaches the DVLAN policy to the DVLAN interface

C3750-E(config)#interface Vlan 10

C3750-E(config-if)# description DVLAN

C3750-E(config-if)# service-policy input DVLAN-MARKING

! Attaches the DVLAN Per-VLAN Marking policy to the DVLAN interface

! This section attaches the VVLAN policy to the VVLAN interface

C3750-E(config)#interface Vlan 110

C3750-E(config-if)# description VVLAN

C3750-E(config-if)# service-policy input VVLAN-MARKING

! Attaches the VVLAN Per-VLAN Marking policy to the VVLAN interface

This configuration can be verified with the commands:

show mls qos interface

show class-map

show policy-map

show policy-map interface

Policing Models

The Catalyst 3750-E family of switches supports 256 policers per hardware ASIC. Ports share an ASIC in groups of 2, 4, 6, 8, or 24, depending on the platform and hardware configuration. The number of ASICs in a specific switch can be verified with the show platform port-asic version command. Additionally, the specific switch ports associated with each ASIC can be identified with the show platform pm platform-block command (in the ASIC column).

As a reminder, these policing caveats apply to these switches:

The Catalyst 2960 and 2975 can only police to a minimum rate of 1 Mbps; all other platforms within this switch-product family can police to a minimum rate of 8 kbps.

Only the Catalyst 3560-E and 3750-E support policing on 10 Gigabit Ethernet interfaces.

The Catalyst 3750-E family of switches supports these ingress policing models:

Per-port policing model—This model attaches policers to physical switch port interfaces. It is the only option on Catalyst 2960 and 2975 series switches, as these do not support multilayer switching (and therefore do not support SVI interfaces and per-VLAN policies).

Per-VLAN policing model—This model (supported on the Catalyst 3560G, 3750G, 3560-E, and 3750-E series switches) attaches policers to logical VLAN interfaces. However, there is an inherent limitation with this policing model: it supports only a single aggregate policer per VLAN, and because the number of ports associated with a VLAN is dynamic and variable, its overall policing effectiveness is quite restricted. Therefore, it is generally recommended to use the per-port/per-VLAN policing model instead, as it offers more discrete policing options.

The per-port and per-port/per-VLAN policing models for the Catalyst 3750-E family of switches are detailed in the following sections.

Per-Port Policing Model

The per-port policing model is quite similar to the per-port marking model, except that the policy action includes a policing function: in some cases to drop, in others to remark. As shown in Figure 2-10, the VoIP and signaling traffic from the VVLAN can be policed to drop at 128 kbps and 32 kbps, respectively (as any excess traffic matching these criteria would be indicative of network abuse). Similarly, the multimedia conferencing, signaling, and scavenger traffic from the DVLAN can be policed to drop. On the other hand, data plane policing policies can be applied to transactional, bulk, and best effort data traffic, such that these flows are subject to being remarked (but not dropped at the ingress edge) when severely out-of-profile. Remarking is performed by configuring a policed-DSCP map with the global configuration command mls qos map policed-dscp, which specifies which DSCP values are subject to remarking when out-of-profile and the value these should be remarked to (which, in the case of data plane policing/scavenger class QoS policies, is CS1/DSCP 8). A per-port policing model for a Catalyst 3750-E is shown in Example 2-16.
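The markdown map itself is a single global command; a sketch consistent with the remark-to-CS1 behavior described above is:

C3750-E(config)#mls qos map policed-dscp 0 10 18 to 8

! Out-of-profile best effort (DF/0), bulk data (AF11/10), and

! transactional data (AF21/18) packets are remarked to CS1 (8)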

Note Catalyst 3750-G software allows policing rates to be entered using the postfixes k (for kilobits), m (for megabits), and g (for gigabits), as shown in Example 2-16. Additionally, decimal points are allowed in conjunction with these postfixes; for example, a rate of 10.5 Mbps could be entered with the policy-map command police 10.5m. While these policing rates are converted to their full bps values within the configuration, the postfixes make entering rates more user-friendly and less error prone (as could easily be the case when having to enter up to 10 zeros to define a policing rate).

In Example 2-17, the policing DSCP-markdown mapping is shown. The first digit of the DSCP value of a packet offered to a policer is shown along the Y-axis of the table; the second digit of the DSCP value of a packet offered to a policer is shown along the X-axis of the table. For example, the DSCP value for the transactional data application class (AF21/18) is found in the row d1=1 and column d2=8. And, as shown, packets with this offered DSCP value (along with DF/0 and AF11/10) are remarked to CS1 (08) if found to be in excess of the policing rate.

In Example 2-18, the interface policers for GigabitEthernet 1/0/1 are shown, including the policing rates, burst, and drop-function values (drop=1 means that exceeding traffic is dropped, while drop=0 means that exceeding traffic is not dropped, but remarked).

Per-Port/Per-VLAN Policing Model

An alternative—and more discrete—approach for deploying policing policies on the Catalyst 3560/3750 platforms is to deploy these on a per-port/per-VLAN basis, which (on this family of switch platforms) requires the use of hierarchical QoS policies, also known as nested QoS policies.

The first step is to configure a class map that defines the switch port(s) to which the policers are attached. Then one or more per-port policers need to be defined (according to the various policing rates and exceed actions required); these policers reference the previously defined class map that specifies the switch port(s) to be policed. These per-port policers comprise the “child” policy maps in the hierarchy.

Following this, “parent” policy maps are configured that combine the various per-port policers for the various classes of traffic for a given VLAN. Each of these parent policy maps references child policies that implement the per-port policing functions. Finally, these parent policy maps are applied to the VLAN SVI interfaces.

In Example 2-18, a class map (VLAN-10/110-PORTS) defines the ports on which the policers are enforced, specifically the ports belonging to DVLAN 10 and VVLAN 110 (which in this example equates to Gigabit Ethernet ports 1/0/1 through 1/0/48). Then a series of per-port policers (child policy maps) are defined, one each for 128 kbps (with a dropping action), 32 kbps (with a dropping action), 5 Mbps (with a dropping action), 10 Mbps (with a dropping action), and 10 Mbps (with a remarking action). Following this, a parent policy map for the VVLAN references the child policy maps to police VoIP to 128 kbps, (VVLAN) signaling to 32 kbps, and all other VVLAN IP traffic to 32 kbps.

Similarly, a parent policy map for the DVLAN references the child policy maps to police multimedia conferencing to 5 Mbps, (DVLAN) signaling to 32 kbps, and scavenger traffic to 10 Mbps. However, data plane policing (scavenger class QoS) policies are applied to the transactional data, bulk data, and (explicitly defined) best effort classes to police each of these to 10 Mbps with a remarking action rather than a dropping action.

As in the previous example, remarking is performed by configuring a policed-DSCP map with the global configuration command mls qos map policed-dscp, which specifies which DSCP values are subject to remarking when out-of-profile and the value these should be remarked to (which, in the case of data plane policing/scavenger class QoS policies, is CS1/DSCP 8). The switch ports have VLAN-based QoS enabled on them, and the parent service policies are applied to the VLAN SVI interfaces for the DVLAN and the VVLAN.
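The skeleton of this hierarchy, condensed to a single traffic class and a single child policer (the policer name and burst value are illustrative), is along these lines:

C3750-E(config)#class-map match-all VLAN-10/110-PORTS

C3750-E(config-cmap)# match input-interface GigabitEthernet1/0/1 - GigabitEthernet1/0/48

! Defines the switch ports to be policed

C3750-E(config)#policy-map POLICE-128K-DROP

C3750-E(config-pmap)# class VLAN-10/110-PORTS

C3750-E(config-pmap-c)# police 128k 8000 exceed-action drop

! A (child) per-port policer referencing the port class map

C3750-E(config)#policy-map VVLAN-POLICERS

C3750-E(config-pmap)# class VVLAN-VOIP

C3750-E(config-pmap-c)# set dscp ef

C3750-E(config-pmap-c)# service-policy POLICE-128K-DROP

! The (parent) policy map marks the class, then nests the child policer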

! An explicit default class marks all other DVLAN IP traffic to DF and

! (via a nested service-policy) polices to remark (to CS1) at 10 Mbps

! This section configures the interfaces for conditional trust

! and enables VLAN-based QoS

C3750-E(config)#interface range GigabitEthernet 1/0/1-48

C3750-E(config-if-range)# switchport access vlan 10

C3750-E(config-if-range)# switchport voice vlan 110

C3750-E(config-if-range)# spanning-tree portfast

C3750-E(config-if-range)# mls qos trust device cisco-phone

! The interface is set to conditionally-trust Cisco IP Phones

C3750-E(config-if-range)# mls qos vlan-based

! Enables VLAN-based QoS on the interface(s)

! This section attaches the DVLAN policers to the DVLAN interface

C3750-E(config)#interface Vlan 10

C3750-E(config-if)# description DVLAN

C3750-E(config-if)# service-policy input DVLAN-POLICERS

! Attaches the DVLAN Per-VLAN Policing policy to the DVLAN interface

! This section attaches the VVLAN policers to the VVLAN interface

C3750-E(config)#interface Vlan 110

C3750-E(config-if)# description VVLAN

C3750-E(config-if)# service-policy input VVLAN-POLICERS

! Attaches the VVLAN Per-VLAN Policing policy to the VVLAN interface

Note On Catalyst 3750-E switches, a policer cannot be attached to both a port and an SVI; separate policers must be configured for these different types of interfaces.

Note On Catalyst 3750-E switches, a nested/child policy map can only be referenced by one parent service policy. Therefore, separate (child) policers are configured in Example 2-19 for the signaling classes: one for the DVLAN-POLICERS parent policy map and another for the VVLAN-POLICERS parent policy map.

Note It is important to note that on Catalyst 3750G and 3750-E switches, when you enable VLAN-based QoS and configure a hierarchical policy map in a switch stack, these automatic actions occur when the stack configuration changes:

When a new stack master is selected, the stack master re-enables and reconfigures these features on all applicable interfaces on the stack master.

When a stack member is added, the stack master re-enables and reconfigures these features on all applicable ports on the stack member.

When you merge switch stacks, the new stack master re-enables and reconfigures these features on the switches in the new stack.

When the switch stack divides into two or more switch stacks, the stack master in each switch stack re-enables and reconfigures these features on all applicable interfaces on the stack members, including the stack master.

This configuration can be verified with the commands:

show mls qos maps policed-dscp

show mls qos interface

show mls qos interface interface x/y policers

show class-map

show policy-map

show policy-map interface

Queuing Models

As shown in Figure 2-14, on the Catalyst 3750-E switch-family platforms, because the total inbound bandwidth of all ports can exceed the bandwidth of the stack or internal ring, ingress queues are located after the packet is classified, policed, and marked and before packets are forwarded into the switch fabric. Additionally, because multiple ingress ports can simultaneously send packets to an egress port and cause congestion, outbound queues are located after the stack or internal ring.

Both the ingress and egress queues are serviced by a Shaped Round Robin (SRR) scheduling algorithm. SRR can be configured in two modes: shaped or shared.

In shaped mode, the egress queues are guaranteed a percentage of the bandwidth and they are rate limited to that amount. Shaped traffic does not use more than the allocated bandwidth even if the link is idle. Shaping provides a more even flow of traffic over time and reduces the peaks and valleys of bursty traffic. With shaping, the absolute value of each weight is used to compute the bandwidth available for the queues. SRR shaping is configured with the srr-queue bandwidth shape interface command.

In shared mode, the ingress or egress queues share the bandwidth among them according to the configured weights. The bandwidth is guaranteed at this level but not limited to it. For example, if a queue is empty and no longer requires a share of the link, the remaining queues can expand into the unused bandwidth and share it among them. With sharing, the ratio of the weights controls the frequency of dequeuing; the absolute values are meaningless. SRR sharing is configured with the srr-queue bandwidth share interface command.

Furthermore, both the ingress and egress queuing structures support the enabling of a single priority queue (also known as the expedite queue), which corresponds to the EF PHB. An ingress or egress queue operating as an expedite queue is fully serviced ahead of all other queues until empty. After the priority queue has been fully serviced, the scheduler services the non-priority queues, which are configured in either shaped or shared SRR modes. A strict priority queue is enabled with the priority-queue interface command.

With respect to scheduling hierarchy in the Catalyst 3750-E family of switches, shaped mode overrides shared mode and priority mode overrides both shaped and shared modes.

Additionally, the Catalyst 3750-E family of switches supports the weighted tail drop (WTD) congestion avoidance mechanism. WTD is implemented on queues to manage the queue lengths and to provide drop preferences for different traffic classifications. As a packet is enqueued to a particular ingress or egress queue, WTD uses the frame’s assigned internal DSCP to subject it to different drop thresholds. If the threshold is exceeded for a given internal DSCP value (in other words, the space available in the destination queue is less than the size of the packet), the switch drops the packet. Each queue has three threshold values. The internal DSCP determines which of the three threshold values is subjected to the frame. Of the three thresholds, two are configurable (explicit) and one is not (implicit), as this last threshold corresponds to the tail of the queue (100% limit).

Packets are mapped to queues and thresholds on the Catalyst 3750-E by either CoS-to-queue/threshold or DSCP-to-queue/threshold mappings. The mapping used directly corresponds to whether the packet was configured to trust CoS on ingress or to trust DSCP on ingress (untrusted packets are simply assigned to the default queue).

Ingress Queuing 1P1Q3T Model

As the Catalyst 3750-E switch platforms have architectures based on oversubscription, they have been engineered to guarantee QoS by protecting critical traffic trying to access the backplane/stack-ring via ingress queuing. Ingress queuing on this platform can be configured as 2Q3T or 1P1Q3T, with the latter being the recommended configuration (as it supports the RFC 3246 EF PHB).

1P1Q3T ingress queuing is configured by explicitly enabling Q2 as a priority queue and assigning it a bandwidth allocation, such as 30%. Next, an SRR weight can be assigned to the non-priority queue, which in this case would be 70%. The buffer allocations can be tuned such that Q1 gets 90% of the buffers, while Q2 (the PQ) gets only 10%; since the PQ is serviced in realtime, it is generally more efficient to provision fewer buffers to it and more to the non-priority queue. After this, WTD thresholds can be defined on Q1 to provide intra-queue QoS; specifically, Q1T1 can be explicitly set at 80% queue depth and Q1T2 can be explicitly set at 90% queue depth (while Q1T3 remains implicitly set at 100% queue depth).

With the queues and thresholds set, then VoIP (EF), broadcast video (CS5), and realtime interactive (CS4) traffic can be mapped to the strict priority ingress queue. All other traffic classes can be mapped to the default (non-priority) ingress queue. However, drop preference can be given to control plane traffic, such that network control (CS7) and internetwork control (CS6) traffic is mapped to the highest WTD threshold (Q1T3); additionally, signaling (CS3) traffic can be mapped to the middle WTD threshold (Q1T2). All other flows would be mapped to Q1T1. These 1P1Q3T ingress queuing mappings for the Catalyst 3750-E are shown in Figure 2-16.

Figure 2-16 Catalyst 3750-E 1P1Q3T Ingress Queuing Model

The corresponding configuration for 1P1Q3T ingress queuing on the Catalyst 3750-E is shown in Example 2-20.
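The global portion of such a configuration can be sketched as follows (only a representative subset of the DSCP-to-queue mappings is shown):

C3750-E(config)#mls qos srr-queue input priority-queue 2 bandwidth 30

! Q2 is enabled as a strict-priority ingress queue with a 30% BW guarantee

C3750-E(config)#mls qos srr-queue input bandwidth 70 30

! The SRR sharing weights allocate 70% BW to Q1

C3750-E(config)#mls qos srr-queue input buffers 90 10

! Q1 is allocated 90% of the ingress buffers; Q2 (the PQ) gets 10%

C3750-E(config)#mls qos srr-queue input threshold 1 80 90

! Q1 WTD thresholds are set at 80% (Q1T1) and 90% (Q1T2)

C3750-E(config)#mls qos srr-queue input dscp-map queue 1 threshold 2 24

! Signaling (CS3) is mapped to Q1T2

C3750-E(config)#mls qos srr-queue input dscp-map queue 1 threshold 3 48 56

! Internetwork control (CS6) and network control (CS7) are mapped to Q1T3

C3750-E(config)#mls qos srr-queue input dscp-map queue 2 threshold 3 32 40 46

! CS4, CS5, and EF are mapped to the tail of the ingress PQ (Q2T3)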

! DSCP CS4, CS5 and EF are mapped to ingress Q2T3 (the tail of the PQ)

Note CoS-to-queue mappings are only required if some switch ports are configured to trust CoS on ingress. In that case, the CoS-to-DSCP map should also be modified to map CoS 5 to DSCP EF (as shown in Example 2-3). Additionally, it should be noted that due to the limited granularity of CoS-to-queue mapping, it is not possible to assign multimedia conferencing (AF4) and realtime interactive (CS4) traffic to separate queues (as both share the same CoS value of 4); nor is it possible to assign signaling (CS3) and multimedia streaming (AF3) traffic to separate queue thresholds (as both share the same CoS value of 3).

Note Non-standard DSCP-to-queue mappings are not shown in the configurations in this chapter for the sake of simplicity.

Example 2-21 shows that ingress bandwidth has been allocated between Q1 and Q2 in a 70:30 split, and that Q2 has been enabled as a strict-priority queue with a 30% maximum bandwidth guarantee. Q1T1 and Q1T2 thresholds have been set to 80% and 90%, respectively, while all Q2 thresholds are at 100%.

Example 2-22 shows the ingress CoS-to-queue mappings. Specifically, CoS values 1 and 2 have been mapped to Q1T1, CoS 3 has been mapped to Q1T2, CoS values 4 and 5 have been mapped to Q2T1 (the PQ), and CoS values 6 and 7 have been mapped to Q1T3.

Example 2-23 shows the ingress DSCP-to-queue mappings. The first digit of the DSCP value of a packet is shown along the Y-axis of the table; the second digit of the DSCP value of a packet is shown along the X-axis of the table. The mapping table corresponds to Figure 2-16. It can be noted that CS4 (DSCP 32), CS5 (DSCP 40), and EF (DSCP 46) are all mapped to Q2 (the PQ). It should also be noted that internal DSCP values 40 through 47 are mapped to Q2 by default, which is why the table shows additional values being mapped to this queue.

Egress Queuing 1P3Q3T Model

Egress queuing on the Catalyst 3750-E family of switches can be configured as 4Q3T or 1P3Q3T, with the latter being the recommended configuration (as it supports the RFC 3246 EF PHB).

Two different egress queuing sets can be configured on the Catalyst 3750-E; however, to maintain consistent per-hop behaviors, it is generally recommended to use only one.

A unique feature of the Catalyst 3750-E is that it supports flexible buffer allocations to hardware queues, which may be dynamically loaned or borrowed against (as needed). Specifically, each queue can lend part of its buffering capacity, unless a specified minimum reserve threshold has been reached. Additionally, each queue may borrow up to four times its capacity from a common pool of buffers (which are not allocated to any specific queue) should these be available for use. The recommended buffer allocations for queues 1 through 4 are 20%, 30%, 35%, and 15%, respectively. Correspondingly, the recommended parameters for the reserve and maximum (overload) thresholds of the non-priority queues are 100% and 400%, respectively; for the priority queue, all thresholds should be set to 100%.

Once the primary queuing set has been configured for 1P3Q3T egress queuing, WTD thresholds can be defined on Q2 and Q4 to provide intra-queue QoS. Specifically, Q2T1 can be explicitly set at 80% queue depth and Q2T2 can be explicitly set at 90% queue depth (while Q2T3 remains implicitly set at 100% queue depth). Also, Q4T1 can be explicitly set at 60% queue depth, while the other thresholds for Q4 remain at their default values (of 100% queue depth), with the exception of the maximum (overload) threshold, which can be set to 400%. This last setting allows even scavenger and bulk data traffic to benefit from the extended buffering capabilities of this platform, especially considering that these are the least favored flows from a bandwidth perspective and thus likely need the deepest queues.

With the queues and thresholds set, then VoIP (EF), broadcast video (CS5), and realtime interactive (CS4) traffic can be mapped to the strict priority egress queue (Q1). Network management (CS2), transactional data (AF2), multimedia streaming (AF3), and multimedia conferencing (AF4) traffic can be mapped to Q2T1. Signaling (CS3) traffic can be mapped to Q2T2. Network (CS7) and internetwork (CS6) traffic can be mapped to Q2T3. Default (DF) traffic can be mapped to Q3, the default queue. Scavenger (CS1) traffic can be mapped to Q4T1, while bulk data (AF1) is mapped to Q4T2. These 1P3Q3T egress queuing mappings for the Catalyst 3750-E are shown in Figure 2-17.

Figure 2-17 Catalyst 3750-E 1P3Q3T Egress Queuing Model

The corresponding configuration for 1P3Q3T egress queuing on the Catalyst 3750-E is shown in Example 2-24.
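The global queue-set portion of such a configuration can be sketched as follows (the DSCP-to-queue mappings are abbreviated):

C3750-E(config)#mls qos queue-set output 1 buffers 20 30 35 15

! Buffers are allocated 20% to Q1 (the PQ), 30% to Q2, 35% to Q3, 15% to Q4

C3750-E(config)#mls qos queue-set output 1 threshold 1 100 100 100 100

! All Q1 (PQ) thresholds are set to 100%

C3750-E(config)#mls qos queue-set output 1 threshold 2 80 90 100 400

! Q2T1 is set to 80% and Q2T2 to 90%; reserve 100%, maximum 400%

C3750-E(config)#mls qos queue-set output 1 threshold 3 100 100 100 400

! Q3 (the default queue) thresholds: reserve 100%, maximum 400%

C3750-E(config)#mls qos queue-set output 1 threshold 4 60 100 100 400

! Q4T1 (scavenger) is set to 60%; the maximum (overload) threshold is 400%

C3750-E(config)#mls qos srr-queue output dscp-map queue 1 threshold 3 32 40 46

! CS4, CS5, and EF are mapped to the egress PQ (Q1)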

! DSCP AF1 is mapped to Q4T3 (tail of the less-than-best-effort queue)

! This section configures interface egress queuing parameters

C3750-E(config)#interface range GigabitEthernet1/0/1-48

C3750-E(config-if-range)# queue-set 1

! The interface(s) is assigned to queue-set 1

C3750-E(config-if-range)# srr-queue bandwidth share 1 30 35 5

! The SRR sharing weights are set to allocate 30% BW to Q2

! 35% BW to Q3 and 5% BW to Q4

! Q1 SRR sharing weight is ignored, as it will be configured as a PQ

C3750-E(config-if-range)# priority-queue out

! Q1 is enabled as a strict priority queue

Note CoS-to-queue mappings are only required if some switch ports are configured to trust CoS on ingress. In that case, the CoS-to-DSCP map should also be modified to map CoS 5 to DSCP EF (as shown in Example 2-3). Additionally, it should be noted that due to the limited granularity of CoS-to-queue mapping, it is not possible to assign multimedia conferencing (AF4) and realtime interactive (CS4) traffic to separate queues (as both share the same CoS value of 4); nor is it possible to assign signaling (CS3) and multimedia streaming (AF3) traffic to separate queue thresholds (as both share the same CoS value of 3); nor is it possible to assign scavenger (CS1) and bulk data (AF1) traffic to separate queue thresholds (as both share the same CoS value of 1).

Example 2-27 shows a set of dynamically updated packet statistic tables for an uplink port on an access layer Catalyst 3750-E switch that is primarily congested in the access-to-distribution direction. The first table shows the incoming DSCP values (from the distribution layer); DSCP values are broken into groups of five. For example, incoming packets marked DSCP EF/46 are listed in the DSCP 45-49 row in the second column (in this case: 127,292 packets). The second table shows the outgoing packets (to the distribution layer) in a similar format. For example, DSCP CS1/8 is listed in the DSCP 5-9 row (23,842,155 packets). The third table shows incoming packets (from the distribution layer) by CoS values (grouped similarly); likewise, the fourth table shows outgoing packets (to the distribution layer) by CoS values. The fifth and sixth tables are particularly interesting in terms of queuing statistics: the fifth table shows the number of packets assigned to each queue/threshold combination.

Note The queue numbers are 1 lower than the numbers used in the configuration syntax (such that Q1 is shown here as Q0, Q2 is shown here as Q1, Q3 is shown here as Q2, and Q4 is shown here as Q3).

For example, from the fifth table, it can be seen that 127,291 packets were sent to the (tail of the) PQ (shown here as Q0); similarly, 23,842,155 packets were sent to the scavenger/bulk queue first threshold (shown here as Q3T1). Finally, the sixth table shows any drops that have occurred on a per-queue/per-threshold basis; from this table it can be seen that 892 drops occurred in the scavenger/bulk queue first threshold (scavenger class drops).

EtherChannel QoS Model

As discussed in EtherChannel QoS, QoS policies applied to EtherChannel links on the Catalyst 2960-G/S, 2975-GS, 3560-G/E/X, and 3750-G/E/X family of switches are required to be identically configured on each and every EtherChannel physical port-member interface; these include both ingress trust/classification/marking/policing policies, as well as egress queuing policies (ingress queuing policies are globally defined and as such are not bound by this requirement). If the policies are not identically configured, they will not take effect, even though they may appear in the configuration. Also, it is recommended to load-balance across the EtherChannel by source-and-destination IP address. An example of an EtherChannel QoS model for the Catalyst 3750-E family is shown in Example 2-28.

Note As the ingress queuing policies, as well as egress-queuing mappings, have not changed from the previous configuration examples, these are omitted from this EtherChannel QoS Model example.
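The source-and-destination-IP-address load-balancing recommendation is a single global command; for example:

C3750-E(config)#port-channel load-balance src-dst-ip

! EtherChannel frames are hashed on source-and-destination IP address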

! The physical port-member interfaces are set to statically trust DSCP

C3750-E(config-if-range)# queue-set 1

! The interfaces are assigned to queue-set 1

C3750-E(config-if-range)# srr-queue bandwidth share 1 30 35 5

! The SRR sharing weights are set to allocate 30% BW to Q2

! 35% BW to Q3 and 5% BW to Q4

! Q1 SRR sharing weight is ignored, as it will be configured as a PQ

C3750-E(config-if-range)# priority-queue out

! Q1 is enabled as a strict priority queue

This configuration can be verified with the commands:

show mls qos interface

show mls qos input-queue

show mls qos maps cos-input-q

show mls qos maps dscp-input-q

show mls qos maps cos-output-q

show mls qos maps dscp-output-q

show mls qos interface interface x/y queueing

show mls qos interface interface x/y statistics

AutoQoS-SRND4 Models

As mentioned in AutoQoS, as of August 2010, an updated version of AutoQoS was released for the Catalyst 2960-G/S, 2975-GS, 3560-G/E/X, and 3750-G/E/X family of switches with IOS release 12.2(55)SE. This release was directly based on the recommendations put forward in this design chapter to support medianet applications; in fact, the new keyword and name for this version of AutoQoS is AutoQoS-SRND4 (taken from Solution Reference Network Design guide version 4, which is the Cisco name for this design chapter). AutoQoS-SRND4 is the fastest and most accurate method to deploy the recommended QoS designs to support rich media applications across this family of switches.

AutoQoS-SRND4—which can be shortened to “Auto QoS” for the sake of simplicity—presents the network administrator with four main ingress QoS policy options in interface-configuration mode:

auto qos trust {cos | dscp}—This option configures static trust policies on the port or interface; by default, CoS-trust is applied on Layer 2 switch ports and DSCP-trust on Layer 3 routed interfaces, with the cos and dscp keywords available to explicitly set the trust state.

auto qos video [cts | ip-camera]—This option configures conditional trust policies that dynamically extend trust to Cisco TelePresence Systems or IP video surveillance cameras once these devices have identified themselves to the switch via CDP.

auto qos classify [police]—This option provides a generic template that can classify and mark up to six classes of medianet traffic, as well as optionally provision data-plane policing/scavenger-class QoS policy-elements for these traffic classes (via the optional police keyword).

auto qos voip [cisco-phone | cisco-softphone | trust]—This option provides not only legacy support for Auto QoS VoIP IP Telephony deployments, but also expands on these models to include provisioning for additional classes of rich media applications and to include data-plane policing/scavenger-class QoS policy-elements to protect and secure these applications.

Each ingress option is automatically complemented by a full set of ingress and egress queuing configurations, including both CoS- and DSCP-to-queue mappings, as shown in Figure 2-18.

Figure 2-18 Auto QoS SRND4 Models

Note Ingress queuing is not supported on the Catalyst 2960-S platform.

The complete configuration provisioned by each of these new auto qos options, along with the complete ingress and egress queuing configurations, will be detailed in the following sections. For the sake of logical development, however, auto qos voip will be discussed last, as it combines several policy elements from the other auto qos ingress model options.

Auto QoS Trust Models

The auto qos trust command configures static trust policies on the port(s) or interface(s) that it is configured on: if the port is operating as a Layer 2 switch port, then (by default) CoS-trust is configured (as shown in Example 2-29); whereas, if the port is operating as a Layer 3 routed interface then (by default) DSCP-trust is configured (as shown in Example 2-33).

All AutoQoS configurations assume that mls qos has been previously enabled.

The show run interface command displays the effect of deploying auto qos trust on a Layer 2 switch port configuration: namely four additional lines of configuration (in bold) have been added automatically (as well as many more in the global configuration; these will all be discussed in detail in the following sections). From the configuration it can be seen that the interface trust state has been set to statically trust CoS.
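A minimal sketch of this deployment step follows; the interface name is illustrative, and the generated lines shown as comments are representative rather than exhaustive (the exact set varies by platform and software release):

```
! Illustrative: apply auto qos trust to a Layer 2 switch port
C3750-E(config)# interface GigabitEthernet1/0/10
C3750-E(config-if)# auto qos trust
! Representative generated interface entries (exact set varies by release):
!  mls qos trust cos            <- static CoS-trust (default on a L2 switch port)
!  queue-set 2
!  srr-queue bandwidth share 1 30 35 5
!  priority-queue out
```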

This interface trust state and the CoS-to-DSCP maps can be verified with the commands:

show mls qos interface

show mls qos maps cos-dscp

The show run interface command displays the effect of deploying auto qos trust on a Layer 3 routed interface configuration. From the configuration it can be seen that the interface trust state has been set to statically trust DSCP.

Example 2-35 verifies that although the same auto qos trust policy is applied to a Layer 3 routed interface (in Example 2-33) as was applied to a Layer 2 switch port (in Example 2-29), this has resulted in a DSCP-trust policy on this interface, by default.

Since all interswitch links are recommended to be set to DSCP-trust, whether they are operating at Layer 2 or Layer 3, it is recommended to always use the auto qos trust dscp option on all interswitch links, as well as any access-edge ports connected to trusted devices (assuming that the trusted devices have the ability to mark/re-mark DSCP). The auto qos trust dscp option is shown in Example 2-36.
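A minimal sketch of applying this recommendation to interswitch uplinks follows (the interface range is illustrative):

```
! Illustrative: statically extend DSCP-trust on interswitch links
C3750-E(config)# interface range TenGigabitEthernet1/0/1 - 2
C3750-E(config-if-range)# auto qos trust dscp
! DSCP-trust is applied whether the ports operate at Layer 2 or Layer 3
```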

The show run interface command displays the effect of deploying auto qos trust dscp on a Layer 2 switch port interface configuration. From the configuration it can be seen that the interface trust state has been set to statically trust DSCP.

Example 2-38 verifies that although the interface is configured as a Layer 2 switch port, because of the use of the dscp keyword in conjunction with the auto qos trust interface command, DSCP-trust has been applied to the interface (rather than the default CoS-trust).

Auto QoS Video Models

Besides supporting IP Telephony devices such as Cisco IP Phones and Soft-Phones (via AutoQoS-VoIP), Auto QoS now also supports video devices, such as Cisco TelePresence Systems (CTS) and IP Video-Surveillance cameras, both of which support conditional trust via CDP-negotiation.

Cisco TelePresence Systems mark their video and audio flows to CoS 4 and DSCP CS4. Additionally, any voice traffic originating from the Cisco 7975G IP Phone that is an integral part of the CTS is marked to CoS 5 and DSCP EF. Furthermore, any signaling traffic—whether for the CTS and/or the IP Phone—is marked CoS 3 and DSCP CS3. These CTS markings are illustrated in Figure 2-7.

The show run interface command displays the effect of deploying auto qos video cts on a Layer 2 switch port interface configuration. From the configuration it can be seen that the interface trust state has been set to conditionally-trust cts devices and to dynamically extend CoS-trust to these.

As shown by Example 2-41, the auto qos video cts command has configured a conditional trust policy to dynamically extend CoS-trust to CTS systems. As a CTS device is currently attached to this port, the current trust state is trust-CoS.

Nonetheless, should an administrator choose to trust-DSCP instead of CoS, they can still do so while using the auto qos video cts command, simply by manually adding a mls qos trust dscp interface command to the configuration, as shown in Example 2-42.

The effect of this auto qos video cts policy in conjunction with a manual override mls qos trust dscp policy on a Layer 2 switch port can be verified by the show run interface command, as shown in Example 2-43.

! The trust-state to be dynamically extended has been manually set to DSCP-trust

auto qos video cts

spanning-tree portfast

end

C3750-E#

The show run interface command displays the effect of deploying auto qos video cts on a Layer 2 switch port interface along with a manual mls qos trust dscp command. From the configuration it can be seen that the interface trust state has been set to conditionally-trust cts devices and to dynamically extend DSCP-trust to these.

This example demonstrates a simple, yet powerful point: AutoQoS configurations may be modified and tailored to specific administrative needs or preferences. In other words, deploying AutoQoS is not an “all-or-nothing” option, but rather one that may be viewed as a generic template on which custom-tailored designs may be overlaid. Even with a moderate amount of manual configuration, AutoQoS can still significantly expedite medianet QoS deployments and greatly reduce manual configuration errors in the process.

Example 2-44 shows that the auto qos video cts interface command in conjunction with the mls qos trust dscp interface command has configured a conditional trust policy to dynamically extend DSCP-trust to CTS systems. As a CTS device is currently attached to this port, the current trust state is trust-DSCP.

Unlike CTS devices, IP Video Surveillance Cameras are only required to mark their video (and if supported, audio) flows at Layer 3 (typically to DSCP CS5/40). This allows for more flexible deployment models, as these cameras do not therefore have to be deployed in dedicated VLANs connecting to the access switch via an 802.1Q trunk. As such, the auto qos video ip-camera interface command dynamically extends DSCP-trust to such devices, once these have successfully identified themselves to the switch via CDP. DSCP-trust is dynamically extended whether the port is configured as a Layer 2 switch port or as a Layer 3 routed interface, as shown in Example 2-45.

! AutoQoS has configured a conditional-trust policy for ip-camera devices

mls qos trust dscp

! AutoQoS has configured DSCP-trust to be dynamically extended

auto qos video ip-camera

spanning-tree portfast

end

C3750-E#

The show run interface command displays the effect of deploying auto qos video ip-camera on a Layer 2 switch port interface configuration. From the configuration it can be seen that the interface trust state has been set to conditionally-trust ip-camera devices and dynamically extend DSCP-trust to these.

Example 2-47 confirms that the auto qos video ip-camera interface command has configured a conditional trust policy for IPVS cameras to dynamically extend DSCP-trust. As an IPVS camera is currently attached to this port, the current trust state is trust-DSCP.

In a similar vein to the CTS (DSCP-trust) example, should an administrator wish to extend CoS-trust instead of DSCP trust to IPVS cameras, they could add mls qos trust cos to the auto qos video ip-camera interface configuration.

Auto QoS Classify and Police Models

The AutoQoS Classify and Police models provide a generic template to support additional rich media and data applications, supplying a classification (and optional policing) model for these. These models are most suitable for switch ports connecting to PC endpoint devices.

Six application classes (multimedia conferencing, signaling, transactional data, bulk data, scavenger, and best-effort) are automatically defined via class-maps. Each class-map references an associated extended IP access-list. These IP access lists define the TCP and UDP port numbers of the given class of applications, based on the sample ports summarized in Table 2-9. However, it cannot be overemphasized that these are just generic application examples for these classes and the administrator can add/change/delete the access-list entries to match their specific applications.

! This section applies the AutoQoS-Classify policy-map to the interface

interface GigabitEthernet1/0/1

description L2-ACCESS-PORT-TO-PC

switchport access vlan 10

switchport voice vlan 110

srr-queue bandwidth share 1 30 35 5

queue-set 2

priority-queue out

auto qos classify

spanning-tree portfast

service-policy input AUTOQOS-SRND4-CLASSIFY-POLICY

! Attaches the AutoQoS-Classify service-policy to the interface

!

<snip>

! This section defines the Extended IP Access-Lists for AutoQoS-Classify

ip access-list extended AUTOQOS-ACL-BULK-DATA

permit tcp any any eq 22

permit tcp any any eq 465

permit tcp any any eq 143

permit tcp any any eq 993

permit tcp any any eq 995

permit tcp any any eq 1914

permit tcp any any eq ftp

permit tcp any any eq ftp-data

permit tcp any any eq smtp

permit tcp any any eq pop3

ip access-list extended AUTOQOS-ACL-DEFAULT

permit ip any any

ip access-list extended AUTOQOS-ACL-MULTIENHANCED-CONF

permit udp any any range 16384 32767

ip access-list extended AUTOQOS-ACL-SCAVANGER

permit tcp any any range 2300 2400

permit udp any any range 2300 2400

permit tcp any any range 6881 6999

permit tcp any any range 28800 29100

permit tcp any any eq 1214

permit udp any any eq 1214

permit tcp any any eq 3689

permit udp any any eq 3689

permit tcp any any eq 11999

ip access-list extended AUTOQOS-ACL-SIGNALING

permit tcp any any range 2000 2002

permit tcp any any range 5060 5061

permit udp any any range 5060 5061

ip access-list extended AUTOQOS-ACL-TRANSACTIONAL-DATA

permit tcp any any eq 443

permit tcp any any eq 1521

permit udp any any eq 1521

permit tcp any any eq 1526

permit udp any any eq 1526

permit tcp any any eq 1575

permit udp any any eq 1575

permit tcp any any eq 1630

permit udp any any eq 1630

!

<snip>

As can be seen from the configuration output in Example 2-49, the auto qos classify command generates class-maps, associated extended IP access-lists, and a policy map, which is attached to the interface (along with input and output queuing policies, which will be discussed in detail in a following section). Again, it should be noted that the IP access-list entries are based on the sample ports summarized in Table 2-9 and that these are just generic application examples for these classes; the administrator can add/change/delete the access-list entries to match their specific applications.
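For instance, the generated ACLs can be edited in place. The following sketch adds a hypothetical in-house transactional application on TCP port 8444 (the port number is purely illustrative):

```
! Illustrative: extend the generated transactional-data ACL
! TCP port 8444 is a hypothetical in-house application port
C3750-E(config)# ip access-list extended AUTOQOS-ACL-TRANSACTIONAL-DATA
C3750-E(config-ext-nacl)# permit tcp any any eq 8444
C3750-E(config-ext-nacl)# end
```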

This configuration can be verified with the commands:

show mls qos interface

show class-map

show policy-map

show policy-map interface

Additionally, should the administrator wish to enable data-plane policing/scavenger-class QoS policies on these application classes, they may do so by including the option keyword police in conjunction with the auto qos classify interface command, as shown in Example 2-50.

! Multimedia-conferencing is marked AF41 and policed to drop at 5 Mbps

class AUTOQOS_BULK_DATA_CLASS

set dscp af11

police 10000000 8000 exceed-action policed-dscp-transmit

! Bulk-data is marked AF11 and policed to remark (to CS1) at 10 Mbps

class AUTOQOS_TRANSACTION_CLASS

set dscp af21

police 10000000 8000 exceed-action policed-dscp-transmit

! Trans-data is marked AF21 and policed to remark (to CS1) at 10 Mbps

class AUTOQOS_SCAVANGER_CLASS

set dscp cs1

police 10000000 8000 exceed-action drop

! Scavenger traffic is marked CS1 and policed to drop at 10 Mbps

class AUTOQOS_SIGNALING_CLASS

set dscp cs3

police 32000 8000 exceed-action drop

! Signaling is marked CS3 and policed to drop at 32 kbps

class AUTOQOS_DEFAULT_CLASS

set dscp default

police 10000000 8000 exceed-action policed-dscp-transmit

! An explicit default class marks all other IP traffic to DF

! and polices all other IP traffic to remark (to CS1) at 10 Mbps

!

<snip>

! This section applies the AutoQoS-Classify-Police policy-map to the interface

interface GigabitEthernet1/0/1

description L2-ACCESS-PORT-TO-PC

switchport access vlan 10

switchport voice vlan 110

srr-queue bandwidth share 1 30 35 5

queue-set 2

priority-queue out

auto qos classify police

spanning-tree portfast

service-policy input AUTOQOS-SRND4-CLASSIFY-POLICE-POLICY

! Attaches the AutoQoS-Classify service-policy to the interface

!

<snip>

As can be seen from the configuration output in Example 2-51, the two principal changes in the configuration attributable to the police keyword used in conjunction with the auto qos classify command are:

A globally-defined policed-dscp map to mark down DF (0), AF11 (10), and AF21 (18) to CS1 (8)—if found to be exceeding their respective policing rates.

Per-class policing actions within the AUTOQOS-SRND4-CLASSIFY-POLICE-POLICY policy-map, which police each traffic class to its respective rate with either a remark (policed-dscp-transmit) or drop exceed-action, as shown in Example 2-50.
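The markdown mapping described above corresponds to a single global configuration command, sketched here (consistent with the DSCP values noted in the text):

```
! DF (0), AF11 (10), and AF21 (18) are remarked to CS1 (8) when policed
mls qos map policed-dscp 0 10 18 to 8
```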

Auto QoS VoIP Models

As with legacy AutoQoS-VoIP, there are three deployment options for AutoQoS (SRND4) VoIP: trust, cisco-phone, and cisco-softphone. These updated auto qos voip deployment options—complete with ingress and egress queuing configurations—are illustrated in Figure 2-19.

Figure 2-19 Auto QoS VoIP (SRND4) Models

Each of these auto qos voip deployment options will be detailed in turn.

The first point to be noted is that since the SRND4 version of auto qos voip expands functionality beyond the original AutoQoS-VoIP feature, the administrator must indicate which version of AutoQoS VoIP is desired. By default, simply entering auto qos voip interface-configuration commands will invoke legacy AutoQoS-VoIP configurations; however, if the administrator first enters the auto qos srnd4 global configuration command prior to applying these auto qos voip interface-configuration commands, then the SRND4 versions of auto qos voip will be applied.
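A sketch of this ordering follows (the interface name is illustrative):

```
! Enable the SRND4 versions of AutoQoS globally first
C3750-E(config)# auto qos srnd4
! Then apply the desired auto qos voip option at the interface level
C3750-E(config)# interface GigabitEthernet1/0/5
C3750-E(config-if)# auto qos voip trust
```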

The show run interface command shows that although auto qos voip trust was configured on the interface, this has been converted and replaced with auto qos trust (as this is functionally equivalent on this Layer 2 switch port interface).

This interface trust state and CoS-to-DSCP maps can be verified by the commands:

show mls qos interface

show mls qos maps cos-dscp

A second deployment option offered by the (SRND4) auto qos voip feature is to use the cisco-phone keyword. As previously mentioned, the administrator must first enter auto qos srnd4 in the global configuration prior to entering auto qos voip cisco-phone on a specific interface(s). When auto qos voip cisco-phone is configured on a Layer 2 switch port, it will dynamically extend trust-CoS to Cisco IP Phones; when configured on Layer 3 routed interfaces, it will dynamically extend trust-DSCP to Cisco IP Phones. Additionally, this command will configure data-plane policing/scavenger-class QoS policies on voice, signaling and best-effort traffic, as shown in Example 2-54 and Example 2-55.

! This section attaches the AutoQoS-VoIP-Cisco-Phone (SRND4) Policy-Map to the interface

interface GigabitEthernet1/0/1

description L2-ACCESS-PORT

switchport access vlan 10

switchport voice vlan 110

srr-queue bandwidth share 1 30 35 5

queue-set 2

priority-queue out

mls qos trust device cisco-phone

! AutoQoS has configured a conditional-trust policy for cisco-phone devices

mls qos trust cos

! AutoQoS has configured CoS-trust to be dynamically extended

auto qos voip cisco-phone

spanning-tree portfast

service-policy input AUTOQOS-SRND4-CISCOPHONE-POLICY

! Attaches the AutoQoS-VoIP-Cisco-Phone (SRND4) Policy-Map to the interface

!

<snip>

! This section defines the explicit-default extended IP ACL

ip access-list extended AUTOQOS-ACL-DEFAULT

permit ip any any

!

Example 2-55 shows that the applied version of auto qos voip is srnd4 and, as such, voice is policed to remark if exceeding 128 kbps, signaling is policed to remark if exceeding 32 kbps, and best-effort traffic is policed to remark to scavenger if exceeding 10 Mbps.
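These policing elements can be sketched as follows; the policy-map name matches the one attached in the interface configuration above, but the class names are representative of the AUTOQOS_* naming convention and should be confirmed against Example 2-54:

```
! Sketch of the cisco-phone (SRND4) policing elements; confirm against Example 2-54
policy-map AUTOQOS-SRND4-CISCOPHONE-POLICY
 class AUTOQOS_VOIP_DATA_CLASS
  set dscp ef
  police 128000 8000 exceed-action policed-dscp-transmit
 class AUTOQOS_VOIP_SIGNAL_CLASS
  set dscp cs3
  police 32000 8000 exceed-action policed-dscp-transmit
 class AUTOQOS_DEFAULT_CLASS
  set dscp default
  police 10000000 8000 exceed-action policed-dscp-transmit
```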

This configuration can be verified with the commands:

show mls qos maps policed-dscp

show mls qos interface

show mls qos interface interface x/y policers

show class-map

show policy-map

show policy-map interface

A third deployment option offered by the (SRND4) auto qos voip feature is to use the cisco-softphone keyword. As previously mentioned, the administrator must first enter auto qos srnd4 in the global configuration prior to entering auto qos voip cisco-softphone on specific interface(s).

In addition to the voice and signaling classes, five additional application classes (multimedia conferencing, transactional data, bulk data, scavenger, and best-effort) are automatically defined via class-maps. Each class-map references an associated extended IP access-list. These IP access lists define the TCP and UDP port numbers of the given class of applications, based on the sample ports summarized in Table 2-9. However, it cannot be overemphasized that these are just generic application examples for these classes and the administrator can add/change/delete the access-list entries to match their specific applications.

Auto QoS 1P1Q3T Ingress Queuing Model

The AutoQoS SRND4 ingress queuing model is illustrated in Figure 2-16. These ingress queuing policies are automatically configured along with any other AutoQoS SRND4 QoS model and are shown in Example 2-58.
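A brief sketch of the kind of global commands this 1P1Q3T ingress queuing model uses follows (a subset only; the complete set of mappings appears in Example 2-58, and the weights shown are representative):

```
! Sketch (subset): Q2 is enabled as the ingress priority queue with 30% bandwidth
mls qos srr-queue input priority-queue 2 bandwidth 30
! SRR sharing weights for the non-priority bandwidth
mls qos srr-queue input bandwidth 70 30
! Map voice (DSCP EF) to the ingress priority queue
mls qos srr-queue input dscp-map queue 2 threshold 3 46
```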

Comparing the AutoQoS 1P1Q3T ingress queuing (Example 2-58) with the previously defined manual 1P1Q3T ingress queuing configuration (Example 2-20) will reveal virtually identical queuing models, with the only differences in configuration being that default settings and/or mappings are not shown in the AutoQoS example configuration; additionally, the AutoQoS example includes some mappings of non-standard DSCP values to queues (which, as previously noted, were omitted from previous examples for the sake of simplicity).

This configuration can be verified with the commands:

show mls qos input-queue

show mls qos maps cos-input-q

show mls qos maps dscp-input-q

Auto QoS 1P3Q3T Egress Queuing Model

The AutoQoS SRND4 egress queuing model is illustrated in Figure 2-17. These egress queuing policies are automatically configured along with any other AutoQoS SRND4 QoS model and are shown in Example 2-59.

Comparing this AutoQoS 1P3Q3T egress queuing configuration with the previously defined manual 1P3Q3T egress queuing configuration will reveal virtually identical queuing models, with the only differences being:

Default settings and/or mappings are not shown in the AutoQoS example configuration.

The AutoQoS example includes some mappings of non-standard DSCP values to queues (which, as previously noted, were omitted from previous examples for the sake of simplicity).

Some minor administrative preferences on the part of the platform engineers versus the authors of this document (mainly relating to buffer and threshold fine-tuning).

This configuration can be verified with the commands:

show mls qos queue-set

show mls qos maps cos-output-q

show mls qos maps dscp-output-q

show mls qos interface interface x/y queueing

show mls qos interface interface x/y statistics

Auto QoS and EtherChannels

At the time of writing, AutoQoS is not supported on EtherChannels. However, if AutoQoS SRND4 has been deployed on any other switch port or interface, it is a fairly simple matter to apply the same ingress and egress queuing policies generated by AutoQoS to EtherChannel port-member interfaces. Specifically, only four incremental interface-configuration commands need to be applied to the EtherChannel physical port-member interfaces, as shown in Example 2-60.

Example 2-60 EtherChannel AutoQoS SRND4 Design on a Catalyst 3750-E

! This section configures Trust-DSCP and AutoQoS SRND4 egress queuing across the

! EtherChannel physical port-member interfaces

C3750-E(config-if-range)# mls qos trust dscp

! The physical port-member interfaces are set to statically trust DSCP

C3750-E(config-if-range)# queue-set 2

! The interfaces are assigned to queue-set 2

C3750-E(config-if-range)# srr-queue bandwidth share 1 30 35 5

! The SRR sharing weights are set to allocate 30% BW to Q2

! 35% BW to Q3 and 5% BW to Q4

! Q1 SRR sharing weight is ignored, as it will be configured as a PQ

C3750-E(config-if-range)# priority-queue out

! Q1 is enabled as a strict priority queue

This configuration can be verified with the commands:

show mls qos interface

show mls qos input-queue

show mls qos maps cos-input-q

show mls qos maps dscp-input-q

show mls qos maps cos-output-q

show mls qos maps dscp-output-q

show mls qos interface interface x/y queueing

show mls qos interface interface x/y statistics

Auto QoS Removal

Some administrators may be a bit surprised to see the amount of incremental QoS-related configuration generated by AutoQoS. This surprise may at times even turn into alarm if they attempt a no auto qos… command and still see a significant amount of QoS policies lingering in their switch configuration. The reason the feature has been implemented in this manner is that if an administrator changes a switch port’s QoS role, the switch does not change its entire queuing, mapping, buffer, and threshold configurations, which could potentially adversely affect traffic. Nonetheless, some administrators feel more comfortable with a script that can quickly revert all QoS configurations back to default. To this end, such a script is shown in Example 2-61.

Note This script can only be run when all auto qos commands have been removed from all interfaces.

Cisco Catalyst 4500/4900 and 4500-E/4900M QoS Design

The Catalyst 4500 family of switches provides Layer 2 through Layer 4 network services, including advanced high availability, security, and QoS services in addition to integrated PoE to support unified communications. The Catalyst 4500 and 4900 share common features and syntax and as such are grouped together and discussed as a single switch family, namely the Catalyst 4500 (which may also be designated as “Classic Supervisors”). Similarly, the Catalyst 4500-E and 4900M share common features and syntax and as such are also grouped together and discussed as a single switch family, namely the Catalyst 4500-E (which may also be designated as Supervisor 6-E).

Catalyst 4500/4500-E switches come in various chassis, supervisor, and linecard combinations, which are discussed in turn.

The Catalyst 4500 switch family offers chassis that support 3, 6, 7, and 10 slots; these models include the Catalyst 4503, 4506, 4507R, and 4510R, respectively (the latter two models supporting a redundant supervisor option). Catalyst 4500 chassis provide 6 Gbps of bandwidth per linecard slot.

Multiple supervisor options exist for the Catalyst 4500/4500-E family of switches, including:

Supervisor II-Plus-TS—Supports basic Layer 2-Layer 4 services at up to 64 Gbps (48-millions of packets per second [mpps]) switching; includes 12 ports of wire-speed 10/100/1000 802.3af Power over Ethernet (PoE) and eight wire-speed SFP ports directly on the supervisor engine.

All of the supervisors above—with the exception of the Supervisor 6-E and 7-E—are referred to as Classic Supervisors. There is a major difference in QoS functionality and syntax between the Catalyst 4500 Classic Supervisors and the Supervisor 6-E and 7-E (which is discussed further in the following section).

The Catalyst 4500/4500-E linecards that meet the minimum requirements for medianet switch ports (including Gigabit Ethernet support, as well as a strict-priority hardware queue with at least three additional hardware queues) are, at the time of writing, listed in Table 2-3.

Table 2-3 Catalyst 4500/4500-E Linecards for Medianet Campus Networks

| Line Card | Number of Ports | Port Speed | Port Type | Wire Rate | 4503-E Min/Max Ports | 4506-E/4507R-E/4507R+E Min/Max Ports | 4510R-E/4510R+E Min/Max Ports |
|---|---|---|---|---|---|---|---|
| E-Series 10 Gigabit Ethernet Line Cards | | | | | | | |
| WS-X4712-SFP+E | 12 | 10GBASE-R | SFP+ or SFP | 2.5-to-1 with SFP+; 1:1 with SFP | 12/28 | 12/64 | 12/100 |
| WS-X4606-X2-E | 6 | 10GBASE-X | X2 or SFP with TwinGig Converter Module | 2.5-to-1 with X2; 1:1 with SFP | 6/14 (X2); 12/26 (SFP) | 6/34 (X2); 12/68 (SFP) | 6/34 (X2); 12/68 (SFP) |
| E-Series 10/100/1000 Line Cards | | | | | | | |
| WS-X4748-RJ45V+E | 48 | 10/100/1000 | RJ-45 | 1:1 | 48/96 | 48/240 | 48/384 |
| WS-X4648-RJ45V+E | 48 | 10/100/1000 | RJ-45 | 2-to-1 | 48/96 | 48/240 | 48/384 |
| WS-X4648-RJ45V-E | 48 | 10/100/1000 | RJ-45 | 2-to-1 | 48/96 | 48/240 | 48/384 |
| WS-X4648-RJ45-E | 48 | 10/100/1000 | RJ-45 | 2-to-1 | 48/96 | 48/240 | 48/384 |
| E-Series Gigabit Ethernet SFP Line Cards | | | | | | | |
| WS-X4624-SFP-E | 24 | 1000 | Pluggables | 1:1 | 24/48 | 24/120 | 24/168 |
| WS-X4612-SFP-E | 12 | 1000 | Pluggables | 1:1 | 12/28 | 12/64 | 12/100 |
| Classic 10/100/1000 Line Cards | | | | | | | |
| WS-X4548-RJ45V+ | 48 | 10/100/1000 | RJ-45 PoE IEEE 802.3af, Cisco prestandard and PoEP-ready | 8-to-1 | 48/96 | 48/240 | 48/384 |
| WS-X4548-GB-RJ45V | 48 | 10/100/1000 | RJ-45 PoE IEEE 802.3af and Cisco prestandard | 8-to-1 | 48/96 | 48/240 | 48/384 |
| WS-X4524-GB-RJ45V | 24 | 10/100/1000 | RJ-45 PoE IEEE 802.3af and Cisco prestandard | 4-to-1 | 24/48 | 24/120 | 24/168 |
| WS-X4548-GB-RJ45 | 48 | 10/100/1000 | RJ-45 | 8-to-1 | 48/96 | 48/240 | 48/384 |
| Classic Gigabit Ethernet Fiber (GBIC or SFP) Line Cards | | | | | | | |
| WS-X4306-GB | 6 | 1000BASE-X | GBIC | Yes | 6/12 | 6/30 | 6/42 |
| WS-X4418-GB | 18 | 1000BASE-X | GBIC | 2 ports full; 16 ports 4-to-1 | 18/36 | 18/90 | 18/126 |
| WS-X4448-GB-LX | 48 | 1000BASE-LX | 48 SFPs (included) | 8-to-1 | 48/96 | 48/240 | 48/384 |
| WS-X4448-GB-SFP | 48 | 1000BASE-X | SFP | 8-to-1 | 48/96 | 48/240 | 48/384 |
| WS-X4506-GB-T | 6 + 6 | 10/100/1000; 1000BASE-X | SFP; RJ-45 PoE IEEE 802.3af and Cisco prestandard | Yes | 6/12 | 6/30 | 6/42 |

Platform-Specific QoS Considerations

As mentioned, there is a significant difference in how QoS is implemented on the Catalyst 4500 Classic Supervisor family versus the Catalyst 4500-E Supervisor 6-E family; the former implements a switch-specific QoS (called the switch QoS model), while the latter implements Cisco IOS MQC (called the MQC model) on the switch.

Traffic is classified on ingress, based on trust states, access lists, or class maps. Policers and marking policies can be applied to flows on a per-port, per-VLAN, or even per-port/per-VLAN basis. Egress queuing is based on a 4Q1T or a 1P3Q1T model (the latter being preferred, as it supports the EF PHB), with a platform-specific proprietary congestion avoidance mechanism providing Active Queue Management (AQM), namely Dynamic Buffer Limiting (DBL). DBL tracks the queue length for each traffic flow in the switch; when the queue length of a flow exceeds its limit, DBL drops packets or sets the Explicit Congestion Notification (ECN) bits in the packet headers.

Catalyst 4500 Classic Supervisor syntax is essentially equivalent to the QoS syntax on other Catalyst platforms, with the exception that the mls command prefix is omitted on this platform. Thus mls qos is abbreviated to simply qos on the Catalyst 4500 Classic Supervisors.
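For example, a trust policy that would be configured with mls-prefixed commands on a Catalyst 3750 maps directly to the non-prefixed form on a Classic Supervisor (interface names are illustrative):

```
! Catalyst 3750 family (mls-prefixed syntax)
mls qos
interface GigabitEthernet1/0/1
 mls qos trust dscp
! Equivalent Catalyst 4500 Classic Supervisor syntax (no mls prefix)
qos
interface GigabitEthernet1/1
 qos trust dscp
```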

In contrast, the Catalyst 4500 Series Switch using Supervisor Engine 6-E/7-E employs the MQC QoS model. In this model, QoS is applied via Modular QoS Command-Line Interface (MQC). As such, certain QoS features are implemented differently (and others are not supported; the following sections detail the differences). The Catalyst 4500 Supervisor 6-E/7-E MQC QoS model is shown in Figure 2-21.

Figure 2-21 Catalyst 4500-E (Supervisor 6-E) MQC Model

In the MQC model, QoS policies are applied to a packet as follows:

Step 1 The incoming packet is classified (based on different packet fields, receive port, or VLAN) to belong to a traffic class.

Step 2 Depending on the traffic class, the packet is rate limited/policed and its priority is optionally marked (typically at the edge of the network) so that lower priority packets are dropped or marked with lower priority in the packet fields (DSCP and CoS).

Step 3 After the packet has been marked, it is looked up for forwarding. This action obtains the transmit port and VLAN to transmit the packet.

Step 4 The packet is classified in the output direction based on the transmit port or VLAN. The classification takes into account any marking of the packet by input QoS.

Step 5 Depending on the output classification, the packet is policed, its priority is optionally (re-)marked, and the transmit queue for the packet is determined depending on the traffic class.

Step 6 The transmit queue state is dynamically monitored via DBL and drop threshold configuration to determine whether the packet should be dropped or enqueued for transmission.

Step 7 If eligible for transmission, the packet is enqueued to a transmit queue. The transmit queue is selected based on output QoS classification criteria. The selected queue provides the desired behavior in terms of latency and bandwidth.

Note As the QoS feature set and syntax for the Supervisor 6-E and 7-E are identical, these are collectively referred to as the Supervisor 6-E for the remainder of this design chapter.

Enabling QoS (Classic Supervisors)

Note QoS does not have to be explicitly enabled within the Catalyst 4500-E Supervisor 6-E MQC model, as it is enabled by default.

QoS must be enabled globally on the Catalyst 4500 Classic Supervisors. This is a critical first step to deploying QoS on these platforms. If this small but important step is overlooked, it can lead to frustration when troubleshooting QoS problems: the switch software accepts QoS commands and even displays them within the switch configuration, but none of the QoS commands are active until QoS is globally enabled with the qos command, as shown in Example 2-62.
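A minimal sketch of this step follows (the verification output wording may vary by release, so it is not reproduced here):

```
! Globally enable QoS on a Catalyst 4500 Classic Supervisor
C4500(config)# qos
C4500(config)# end
! Verify the global QoS state
C4500# show qos
```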

Trust Models

Catalyst 4500 Classic Supervisor switch ports can be configured to statically trust CoS or DSCP, or to dynamically and conditionally trust Cisco IP Phones. By default, with QoS enabled, all ports are set to an untrusted state.

In contrast, the Catalyst 4500-E Supervisor 6-E does not support trust CoS, as it considers all interfaces to be trusted (via DSCP-trust) by default; it does, however, support conditional trust.

Trust-CoS Model

A Catalyst 4500 Classic Supervisor switch port can be configured to trust CoS by configuring the interface with the qos trust cos command. However, if an interface is set to trust CoS, then by default it calculates a packet’s internal DSCP to be the incoming packet’s CoS value * 8. While this may be suitable for most markings, this default mapping may not be suitable for VoIP, as VoIP is usually marked CoS 5, which would map by default to DSCP 40 (and not 46, which is the EF PHB as defined by RFC 3246). Therefore, if an interface is set to trust CoS, the default CoS-to-DSCP mapping table should be modified such that CoS 5 maps to DSCP 46, as shown in Example 2-64.
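A minimal sketch of this mapping change on a Classic Supervisor follows (the interface name is illustrative):

```
! Trust CoS on the switch port
C4500(config)# interface GigabitEthernet2/1
C4500(config-if)# qos trust cos
C4500(config-if)# exit
! Correct the default CoS 5 mapping (40) to DSCP EF (46)
C4500(config)# qos map cos 5 to dscp 46
```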

Note As previously mentioned, the Catalyst 4500-E Supervisor 6-E does not support CoS-trust, but considers all interfaces to be trusted—via DSCP-trust—by default.

In Example 2-65, the CoS-to-DSCP mapping value for CoS 5 has been modified from the default mapping of 40 (CoS 5 * 8) to 46 (to match the recommendation from RFC 3246 that realtime applications be marked DSCP 46/EF).

In Example 2-66, the administrative port trust state is set to trust-cos and the current operational port trust state is also trust-cos.

Trust-DSCP Model

Because of the additional granularity of DSCP versus CoS markings, it is generally recommended to trust DSCP rather than CoS (everything else being equal). A Catalyst 4500 Classic Supervisor switch port can be configured to trust DSCP with the qos trust dscp interface command, as shown in Example 2-67.
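A minimal sketch of this static trust policy follows (the interface name is illustrative):

```
! Statically trust DSCP on the switch port
C4500(config)# interface GigabitEthernet2/1
C4500(config-if)# qos trust dscp
```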

Conditional Trust Model

In addition to configuring switch ports to statically trust endpoints, the Catalyst 4500 Classic Supervisor and the Catalyst 4500-E Supervisor 6-E support dynamic, conditional trust with the qos trust device interface command, which can be configured with the cisco-phone keyword to extend trust to Cisco IP phones after these have been verified via CDP negotiation. Additionally, on the Catalyst 4500 Classic Supervisor, the type of trust to be extended (either CoS or DSCP) can be specified. When configuring conditional trust to Cisco IP phones, it is recommended to dynamically extend CoS-trust, as Cisco IP phones can only remark PC QoS markings at Layer 2 (CoS) and not at Layer 3 (DSCP). For other endpoints that do not have this remarking limitation, it is recommended to dynamically extend DSCP-trust (over CoS-trust), because DSCP has greater marking granularity. An example of a dynamic, conditional trust policy that extends CoS-trust to CDP-verified Cisco IP phones is shown in Example 2-68.
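On a Classic Supervisor, such a conditional trust policy can be sketched as (interface illustrative):

```
C4500-CS(config)# interface GigabitEthernet 2/1
C4500-CS(config-if)# qos trust device cisco-phone
 ! Conditionally extends trust to CDP-verified Cisco IP phones
C4500-CS(config-if)# qos trust cos
 ! Specifies that CoS-trust (rather than DSCP-trust) is extended
```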

In Example 2-69, the trust device feature has been enabled, with the trusted device being specified as a cisco-phone. The administrative port trust state—that is, the mode of trust (CoS or DSCP) that is extended dynamically to the IP Phone—is set to trust CoS. Similarly, the current (dynamic) operational port trust state is shown as trusting CoS. This is because there is a Cisco IP phone currently connected to the switch port; if the IP phone is removed from this switch port, the operational port trust state toggles to “untrusted”.

Because the Catalyst 4500 Sup6-E and Sup7-E do not support trust CoS, and because Cisco IP phones can only remark CoS bits on PC-generated traffic, a workaround policy is required for switch ports on these Supervisors that may connect to Cisco IP phones. This workaround is a dynamic conditional trust policy applied to the port in conjunction with a simple MQC policy that explicitly matches CoS 5 (for voice) and CoS 3 (for signaling) and marks the DSCP values of these packets to EF and CS3, respectively (essentially performing a CoS-to-DSCP mapping). This workaround policy is shown in Example 2-70.
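The marking half of this workaround might be sketched as follows; the class map names here are hypothetical, while the CISCO-IPPHONE policy map name is taken from the interface configuration that follows:

```
 ! Hypothetical class-maps matching the phone's Layer 2 markings
C4500-E(config)# class-map match-all VOICE-COS
C4500-E(config-cmap)# match cos 5
C4500-E(config-cmap)# class-map match-all SIGNALING-COS
C4500-E(config-cmap)# match cos 3
 ! The policy-map performs the CoS-to-DSCP mapping via MQC
C4500-E(config)# policy-map CISCO-IPPHONE
C4500-E(config-pmap)# class VOICE-COS
C4500-E(config-pmap-c)# set dscp ef
 ! Voice (CoS 5) is marked to DSCP EF
C4500-E(config-pmap-c)# class SIGNALING-COS
C4500-E(config-pmap-c)# set dscp cs3
 ! Signaling (CoS 3) is marked to DSCP CS3
```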

! This section applies conditional trust and the policy-map to the int(s)

C4500-E(config-pmap-c)# interface GigabitEthernet 3/1

C4500-E(config-if)# switchport access vlan 10

C4500-E(config-if)# switchport voice vlan 110

C4500-E(config-if)# spanning-tree portfast

C4500-E(config-if)# qos trust device cisco-phone

! Applies conditional-trust to the switch port

C4500-E(config-if)# service-policy input CISCO-IPPHONE

! Attaches the CoS-to-DSCP mapping policy-map

This configuration can be verified with the commands:

show qos interface

show class-map

show policy-map

Marking Models

The Catalyst 4500 family of switches supports two main marking models:

Per-port marking model

Per-VLAN marking model

Each model is detailed in the following sections.

Per-Port Marking Model

The per-port marking model (based on Figure 2-10) matches VoIP and signaling traffic from the VVLAN by matching on DSCP EF and CS3, respectively. Multimedia conferencing traffic from the DVLAN is matched by UDP/RTP ports 16384-32767. Signaling traffic is matched on SCCP ports (TCP 2000-2002), as well as on SIP ports (TCP/UDP 5060-5061). Other transactional data traffic, bulk data, and scavenger traffic are matched on various ports (outlined in Figure 2-9). Unlike the Catalyst 3750-E examples, no explicit default class is required, as the implicit class default performs policy actions (such as marking or policing) on the Catalyst 4500. The service policy is applied to an interface range, along with (DSCP-mode) conditional trust, as shown in Example 2-71.
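A compressed sketch of such a per-port marking policy follows; class map definitions are abbreviated, and the class and policy names are assumptions modeled on the patterns used elsewhere in this chapter:

```
 ! Abbreviated: class-maps (VVLAN-VOIP, VVLAN-SIGNALING, etc.)
 ! match on DSCP values, UDP/RTP ports, and TCP/UDP signaling ports
C4500-E(config)# policy-map PER-PORT-MARKING
C4500-E(config-pmap)# class VVLAN-VOIP
C4500-E(config-pmap-c)# set dscp ef
C4500-E(config-pmap-c)# class VVLAN-SIGNALING
C4500-E(config-pmap-c)# set dscp cs3
C4500-E(config-pmap-c)# class MULTIMEDIA-CONFERENCING
C4500-E(config-pmap-c)# set dscp af41
 ! ... remaining data classes marked AF21, AF11, and CS1 ...
C4500-E(config)# interface range GigabitEthernet 2/1-48
C4500-E(config-if-range)# qos trust device cisco-phone
 ! Conditional trust is extended to Cisco IP phones
C4500-E(config-if-range)# service-policy input PER-PORT-MARKING
 ! Attaches the per-port marking policy to the interface range
```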

Note On the Catalyst 4500 Classic Supervisors, marking commands on an interface cannot be enabled until IP routing is enabled globally. If IP routing is disabled globally and you try to configure the service policy on an interface, the configuration is accepted but it does not take effect. You are prompted with the message: “Set command will not take effect since CEF is disabled. Please enable IP routing and CEF globally.” To enable IP routing globally, issue the ip routing and ip cef global configuration commands. After you do this, the marking commands take effect.
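These two prerequisite global commands can be sketched as:

```
C4500-CS(config)# ip routing
 ! Enables IP routing globally
C4500-CS(config)# ip cef
 ! Enables Cisco Express Forwarding, required before interface
 ! marking (set) commands take effect on Classic Supervisors
```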

Note The second interface trust command (qos trust cos) is not supported on the Catalyst 4500-E Supervisor 6-E.

Example 2-72 shows that the show policy-map interface command on the Catalyst 4500 Classic Supervisors dynamically increments counters. However, it should be noted that these are slightly delayed and seem to increment only every 10-15 seconds.

Per-VLAN Marking Model (Classic Supervisors)

An alternative approach for deploying marking policies on the Catalyst 4500 Classic Supervisor platforms is to deploy these on a per-VLAN basis. To do so, the interfaces belonging to the VLANs need to be configured with the qos vlan-based interface command. Additionally, the policy map can be simplified and broken apart, as applicable to each VLAN. Adapting the per-port marking example to a VLAN-based marking policy allows the VVLAN-based policy map to be reduced to only two explicit classes: VoIP and signaling. Similarly, the DVLAN-based policy map is reduced to five explicit classes: multimedia conferencing, signaling, transactional data, bulk data, and scavenger. Both policy maps still retain a default class for all other flows. A per-VLAN marking model is shown in Example 2-73.

Note As the access lists and class maps are identical to the previous example, these are omitted for brevity in this—and in following—examples for this switch platform family.

! This section configures the ingress marking policy-map for the VVLAN

C4500-CS(config)#policy-map VVLAN-MARKING

C4500-CS(config-pmap)# class VVLAN-VOIP

C4500-CS(config-pmap-c)# set dscp ef

! VoIP is trusted (from the VVLAN)

C4500-CS(config-pmap-c)# class VVLAN-SIGNALING

C4500-CS(config-pmap-c)# set dscp cs3

! Signaling is trusted (from the VVLAN)

C4500-CS(config-pmap-c)# class class-default

C4500-CS(config-pmap-c)# set dscp default

! An implicit class-default marks all other VVLAN traffic to DF

! This section configures the ingress marking policy-map for the DVLAN

C4500-CS(config)#policy-map DVLAN-MARKING

C4500-CS(config-pmap)# class MULTIMEDIA-CONFERENCING

C4500-CS(config-pmap-c)# set dscp af41

! Multimedia-conferencing is marked AF41

C4500-CS(config-pmap-c)# class SIGNALING

C4500-CS(config-pmap-c)# set dscp cs3

! Signaling (from the DVLAN) is marked CS3

C4500-CS(config-pmap-c)# class TRANSACTIONAL-DATA

C4500-CS(config-pmap-c)# set dscp af21

! Transactional Data is marked AF21

C4500-CS(config-pmap-c)# class BULK-DATA

C4500-CS(config-pmap-c)# set dscp af11

! Bulk Data is marked AF11

C4500-CS(config-pmap-c)# class SCAVENGER

C4500-CS(config-pmap-c)# set dscp cs1

! Scavenger traffic is marked CS1

C4500-CS(config-pmap-c)# class class-default

C4500-CS(config-pmap-c)# set dscp default

! An implicit class-default marks all other DVLAN traffic to DF

! This section configures the interface(s) for conditional trust,

! with CoS-trust and enables VLAN-based QoS

C4500-CS(config)#interface range GigabitEthernet 2/1-48

C4500-CS(config-if-range)# switchport access vlan 10

C4500-CS(config-if-range)# switchport voice vlan 110

C4500-CS(config-if-range)# spanning-tree portfast

C4500-CS(config-if-range)# qos trust device cisco-phone

! The interface is set to conditionally-trust Cisco IP Phones

C4500-CS(config-if-range)# qos trust cos

! CoS-trust will be dynamically extended to Cisco IP Phones

C4500-CS(config-if-range)# qos vlan-based

! Enables VLAN-based QoS on the interface(s)

! This section attaches the service-policy to the DVLAN interface

C4500-CS(config)#interface Vlan 10

C4500-CS(config-if)# description DVLAN

C4500-CS(config-if)# ip route-cache cef

! Enables IP CEF on the VLAN interface (required for marking)

C4500-CS(config-if)# service-policy input DVLAN-MARKING

! Attaches the DVLAN Per-VLAN Marking policy to the DVLAN interface

! This section attaches the service-policy to the VVLAN interface

C4500-CS(config)#interface Vlan 110

C4500-CS(config-if)# description VVLAN

C4500-CS(config-if)# ip route-cache cef

! Enables IP CEF on the VLAN interface (required for marking)

C4500-CS(config-if)# service-policy input VVLAN-MARKING

! Attaches the VVLAN Per-VLAN Marking policy to the VVLAN interface

This configuration can be verified with the commands:

show qos interface

show class-map

show policy-map

show policy-map interface

Per-VLAN Marking Model (Supervisor 6-E/7-E)

The per-VLAN marking model is essentially the same for the Catalyst 4500-E Supervisor 6-E, except for the final set of interface and VLAN commands. The Catalyst 4500-E does not support the qos vlan-based interface command, nor does it require the qos trust dscp interface command; finally, service policies are attached to VLANs via a VLAN configuration mode (instead of an interface configuration mode), as shown in Example 2-74.
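Assuming the Supervisor 6-E VLAN configuration mode behaves as described (prompts illustrative), the attachment step might be sketched as:

```
C4500-E(config)# vlan configuration 10
C4500-E(config-vlan-config)# service-policy input DVLAN-MARKING
 ! Attaches the DVLAN marking policy in VLAN-configuration mode
C4500-E(config)# vlan configuration 110
C4500-E(config-vlan-config)# service-policy input VVLAN-MARKING
 ! Attaches the VVLAN marking policy in VLAN-configuration mode
```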

Policing Models

The Catalyst 4500 family of switches supports several policing models, including:

Per-VLAN policing model—This model attaches policers to logical VLAN interfaces; however, it has an inherent limitation: it supports only a single aggregate policer per VLAN. Because the number of ports associated with a VLAN is dynamic and variable, this model is quite restricted in overall policing effectiveness. Therefore, it is generally recommended to use the per-port/per-VLAN policing model instead, as it offers more discrete policing options.

User-Based Rate Limiting (UBRL) models—This model (supported on the Supervisor V-10GE only) applies flow-based policers to Layer 3 interfaces to police microflows on a per-source or per-destination basis; UBRL may be applied on a per-port or per-port/per-VLAN basis.

The per-port and per-port/per-VLAN policing models and the UBRL models for the Catalyst 4500 family of switches are detailed in the following sections.

Per-Port Policing Model (Classic Supervisors)

The per-port policing model is quite similar to the per-port marking model, except that the policy action includes a policing function—in some cases to drop, in others to remark. As shown in Figure 2-10, the VoIP and signaling traffic from the VVLAN can be policed to drop at 128 kbps and 32 kbps, respectively (as any excessive traffic matching these criteria would be indicative of network abuse). Similarly, the multimedia conferencing, signaling, and scavenger traffic from the DVLAN can be policed to drop. On the other hand, data plane policing policies can be applied to transactional, bulk, and best effort data traffic, such that these flows are subject to being remarked (but not dropped at the ingress edge) when severely out-of-profile. Remarking is performed by configuring a policed-DSCP map with the global configuration command qos map dscp policed, which specifies which DSCP values are subject to remarking if out-of-profile and what value these should be remarked to (which, in the case of data plane policing/scavenger class QoS policies, is CS1/DSCP 8). A per-port policing model for the Catalyst 4500 Classic Supervisor is shown in Example 2-75.
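A minimal sketch of the policed-DSCP map and one such markdown policer follows; the exact police syntax and the policed-dscp-transmit exceed action are assumptions for the Classic Supervisor:

```
C4500-CS(config)# qos map dscp policed 0 10 18 to dscp 8
 ! DF (0), AF11 (10), and AF21 (18) are remarked to CS1 (8)
 ! when found to be out-of-profile
C4500-CS(config)# policy-map PER-PORT-POLICING
C4500-CS(config-pmap)# class TRANSACTIONAL-DATA
C4500-CS(config-pmap-c)# set dscp af21
C4500-CS(config-pmap-c)# police 10m 8000 conform-action transmit exceed-action policed-dscp-transmit
 ! Trans-data is policed to remark (via the policed-DSCP map) at 10 Mbps
```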

Note Catalyst 4500 software allows policing rates to be entered using the postfixes k (for kilobits), m (for megabits), and g (for gigabits), as shown in Example 2-16. Additionally, decimal points are allowed in conjunction with these postfixes; for example, a rate of 10.5 Mbps could be entered with the policy map command police 10.5m. While these policing rates are converted to their full bps values within the configuration, the postfixes make entering these rates more user-friendly and less error prone (as could easily be the case when having to enter up to 10 zeros to define a policing rate).

In Example 2-76, the policing DSCP-markdown mapping is shown. The first digit of the DSCP value of a packet offered to a policer is shown along the Y-axis of the table; the second digit of the DSCP value of a packet offered to a policer is shown along the X-axis of the table. For example, the DSCP value for the transactional data application class (AF21/18) is found in the row d1=1 and column d2=8. And, as shown, packets with this offered DSCP value (along with DF/0 and AF11/10) are remarked to CS1 (08) if found to be in excess of the policing rate.

Per-Port Policing Model (Supervisor 6-E)

The per-port policing model is essentially the same for the Catalyst 4500-E Supervisor 6-E, except that it does not require a global policed-DSCP map and thus the policing commands are slightly different; also, no trust-DSCP statement is required on the interface(s), as shown in Example 2-77.

! Multimedia-conferencing is marked AF41 and policed to drop at 5 Mbps

C4500-E(config-pmap)# class SIGNALING

C4500-E(config-pmap-c)# set dscp cs3

C4500-E(config-pmap-c)# police 32k bc 8000

C4500-E(config-pmap-c-police)# conform-action transmit

C4500-E(config-pmap-c-police)# exceed-action drop

! (DVLAN) Signaling is marked CS3 and policed to drop at 32 kbps

C4500-E(config-pmap)# class TRANSACTIONAL-DATA

C4500-E(config-pmap-c)# set dscp af21

C4500-E(config-pmap-c)# police 10m bc 8000

C4500-E(config-pmap-c-police)# conform-action transmit

C4500-E(config-pmap-c-police)# exceed-action set-dscp-transmit cs1

! Trans-data is marked AF21 and policed to remark (to CS1) at 10 Mbps

C4500-E(config-pmap)# class BULK-DATA

C4500-E(config-pmap-c)# set dscp af11

C4500-E(config-pmap-c)# police 10m bc 8000

C4500-E(config-pmap-c-police)# conform-action transmit

C4500-E(config-pmap-c-police)# exceed-action set-dscp-transmit cs1

! Bulk-data is marked AF11 and policed to remark (to CS1) at 10 Mbps

C4500-E(config-pmap)# class SCAVENGER

C4500-E(config-pmap-c)# set dscp cs1

C4500-E(config-pmap-c)# police 10m bc 8000

C4500-E(config-pmap-c-police)# conform-action transmit

C4500-E(config-pmap-c-police)# exceed-action drop

! Scavenger traffic is marked CS1 and policed to drop at 10 Mbps

C4500-E(config-pmap)# class class-default

C4500-E(config-pmap-c)# set dscp default

C4500-E(config-pmap-c)# police 10m bc 8000

C4500-E(config-pmap-c-police)# conform-action transmit

C4500-E(config-pmap-c-police)# exceed-action set-dscp-transmit cs1

! The implicit default class marks all other traffic to DF

! and polices all other traffic to remark (to CS1) at 10 Mbps

! This section attaches the service-policy to the interface(s)

C4500-E(config)#interface range GigabitEthernet 2/1-48

C4500-E(config-if-range)# switchport access vlan 10

C4500-E(config-if-range)# switchport voice vlan 110

C4500-E(config-if-range)# spanning-tree portfast

C4500-E(config-if-range)# qos trust device cisco-phone

! The interface is set to conditionally-trust Cisco IP Phones

C4500-E(config-if-range)# service-policy input PER-PORT-POLICING

! Attaches the Per-Port Policing policy to the interface(s)

Note Advanced network administrators can leverage the Catalyst 4500-E Supervisor 6-E’s support of three color markers—either the RFC 2697 single rate three color marker (srTCM) or the RFC 2698 two rate three color marker (trTCM)—such that the exceeding policing action for the transactional data and bulk data policers would be to remark to AF22 and AF12 (respectively), while the violating policing action for these classes would be to remark to CS1.

This configuration can be verified with the commands:

show qos interface

show class-map

show policy-map

show policy-map interface

Per-Port/Per-VLAN Policing Model (Classic Supervisors)

An alternative—and more discrete—approach for deploying policing policies on the Catalyst 4500 platforms is to deploy these on a per-port/per-VLAN basis. The Catalyst 4500 has a very elegant syntax for deploying per-port/per-VLAN policies (as compared to the Catalyst 3750-E syntax, for example), where policies are applied within a VLAN mode within a switch port’s interface configuration mode, as shown in Example 2-78.

In Example 2-78, three policers are applied to the VVLAN of a given access edge trunk port: one to police VoIP to 128 kbps, another to police signaling to 32 kbps, and a third to police everything else to 32 kbps. On the other hand, six policers are applied to the DVLAN of a given access edge trunk port: one to police multimedia conferencing traffic to drop at 5 Mbps, a second to police signaling to drop at 32 kbps, a third to police transactional data to remark (to CS1) at 10 Mbps, a fourth to police bulk data to remark (to CS1) at 10 Mbps, a fifth to police scavenger to drop at 10 Mbps, and a sixth to police everything else to remark (to CS1) at 10 Mbps.

As in the previous examples, remarking is performed by configuring a policed-DSCP map with the global configuration command qos map dscp policed, which specifies which DSCP values are subject to remarking (if out-of-profile) and what these values should be remarked to (which in the case of scavenger-class QoS policies, the remarking value is CS1/DSCP 8).
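On the Classic Supervisors, the per-port/per-VLAN attachment might be sketched as follows; the vlan-range keyword and the policy names (reused from the Supervisor 6-E example later in this section) are assumptions:

```
C4500-CS(config)# interface range GigabitEthernet 2/1-48
C4500-CS(config-if-range)# qos trust device cisco-phone
 ! Conditional trust is extended to Cisco IP phones
C4500-CS(config-if-range)# vlan-range 10
C4500-CS(config-if-vlan-range)# service-policy input DVLAN-POLICERS
 ! Attaches the DVLAN policers to VLAN 10 on these ports
C4500-CS(config-if-vlan-range)# exit
C4500-CS(config-if-range)# vlan-range 110
C4500-CS(config-if-vlan-range)# service-policy input VVLAN-POLICERS
 ! Attaches the VVLAN policers to VLAN 110 on these ports
```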

Per-Port/Per-VLAN Policing Model (Supervisor 6-E/7-E)

The per-port/per-VLAN policing model is essentially the same for the Catalyst 4500-E Supervisor 6-E, except that it does not require a global policed-DSCP map and thus the policing commands are slightly different; also no trust-DSCP statement is required on the interface(s), as shown in Example 2-79.

! Multimedia-conferencing is marked AF41 and policed to drop at 5 Mbps

C4500-E(config-pmap)# class SIGNALING

C4500-E(config-pmap-c)# set dscp cs3

C4500-E(config-pmap-c)# police 32k bc 8000

C4500-E(config-pmap-c-police)# conform-action transmit

C4500-E(config-pmap-c-police)# exceed-action drop

! (DVLAN) Signaling is marked CS3 and policed to drop at 32 kbps

C4500-E(config-pmap)# class TRANSACTIONAL-DATA

C4500-E(config-pmap-c)# set dscp af21

C4500-E(config-pmap-c)# police 10m bc 8000

C4500-E(config-pmap-c-police)# conform-action transmit

C4500-E(config-pmap-c-police)# exceed-action set-dscp-transmit cs1

! Trans-data is marked AF21 and policed to remark (to CS1) at 10 Mbps

C4500-E(config-pmap)# class BULK-DATA

C4500-E(config-pmap-c)# set dscp af11

C4500-E(config-pmap-c)# police 10m bc 8000

C4500-E(config-pmap-c-police)# conform-action transmit

C4500-E(config-pmap-c-police)# exceed-action set-dscp-transmit cs1

! Bulk-data is marked AF11 and policed to remark (to CS1) at 10 Mbps

C4500-E(config-pmap)# class SCAVENGER

C4500-E(config-pmap-c)# set dscp cs1

C4500-E(config-pmap-c)# police 10m bc 8000

C4500-E(config-pmap-c-police)# conform-action transmit

C4500-E(config-pmap-c-police)# exceed-action drop

! Scavenger traffic is marked CS1 and policed to drop at 10 Mbps

C4500-E(config-pmap)# class class-default

C4500-E(config-pmap-c)# set dscp default

C4500-E(config-pmap-c)# police 10m bc 8000

C4500-E(config-pmap-c-police)# conform-action transmit

C4500-E(config-pmap-c-police)# exceed-action set-dscp-transmit cs1

! The implicit default class marks all other traffic to DF

! and polices all other traffic to remark (to CS1) at 10 Mbps

! This section attaches the policy to the VLANs on a Per-Port basis

C4500-E(config)#interface range GigabitEthernet 2/1-48

C4500-E(config-if-range)# switchport access vlan 10

C4500-E(config-if-range)# switchport voice vlan 110

C4500-E(config-if-range)# spanning-tree portfast

C4500-E(config-if-range)# qos trust device cisco-phone

! The interface is set to conditionally-trust Cisco IP Phones

C4500-E(config-if-range)# vlan 10

C4500-E(config-if-vlan-range)# service-policy input DVLAN-POLICERS

! Attaches the Per-Port/Per-VLAN DVLAN Policing policy to the

! DVLAN of the trunked interface(s)

C4500-E(config-if-range)# vlan 110

C4500-E(config-if-vlan-range)# service-policy input VVLAN-POLICERS

! Attaches the Per-Port/Per-VLAN VVLAN Policing policy to the

! VVLAN of the trunked interface(s)

Note Advanced network administrators can leverage the Catalyst 4500-E Supervisor 6-E’s support of three color markers—either the RFC 2697 single rate three color marker (srTCM) or the RFC 2698 two rate three color marker (trTCM)—such that the exceeding policing action for the transactional data and bulk data policers would be to remark to AF22 and AF12 (respectively), while the violating policing action for these classes would be to remark to CS1.

This configuration can be verified with the commands:

show qos interface

show class-map

show policy-map

show policy-map interface

show policy-map interface interface-name vlan vlan-number

Per-Port User-Based Rate Limiting (Supervisor V-10GE)

UBRL leverages microflow policing capabilities to dynamically learn traffic flows and rate limit each unique flow to an individual rate. As such, it is a highly effective and efficient policing tool, particularly at the distribution layer in a medianet campus network.

UBRL is available on Supervisor Engine V-10GE with NetFlow support. UBRL can be applied to ingress traffic on routed interfaces and is typically used in environments where a per-user, granular rate limiting mechanism is required, such as at the distribution layer, to provide a second line of policing defense in the campus. Like other policers, UBRL can be used to drop or remark exceeding flows.

A flow is defined by a five-tuple (source IP address, destination IP address, IP protocol field, and Layer 4 source and destination ports) that is the same for each packet in the flow. Flow-based policers apply a single policy to discrete flows without the administrator having to specify the virtually infinite tuple combinations. UBRL can also be applied with source or destination flow masks; these masks apply an aggregate microflow policing policy to multiple flows sharing the same source or destination IP address.

In the per-port UBRL Model, a class map matches on a microflow basis and aggregates these by source addresses. Then a policer applies an aggregate limit to all microflows sharing a common source IP address, remarking traffic in excess of the policing rate.

Remarking is performed by configuring a policed-DSCP map with the global configuration command qos map dscp policed, which specifies which DSCP values are subject to remarking (if out-of-profile) and what these values should be remarked to (which in the case of scavenger class QoS policies, the remarking value is CS1/DSCP 8).

UBRL is supported on Layer 3 interfaces and can be applied on either a per-port or per-port/per-VLAN-basis, as shown in Example 2-80 and Example 2-81, respectively.

In Example 2-80, the campus distribution block is using a routed access design and, as such, has Layer 3 interfaces (TenGigabitEthernet 1/1 and 1/2) connecting it to the access layer switches. UBRL is applied to all flows to ensure that any endpoint transmitting at more than 5% of the capacity (an example value) of the access edge 10/100/1000 switch ports is subject to data plane policing/scavenger class QoS.
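Such a per-port UBRL policy might be sketched as follows; the class and policy names are hypothetical, and the match flow and police syntax are assumptions for the Supervisor V-10GE:

```
 ! Hypothetical class-map aggregating microflows by source address
C4500-V10GE(config)# class-map match-all SOURCE-FLOWS
C4500-V10GE(config-cmap)# match flow ip source-address
C4500-V10GE(config)# policy-map UBRL
C4500-V10GE(config-pmap)# class SOURCE-FLOWS
C4500-V10GE(config-pmap-c)# police 50m 8000 conform-action transmit exceed-action policed-dscp-transmit
 ! 50 Mbps is 5% of a 1 Gbps access edge port (an example value);
 ! exceeding flows are remarked via the policed-DSCP map
C4500-V10GE(config)# interface TenGigabitEthernet 1/1
C4500-V10GE(config-if)# service-policy input UBRL
 ! Applies the UBRL policer to the routed downlink
```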

Per-Port/Per-VLAN User-Based Rate Limiting (Supervisor V-10GE)

In contrast with the previous example, if the campus distribution block is using a Layer 2/Layer 3 design, and as such has Layer 2 trunked interfaces (TenGigabitEthernet 1/1 and 1/2) connecting it to the access layer switches, then UBRL can be applied on a per-port/per-VLAN basis. In this case, separate UBRL policies can be applied to each VLAN traversing the trunked interfaces—via per-port/per-VLAN UBRL policies—as each VLAN is routed through the switch.

To highlight policy flexibility, additional levels of classification are included in this second UBRL example (and can incidentally also be applied to the per-port UBRL model). Instead of applying a blanket UBRL policy to all endpoints, separate UBRL policies can be applied to different types of endpoints or application-and-endpoint combinations. For example, VoIP from Cisco IP phones in the VVLAN can be rate limited to 128 kbps, while signaling traffic from these endpoints can be limited to 32 kbps. Similarly, TelePresence endpoints in the VVLAN (which mark their media flows to CS4) can be limited to 25 Mbps. All other endpoint-generated traffic in the VVLAN can be limited to 32 kbps per endpoint.

Similar policy granularity can be applied to the DVLAN policer, if desired. However, in this example, a simplified DVLAN policer is applied to all flows to ensure that any DVLAN endpoint transmitting at more than 5% of the capacity (an example value) of the access edge 10/100/1000 switch ports is subject to data plane policing/scavenger class QoS.

Static DSCP-trust is configured on the physical ports and the per-port/per-VLAN UBRL policers are applied to their respective VLANs within the trunked interface, as shown in Example 2-81.

Queuing Models

The Catalyst 4500 switch family supports egress queuing models only. On the Classic Supervisor branch, queuing can be configured to operate in either a 4Q1T mode or a 1P3Q1T mode (the latter of which is recommended for medianet campus networks, as it supports the EF PHB); on the Supervisor 6-E branch, queuing is configured via MQC and provides a flexible queuing structure, up to a maximum of 1P7Q1T.

Additionally, the Catalyst 4500 family uses a platform-specific congestion avoidance algorithm to provide Active Queue Management (AQM), namely Dynamic Buffer Limiting (DBL). DBL tracks the queue length for each traffic flow in the switch. When the queue length of a flow exceeds its limit, DBL drops packets or sets the Explicit Congestion Notification (ECN) bits in the packet headers.

Furthermore, the Catalyst 4500 supports DSCP-to-queue mapping on both branches.

The Catalyst 4500 Classic Supervisor 1P3Q1T+DBL model and the Supervisor 6-E 1P7Q1T+DBL models are detailed in the following sections.

Egress Queuing 1P3Q1T+DBL (Classic Supervisors) Model

On the Catalyst 4500 Classic Supervisors, queue 3 can be enabled as a strict-priority queue. Once enabled, Q4 can be used as a guaranteed bandwidth queue, Q2 can be used as the default best effort queue, and Q1 can be used as a less than best effort (scavenger) queue. Bandwidth can be assigned as: 5%, 35%, 30%, and 30% for queues 1 through 4, respectively.

DBL can be enabled on all DSCP values, with the exception of DSCP values that are mapped to the PQ (specifically, CS4/32, CS5/40, and EF/46), as this may cause drops to occur on these real time flows. Additionally, DBL can be enabled to mark the IP Explicit Congestion Notification (IP ECN) bits within the IP ToS byte in the event of congestion. A service policy can be configured to enable DBL on all flows (except those already noted) and applied to each interface on which queuing is enabled.

Once these queues have been configured, then VoIP (EF), broadcast video (CS5), and realtime interactive (CS4) traffic can be mapped to the strict priority queue (Q3). Network control (CS7), internetwork control (CS6), signaling (CS3), and management (CS2) traffic can be mapped to Q4, along with multimedia conferencing (AF4), multimedia streaming (AF3), and transactional data (AF2). Best effort traffic is sent to the default queue (Q2), while bulk data (AF1) and scavenger (CS1) traffic are mapped to the deferential queue (Q1). These 1P3Q1T+DBL egress queuing mappings for the Catalyst 4500 Classic Supervisor are shown in Figure 2-22.
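A condensed sketch of this 1P3Q1T configuration follows; the qos map dscp ... to tx-queue and tx-queue bandwidth percent syntax are assumptions for the Classic Supervisors, and only the PQ mappings are shown:

```
C4500-CS(config)# qos map dscp 32 40 46 to tx-queue 3
 ! Maps CS4, CS5, and EF to Q3 (the strict-priority queue);
 ! the remaining DSCP-to-queue mappings follow Figure 2-22
C4500-CS(config)# interface range GigabitEthernet 1/1 - 48
C4500-CS(config-if-range)# tx-queue 1
C4500-CS(config-if-tx-queue)# bandwidth percent 5
 ! Q1 is the less than best effort (scavenger) queue
C4500-CS(config-if-tx-queue)# tx-queue 2
C4500-CS(config-if-tx-queue)# bandwidth percent 35
 ! Q2 is the default best effort queue
C4500-CS(config-if-tx-queue)# tx-queue 3
C4500-CS(config-if-tx-queue)# priority high
C4500-CS(config-if-tx-queue)# bandwidth percent 30
 ! Q3 is enabled as the strict-priority queue
C4500-CS(config-if-tx-queue)# tx-queue 4
C4500-CS(config-if-tx-queue)# bandwidth percent 30
 ! Q4 is the guaranteed-bandwidth queue
```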

Example 2-84 shows the egress DSCP-to-queue mappings. The first digit of the DSCP value of a packet is shown along the Y-axis of the table; the second digit of the DSCP value of a packet is shown along the X-axis of the table. The mapping table corresponds to Figure 2-22. It can be noted that CS4 (DSCP 32), CS5 (DSCP 40), and EF (DSCP 46) are all mapped to Q3 (the PQ). It should also be noted that internal DSCP values 32 through 47 are mapped to Q3 by default, which is why the table shows additional values being mapped to this queue.

Example 2-85 shows that interface GigabitEthernet 1/1 has been configured such that Q1 through Q4 receive 5%, 35%, 30%, and 30% (respectively) of the interface bandwidth (1 Gbps) and that Q3 has been enabled as a high priority/strict priority queue.

Egress Queuing 1P7Q1T+DBL (Supervisor 6-E/7-E) Model

The Catalyst 4500-E Supervisor 6-E hardware supports (up to) eight transmit queues per port. Queues are assigned when an output policy is attached to a port with one or more queuing related actions for one or more classes of traffic. Because there are only eight queues per port, there can be at most eight classes of traffic (including the reserved class, class default) with queuing actions defined.

On the Catalyst 4500-E Supervisor 6-E, only one transmit queue on a port can be configured as a strict priority queue (which, in effect, constitutes a hardware low latency queue) with the priority policy-map class action command. The priority queue is serviced first, until it is empty or until its rate limit is reached. Only one traffic stream can be destined for the priority queue per class-level policy (in other words, multiple hardware LLQs are not supported on the Supervisor 6-E). The Supervisor 6-E supports an unconditional (explicit) policer to rate limit packets enqueued to the strict priority queue. When the priority queue is configured on one class of a policy map without a policer, only bandwidth remaining percent is accepted on the other classes (guaranteeing each a minimum share of whatever bandwidth remains after the priority queue has been serviced); however, when the priority queue is configured with a policer, then either bandwidth percent or bandwidth remaining percent is accepted on the other queuing classes.

However, if queuing policies are to be applied to EtherChannel interfaces, then it is recommended not to police the priority queue. This is because two policy maps would be needed in this case: one policy-map would be needed to police the priority queue (which would have to be applied to the logical EtherChannel interface in the egress direction) and a second policy-map would be needed to define the queuing policy (using bandwidth remaining percent), which would be applied to all EtherChannel physical port-member interfaces in the egress direction. Thus to simplify the queuing policy and to increase its portability and modularity, the priority queue is not policed in Example 2-86.

Additionally, as with the Classic Supervisors, DBL can be enabled on a per-class basis, but is most effective when applied against TCP-based traffic flows (as opposed to UDP-based traffic flows).

Thus the Catalyst 4500-E Supervisor 6-E can be configured to operate in a 1P7Q1T+DBL mode. VoIP (EF), broadcast video (CS5), and realtime interactive (CS4) flows can be assigned to the strict priority queue. Network and internetwork control (CS6 and CS7, respectively), along with signaling and management (CS3 and CS2, respectively), can all share a control/management queue. This allows for dedicated queues to be provisioned for multimedia conferencing (AF4), multimedia streaming (AF3), transactional data (AF2), and bulk data (AF1). Also, scavenger (CS1) traffic can share a bandwidth-constrained “less than best effort” queue, while all other traffic is assigned to the default/best effort queue. The recommended 1P7Q1T Sup 6-E egress queuing configuration for the C4500-E Supervisor 6-E is illustrated in Figure 2-23.
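A condensed sketch of such a 1P7Q1T+DBL policy follows; the queuing class map names are hypothetical (each would match the DSCP values shown in Figure 2-23), and the priority queue is deliberately left unpoliced per the EtherChannel discussion earlier:

```
C4500-E(config)# policy-map 1P7Q1T
C4500-E(config-pmap)# class PRIORITY-QUEUE
C4500-E(config-pmap-c)# priority
 ! VoIP (EF), broadcast video (CS5), and realtime interactive (CS4)
C4500-E(config-pmap-c)# class CONTROL-MGMT-QUEUE
C4500-E(config-pmap-c)# bandwidth remaining percent 10
 ! CS7, CS6, CS3, and CS2 share the control/management queue
C4500-E(config-pmap-c)# class MULTIMEDIA-CONFERENCING-QUEUE
C4500-E(config-pmap-c)# bandwidth remaining percent 10
C4500-E(config-pmap-c)# dbl
 ! DBL is most effective on TCP flows
 ! ... multimedia streaming, transactional data, bulk data,
 ! and scavenger queues follow the same pattern ...
C4500-E(config-pmap-c)# class class-default
C4500-E(config-pmap-c)# bandwidth remaining percent 25
C4500-E(config-pmap-c)# dbl
C4500-E(config)# interface range TenGigabitEthernet 1/1-2
C4500-E(config-if-range)# service-policy output 1P7Q1T
 ! Attaches the egress queuing policy
```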

Note As noted within the comments in Example 2-86, unique class map names must be used for these egress queuing policies, so that logical incompatibilities—resulting in classification errors—are not introduced.

Example 2-87 shows various queuing classes and their associated packet and byte counts, including 26,268 queuing drops noted on the scavenger-queue.

EtherChannel QoS Models

Similar to the trust and queuing policies on the Catalyst 4500/4500-E family, there are two sets of EtherChannel QoS models: one for the Classic Supervisors and another for the Supervisor 6-E/7-E series. Each of these will be presented in turn.

Note As noted in the previous section, the queuing policies will only attach to EtherChannel port-member physical interfaces if the priority queue is not explicitly policed. If policing the priority queue is desired, then a separate policy map needs to be constructed to do so and attached to the logical EtherChannel interface in the egress direction.

Control Plane Policing

The Catalyst 4500 Series switches support CoPP on all supervisor engines compatible with Cisco IOS release 12.2(31)SG and later. On this platform, CoPP is implemented in hardware in a centralized, non-distributed fashion. CoPP policies are centrally configured under the control plane configuration mode and then enforced in hardware by the classification TCAM and QoS policers of the supervisor engine. This CoPP model is shown in Figure 2-24.

Figure 2-24 Catalyst 4500 Control Plane Policing Implementation

CoPP Configuration

The Catalyst 4500 implementation of CoPP uses the modular QoS command line interface (MQC) to define traffic classification criteria and to specify the configurable policy actions for the classified traffic. MQC uses class maps to define packets for a particular traffic class. After you have classified the traffic, you can create policy maps to enforce policy actions for the identified traffic. The control-plane global configuration command allows the CoPP service policy to be directly attached to the control plane.

Catalyst 4500 CoPP also supports the definition of non-IP traffic classes in addition to IP traffic classes. Thus, instead of using the default class to handle all non-IP traffic, you can define separate policies for non-IP protocols, such as ARP, IPX, BPDUs, CDP, and SSTP. This results in better and more granular control over these protocols.

One particular characteristic of Catalyst 4500 CoPP is that the CoPP policy must be named system-cpp-policy. In fact, system-cpp-policy is the only policy map that can be attached to the control-plane. To facilitate the configuration of the system-cpp-policy, Catalyst 4500’s CoPP provides a global macro function (called system-cpp) that automatically generates and applies CoPP policies to the control plane. The resulting configuration uses a collection of system-defined class maps for common Layer 3 and Layer 2 control plane traffic. The names of all system-defined CoPP class maps and their matching ACLs contain the prefix system-cpp-. By default, no action is specified on any of the system predefined traffic classes.
Table 2-4 lists the predefined system ACLs.

Table 2-4 Catalyst 4500 System Pre-Defined CoPP ACLs

Pre-defined Named ACL              Description

system-cpp-dot1x                   MAC DA = 0180.C200.0003
system-cpp-lldp                    MAC DA = 0180.C200.000E
system-cpp-mcast-cfm               MAC DA = 0100.0CCC.CCC0 - 0100.0CCC.CCC7
system-cpp-ucast-cfm               MAC DA = 0100.0CCC.CCC0
system-cpp-bpdu-range              MAC DA = 0180.C200.0000 - 0180.C200.000F
system-cpp-cdp                     MAC DA = 0100.0CCC.CCCC (UDLD/DTP/VTP/PAgP)
system-cpp-sstp                    MAC DA = 0100.0CCC.CCCD
system-cpp-cgmp                    MAC DA = 0100.0CDD.DDDD
system-cpp-ospf                    IP Protocol = OSPF, IP DA matches 224.0.0.0/24
system-cpp-igmp                    IP Protocol = IGMP, IP DA matches 224.0.0.0/3
system-cpp-pim                     IP Protocol = PIM, IP DA matches 224.0.0.0/24
system-cpp-all-systems-on-subnet   IP DA = 224.0.0.1
system-cpp-all-routers-on-subnet   IP DA = 224.0.0.2
system-cpp-ripv2                   IP DA = 224.0.0.9
system-cpp-ip-mcast-linklocal      IP DA = 224.0.0.0/24
system-cpp-dhcp-cs                 IP Protocol = UDP, L4 SrcPort = 68, L4 DstPort = 67
system-cpp-dhcp-sc                 IP Protocol = UDP, L4 SrcPort = 67, L4 DstPort = 68
system-cpp-dhcp-ss                 IP Protocol = UDP, L4 SrcPort = 67, L4 DstPort = 67

In addition to the predefined classes, you can configure your own class maps matching other control plane traffic. In order to take effect, these user-defined class maps need to be added to the system-cpp-policy policy-map.

To summarize, CoPP is enabled on Catalyst 4500 Series switches by performing these steps:

Step 1 Enable QoS with the qos global configuration command.

Step 2 Run the macro global apply system-cpp global macro to create the system-cpp-policy policy-map and attach it to the control-plane.

Step 3 Optionally, define the necessary ACLs to be used to match your own traffic classes.

Step 4 Classify the control plane traffic using the class-map command.

Step 5 After the traffic is classified, configure a policy-map that applies a police action to each class, indicating whether to permit all packets, drop all packets, or drop packets crossing a specified rate limit for that particular class.
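
Taken together, the steps above might look as follows on a Classic Supervisor. The telnet class, ACL name, and policing rate are illustrative examples only, and the exact police syntax varies by supervisor engine and IOS release:

C4500(config)# qos
C4500(config)# macro global apply system-cpp
C4500(config)# ip access-list extended COPP-TELNET-ACL
C4500(config-ext-nacl)# permit tcp any any eq telnet
C4500(config)# class-map COPP-TELNET
C4500(config-cmap)# match access-group name COPP-TELNET-ACL
C4500(config)# policy-map system-cpp-policy
C4500(config-pmap)# class COPP-TELNET
C4500(config-pmap-c)# police 32000 1000 conform-action transmit exceed-action drop
! The macro has already attached system-cpp-policy to the control plane,
! so user-defined classes added to it take effect immediately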

CoPP Considerations and Restrictions

The following are important considerations and known restrictions that should be taken into account prior to configuring CoPP on the Catalyst 4500:

CoPP is not enabled unless QoS is globally enabled and a police action is specified.

Only ingress CoPP is supported, so only the input keyword is supported in control plane-related CLIs.

Use the system-defined class maps for policing control plane traffic.

ARP support is limited to gratuitous ARPs (destination MAC in the 0180.C200.0020 - 0180.C200.002F range). Broadcast ARPs are not currently supported by CoPP.

Control plane traffic can be policed only using CoPP. Traffic cannot be policed at the input interface or VLAN even though a policy map containing the control plane traffic is accepted when the policy map is attached to an interface or VLAN.

System-defined class maps cannot be used in policy maps for regular QoS.

Use ACLs and class maps to identify data plane and management plane traffic that is handled by the CPU. User-defined class maps should be added to the system-cpp-policy policy map for CoPP.

The policy map named system-cpp-policy is dedicated for CoPP. When attached to the control plane, it cannot be detached.

The default system-cpp-policy policy map does not define actions for the system-defined class maps, which means no policing.

The only action supported in system-cpp-policy policy map is police.

Do not use the log keyword in the CoPP policy ACLs.

Both MAC and IP ACLs can be used to define data plane and management plane traffic classes.

The exceeding action policed-dscp-transmit is not supported for CoPP.

CoPP Model

CoPP can be deployed on the Catalyst 4500 in one of two main ways:

The global macro macro global apply system-cpp can be used to pre-configure CoPP access lists, class maps, and a system-cpp-policy policy map (with no class actions configured). This template can then be modified and tuned by the administrator to suit specific environments; this is the recommended approach for Catalyst 4500 Classic Supervisors.

The CoPP policy can be generated manually.

In Example 2-90, CoPP has been deployed manually (to keep the policy as consistent as possible between the Catalyst 4500 and 6500 examples), in line with the recommendations for CoPP class definitions and deployment models presented earlier in this chapter.

Note As previously mentioned, to apply this policy to the control plane of a Catalyst 4500 Classic Supervisor, the global macro macro global apply system-cpp should be added to the configuration above (prior to the definition of the system-cpp-policy policy map). Additionally, as the Catalyst 4500 Classic Supervisors only support single rate policers, the policing commands need to be adapted to a single rate syntax, as has been shown in the per-port and per-port/per-VLAN policing model examples for Classic Supervisors (see Example 2-75 and Example 2-78, respectively).

Example 2-91 shows sample traffic being matched across various control plane traffic classes.

Note To clear the counters on the control plane, enter the clear control-plane * command.


Cisco Catalyst 6500 and 6500-E QoS Design

The Cisco Catalyst 6500/6500-E series switches represent the flagship of Cisco’s switching portfolio, delivering innovative, secure, converged services throughout the campus, from the access edge wiring closet to the distribution to the core to the data center to the WAN/VPN edge. The Catalyst 6500/6500-E provides leading-edge Layer 2-Layer 7 services, including rich high availability, manageability, virtualization, security, and QoS feature sets, as well as integrated Power-over-Ethernet (PoE), allowing for maximum flexibility in virtually any role within the campus.

Catalyst 6500/6500-E switches come in various chassis, supervisor, feature card, and linecard combinations, each of which is discussed in turn.

Catalyst 6500 (regular) chassis are available in 3, 6, 9, or 13 slot combinations; namely, the 6503, 6506, 6509, and 6513, respectively. Additionally, the 6509 is available in a Network Equipment Building Standards (NEBS) design, where the network modules are presented vertically (as opposed to the standard horizontal design), as the 6509-NEB-A. Catalyst 6500 chassis provide up to 32 Gbps of bandwidth per linecard slot.

Also, Catalyst 6500 chassis are available in Enhanced models, as designated by a -E suffix (such as 6503-E) in 3, 4, 6, and 9 slot combinations; namely, the 6503-E, 6504-E, 6506-E, and the 6509-E. Additionally, the 6509-E is also available in an Enhanced Vertical (NEBS compliant) design, as the 6509-V-E. Catalyst 6500-E chassis provide up to 80 Gbps of bandwidth per linecard slot.

At the time of writing, three supervisor engine options are available for the Catalyst 6500 series switches (not including Virtual Switch Supervisor Engines):

Supervisor Engine 720—The Supervisor Engine 720 is part of the Catalyst 6500’s third generation suite of supervisor modules and increases slot efficiency by integrating a high performance 720 Gbps switch fabric backplane with a new routing and forwarding engine, including a third generation policy feature card (PFC3). With such an architecture, the Supervisor 720 delivers scalable performance, achieving centralized forwarding (CEF) at 48Mpps/720Gbps, accelerated CEF at 400Mpps (peak)/720Gbps, and distributed forwarding (dCEF) at 400Mpps (sustained)/720Gbps.

These supervisors, in turn, can leverage various feature cards, including the Multilayer Switch Feature Card (MSFC), which serves as the routing engine, the Policy Feature Card (PFC), which serves as the primary QoS engine, as well as various Distributed Feature Cards (DFCs), which serve to scale policies and processing. Specifically relating to QoS, the PFC sends a copy of the QoS policies to the DFC to provide local support for the QoS policies, which enables the DFCs to support the same QoS features that the PFC supports.

The QoS features supported on currently shipping PFCs and DFCs are summarized in Table 2-5.

Note Although the WS-X6516A-GBIC, the WS-X6408A-GBIC, the WS-X6548-GE-TX, the WS-X6548-GE-45AF, and the WS-X6816-GBIC are, at the time of writing, currently shipping Gigabit Ethernet or 10/100/1000 Ethernet modules, these modules do not support the minimum recommended queuing structure (of 1P3QyT) for medianet campus networks (as discussed at the beginning of this chapter); as such, these linecards are not included in this chapter.

Ingress queuing and congestion avoidance—If you configure an Ethernet LAN port to trust CoS or DSCP, QoS classifies the traffic on the basis of its Layer 2 CoS value or its Layer 3 DSCP value and assigns it to an ingress queue to provide congestion avoidance.

Internal DSCP—On the PFC and DFCs, QoS associates an internal DSCP value with all traffic to classify it for processing through the system. There is an initial internal DSCP based on the traffic trust state and a final internal DSCP. The final internal DSCP can be the same as the initial value or an MQC policy map can set it to a different value.

MQC policy maps—MQC policy maps can do one or more of these operations:

– Change the trust state of the traffic (bases the internal DSCP value on a different QoS label)

– Set the initial internal DSCP value (only for traffic from untrusted ports)

Note As the ports on the Supervisor 720 only support a 1P2Q2T queuing structure, and as the minimum recommended queuing structure for medianet campus networks is 1P3QyT, it is recommended to use alternate ports whenever possible.

Note At the time of writing, only the WS-X6708-10GE, WS-X6716-10GE, and Supervisor Engine 720-10GE ports support DSCP-to-queue mapping. All other Supervisor and Ethernet switch module ports for the Catalyst 6500/6500-E family support CoS-to-queue mapping (only).

Enabling QoS

QoS must be enabled globally on the Catalyst 6500-E series switches. This is a critical first step to deploying QoS on these platforms. If this small—but important—step is overlooked, it can lead to frustration in troubleshooting QoS problems because the switch software accepts QoS commands and even displays these within the switch configuration, but none of the QoS commands are active until the mls qos global command is enabled, as shown in Example 2-92.
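
In its simplest form (the hostname is illustrative), this step amounts to:

C6500-E(config)# mls qos
C6500-E(config)# end
C6500-E# show mls qos
! show mls qos verifies that QoS is now globally enabled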

Note To reduce wordiness, the Catalyst 6500 and 6500-E series switches are collectively referred to as Catalyst 6500-E in this chapter, unless otherwise noted.

Trust Models

The Catalyst 6500-E switch ports can be configured to statically trust CoS, DSCP, or IP Precedence (although IP Precedence trust is largely superseded by DSCP trust) or to dynamically and conditionally trust Cisco IP phones. By default, with QoS enabled, all ports are set to an untrusted state.

Trust-CoS Model

A Catalyst 6500-E switch port can be configured to trust CoS by configuring the interface with the mls qos trust cos command. However, if an interface is set to trust CoS, then by default it calculates a packet’s internal DSCP as the incoming packet’s CoS value * 8. While this may be suitable for most markings, this default mapping is not suitable for VoIP, which is usually marked CoS 5 and would therefore map by default to DSCP 40 (and not 46, which is the EF PHB as defined by RFC 3246). Therefore, if an interface is set to trust CoS, the default CoS-to-DSCP mapping table should be modified such that CoS 5 maps to DSCP 46, as shown in Example 2-94.

In Example 2-95, the CoS-to-DSCP mapping value for CoS 5 has been modified from the default mapping of 40 (CoS 5 * 8) to 46 (to match the recommendation from RFC 3246 that realtime applications be marked DSCP 46/EF).
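
A sketch of such a modification follows, using the mls qos map cos-dscp syntax, which lists the DSCP values for CoS 0 through 7 in order:

C6500-E(config)# mls qos map cos-dscp 0 8 16 24 32 46 48 56
! CoS 5 now maps to DSCP 46 (EF) rather than the default 40;
! all other CoS values retain their default CoS * 8 mappings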

Unlike the previously discussed switch platforms, the Catalyst 6500-E does not support the show mls qos interface verification command, but rather uses a show mls qos module command, as shown in Example 2-96, wherein the port trust mode for interface Gigabit 2/1 is verified to be set to trust CoS.

Trust-DSCP Model

Because of the additional granularity of DSCP versus CoS markings, it is generally recommended to trust DSCP rather than CoS (everything else being equal). A Catalyst 6500-E switch port can be configured to trust DSCP with the mls qos trust dscp interface command, as shown in Example 2-97.

Example 2-97 Configuring Trust-DSCP on a Catalyst 6500-E

C6500-E(config)#interface GigabitEthernet 2/1

C6500-E(config-if)#mls qos trust dscp

! The interface is set to statically trust DSCP

This configuration can be verified with the commands:

show mls qos

show mls qos module

Conditional-Trust Model

Beginning with IOS Release 12.2(33)SXI1, the Catalyst 6500-E family supports dynamic, conditional trust with the mls qos trust device interface command, which can be configured with the cisco-phone keyword to extend trust to Cisco IP phones, after these have been verified via a CDP-negotiation. Additionally, the type of trust to be extended must be specified (either CoS or DSCP). When configuring conditional trust to Cisco IP Phones, it is recommended to dynamically extend CoS-Trust, as Cisco IP Phones can only remark PC QoS markings at Layer 2 (CoS) and not at Layer 3 (DSCP). For other endpoints that do not have this remarking limitation, it is recommended to dynamically extend DSCP-trust (over CoS-trust). An example of a dynamic, conditional trust policy that is set to extend CoS-trust to CDP-verified Cisco IP phones is shown in Example 2-98.
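
A minimal sketch of such a policy follows (the interface assignment is illustrative):

C6500-E(config)# interface GigabitEthernet 2/1
C6500-E(config-if)# mls qos trust device cisco-phone
C6500-E(config-if)# mls qos trust cos
! CoS-trust is extended only after a Cisco IP phone is verified via CDP;
! if the phone is removed, the port reverts to an untrusted state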

In Example 2-99, the trust boundary/conditional trust feature has been enabled and the current (dynamic) trust state is shown as trust CoS. This is because there is a Cisco IP phone currently connected to the switch port; if the IP phone is removed from this switch port, the trust state toggles to “Port is untrusted”.

Marking Models

The Catalyst 6500 family of switches supports two main marking models:

Per-port marking model

Per-VLAN marking model

Additionally, classification for each policy model may be performed by using access lists or (on the Supervisor Engine 32 with PISA) with Network Based Application Recognition. Each marking model is detailed in the following sections, with the per-port marking model showing both classification options.

Per-Port Marking Model (Access-List Based Classification)

The access list-based per-port marking model (based on Figure 2-10) matches VoIP and signaling traffic from the VVLAN by matching on DSCP EF and CS3, respectively. Multimedia conferencing traffic from the DVLAN is matched by UDP/RTP ports 16384-32767. Signaling traffic is matched on SCCP ports (TCP 2000-2002), as well as on SIP ports (TCP/UDP 5060-5061). Other transactional data traffic, bulk data, and scavenger traffic are matched on various ports (outlined in Figure 2-9). The service policy is applied to an interface range, along with (DSCP-mode) conditional trust, as shown in Example 2-100.
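
A fragment of such a policy might look as follows; the class and policy map names are illustrative, and only two of the traffic classes described above are shown:

C6500-E(config)# ip access-list extended MULTIMEDIA-CONFERENCING
C6500-E(config-ext-nacl)# permit udp any any range 16384 32767
C6500-E(config)# class-map match-all VVLAN-VOIP
C6500-E(config-cmap)# match dscp ef
C6500-E(config)# class-map match-all MULTIMEDIA-CONFERENCING
C6500-E(config-cmap)# match access-group name MULTIMEDIA-CONFERENCING
C6500-E(config)# policy-map PER-PORT-MARKING
C6500-E(config-pmap)# class VVLAN-VOIP
C6500-E(config-pmap-c)# set dscp ef
C6500-E(config-pmap)# class MULTIMEDIA-CONFERENCING
C6500-E(config-pmap-c)# set dscp af41
C6500-E(config)# interface range GigabitEthernet 2/1 - 48
C6500-E(config-if-range)# service-policy input PER-PORT-MARKING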

Note Access-lists—along with other policy elements—consume Ternary Content Addressable Memory (TCAM) resources on the Catalyst 6500 platform. The show platform hardware capacity forwarding monitoring command can be used to ensure that TCAM resources are being managed effectively.

Note The mls qos trust interface commands are not functionally compatible with a service-policy interface command on the Catalyst 6500-E and thus should not be used in conjunction with it.

Example 2-101 shows that the show policy-map interface command on the Catalyst 6500-E dynamically increments counters. However, it should be noted that these are slightly delayed and seem to increment only every 10-15 seconds.

NBAR PDLM matching can be used in conjunction with any other type of matching criteria, including DSCP values and access lists. Keep in mind that the match-any operator keyword should be used when defining a class map with multiple (mutually exclusive) match statements (such as multiple NBAR protocols); otherwise the classification logic fails.


Note The number of filters in any given class map is limited to eight on the Catalyst 6500 Supervisor Engine 32. Therefore, no more than eight PDLMs can be used to match an application class. However, access lists can be used in conjunction with PDLMs, as applicable, to increase the number of applications matched by a given class map.

This configuration can be verified with the commands:

show mls qos

show mls qos module

show queueing interface

show class-map

show policy-map

show policy-map interface

Per-VLAN Marking Model

An alternative approach for deploying marking policies on the Catalyst 6500-E is to deploy these on a per-VLAN basis. In order to do so, the interfaces belonging to the VLANs need to be configured with the mls qos vlan-based interface command. Additionally, the policy map can be simplified or broken apart, as applicable to each VLAN. Adapting the previous example to a VLAN-based marking policy allows the VVLAN-based policy map to be reduced to only three classes: VoIP, signaling, and a default class. Similarly, the DVLAN-based policy map is reduced to six classes: multimedia conferencing, signaling, transactional data, bulk data, scavenger, and a default class. A per-VLAN marking model is shown in Example 2-103.
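
A sketch of the attachment points follows (the VLAN IDs and policy map names are illustrative):

C6500-E(config)# interface range GigabitEthernet 2/1 - 48
C6500-E(config-if-range)# mls qos vlan-based
C6500-E(config)# interface Vlan 110
C6500-E(config-if)# service-policy input VVLAN-MARKING
C6500-E(config)# interface Vlan 10
C6500-E(config-if)# service-policy input DVLAN-MARKING
! The physical ports are set to VLAN-based QoS; the separate VVLAN
! and DVLAN marking policy maps are then attached to the respective SVIs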

Note As the access lists and class maps are identical to the previous examples, these are omitted for brevity in this—and in following—examples for this switch platform family.

Per-VLAN policing model—This model attaches policers to logical VLAN interfaces. However, there is an inherent limitation with this policing model: it supports only a single aggregate policer per VLAN, and because the number of ports associated with a VLAN is dynamic and variable, it is quite restricted in overall policing effectiveness. It is therefore generally recommended to use the microflow policing model instead, as it offers more discrete policing options.

Microflow policing model—This model applies flow-based policers to Layer 3 interfaces to police microflows on a per-source or per-destination basis; microflow policing may be applied on a per-port or per-VLAN basis.

Note Unlike the previously discussed Catalyst switch platforms, the Catalyst 6500-E does not support per-port/per-VLAN policing.

The per-port policing model and the microflow policing models for the Catalyst 6500-E family of switches are detailed in the following sections.

Per-Port Policing Model

The per-port policing model is quite similar to the per-port marking model, except that the policy action includes a policing function—in some cases to drop, in others to remark. As shown in Figure 2-10, the VoIP and signaling traffic from the VVLAN can be policed to drop at 128 kbps and 32 kbps, respectively (as any excessive traffic matching these criteria would be indicative of network abuse). Similarly, the multimedia conferencing, signaling, and scavenger traffic from the DVLAN can be policed to drop. On the other hand, data plane policing policies can be applied to transactional, bulk, and best effort data traffic, such that these flows are subject to being remarked (but not dropped at the ingress edge) when severely out-of-profile.

Remarking is performed by configuring policed-DSCP maps with the global configuration commands mls qos map policed-dscp normal-burst (which specifies the exceeding remarking action) and mls qos map policed-dscp max-burst (which specifies the violating remarking action, in the case of a dual-rate policer). These commands specify which DSCP values are subject to remarking if out-of-profile and what value these should be remarked as (in the case of data plane policing/scavenger class QoS policies, this value is CS1/DSCP 8). Even if single-rate policers are used, it is recommended to configure the mls qos map policed-dscp max-burst markdown map, as the maximum_burst_bytes parameter for the policer is set equal to the normal_burst_bytes parameter, unless explicitly specified otherwise. In other words, the PIR is set equal to the CIR, unless explicitly specified otherwise, and thus the exceed-action policed-dscp-transmit keywords cause PFC QoS to mark traffic down to the DSCP values defined by the policed-dscp max-burst markdown map (and not the policed-dscp normal-burst markdown map).
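
A sketch of such markdown maps follows, using the command syntax named above; the DSCP list reflects the classes discussed (DF/0, AF11/10, and AF21/18 marked down to CS1/8):

C6500-E(config)# mls qos map policed-dscp normal-burst 0 10 18 to 8
C6500-E(config)# mls qos map policed-dscp max-burst 0 10 18 to 8
! Packets offered with DSCP 0, 10, or 18 that exceed (or violate)
! the policing rate are remarked to CS1 (DSCP 8)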

A per-port policing configuration for the Catalyst 6500-E is shown in Example 2-104.

Note Catalyst 6500-E software allows for policing rates to be entered using the postfixes k (for kilobits), m (for megabits), and g (for gigabits), as shown in Example 2-104. Additionally, decimal points are allowed in conjunction with these postfixes; for example, a rate of 10.5 Mbps could be entered with the policy map command police 10.5m. While these policing rates are converted to their full bps values within the configuration, this makes the entering of these rates more user-friendly and less error-prone (as could easily be the case when having to enter up to 10 zeros to define the policing rate).

Note Advanced network administrators can leverage the Catalyst 6500-E support of dual-rate policers—corresponding to the RFC 2698 two rate three color marker (trTCM)—such that the exceeding policing-action for the transactional data and bulk data policers would be to remark to AF22 and AF12 (respectively), while the violating policing action for these classes would be to remark to CS1.

In Example 2-105, the policing DSCP-markdown mappings are shown in two tables:

The first table (the normal burst policed-DSCP map) defines the remarking action for packets exceeding the CIR.

The second table (the maximum burst policed-DSCP map) defines the remarking action for packets exceeding the PIR (which, as previously mentioned, is set to equal the CIR, unless explicitly specified otherwise).

The first digit of the DSCP value of a packet offered to a policer is shown along the Y-axis of the table; the second digit of the DSCP value of a packet offered to a policer is shown along the X-axis of the table. For example, the DSCP value for the transactional data application class (AF21/18) is found in both tables in the row d1=1 and column d2=8. And, as shown, packets with this offered DSCP value (along with DF/0 and AF11/10) are remarked to CS1 (08) if found to be in excess of the policing rate or in violation of the policing rate.

Per-Port Microflow Policing Model

Microflow policing dynamically learns traffic flows and rate limits each unique flow to an individual rate and as such, is a highly effective and efficient policing tool—particularly at the distribution layer in a medianet campus network.

Microflow policing can be applied to ingress traffic on routed interfaces and is typically used in environments where a per-user, granular rate limiting mechanism is required—such as at the distribution layer—to provide a second-line of policing defense in the campus. Like other policers, microflow policing can be used to drop or remark exceeding flows.

Microflow policers are enabled with the police flow policy-map class-action command. A flow is defined by a five-tuple (IP source address, IP destination address, IP protocol, and Layer 4 source and destination ports) that is the same for each packet in the flow. Microflow policers apply a single policy to discrete traffic flows, without having to specify the virtually infinite tuple combinations. Microflow policing can also be applied with source or destination flow masks (with the mask src-only and mask dest-only optional keywords, respectively); these masks apply an aggregate microflow policing policy to multiple flows sharing the same IP source or IP destination address.

In the per-port microflow policing model, a flow-based policer is applied with a mask src-only option and applies an aggregate limit to all microflows sharing a common source IP address, remarking traffic in excess of the policing rate.

Remarking is again performed by configuring the policed-DSCP maps with the mls qos map policed-dscp normal-burst and mls qos map policed-dscp max-burst global configuration commands, exactly as described in the Per-Port Policing Model section.

In Example 2-106, the campus distribution block is using a routed access design and, as such, has Layer 3 interfaces (TenGigabitEthernet 3/1 and 3/2) connecting it to the access layer switches. Microflow policing is applied to all flows to ensure that any endpoint transmitting at more than 5% (an example value) of the access edge 10/100/1000 switch port capacity is subject to data plane policing/scavenger class QoS.
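
A sketch of such a per-port microflow policer follows; the policy name, burst value, and rate are illustrative (50m approximates 5% of a 1 Gbps access edge port):

C6500-E(config)# policy-map MICROFLOW-POLICING
C6500-E(config-pmap)# class class-default
C6500-E(config-pmap-c)# police flow mask src-only 50m 8000 conform-action transmit exceed-action policed-dscp-transmit
C6500-E(config)# interface TenGigabitEthernet 3/1
C6500-E(config-if)# service-policy input MICROFLOW-POLICING
! Each source IP address is limited to an aggregate of 50 Mbps; excess
! traffic is marked down per the policed-DSCP maps (not dropped)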

Per-VLAN Microflow Policing Model

In contrast with the previous example, if the campus distribution block is using a Layer 2/Layer 3 design, and as such has Layer 2 trunked interfaces (TenGigabitEthernet 3/1 and 3/2) connecting it to the access layer switches, then microflow policing can be applied on a per-VLAN basis. In this case, separate microflow policing policies can be applied to each VLAN.

To highlight policy flexibility, additional levels of classification are included in this second microflow policing example (which incidentally can also be applied to the per-port microflow policing model). Instead of applying a blanket microflow policer to all endpoints, separate microflow policers can be applied to different types of endpoints or application-and-endpoint combinations. For example, VoIP from Cisco IP phones in the VVLAN can be policed to 128 kbps, while signaling traffic from these endpoints can be policed to 32 kbps. Similarly, TelePresence endpoints in the VVLAN (which mark their media flows to CS4) can be policed to 25 Mbps. All other endpoint-generated traffic in the VVLAN can be policed to 32 kbps per endpoint.

Similar policy granularity can be applied to the DVLAN policer, if desired. However, in this example, a simplified DVLAN policer is applied to all flows to ensure that any DVLAN endpoint transmitting at more than 5% (an example value) of the access edge 10/100/1000 switch port capacity is subject to data plane policing/scavenger class QoS.
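
A sketch of the per-VLAN attachment follows (the VLAN IDs and policy map names are illustrative):

C6500-E(config)# interface range TenGigabitEthernet 3/1 - 2
C6500-E(config-if-range)# mls qos vlan-based
C6500-E(config)# interface Vlan 110
C6500-E(config-if)# service-policy input VVLAN-MICROFLOW-POLICING
C6500-E(config)# interface Vlan 10
C6500-E(config-if)# service-policy input DVLAN-MICROFLOW-POLICING
! The Layer 2 trunks are set to VLAN-based QoS; separate microflow
! policing policy maps are then attached to the VVLAN and DVLAN SVIs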

Each of these Catalyst 6500/6500-E egress queuing models (1P3Q8T, 1P7Q8T, and 1P7Q4T) is covered in subsequent sections, but first, consideration has to be given to ingress queuing models.

There are two main considerations relevant to ingress queuing design on the Catalyst 6500/6500-E:

The degree of oversubscription (if any) of the linecard

Whether the linecard requires trust-CoS to be enabled to engage ingress queuing

To the first consideration, some linecards may be designed to support a degree of oversubscription, meaning that theoretically more traffic may be offered to the linecard via the sum of all GE/10GE switch ports than can collectively access the backplane at once. Since such a scenario (all ports transmitting at line rate simultaneously) is extremely unlikely, it is often more cost-effective to utilize linecards that have a degree of oversubscription within the campus network. However, if this design choice has been made, it is important for administrators to recognize the potential for drops due to oversubscribed linecard architectures. To manage application-class service levels during such extreme scenarios, ingress queuing models may be enabled.

While the presence of oversubscribed linecard architectures may be viewed as the sole consideration as to whether to enable ingress queuing, a second important consideration should also be kept in mind: many Catalyst 6500/6500-E linecards (at the time of writing) only support CoS-based ingress queuing models (and thus require trust-CoS to be enabled on these switch ports). Enabling trust-CoS reduces classification and marking granularity—limiting the administrator to an 8-class 802.1Q/p model. However, as previously discussed, RFC 4594-based medianet models may require up to 12 classes of service. Once CoS is trusted, DSCP values are overwritten (via the CoS-to-DSCP mapping table) and application classes sharing the same CoS values are no longer distinguishable from one another. Therefore, given this classification and marking limitation, and the fact that the value of enabling ingress queuing is only realized in extremely rare scenarios, it is not recommended to enable CoS-based ingress queuing on the Catalyst 6500/6500-E; rather, limit such linecards to the access layer of a medianet campus network and deploy either non-oversubscribed linecards and/or linecards supporting DSCP-based queuing at the distribution and core layers of the campus network.

Table 2-13 summarizes these considerations by listing the medianet switch modules (presented in Table 2-12), including their oversubscription ratios and whether their ingress queuing models are CoS- or DSCP-based.

Table 2-13 Catalyst 6500 Switch Module Ingress Queuing Architectures

| Switch Module | Maximum Input | Maximum Output (to Backplane) | Oversubscription Ratio | Ingress Queuing Structure | CoS/DSCP-Based | Ingress Queuing Recommendation |
| WS-SUP32-GE | 8 Gbps (8 x GE) | 32 Gbps | - | 2Q2T | CoS-based | Not required |
| WS-SUP32-10GE | 20 Gbps (2 x 10GE) | 32 Gbps | - | 2Q2T | CoS-based | Not required |
| WS-X6148A-GE-TX | 48 Gbps (48 x GE) | 32 Gbps | 3:2 | 1Q2T | CoS-based | Not recommended (use linecard at access layer only) |
| WS-X6148A-GE-45AF | 48 Gbps (48 x GE) | 32 Gbps | 3:2 | 1Q2T | CoS-based | Not recommended (use linecard at access layer only) |
| WS-X6724-SFP | 24 Gbps (24 x GE) | 40 Gbps (2 x 20 Gbps) | - | 2Q8T/1Q8T | CoS-based | Not required |
| WS-X6748-SFP | 48 Gbps (48 x GE) | 40 Gbps (2 x 20 Gbps) | 6:5 | 2Q8T/1Q8T | CoS-based | Not recommended (use linecard at access layer only) |
| WS-X6748-GE-TX | 48 Gbps (48 x GE) | 40 Gbps (2 x 20 Gbps) | 6:5 | 2Q8T/1Q8T | CoS-based | Not recommended (use linecard at access layer only) |
| WS-X6704-10GE | 40 Gbps (4 x 10GE) | 40 Gbps (2 x 20 Gbps) | - | 8Q8T | CoS- or DSCP-based | Not required |
| WS-X6708-10GE | 80 Gbps (8 x 10GE) | 40 Gbps (2 x 20 Gbps) | 2:1 | 8Q4T | CoS- or DSCP-based | Use DSCP-based 8Q4T ingress queuing |
| WS-X6716-10GE | 160 Gbps (16 x 10GE) | 40 Gbps (2 x 20 Gbps) | 4:1 | 8Q4T/1P7Q2T* | CoS- or DSCP-based | Use DSCP-based 1P7Q2T ingress queuing |

Note The Catalyst WS-X6716-10GE can be configured to operate in Performance Mode (with an 8Q4T ingress queuing structure) or in Oversubscription Mode (with a 1P7Q2T ingress queuing structure). In Performance mode, only one port in every group of four is operational (while the rest are administratively shut down), which eliminates any oversubscription on this linecard and as such ingress queuing is not required (as only 4 x 10GE ports are active in this mode and the backplane access rate is also at 40 Gbps). In Oversubscription Mode (the default mode), all ports are operational and the maximum oversubscription ratio is 4:1. Therefore it is recommended to enable 1P7Q2T DSCP-based ingress queuing on this linecard in Oversubscription Mode.

Therefore, if 6708 and 6716 linecards (with the latter operating in oversubscription mode) are used in the distribution and core layers of the medianet campus network, then 8Q4T DSCP-based ingress queuing and 1P7Q2T DSCP-based ingress queuing (respectively) are recommended to be enabled. These queuing models are detailed in the following sections.

8Q4T (DSCP-Based) Ingress Queuing Model

In the 8Q4T (DSCP-based) ingress queuing model, 30% of the link bandwidth can be allocated for Q8, 10% for Q7, 10% for Q6, 10% for Q5, 10% for Q4, 4% for Q3, 25% for Q2 (the best effort queue), and 1% for Q1 (the scavenger queue). In turn, 15% of the buffers can be allocated for Q8, 10% each for Q3-Q7, 25% for Q2 (the best effort queue), and 5% for Q1 (the scavenger queue).

Additionally, WRED can be enabled on queues 1 through 7. Only basic WRED functionality is required for queues 1 and 2 (as only a single DSCP value is assigned to each); therefore the first minimum WRED thresholds for these queues can be set to 80% and the first maximum WRED thresholds for these queues can be set to 100%. As queues 3 through 6 have AF PHBs assigned to them, the WRED thresholds can be set to correspond to the three drop-precedence levels per AF class. Thus, the first three minimum WRED thresholds for these queues can be set to 70%, 80%, and 90%, respectively; and the first three maximum WRED thresholds for these queues can be set to 80%, 90%, and 100%, respectively. Additionally, since Q7 has 4 separate DSCP values assigned to it, intra-queue QoS can be achieved by mapping these to different WRED thresholds. Thus, the minimum WRED thresholds for Q7T1, Q7T2, Q7T3, and Q7T4 can be set to 60%, 70%, 80%, and 90%, respectively; and the maximum WRED thresholds for Q7T1, Q7T2, Q7T3, and Q7T4 can be set to 70%, 80%, 90%, and 100%, respectively.

DSCP EF (VoIP), CS5 (broadcast video) and CS4 (realtime interactive) can be mapped to Q8. CS7 (network control) can be mapped to Q7T4; CS6 (internetwork control) can be mapped to Q7T3; CS3 (signaling) can be mapped to Q7T2; and CS2 (network management) can be mapped to Q7T1. AF4 (multimedia conferencing) can be mapped to Q6. AF3 (multimedia streaming) can be mapped to Q5. AF2 (transactional data) can be mapped to Q4. AF1 (bulk data) can be mapped to Q3. DF (best effort) can be mapped to Q2. CS1 can be mapped to Q1.
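The bandwidth, buffer, WRED, and DSCP-to-queue settings above might be sketched as follows. This is an illustrative fragment only: the interface name is hypothetical, only representative queues and mappings are shown, and the exact rcv-queue syntax supported varies by supervisor, linecard, and IOS release.

```
! Sketch only -- interface name and command ordering are illustrative
mls qos
mls qos queue-mode mode-dscp          ! enable DSCP-based queuing (supported 10GE linecards)
!
interface TenGigabitEthernet4/1
 mls qos trust dscp
 ! Buffers: Q1 5%, Q2 25%, Q3-Q7 10% each, Q8 15%
 rcv-queue queue-limit 5 25 10 10 10 10 10 15
 ! Bandwidth: Q1 1%, Q2 25%, Q3 4%, Q4-Q7 10% each, Q8 30%
 rcv-queue bandwidth 1 25 4 10 10 10 10 30
 ! WRED on Q1 (scavenger): min 80%, max 100% on the first threshold
 rcv-queue random-detect 1
 rcv-queue random-detect min-threshold 1 80 100 100 100
 rcv-queue random-detect max-threshold 1 100 100 100 100
 ! WRED on an AF queue, e.g. Q3 (bulk data / AF1)
 rcv-queue random-detect 3
 rcv-queue random-detect min-threshold 3 70 80 90 100
 rcv-queue random-detect max-threshold 3 80 90 100 100
 ! Representative DSCP-to-queue/threshold mappings
 rcv-queue dscp-map 1 1 8              ! CS1 -> Q1
 rcv-queue dscp-map 2 1 0              ! DF  -> Q2
 rcv-queue dscp-map 3 1 14             ! AF13 -> Q3T1
 rcv-queue dscp-map 3 2 12             ! AF12 -> Q3T2
 rcv-queue dscp-map 3 3 10             ! AF11 -> Q3T3
 rcv-queue dscp-map 7 1 16             ! CS2 -> Q7T1
 rcv-queue dscp-map 7 4 56             ! CS7 -> Q7T4
 rcv-queue dscp-map 8 1 32 40 46       ! CS4, CS5, EF -> Q8
```

Note that the higher AF drop precedences (AFx3, AFx2) map to the lower thresholds so that they are dropped earlier under congestion, consistent with the AF PHB.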

Example 2-109 verifies that 8Q4T (DSCP-based) ingress queuing has been enabled on the interface with the queue limits, bandwidth allocations, WRED thresholds, and DSCP-to-queue mappings as described at the beginning of this section.

1P7Q2T (DSCP-Based) Ingress Queuing Model

In the 1P7Q2T (DSCP-Based) ingress queuing model, 10% of the link bandwidth can be allocated for Q7, 10% for Q6, 10% for Q5, 10% for Q4, 4% for Q3, 25% for Q2 (the best effort queue), and 1% for Q1 (the scavenger queue); the bandwidth allocated for the strict-priority queue (Q8) is not configurable. In turn, 10% of the buffers can be allocated (each) for Q3-Q7, 25% for Q2 (the best effort queue), and 10% for Q1 (the scavenger queue).

Additionally, the 1P7Q2T structure supports two tail-drop thresholds per queue rather than WRED (one threshold is configurable; the other is simply the tail of the queue). As such, this functionality is not needed on queues 1 and 2 (as only a single DSCP value is mapped to each, and there is no point tail-dropping flows sharing the same DSCP value earlier than necessary). However, this functionality can be leveraged on queues 3 to 6 to loosely mimic the AF PHB (although only two levels of dropping would be supported, rather than the three specified in RFC 2597); specifically, AFx2 and AFx3 can be mapped to the first tail-drop threshold (set at 80%) and AFx1 can be mapped to the second drop threshold (the tail, at 100%). Similarly, the first tail-drop threshold on Q7 can also be set to 80%, with the second remaining at 100%.

The 1P7Q2T model does not support explicit DSCP-mapping to the strict priority queue (at the time of writing); by default, DSCP EF (VoIP) and CS5 (broadcast video) are mapped to the strict priority queue (Q8). Therefore, CS4 needs to be mapped to another queue, which in this case can be Q6 (as the bandwidth allocated to it has been increased accordingly). Additionally, CS7 (network control) and CS6 (internetwork control) can be mapped to Q7T2; CS3 (signaling) and CS2 (network management) can be mapped to Q7T1. AF4 (multimedia conferencing) can be mapped to Q6. AF3 (multimedia streaming) can be mapped to Q5. AF2 (transactional data) can be mapped to Q4. AF1 (bulk data) can be mapped to Q3. DF (best effort) can be mapped to Q2. CS1 can be mapped to Q1.
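The tail-drop thresholds and DSCP mappings described above might be sketched as follows. This fragment is illustrative only (hypothetical interface name, representative queues only); verify the exact rcv-queue syntax against the linecard's documentation.

```
! Sketch only -- interface name and threshold assignments are illustrative
interface TenGigabitEthernet3/1
 mls qos trust dscp
 ! Bandwidth: Q1 1%, Q2 25%, Q3 4%, Q4-Q7 10% each (PQ/Q8 is not configurable)
 rcv-queue bandwidth 1 25 4 10 10 10 10
 ! Tail-drop thresholds: T1 80%, T2 100% (the tail), e.g. on Q3
 rcv-queue threshold 3 80 100
 ! AFx2/AFx3 to the first threshold, AFx1 to the tail, e.g. AF1 on Q3
 rcv-queue dscp-map 3 1 12 14          ! AF12, AF13 -> Q3T1
 rcv-queue dscp-map 3 2 10             ! AF11 -> Q3T2
 ! CS4 cannot be explicitly mapped to the PQ; assign it to Q6 with AF4
 rcv-queue dscp-map 6 2 32             ! CS4 -> Q6
 ! Control classes on Q7
 rcv-queue dscp-map 7 1 16 24          ! CS2, CS3 -> Q7T1
 rcv-queue dscp-map 7 2 48 56          ! CS6, CS7 -> Q7T2
```

DSCP EF (46) and CS5 (40) remain mapped to the strict-priority queue (Q8) by default, per the limitation noted above.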

1P3Q8T (CoS-Based) Egress Queuing Model

In the 1P3Q8T (CoS-Based) egress queuing model, 30% of the link bandwidth can be allocated for the priority queue (Q4), 40% for the non-realtime queue (Q3), 25% for the best effort queue (Q2), and 5% for the scavenger/bulk queue (Q1). In turn, 15% of the buffers can be allocated for the PQ (Q4), 40% for Q3, 25% for Q2, and 20% for Q1.

Additionally, WRED can be enabled on Q1, Q2, and Q3. Q1 and Q2 need only basic WRED functionality, as only a single CoS value is assigned to each; therefore the first minimum WRED thresholds can be set to 80% for these queues and the first maximum WRED thresholds can be set to 100% for these queues. Since Q3 has 4 separate CoS values assigned to it, intra-queue QoS can be achieved by mapping these to different WRED thresholds. Thus, the minimum WRED thresholds for Q3T1, Q3T2, Q3T3, and Q3T4 can be set to 60%, 70%, 80%, and 90%, respectively; and the maximum WRED thresholds for Q3T1, Q3T2, Q3T3, and Q3T4 can be set to 70%, 80%, 90%, and 100%, respectively.

Following this, CoS values 5 (VoIP and broadcast video) and 4 (realtime interactive and multimedia conferencing) can be mapped to the priority queue. CoS 7 (network control) can be mapped to Q3T4, CoS 6 (internetwork control) can be mapped to Q3T3, CoS 3 (signaling and multimedia streaming) can be mapped to Q3T2, and CoS 2 (network management and transactional data) can be mapped to Q3T1. CoS 0 (Best Effort) can be mapped to Q2 and CoS 1 (scavenger and bulk) can be mapped to Q1.
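The 1P3Q8T allocations and CoS-to-queue mappings described above might be sketched as follows; the interface name is hypothetical and the exact wrr-queue syntax should be verified against the linecard's documentation.

```
! Sketch only -- interface name is illustrative
interface GigabitEthernet2/1
 ! Buffers: Q1 20%, Q2 25%, Q3 40% (PQ/Q4 15%)
 wrr-queue queue-limit 20 25 40
 priority-queue queue-limit 15
 ! Bandwidth: Q1 5%, Q2 25%, Q3 40% (the PQ is serviced first)
 wrr-queue bandwidth 5 25 40
 ! WRED: Q1/Q2 min 80% / max 100%; Q3 tiered across four thresholds
 wrr-queue random-detect min-threshold 1 80 100 100 100 100 100 100 100
 wrr-queue random-detect max-threshold 1 100 100 100 100 100 100 100 100
 wrr-queue random-detect min-threshold 3 60 70 80 90 100 100 100 100
 wrr-queue random-detect max-threshold 3 70 80 90 100 100 100 100 100
 ! CoS-to-queue mappings
 wrr-queue cos-map 1 1 1               ! scavenger/bulk -> Q1
 wrr-queue cos-map 2 1 0               ! best effort -> Q2
 wrr-queue cos-map 3 1 2               ! net mgmt / transactional -> Q3T1
 wrr-queue cos-map 3 2 3               ! signaling / mm streaming -> Q3T2
 wrr-queue cos-map 3 3 6               ! internetwork control -> Q3T3
 wrr-queue cos-map 3 4 7               ! network control -> Q3T4
 priority-queue cos-map 1 4 5          ! realtime classes -> PQ
```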

Example 2-112 verifies that 1P3Q8T (CoS-based) egress queuing has been enabled on the interface, with the queue limits, bandwidth allocations, WRED thresholds, and CoS-to-queue mappings as described at the beginning of this section. Additionally, the “Packets Dropped on Transmit” table shows that 8771 packets were dropped from Q1 (the scavenger/bulk queue).

1P7Q8T (CoS-Based) Egress Queuing Model

In the 1P7Q8T (CoS-Based) egress queuing model, 15% of the queuing buffers and link bandwidth can be allocated for the priority queue (Q8), 15% for Q7, 5% for Q6, 5% for Q5, 15% for Q4, 15% for Q3, 25% for Q2 (the best effort queue), and 5% for Q1 (the scavenger/bulk queue). In this model, queue limits can be set to match the bandwidth allocations.

Additionally, WRED can be enabled on queues 1 through 7. Only basic WRED functionality is required for these queues (as only a single CoS value is assigned to each); therefore the first minimum WRED thresholds can be set to 80% for these queues and the first maximum WRED thresholds can be set to 100% for these queues. Since Q7 only has UDP-based flows assigned to it, the first minimum WRED threshold can also be set to 100% (to effectively disable WRED for this queue).

As eight queues exist in this queuing model, each CoS value can be assigned to a dedicated queue. CoS 5 (VoIP and broadcast video) can be mapped to the priority queue. CoS 4 (realtime interactive and multimedia conferencing) can be mapped to Q7. CoS 7 (network control) can be mapped to Q6. CoS 6 (internetwork control) can be mapped to Q5. CoS 3 (signaling and multimedia streaming) can be mapped to Q4. CoS 2 (network management and transactional data) can be mapped to Q3. CoS 0 (best effort) can be mapped to Q2 and CoS 1 (scavenger and bulk) can be mapped to Q1.

Note As with the 1P3Q8T CoS-to-queue model, certain application classes that are normally mapped to differing queue/threshold combinations must be mapped to the same queue/threshold in the 1P7Q8T model, because of the limited CoS-to-queue mapping granularity. These include realtime interactive and multimedia conferencing (both sharing CoS 4), signaling and multimedia streaming (both sharing CoS 3), network management and transactional data (both sharing CoS 2), and scavenger and bulk data (both sharing CoS 1).
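Because each CoS value gets its own queue, the 1P7Q8T mappings are straightforward; a sketch follows (hypothetical interface name; verify exact syntax for the linecard in use):

```
! Sketch only -- interface name is illustrative
interface GigabitEthernet3/1
 ! Queue limits set to match bandwidth: Q1 5%, Q2 25%, Q3 15%, Q4 15%,
 ! Q5 5%, Q6 5%, Q7 15% (PQ 15%)
 wrr-queue queue-limit 5 25 15 15 5 5 15
 priority-queue queue-limit 15
 wrr-queue bandwidth 5 25 15 15 5 5 15
 ! One CoS value per queue
 wrr-queue cos-map 1 1 1               ! scavenger/bulk
 wrr-queue cos-map 2 1 0               ! best effort
 wrr-queue cos-map 3 1 2               ! net mgmt / transactional
 wrr-queue cos-map 4 1 3               ! signaling / mm streaming
 wrr-queue cos-map 5 1 6               ! internetwork control
 wrr-queue cos-map 6 1 7               ! network control
 wrr-queue cos-map 7 1 4               ! realtime interactive / mm conferencing
 priority-queue cos-map 1 5            ! VoIP / broadcast video -> PQ
```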

1P7Q4T (DSCP-Based) Egress Queuing Model

In the 1P7Q4T (DSCP-Based) egress queuing model, 30% of the link bandwidth can be allocated for the priority queue (Q8), 10% for Q7, 10% for Q6, 10% for Q5, 10% for Q4, 4% for Q3, 25% for Q2 (the best effort queue), and 1% for Q1 (the scavenger queue). In turn, 15% of the buffers can be allocated to the PQ (Q8), 10% of the buffers can be allocated (each) for Q3-Q7, 25% for Q2 (the best effort queue), and 10% for Q1 (the scavenger queue).

Additionally, WRED can be enabled on queues 1 through 7. Only basic WRED functionality is required for queues 1 and 2 (as only a single DSCP value is assigned to each); therefore the first minimum WRED thresholds for these queues can be set to 80% and the first maximum WRED thresholds for these queues can be set to 100%. As queues 3 through 6 have AF PHBs assigned to them, the WRED thresholds can be set to correspond to the three drop-precedence levels per AF class. Thus, the first three minimum WRED thresholds for these queues can be set to 70%, 80%, and 90%, respectively; and the first three maximum WRED thresholds for these queues can be set to 80%, 90%, and 100%, respectively. Additionally, since Q7 has 4 separate DSCP values assigned to it, intra-queue QoS can be achieved by mapping these to different WRED thresholds. Thus, the minimum WRED thresholds for Q7T1, Q7T2, Q7T3, and Q7T4 can be set to 60%, 70%, 80%, and 90%, respectively; and the maximum WRED thresholds for Q7T1, Q7T2, Q7T3, and Q7T4 can be set to 70%, 80%, 90%, and 100%, respectively.

DSCP EF (VoIP), CS5 (broadcast video), and CS4 (realtime interactive) can be mapped to the priority queue. CS7 (network control) can be mapped to Q7T4; CS6 (internetwork control) can be mapped to Q7T3; CS3 (signaling) can be mapped to Q7T2; and CS2 (network management) can be mapped to Q7T1. AF4 (multimedia conferencing) can be mapped to Q6. AF3 (multimedia streaming) can be mapped to Q5. AF2 (transactional data) can be mapped to Q4. AF1 (bulk data) can be mapped to Q3. DF (best effort) can be mapped to Q2. And CS1 can be mapped to Q1.

Note Due to the default WRED threshold settings, at times the maximum threshold needs to be configured before the minimum (as is the case on queues 1 through 3 in the example above); at other times, the minimum threshold needs to be configured before the maximum (as is the case on queues 4 through 7 in the example above).
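A sketch of the 1P7Q4T DSCP-based egress model follows (hypothetical interface, representative queues and mappings only; exact wrr-queue/priority-queue syntax varies by linecard and IOS release). Consistent with the note above, the maximum thresholds are configured before the minimums on the lower queues, and the reverse on the higher queues:

```
! Sketch only -- interface name is illustrative
mls qos queue-mode mode-dscp
!
interface TenGigabitEthernet1/2
 ! Buffers: Q1 10%, Q2 25%, Q3-Q7 10% each (PQ 15%)
 wrr-queue queue-limit 10 25 10 10 10 10 10
 priority-queue queue-limit 15
 ! Bandwidth: Q1 1%, Q2 25%, Q3 4%, Q4-Q7 10% each (PQ 30%)
 wrr-queue bandwidth 1 25 4 10 10 10 10
 ! Q1: maximum thresholds before minimums
 wrr-queue random-detect max-threshold 1 100 100 100 100
 wrr-queue random-detect min-threshold 1 80 100 100 100
 ! Q4 (AF2): minimums before maximums
 wrr-queue random-detect min-threshold 4 70 80 90 100
 wrr-queue random-detect max-threshold 4 80 90 100 100
 ! Representative DSCP-to-queue mappings
 wrr-queue dscp-map 1 1 8              ! CS1 -> Q1
 wrr-queue dscp-map 2 1 0              ! DF  -> Q2
 wrr-queue dscp-map 4 1 22             ! AF23 -> Q4T1
 wrr-queue dscp-map 7 1 16             ! CS2 -> Q7T1
 wrr-queue dscp-map 7 4 56             ! CS7 -> Q7T4
 priority-queue dscp-map 1 32 40 46    ! CS4, CS5, EF -> PQ
```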

This configuration can be verified with the show queueing interface command.

EtherChannel QoS Model

As discussed in EtherChannel QoS, QoS policies on the Catalyst 6500/6500-E need to be separated, such that ingress trust, classification, marking, and/or policing policies are applied to the logical Port-Channel interface, whereas queuing policies (both ingress—if required—and egress) are applied directly on the physical port-member interfaces, as shown in Example 2-115.
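This split between logical and physical interfaces might be sketched as follows; the Port-channel number, interface names, and queuing values are illustrative only:

```
! Sketch only -- interface and channel-group numbers are illustrative
! Trust/classification/marking/policing on the logical interface
interface Port-channel1
 mls qos trust dscp
!
! Queuing on each physical port-member interface
interface range TenGigabitEthernet1/1 - 2
 channel-group 1 mode desirable
 wrr-queue queue-limit 10 25 10 10 10 10 10
 priority-queue queue-limit 15
 wrr-queue bandwidth 1 25 4 10 10 10 10
```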

Control Plane Policing

As previously stated, the Catalyst 4500 and Catalyst 6500 Series switches implement CoPP similarly; however, CoPP has been enhanced on both platforms to leverage the benefits of their hardware architectures and as a result each platform provides unique features.

This section describes the implementation details of CoPP on Supervisors 720 and 32.

In the Catalyst 6500 Series switches, CoPP takes advantage of the processing power present on linecards by implementing a distributed CoPP model. In this platform, CoPP policies are centrally configured under the control-plane configuration mode. Once configured, these policies are first applied at the route processor (MSFC) level and then automatically pushed to the Policy Feature Card (PFC) and each Distributed Forwarding Card (DFC). This CoPP model is illustrated in Figure 2-31.

CoPP at the RP is performed in software, while on the PFC and DFCs it is processed in hardware, with no performance degradation or increased latency. In this way, CoPP on Supervisors 32 and 720 provides two layers of protection: first, at wire speed on the PFC and DFCs, and second, at the Route Processor (RP) level. This helps to ensure that only the amount of traffic specified by the user actually reaches the control plane.

Note The PFC3 and DFC3 provide hardware support for CoPP. However, CoPP is not enforced in hardware unless MLS QoS is globally enabled using the mls qos global configuration command.

The Cisco Catalyst 6500 supports CoPP on the Supervisor 720 and Supervisor 32 in hardware starting with Cisco IOS release 12.2(18)SXD1. CoPP supports IPv4 in hardware, while multicast and broadcast traffic are only supported in software. Support for IPv6 traffic has been introduced in IOS release 12.2(18)SXE.

Another important characteristic of CoPP on Supervisors 720 and 32 is that it does not support the definition of non-IP traffic classes, with the exception of class-default. Class-default matches all remaining traffic destined to the RP that does not match any other class, allowing you to specify how to treat traffic that is not explicitly associated with any user-defined class. Class-default is the only CoPP class capable of handling both IP and non-IP traffic; user-defined classes can handle only IP traffic.

CoPP helps protect the RP of Catalyst 6500 Series switches in multiple ways. From a policing perspective, by filtering traffic sent to the RP, CoPP ensures that only the expected protocols are allowed. This effectively shields the control plane from unwanted and potentially malicious traffic. On the other hand, by rate limiting the traffic sent to the RP, CoPP provides protection against large volumes of packets that might be part of a DoS attack, which helps maintain network stability even during an attack.

CoPP Configuration

Step 2 Optionally, define the necessary ACLs to be used to match traffic classes.

Step 3 Classify the control plane traffic using the class-map command.

Step 4 After the traffic is classified, you apply a policy-map with a police action to each class, indicating whether to permit all packets, to drop all packets, or to drop packets crossing a specified rate limit for that particular class.

Step 5 Apply the defined CoPP policy to the control plane by using the service-policy command from control plane configuration mode.
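The steps above can be sketched with standard MQC syntax as follows. The ACL, class and policy names, and the police rates are hypothetical placeholders; actual CoPP classes and rates should follow the recommendations presented earlier in this chapter:

```
! Sketch only -- names, ACL entries, and rates are hypothetical
access-list 101 permit tcp any any eq bgp
access-list 101 permit tcp any eq bgp any
!
class-map match-all COPP-ROUTING
 match access-group 101
!
policy-map COPP-POLICY
 class COPP-ROUTING
  police 1000000 31250 conform-action transmit exceed-action drop
 class class-default
  police 500000 15625 conform-action transmit exceed-action drop
!
control-plane
 service-policy input COPP-POLICY
```

Note that an explicit police statement is included in every class, including class-default; as discussed in the restrictions below, omitting the policy parameters in a class causes that class to be handled by software-based CoPP.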

CoPP Considerations and Restrictions

The following are important considerations and known restrictions that should be taken into account prior to configuring CoPP on the Catalyst 6500:

Because CoPP relies on the QoS implementation, CoPP policies are downloaded to the PFC and DFCs only if QoS is enabled. For this reason, ensure that the mls qos command is enabled at the global configuration mode for the PFC and each DFC where CoPP is required.

CoPP does not support the definition of non-IP traffic classes except for the class-default. ACLs can be used instead of non-IP classes to drop non-IP traffic. At the same time, class-default can be used to limit non-IP traffic that reaches the RP CPU.

On Supervisors 32 and 720, ARP policing is done with a QoS rate limiter rather than CoPP. Even though there is a match protocol arp for CoPP on these supervisors, this type of traffic is processed in software. Therefore, ARP policing should be configured with the hardware-based QoS rate limiter using the mls qos protocol arp police bps command.
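As a minimal illustration of the command named above (the rate value here is arbitrary and should be tuned to the environment):

```
! Hypothetical rate value; ARP policing is global, not a CoPP class
mls qos protocol arp police 32000
```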

Prior to Cisco IOS Software Release 12.2(18)SXE, only one match criterion was allowed for each traffic class. When using one of these earlier releases, to define multiple match rules with a match-any criterion, split the match access-group statements among multiple class maps instead of grouping them together.

Prior to Cisco IOS Software Release 12.2(18)SXE, the MQC class-default was not supported on the Supervisor 720. This is a minor limitation because class-default could be emulated with a normal class matching an ACL with a permit ip any any entry.

Omitting the policy parameters in a class causes the class to be handled by software-based CoPP. Use the police command and set the policy parameters to ensure the class is handled by hardware-based CoPP.

Currently, multicast packets are handled only by software-based CoPP at the RP level. However, CPU rate limiters are available that can rate limit multicast packets to the CPU in hardware, including the Multicast FIB-miss rate limiter and the Multicast Partial-SC rate limiter. These rate limiters can be used in combination with ACLs and software CoPP to provide protection against multicast-based DoS attacks.

CoPP is not supported in hardware for broadcast packets. The combination of ACLs, traffic storm control, and CoPP software protection provides protection against broadcast DoS attacks.

With PFC3A, egress QoS and CoPP cannot be configured at the same time. In this situation, CoPP is performed in software and a warning message is generated.

In the rare situation where a large QoS configuration is being used, it is possible that the system could run out of TCAM space. When this scenario occurs, CoPP can be performed in software. Use the show platform hardware capacity command to monitor TCAM space.

You must ensure that the CoPP policy does not filter critical traffic such as routing protocols or interactive access to the switches. Filtering this traffic could prevent remote access to the switch, requiring a console connection.

Supervisor Engines 32 and 720 support built-in special-case rate limiters, which are useful for situations where an ACL cannot be used (for example, TTL, MTU, and IP options). Be aware that, when enabled, these special-case rate limiters override the CoPP policy for packets matching the rate-limiter criteria.

CoPP does not support ACEs with the log keyword.

CoPP uses hardware QoS TCAM resources. Use the show platform hardware capacity and show tcam utilization commands to verify the TCAM use.

ACE hit counters in hardware are only for ACL logic. You can rely on software ACE hit counters and the show access-list, show policy-map control-plane, and show mls ip qos commands to troubleshoot and evaluate CPU traffic.

CoPP Model

In Example 2-116, CoPP has been deployed on the Catalyst 6500 inline with the recommendations for CoPP class definitions and deployment models presented earlier in this chapter.

The internal DSCP was shown to be the primary mechanism for QoS processing in most Cisco Catalyst switches and is determined by the trust state of the interface on which the packet enters the switch. These trust states include trust CoS, trust DSCP, and conditional trust. Trust CoS and trust DSCP are static port trust states that accept the Layer 2 or Layer 3 QoS markings of a packet, respectively. Conditional trust performs a CDP-based negotiation between the access switch and the endpoint, which—if successful and permitted by policy—results in a dynamic extension of either trust CoS or trust DSCP to the endpoint.

Beyond discussing basic ingress QoS policies, like trust, more complex ingress QoS policies were also presented, including applying QoS policies to physical interfaces (port-based QoS), logical interfaces (VLAN-based QoS), or to a combination of physical and logical interfaces (per-port/per-VLAN based QoS), as in the case of trunked switch ports. Per-port/per-VLAN based QoS was shown to provide the highest levels of policy granularity, particularly for policing policies.

Next, the four steps for deploying campus QoS were outlined, including:

4. Enable control plane policing (on platforms that support this feature).

Ingress QoS models were detailed at length, providing flexible template policies that apply to most access edge scenarios. Similarly, best-practice egress QoS models were presented, showing that in a medianet campus, Gigabit/Ten Gigabit Ethernet interfaces should support at minimum a 1P3QyT model, including a:

Realtime queue (to support an RFC 3246 EF PHB service), which should not exceed 33% of the link’s bandwidth.

Guaranteed-bandwidth queue (to support RFC 2597 AF PHB services).

Default queue (to support an RFC 2474 DF service), which should be at least 25% of the link’s bandwidth.

Bandwidth-constrained queue (to support an RFC 3662 scavenger service), which should not exceed 5% of the link’s bandwidth.

A flexible queuing model was also presented to serve as an egress queuing policy template, providing consistent per-node queuing behavior across discrete and disparate Catalyst queuing structures.

Control plane policing was discussed next and general best practice guidelines were presented for deploying CoPP within the medianet campus on both the Catalyst 4500 and 6500 switch platforms.

AutoQoS and SmartPorts were briefly reviewed with respect to their merits and caveats relating to medianet campus QoS deployments.

The second main section then applied these considerations to platform-specific designs for the Cisco Catalyst desktop/stackable switch family (specifically the Catalyst 2960 & 2975, 3560G & 3750G, and 3560-E & 3750-E). Trust models and per-port and per-VLAN marking models were presented for this switch family, as were per-port policing and per-port/per-VLAN policing (via hierarchical QoS). Additionally, both the 1P1Q3T ingress queuing model and the 1P3Q3T egress queuing model for this switch family were detailed.

The third section detailed designs for the Catalyst 4500 switch family (both the Classic Supervisors and the Supervisor 6-E). Trust models and per-port and per-VLAN marking models were presented for this switch family, as were per-port policing, per-port/per-VLAN policing, and UBRL. Additionally, both the 1P3Q1T+DBL and the 1P7Q1T+DBL egress queuing models for this switch family were detailed. Control plane policing policy recommendations were also specified for this switch family.

The fourth main section presented design recommendations for the Catalyst 6500 switch family (both the Catalyst 6500 and 6500-E switches, for the Supervisor Engine 720, the Supervisor Engine 32, and the Supervisor Engine 32-10GE [with PISA]). Trust models and per-port and per-VLAN marking models, for both ACL-based and NBAR-based classification, were presented for this switch family, as were per-port policing and microflow policing. Additionally, the 1P3Q8T (CoS-based) queuing model, the 1P7Q8T (CoS-based) queuing model, and the 1P7Q4T (DSCP-based) queuing model for this switch family were detailed. Finally, control plane policing policy recommendations were specified for this switch family.