QoS policing on a network determines whether traffic is within a
specified profile (contract). Out-of-profile traffic can be dropped or marked
down to another differentiated services code point (DSCP) value to enforce a
contracted service level. (DSCP is a measure of the QoS level of the
frame.)

Do not confuse traffic policing with traffic shaping. Both ensure that
traffic stays within the profile (contract). You do not buffer out-of-profile
packets when you police traffic. Therefore, you do not affect transmission
delay. You either drop the traffic or mark it with a lower QoS level (DSCP
markdown). In contrast, with traffic shaping, you buffer out-of-profile traffic
and smooth the traffic bursts. This affects the delay and delay variation. You
can only apply traffic shaping on an outbound interface. You can apply policing
on both inbound and outbound interfaces.

The Catalyst 6500/6000 Policy Feature Card (PFC) and PFC2 only support
ingress policing. The PFC3 supports both ingress and egress policing. Traffic
shaping is only supported on certain WAN modules for the Catalyst 6500/7600
series, such as the Optical Services Modules (OSMs) and FlexWAN modules. Refer
to the Cisco 7600 Series Router Module Configuration Notes for more
information.

To set up policing, you define the policers and apply them to ports
(port-based QoS) or to VLANs (VLAN-based QoS). Each policer defines a name,
type, rate, burst, and actions for in-profile and out-of-profile traffic.
Policers on Supervisor Engine II also support excess rate parameters. There are
two types of policers: microflow and aggregate.

Microflow—police traffic for each applied port/VLAN
separately on a per-flow basis.

Aggregate—police traffic across all of the applied
ports/VLANs.

Each policer can be applied to several ports or VLANs. The flow is
defined using these parameters:

source IP address

destination IP address

Layer 4 protocol (such as User Datagram Protocol
[UDP])

source port number

destination port number

You can say that packets that match a particular set of defined
parameters belong to the same flow. (This is essentially the same flow concept
as that which NetFlow switching uses.)

As an example, if you configure a microflow policer to limit the TFTP
traffic to 1 Mbps on VLAN 1 and VLAN 3, then 1 Mbps is allowed for each flow in
VLAN 1 and 1 Mbps for each flow in VLAN 3. In other words, if there are three
flows in VLAN 1 and four flows in VLAN 3, the microflow policer allows each of
these flows 1 Mbps. If you configure an aggregate policer, it limits the TFTP
traffic for all flows combined on VLAN 1 and VLAN 3 to 1 Mbps.

If you apply both aggregate and microflow policers, QoS always takes
the most severe action specified by the policers. For example, if one policer
specifies to drop the packet, but another specifies to mark down the packet,
the packet is dropped.

By default, microflow policers work only with routed (Layer 3 [L3])
traffic. To police bridged (Layer 2 [L2]) traffic as well, you need to enable
bridged microflow policing. On the Supervisor Engine II, you need to enable
bridged microflow policing even for L3 microflow policing.

Policing is protocol-aware. All traffic is divided into three
types:

IP

Internetwork Packet Exchange (IPX)

Other

Policing is implemented on the Catalyst 6500/6000 according to a "leaky
bucket" concept. Tokens corresponding to inbound traffic packets are placed
into a bucket. (Each token represents a bit, so a large packet is represented
by more tokens than a small packet.) At regular intervals, a defined number of
tokens are removed from the bucket and sent on their way. If there is no place
in the bucket to accommodate inbound packets, the packets are considered
out-of-profile. They are either dropped or marked down according to the
configured policing action.

Note: The traffic is not buffered in the bucket, as the leaky bucket analogy
might suggest. The actual traffic does not pass through the bucket at all; the
bucket is used only to decide whether a packet is in-profile or
out-of-profile.

Several parameters control the operation of the token bucket, as shown
here:

Rate—defines how many tokens are removed at each
interval. This effectively sets the policing rate. All traffic below the rate
is considered in-profile.

Interval—defines how often tokens are removed from
the bucket. The interval is fixed at 0.00025 seconds, so tokens are removed
from the bucket 4,000 times per second. The interval cannot be
changed.

Burst—defines the maximum number of tokens that the
bucket can hold at any one time. To sustain the specified traffic rate, burst
should be no less than the rate times the interval. Another consideration is
that the maximum-size packet must fit into the
bucket.

For example, if you want to calculate the minimum burst value needed to
sustain a rate of 1 Mbps on an Ethernet network, the rate is defined as 1 Mbps
and the maximum Ethernet packet size is 1518 bytes. The burst must be at least
the greater of the rate times the interval and the maximum packet size:

burst >= max(1,000,000 bps x 0.00025 s, 1518 bytes x 8 bits/byte)
       = max(250 bits, 12,144 bits) = 12,144 bits, or roughly 13 kb

Note: In Cisco IOS® Software, the policing rate is defined in bits per
second (bps), as opposed to kilobits per second (kbps) in Catalyst OS (CatOS).
Also, in Cisco IOS Software the burst is defined in bytes, as opposed to
kilobits in CatOS.

Note: Due to hardware policing granularity, the exact rate and burst are
rounded to the nearest supported value. Be sure that the burst value is not
less than the maximum-size packet; otherwise, all packets larger than the
burst size are dropped.

For example, if you try to set the burst to 1518 in Cisco IOS Software,
it is rounded to 1000. This causes all frames larger than 1000 bytes to be
dropped. The solution is to configure burst to 2000.
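In Cisco IOS Software, for instance, a 1 Mbps aggregate policer with a safe 2000-byte burst could be defined roughly like this (the policer name is illustrative, and exact syntax can vary by release):

```
mls qos aggregate-policer police-1mbps 1000000 2000 conform-action transmit exceed-action drop
```

Because 2000 bytes is larger than the 1518-byte maximum Ethernet frame, full-size frames are not dropped purely because of the burst size, even after hardware rounding.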

When you configure the burst, take into account that some
protocols (like TCP) implement a flow-control mechanism that reacts to packet
loss. For example, TCP halves its window for each lost packet.
Consequently, when traffic is policed to a certain rate, the effective link
utilization is lower than the configured rate. You can increase the burst to
achieve better utilization. A good start for such traffic is to double the
burst size. (In this example, the burst size is increased from 13 kb to
26 kb.) Then, monitor performance and make further adjustments if
needed.

For the same reason, it is not recommended to benchmark the policer
operation using connection-oriented traffic. This generally shows lower
performance than the policer permits.

As mentioned in the Introduction, the
policer can do one of two things to an out-of-profile packet:

drop the packet (the drop parameter in
the configuration)

mark the packet to a lower DSCP (the
policed-dscp parameter in the
configuration)

To mark down the packet, you must modify the policed DSCP map. By default,
the policed DSCP map remarks the packet to the same DSCP, so no markdown
occurs.
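In Cisco IOS Software, the map is changed with the mls qos map policed-dscp command. For example, to mark traffic with DSCP 26 down to DSCP 0 (the DSCP values here are purely illustrative):

```
mls qos map policed-dscp 26 to 0
```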

Note: If "out-of-profile" packets are marked down to a DSCP that is mapped
into a different output queue than the original DSCP, some packets may be sent
out of order. For this reason, if the order of packets is important, it is
recommended to mark down out-of-profile packets to a DSCP that is mapped to the
same output queue as in-profile packets.

On the Supervisor Engine II, which supports excess rate, two triggers
are possible:

When traffic exceeds normal rate

When traffic exceeds excess rate

One example of the application of excess rate is to mark down packets
that exceed the normal rate and drop packets that exceed the excess
rate.
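In CatOS, such a two-rate policer could be defined along these lines (the name, rates, and bursts are illustrative; traffic above the normal rate of 10 Mbps is marked down per the policed DSCP map, and traffic above the excess rate of 20 Mbps is dropped):

```
set qos policer aggregate two-rate rate 10000 policed-dscp erate 20000 drop burst 13 eburst 13
```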

As stated in the Introduction, the PFC1
on the Supervisor Engine 1a and the PFC2 on the Supervisor Engine 2 only
support ingress (inbound interface) policing. The PFC3 on the Supervisor Engine
720 supports both ingress and egress (outbound interface) policing.

The Catalyst 6500/6000 supports up to 63 microflow policers and up to
1023 aggregate policers.

Configurations with Distributed Forwarding Cards (DFCs) only support
port-based policing. Also, the aggregate policer only counts traffic on a
per-forwarding-engine basis, not per-system. The DFC and PFC are both
forwarding engines; if a module (line card) does not have a DFC, it uses a PFC
as a forwarding engine.

Egress policing. The Supervisor 720 supports ingress
policing on a port or VLAN interface. It supports egress policing on a port or
L3 routed interface (in the case of Cisco IOS System Software). All ports in
the VLAN are policed on egress regardless of the port QoS mode (whether
port-based QoS or VLAN-based QoS). Microflow policing is not supported on
egress. Sample configurations are provided in the Configure and Monitor Policing in CatOS Software
section and Configure and Monitor Policing in Cisco IOS
Software section of this document.

Per-user microflow policing. The Supervisor 720
supports an enhancement to microflow policing known as per-user microflow
policing. This feature is only supported with Cisco IOS System Software. It
allows you to provide a certain bandwidth for each user (per IP address) behind
given interfaces. This is achieved by specifying a flow mask inside the service
policy. The flow mask defines which information is used to differentiate
between the flows. For example, if you specify a source-only flow mask, all
traffic from one IP address is considered one flow. Using this technique, you
can police traffic per user on some interfaces (where you have configured the
corresponding service policy); on other interfaces, you continue to use the
default flow mask. It is possible to have up to two different QoS flow masks
active in the system at a given time. You can associate only one class with one
flow mask. A policy can have up to two different flow
masks.

Another important change in policing on the Supervisor Engine 720 is
that it can count traffic by the L2 length of the frame. This differs from the
Supervisor Engine 2 and Supervisor Engine 1, which count IP and IPX frames by
their L3 length. With some applications, L2 and L3 length may not be
consistent. One example is a small L3 packet inside a large L2 frame. In this
case, the Supervisor Engine 720 may display a slightly different policed
traffic rate as compared with the Supervisor Engine 1 and Supervisor Engine
2.

With the Supervisor Engine II, you can view aggregate policing
statistics on a per-policer basis with the show qos statistics
aggregate-policer command.

For this example, a traffic generator is attached to port 2/8. It sends
17 Mbps of UDP traffic with destination port 111. You expect the policer to
drop 16/17 of the traffic, so 1 Mbps should go through:
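A CatOS configuration along these lines could set up this test (the ACL and policer names are illustrative, and exact syntax can vary by CatOS release):

```
set qos enable
set qos policer aggregate udp111 rate 1000 burst 13 drop
set qos acl ip test_acl dscp 0 aggregate udp111 udp any any eq 111
commit qos acl test_acl
set qos acl map test_acl 2/8
set port qos 2/8 port-based
```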

Note: Notice that allowed packets have increased by 65 and excess packets
have increased by 1090. This means that the policer has dropped 1090 packets
and allowed 65 to pass through. You can calculate that 65 / (1090 + 65) =
0.056, or roughly 1/17. Therefore, the policer works correctly.

Define a service policy that uses the class, and apply the policer to that
class.

Apply the service policy to a port or VLAN.

Consider the same example as that provided in the section
Configure and Monitor Policing in CatOS Software,
but now with Cisco IOS Software. For this example, you have a traffic generator
attached to port 2/8. It sends 17 Mbps of UDP traffic with destination port
111:
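A Cisco IOS configuration along these lines could implement the same test with a named aggregate policer (the interface, ACL number, and names are illustrative):

```
mls qos
mls qos aggregate-policer udp111-1mbps 1000000 2000 conform-action transmit exceed-action drop
access-list 111 permit udp any any eq 111
!
class-map match-all cl-udp111
  match access-group 111
!
policy-map pol-udp111
  class cl-udp111
    police aggregate udp111-1mbps
!
interface GigabitEthernet2/8
  service-policy input pol-udp111
```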

There are two types of aggregate policers in Cisco IOS Software:
named and per-interface. The named aggregate
policer polices the traffic combined from all interfaces to which it is
applied. This is the type used in the above example. The per-interface policer
polices traffic separately on each inbound interface to which it is applied. A
per-interface policer is defined within the policy map configuration. Consider
this example, which has a per-interface aggregate policer:
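A sketch of such a configuration follows (interfaces, ACL number, and names are illustrative). Because the police statement is defined within the policy map rather than as a named aggregate, each interface that the policy is attached to is policed to 1 Mbps separately:

```
access-list 111 permit udp any any eq 111
!
class-map match-all cl-udp111
  match access-group 111
!
policy-map per-int-pol
  class cl-udp111
    police 1000000 2000 conform-action transmit exceed-action drop
!
interface FastEthernet4/11
  service-policy input per-int-pol
!
interface FastEthernet4/12
  service-policy input per-int-pol
```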

Microflow policers are defined within the policy map configuration, as
are per-interface aggregate policers. In the example below, every flow from
host 192.168.2.2 that comes into VLAN 2 is policed to 100 kbps. All traffic
from 192.168.2.2 is policed to 500 kbps aggregate. VLAN 2 includes interfaces
fa4/11 and fa4/12:
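A configuration along these lines could achieve this (the ACL number and names are illustrative; mls qos vlan-based makes the ports use the VLAN policy, and mls qos bridged enables microflow policing of bridged traffic on the VLAN):

```
mls qos
access-list 102 permit ip host 192.168.2.2 any
!
class-map match-all cl-host
  match access-group 102
!
policy-map micro-host
  class cl-host
    police flow 100000 2000 conform-action transmit exceed-action drop
    police 500000 2000 conform-action transmit exceed-action drop
!
interface FastEthernet4/11
  mls qos vlan-based
!
interface FastEthernet4/12
  mls qos vlan-based
!
interface Vlan2
  mls qos bridged
  service-policy input micro-host
```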

The example below shows a configuration for per-user policing for the
Supervisor Engine 720. Traffic that comes in from users behind port 1/1 toward
the Internet is policed to 1 Mbps per user. Traffic that comes from the
Internet toward the users is policed to 5 Mbps per user:
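A per-user configuration could be sketched as follows (interfaces, ACL number, and burst values are illustrative; GigabitEthernet1/2 is assumed to be the Internet-facing uplink). The src-only flow mask treats all traffic from one source IP address as one flow, and the dest-only mask does the same per destination IP address:

```
access-list 100 permit ip any any
!
class-map match-all all-ip
  match access-group 100
!
policy-map per-user-up
  class all-ip
    police flow mask src-only 1000000 32000 conform-action transmit exceed-action drop
!
policy-map per-user-down
  class all-ip
    police flow mask dest-only 5000000 32000 conform-action transmit exceed-action drop
!
interface GigabitEthernet1/1
  service-policy input per-user-up
!
interface GigabitEthernet1/2
  service-policy input per-user-down
```

This uses two different flow masks, which is within the limit of two active QoS flow masks in the system at a given time.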

Note: Allowed packets have increased by 304 and excess packets have
increased by 5068. This means that the policer has dropped 5068 packets and
allowed 304 to pass through. Given the input rate of 17 Mbps, the policer
should pass 1/17 of the traffic. If you compare the dropped and forwarded
packets, you see that this has been the case: 304 / (304 + 5068) = 0.057, or
roughly 1/17. Some minor variation is possible due to hardware policing
granularity.

For microflow policing statistics, use the show mls ip
detail command: