Configuring 802.1Q tunneling (Q-in-Q) on Cisco switches

Overview

802.1Q tunneling, also known as Q-in-Q, is an extension of the well-known 802.1Q standard that allows service providers to transport customers' VLANs by simply adding a second IEEE 802.1Q tag to the already-tagged frames entering the ISP network. Customer VLAN IDs are preserved, and traffic from different customers remains segregated within the service-provider infrastructure even when they appear to use the same VLAN IDs. The primary benefit for the service provider is the reduced number of VLANs needed to support the same number of customers: each customer consumes a single provider VLAN regardless of how many VLANs it runs internally. By using 802.1Q tunneling, the Layer 2 domain of a customer can be extended across multiple sites. The resulting frame carries two 802.1Q headers and is called a double-tagged frame; on Cisco switches both tags use Ethertype 0x8100 by default (the IEEE 802.1ad standard defines 0x88a8 for the outer service tag). Since each of the 4096 possible outer ISP tags can carry all 4096 customer VLAN IDs, the total number of distinguishable VLANs rises to roughly 4096 × 4096, approximately 16.8 million.

Q-in-Q configuration

In my scenario there are two customers, each with two sites, and the requirement is to extend the Layer 2 domain between SITE A and SITE B for each customer. The service-provider network consists of switches SW1, SW2 and SW3. For each customer the ISP allocates a VLAN tag: CUSTOMER 1 is assigned VLAN 111 and CUSTOMER 2 is assigned VLAN 222. Both sites of CUSTOMER 1 use the 10.10.10.0/24 subnet and are placed in VLAN 100 internally; both sites of CUSTOMER 2 use the 20.20.20.0/24 subnet and VLAN 200.

By default the maximum transmission unit (MTU) of an interface is 1500 bytes. With Q-in-Q an outer VLAN tag is attached to each Ethernet frame, increasing the frame size by 4 bytes. Therefore, it is recommended to raise the MTU of every interface within the service-provider network; the recommended minimum value is 1504 bytes. To support frames larger than 1500 bytes on Ethernet ports we can use the command system mtu <value> in global configuration mode. After changing the MTU size, a reload is required for the new value to take effect.
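As a minimal sketch, the global MTU change could look like this (1504 bytes is just the minimum that accommodates one extra tag; check your platform documentation, as some Catalyst models use a separate system mtu jumbo command for Gigabit ports):

SW1(config)# system mtu 1504
SW1(config)# end
SW1# reload

The same change must be applied on SW2 and SW3.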

The MTU increase is needed only on the service-provider switches; no modifications are required on the customer equipment. By default, IEEE 802.1Q tunneling is disabled because the default switchport mode is dynamic auto. On the service-provider network, the first step is to configure the 802.1Q trunks between all three switches.
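The inter-switch link numbers are not named in the text, so the interface below is an assumption for illustration; the commands themselves are the standard 802.1Q trunk configuration (the encapsulation command is only needed on platforms that also support ISL):

SW1(config)# interface GigabitEthernet0/1
SW1(config-if)# switchport trunk encapsulation dot1q
SW1(config-if)# switchport mode trunk

The equivalent trunk configuration is repeated on the corresponding inter-switch interfaces of SW2 and SW3.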

Since we are not using VTP, we need to apply this VLAN configuration manually on switches SW2 and SW3 as well.
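A minimal sketch of that VLAN configuration, creating the two provider VLANs (the names are arbitrary labels chosen here for readability):

SW2(config)# vlan 111
SW2(config-vlan)# name CUSTOMER-1
SW2(config-vlan)# vlan 222
SW2(config-vlan)# name CUSTOMER-2
SW2(config-vlan)# exit

The same two VLANs are defined on SW1 and SW3.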

Next we need to enable Layer 2 protocol tunneling on the edge interfaces connected to the customers. Based on the topology shown above, these interfaces are Gi0/0 and Gi0/2 on switch SW1, and Gi0/1 and Gi0/2 on switch SW3. The customer-facing interfaces must be configured as access ports, since Layer 2 protocol tunneling cannot be enabled on ports in dynamic auto or dynamic desirable mode.

The customer-facing ports need to be assigned to the appropriate VLANs and configured in 802.1Q tunnel mode. When you enter dot1q-tunnel, the port is set unconditionally as an IEEE 802.1Q tunnel port. Below is the dot1q-tunnel configuration for all switch ports connected to the customer switches.
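As a sketch for SW1, assuming Gi0/0 connects to CUSTOMER 1 and Gi0/2 to CUSTOMER 2 (this mapping is an assumption, since the topology diagram is not reproduced here):

SW1(config)# interface GigabitEthernet0/0
SW1(config-if)# switchport access vlan 111
SW1(config-if)# switchport mode dot1q-tunnel
SW1(config-if)# interface GigabitEthernet0/2
SW1(config-if)# switchport access vlan 222
SW1(config-if)# switchport mode dot1q-tunnel

The same pattern is applied to the customer-facing ports Gi0/1 and Gi0/2 on SW3.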

We stated that, for each customer, both sites connected across the service-provider network need Layer 2 connectivity. By default, switches intercept and process a number of Layer 2 protocols, such as CDP, STP and VTP. Our goal is to tunnel these Layer 2 protocols so that the service-provider switches become transparent, forwarding CDP, VTP and STP frames instead of processing them. This can be achieved with Layer 2 protocol tunneling: when it is enabled, these protocols behave as if both customer sites were part of the same broadcast domain. To enable it we use the l2protocol-tunnel command in interface configuration mode.

When protocol tunneling is enabled, protocol packets are encapsulated with a well-known Cisco multicast address for transmission across the network. When the packets reach their destination, the well-known MAC address is replaced by the Layer 2 protocol MAC address. Using the command l2protocol-tunnel without any options enables tunneling for all supported Layer 2 protocols. If you want to tunnel only a specific protocol, use the command with one of the values shown between the vertical bars.

SW(config-if)# l2protocol-tunnel <cdp | lldp | stp | vtp>
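Applied to the edge ports of SW1, and using the no-argument form to tunnel all supported protocols, the configuration would look roughly like this:

SW1(config)# interface GigabitEthernet0/0
SW1(config-if)# l2protocol-tunnel
SW1(config-if)# interface GigabitEthernet0/2
SW1(config-if)# l2protocol-tunnel

The same command is repeated on the customer-facing ports of SW3.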

Now that the ISP-side configuration is complete, let's focus on the client side and configure the interfaces facing the ISP switches. On the customer side we need to define the required VLANs and enable 802.1Q trunking so that all customer VLAN traffic can be transported to the other site. On customer switches CST1-A and CST1-B we need to create VLAN 100, and on switches CST2-A and CST2-B we need to create VLAN 200.
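As a sketch for CST1-A (the uplink interface number toward the ISP is an assumption), the customer-side configuration could look like this:

CST1-A(config)# vlan 100
CST1-A(config-vlan)# exit
CST1-A(config)# interface GigabitEthernet0/0
CST1-A(config-if)# switchport trunk encapsulation dot1q
CST1-A(config-if)# switchport mode trunk

CST1-B mirrors this configuration, while CST2-A and CST2-B use VLAN 200 instead.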

Let's proceed with some tests to check whether the Q-in-Q configuration works properly. First, from PC-1 we'll try to ping PC-3. We have assigned PC-1 the IP address 10.10.10.101 and PC-3 the IP address 10.10.10.103.

The MAC address of PC-3 is present in the local ARP table of PC-1, which means PC-1 and PC-3 can communicate at Layer 2. Next, let's check whether Layer 2 protocol tunneling is working. We can use the show l2protocol-tunnel command to display information about Layer 2 protocol tunnel ports, including the protocols configured, the thresholds and the counters.

From the output above we can see that interface Gi0/0 is configured as a tunnel port. Looking at the Encaps/Decaps counter columns, we can also observe that CDP and STP traffic is being tunneled through this interface. Finally, let's check the output of the show cdp neighbors and show spanning-tree commands on one of the CUSTOMER 1 switches.

From that output we can conclude that switch CST1-A has spanning tree enabled for VLAN 100. Also, looking at the MAC address highlighted in red above (008c.3729.dd00), we can see that switch CST1-B at the remote site is the root bridge for VLAN 100.

Summary

In this article we demonstrated how to configure the Q-in-Q technology; I hope it helps anyone trying to become more familiar with the concept.