Hrm... If this were really transparent, shouldn't R1 and R2 appear to be directly connected? I think you know the answer, and luckily, so do I!

If we truly want to make the provider network transparent to the customer we need to enable "L2 Protocol Tunneling" on top of our QinQ tunnels. This is simple enough to do, and is added with the l2protocol-tunnel command in interface mode on the customer facing ports of the provider switches. Here's the IOS help for the command:
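The help output was a screenshot, but from memory, on a 3560 the context-sensitive help looks roughly like this (exact wording and available keywords vary by platform and IOS version):

```
SW1(config-if)#l2protocol-tunnel ?
  cdp                 Cisco Discovery Protocol
  drop-threshold      Set drop threshold for protocol packets
  point-to-point      point-to-point L2 Protocol
  shutdown-threshold  Set shutdown threshold for protocol packets
  stp                 Spanning Tree Protocol
  vtp                 Vlan Trunking Protocol
  <cr>
```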

As you can see we have a few options here. Let's start at the top shall we?

By adding the cdp keyword, CDP frames will be tunneled inside the QinQ tunnel we created earlier instead of being processed by the switch itself. If you remember, when we enabled dot1q-tunnel mode on the interface CDP was automatically disabled, so the switch is already ignoring CDP; but because CDP is link-local, the switch doesn't forward it either, it simply drops the frames.

If we enable l2protocol-tunnel on both SW2 and SW1 for the interfaces facing our routers, and wait a minute for the next CDP advertisement interval, we'll see the following:
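As a sketch, the customer-facing interface config ends up looking something like this (the interface number here is just a placeholder for your own customer-facing port):

```
SW1(config)#interface FastEthernet0/1
SW1(config-if)#switchport mode dot1q-tunnel
SW1(config-if)#l2protocol-tunnel cdp
```

The same two lines go on the matching customer-facing port of SW2.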

The other two keywords in the list above that are directly related to this are stp and vtp. Those two function in exactly the same way, and allow a customer to maintain their own spanning trees and VTP databases over the provider cloud as if the switches were directly attached. And as a point of interest, if you enter the l2protocol-tunnel command without any keywords it enables all three of these options simultaneously.
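In other words, the bare form below should be equivalent to entering the cdp, stp, and vtp keywords individually:

```
SW2(config-if)#l2protocol-tunnel
```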

The next keyword I want to look at is the point-to-point keyword. And as usual I'll start with the IOS help:
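Again the screenshot is gone, but the point-to-point sub-help looks roughly like this on a 3560 (wording may differ by IOS version):

```
SW1(config-if)#l2protocol-tunnel point-to-point ?
  lacp  Link Aggregation Control Protocol
  pagp  Port Aggregation Protocol
  udld  UDLD
  <cr>
```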

Hey, I recognize those! The first two make etherchannels, and the last one is a link integrity mechanism that's mainly used for fibre links!

I'm not going to dwell on the udld keyword much. If you know what udld is then you can pretty much figure out what this does. You'll want to note the upcoming discussion around the meaning of "point-to-point" that the command implies, but otherwise you're on your own.

The etherchannel protocols though I am going to talk about. When you enable the L2 Protocol Tunnel for either LACP or PAgP the end result is the same, so for brevity's sake I'm just going to discuss LACP.

As I'm sure you've figured out by now if you enable L2 Protocol Tunneling for LACP you can create etherchannels between two devices that are not directly connected. Or put another way, you can create an etherchannel between two switches at different sites over a provider network.
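Assuming the same dot1q-tunnel access ports as before, it's one more line per customer-facing interface on the provider switches (interface number again a placeholder):

```
SW1(config)#interface FastEthernet0/1
SW1(config-if)#switchport mode dot1q-tunnel
SW1(config-if)#l2protocol-tunnel point-to-point lacp
```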

I might normally insert a witty comment about some sort of astonishment, but we've been doing this for a blog post and a half now, so if you're still shocked and awed then you seriously need to get with the program here...

To illustrate how we do this I'm going to insert another switch, SW3, in between SW1 and SW2 in our original topology.

There is, however, a catch here that you need to take note of: the connections between the customer device interfaces need to be... You guessed it: point-to-point. This means that if you use a single metro tag for all the customer-facing interfaces, your poor customer's etherchannels will not form properly, and all kinds of bad things will happen.

If you look at the diagram you see that there are four links that we are concerned with as far as forming an etherchannel between SW1 and SW2 with SW3 in the middle; two links each connect SW3 to SW1 and SW2. What we need to ensure is that we don't put all four links into the same VLAN, or really into any VLAN that contains more than one SW1-facing interface or more than one SW2-facing interface.

Does that sound like something? A segment with only two "ends"? A point-to-point link, perhaps?

This is where the lightbulb in my head went on. I first tried this using a single metro tag VLAN on SW3, and adding the four interfaces concerned to that VLAN in dot1q-tunnel mode. The end result was that LACP got very confused and I had flapping port-channel interfaces. After re-reading the docs, hashing the point-to-point stuff out in my head, and thinking about how a normal etherchannel is set up, it clicked for me.
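In config terms, the fix is to give each SW1/SW2 interface pair its own metro VLAN on SW3, so that every tunnel VLAN has exactly two "ends". The VLAN and interface numbers below are purely illustrative:

```
SW3(config)#vlan 100
SW3(config)#vlan 200
! First point-to-point pair: one SW1-facing and one SW2-facing port
SW3(config)#interface range FastEthernet0/1 , FastEthernet0/3
SW3(config-if-range)#switchport access vlan 100
SW3(config-if-range)#switchport mode dot1q-tunnel
SW3(config-if-range)#l2protocol-tunnel point-to-point lacp
! Second point-to-point pair on its own VLAN
SW3(config)#interface range FastEthernet0/2 , FastEthernet0/4
SW3(config-if-range)#switchport access vlan 200
SW3(config-if-range)#switchport mode dot1q-tunnel
SW3(config-if-range)#l2protocol-tunnel point-to-point lacp
```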

If you recall, we're using subinterfaces on the routers with dot1q encapsulation and a QinQ tunnel between SW1 and SW2... And we've now added another QinQ tunnel through SW3, making this a QinQinQ ping extravaganza!

*Note* Since I'm out of switches, the third tag is actually only internal on SW3 and is never sent over a wire. If you added a 4th switch to this instead of connecting SW1 and SW2 to a single switch, and ran a trunk between SW3 and "SW4" then you would have frames with 3 dot1q tags.

To wrap up I'm going to encourage you to check out the show l2protocol-tunnel command, and its extended keywords. There's some useful info to be found there. I'll also drop the link to the Cisco DocCD for the 3560's and all the QinQ goodness you can handle.
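If you want somewhere to start, these are the variants I'd poke at (the interface is, as always, a placeholder):

```
SW1#show l2protocol-tunnel
SW1#show l2protocol-tunnel summary
SW1#show l2protocol-tunnel interface FastEthernet0/1
```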