BGP Multipath-Relax

So I learned a new command today. As usual I want to share with everyone. Today’s command is “bgp bestpath as-path multipath-relax”, which is actually hidden in IOS.

To give some background, BGP will not load balance across multiple paths by default. We can configure it to do so with the well-known “maximum-paths n” command. The catch is that all path attributes must match (Weight, Local Preference, AS path, etc.). This is acceptable if we are multihomed to a single AS, but what if we are multihomed to different ASes? In that case we are not able to load balance across theoretically equal paths. Enter the “bgp bestpath as-path multipath-relax” command…
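Here’s a minimal sketch of how the pieces fit together (the AS number and path count are hypothetical; note the paths must still be equal in AS-path length, just not identical in AS path):

```
router bgp 65000
 ! Install up to 2 equal eBGP paths in the routing table
 maximum-paths 2
 ! Relax the "identical AS path" requirement: paths of the same
 ! AS-path LENGTH via different neighboring ASes can now load balance
 bgp bestpath as-path multipath-relax
```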

Real short one today. This post is about Nexus port profiles. Port profiles are great for ensuring consistency across port configurations. They allow us to configure a template which is inherited by a group of ports. There are three types of port profiles: Ethernet, Interface-VLAN (SVI), and Port-Channel. In my example, we’ll be configuring several ports as “VM Server” ports. Some may ask why one would choose these over the simple “interface range” command. In my opinion, port profiles are stricter: the range command configures any range of ports, whereas a port profile configures ALL ports which inherit it. Any new configuration added to the profile is pushed to the inheriting ports as well.

Pretty basic. We create an “ethernet” port profile named VM and assign some config to it. The command “state enabled” makes this profile usable, without this command we wouldn’t be able to inherit the profile on a port.
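A minimal sketch of what that looks like (the VLAN number and port range here are hypothetical):

```
port-profile type ethernet VM
  switchport mode access
  switchport access vlan 100
  spanning-tree port type edge
  ! Without this, the profile cannot be inherited by any port
  state enabled

interface Ethernet1/10-15
  inherit port-profile VM
```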

I’ve been working with a good amount of Nexus gear lately. Today we’ll configure Configuration Synchronization offered on the Nexus 5K platform. This feature allows one to create a switch profile on a vPC member and push the profile’s configuration to the peer. This is crucial as vPC configurations need to match exactly on both peers. If configurations don’t match, the channel could be suspended. Here’s our topology:

We’re using an Enhanced vPC (EvPC) topology here (supported in 5.1(3)N1(1) and up) – the FEXes are dual-homed, connected to the 5Ks via vPC, and we’re also running a vPC to the host. Config sync is almost a necessity here. We’re using 169.254.0.0/30 for the peer-keepalive link IPs (a practice stolen from Chris Marget). It’s important to note that CFS (Cisco Fabric Services – the magic that makes config sync work) communicates over the Management 0/peer-keepalive interface.

Here we’ve picked a range of ports and joined them to a port-channel. Then we enter the port-channel and configure our settings – notice that we’ve made this “vpc 50”. Before committing we run the “verify” command, which runs through the config and ensures it can be applied to both peers. Finally we commit the changes. The switch pauses for a bit and then tells us we’ve succeeded. A couple of notes on this. I’ve seen the switch return a successful verification but still fail on the commit; this is typically due to pre-existing commands that cause the range or port-channel config to fail. Also, if your commit does fail, you can run “show switch-profile status” on the peer to determine why.
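The workflow described above looks roughly like this (the profile name, peer IP, and port numbers are hypothetical):

```
N5K-1(config)# configure sync
N5K-1(config-sync)# switch-profile DC-PROFILE
N5K-1(config-sync-sp)# sync-peers destination 169.254.0.2
N5K-1(config-sync-sp)# interface ethernet 101/1/1-2
N5K-1(config-sync-if-range)# channel-group 50 mode active
N5K-1(config-sync-sp)# interface port-channel 50
N5K-1(config-sync-if)# switchport mode trunk
N5K-1(config-sync-if)# vpc 50
N5K-1(config-sync-sp)# verify
N5K-1(config-sync-sp)# commit
```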

Everything looks good on the 5k-2 ports. We can see the configuration came through as expected. Keep in mind that if a port is configured using a profile you will not be able to configure it manually; all additions/changes need to be made through the profile.

That’s the basics for config sync. You can do quite a bit with this and it is definitely helpful in vPC environments. I made this post mostly because I was unable to find this information posted in a way I liked. Hopefully this is helpful to some.

Disclaimer: This is a new feature to me. It’s working well in the lab, but please let me know if I have anything wrong or there is a better way to accomplish something.

Hello everyone. I know I’ve been neglecting this blog for too long. Can’t promise that things are going to change, but I have a good post for today.

I was recently exposed to some new technology while working with a customer and had to learn it pretty quickly. This post is about a new feature in the Cisco ASA 8.4 code called Bridge Groups. This is essentially the addition of BVI interfaces, which have existed in IOS forever. The feature is useful when running an ASA transparently but not physically inline. Running a firewall physically inline works well, but it limits you by the number of available interfaces on each firewall, and adding physical interfaces to a firewall is expensive. This feature also saves you from using a context per firewalled VLAN on your ASA. Here we’ll use a 3750 for physical connectivity and use BVIs to force traffic through the firewall. Here is the physical topology:

I’m trying to convey quite a bit of information with this diagram. We have a layer 3 switch as the “core” and an ASA 5505 trunked to it. We have two hosts, PC1 and Server which are in VLAN 50 and VLAN 25, respectively. VLAN 26 is only for management of the ASA. VLAN 50 (and 2050) is where we’ll focus for this post. I’m going to start with the config here as I think it will make more sense when I try to explain.

This is where the magic happens. First, we configure the firewall in transparent mode. Then we configure our interfaces and allow the necessary VLANs: 50 and 26 (management) on e0/1, the “inside” interface, and 2050 on e0/2, the “outside” interface. We then configure VLAN 26 for management and add it to bridge group 2, which has an IP and default route configured at the bottom; again, this is for management only. Then we configure VLANs 50 and 2050, with 50 as inside and 2050 as outside. Both VLANs are in bridge group 1. In my testing it appears that the BVI interface REQUIRES an IP address to pass traffic, but which IP you pick seems irrelevant, so I’ve used 1.1.1.1/24 here. Finally, we have our inside-in access-list allowing any to any, applied to the inside interface inbound.
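A condensed sketch of the config just described (the management addressing is hypothetical, and the interface numbering assumes the 5505’s built-in switchports):

```
firewall transparent
!
interface Ethernet0/1
 switchport mode trunk
 switchport trunk allowed vlan 26,50
interface Ethernet0/2
 switchport mode trunk
 switchport trunk allowed vlan 2050
!
interface Vlan26
 nameif management
 bridge-group 2
 security-level 100
interface Vlan50
 nameif inside
 bridge-group 1
 security-level 100
interface Vlan2050
 nameif outside
 bridge-group 1
 security-level 0
!
! The BVI needs an IP to pass traffic, even an arbitrary one
interface BVI1
 ip address 1.1.1.1 255.255.255.0
interface BVI2
 ip address 192.168.26.10 255.255.255.0
!
access-list inside-in extended permit ip any any
access-group inside-in in interface inside
!
route management 0.0.0.0 0.0.0.0 192.168.26.1
```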

Hopefully this is beginning to make sense. Essentially, all PC1 traffic destined outside of its subnet will traverse the firewall. When PC1 initiates traffic outside of its subnet, it will ARP for its gateway, which will begin on VLAN 50, then be bridged through the firewall to VLAN 2050. The 3750 will reply on 2050 which will be sent back to 50. All L3 traffic should flow this way. I’ve tried to make a diagram to show the flow:

That’s about it for this one. We can now run our transparent firewalls through an intermediate switch and we’re also able to run multiple VLANs through a transparent firewall without using up contexts. All we need to do to allow more VLANs to be firewalled is create the L2/L3 on the switch, add the VLANs to the trunks and configure the ASA with the appropriate SVIs and bridge groups (and ACLs). Hope this was useful.

Disclaimer: This is a very new feature to me. It’s working well in the lab, but please let me know if I have anything wrong or there is a better way to accomplish something.

I’ve been meaning to post this for some time. A while back there was a thread on Networking Forum where someone mentioned that 2960s can route now – the 2960 is now a layer 3 switch. I was skeptical, but then I was pointed to this link. I was very, very surprised. I’m not sure why Cisco decided to add this functionality to the 2960s, but I’m definitely grateful. As of 12.2(55)SE, 2960s are layer 3 switches (with some limitations, which I’ll cover later). This knowledge came in handy shortly after reading that thread. I was working on a circuit upgrade for a remote site at my previous company. The circuit was ordered incorrectly and I ended up in need of a layer 3 switch ASAP. The tech we’d sent was leaving the next day, so there was no time to ship him anything. Luckily, we had some 2960s on site.

Configuring 2960s to route is pretty simple. The Switch Database Management template (SDM) needs to be changed to “lanbase-routing”. A reboot is (always) needed after changing the SDM template. After reboot, it’s just like enabling routing on any other L3 switch with the command “ip routing” from global config.

First we’ll change the SDM template:

SwitchA(config)#sdm prefer lanbase-routing
Changes to the running SDM preferences have been stored, but cannot take effect until the next reload.
Use 'show sdm prefer' to see what SDM preference is currently active.
SwitchA(config)#^Z
SwitchA#reload
System configuration has been modified. Save? [yes/no]: y
Proceed with reload? [confirm]

After changing the SDM template, we are reminded that we’ll need to reboot and also given a command to verify the change after the next boot.

Now we verify:

SwitchA#show sdm prefer
The current template is "lanbase-routing" template.
The selected template optimizes the resources in
the switch to support this level of features for
8 routed interfaces and 255 VLANs.
number of unicast mac addresses: 4K
number of IPv4 IGMP groups + multicast routes: 0.25K
number of IPv4 unicast routes: 4.25K
number of directly-connected IPv4 hosts: 4K
number of indirect IPv4 routes: 0.25K
number of IPv4 policy based routing aces: 0
number of IPv4/MAC qos aces: 0.125k
number of IPv4/MAC security aces: 0.375k

The change was successful and we’re given the details about this SDM template.

Now seems like a good time to touch on the limitations of the layer 3 capabilities on 2960s. As we see in the output above, we’re limited to 8 routed interfaces. These will be SVIs. At this point, the 2960s don’t support routed physical interfaces (“no switchport”). Another important note is that we’re only allowed 16 static routes and there is no dynamic routing capability.
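Once the SDM template is in place, the rest looks like any other L3 switch. A sketch with hypothetical addressing (remember: SVIs only, up to 8, and a maximum of 16 static routes):

```
SwitchA(config)#ip routing
SwitchA(config)#interface Vlan10
SwitchA(config-if)#ip address 192.168.10.1 255.255.255.0
SwitchA(config-if)#interface Vlan20
SwitchA(config-if)#ip address 192.168.20.1 255.255.255.0
SwitchA(config-if)#exit
SwitchA(config)#ip route 0.0.0.0 0.0.0.0 192.168.10.254
```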

Today we’ll go over the process to connect an IOS voice gateway/CME (Call Manager Express) to the PSTN. I set this up last night and thought it would be a good post. I’ll briefly touch on using a SIP trunk as backup/failover too.

I’ve been running a SIP trunk to Flowroute for quite a while, but I just recently got a “landline” from my ISP because they’re doing a promotion where it’s basically free. I’m keeping my SIP trunk, but I’ll be using it as backup since all US calling through the ISP is free. I’m using a 2811 with an NM-HD-2V and a VIC2-4FXO.

We’ve configured three dial peers here. First note that the dial peer type is “pots”; this is used when the destination is an analog port (like FXO). Next you see the “preference” command – lower is better, making these peers more preferred than my SIP peers (which have a preference of 2). The “destination-pattern” command matches the dialed string, sort of like a static route. The first dial peer matches 11 digits: the leading 1, then the area code ([2-9], a wildcard matching 0 through 9, then [1-9]), then seven wildcards matching 0 through 9. This is my convoluted way of blocking 900 numbers. For the incoming call, we’re matching any digits. The “port” command tells the router where to send the call when it matches the pattern – port 1/1/0 here. Then we tell the router to forward all digits. This is important because otherwise it would strip the explicitly defined digits, which we don’t want here; we want all digits sent to the PSTN.
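Here’s roughly what the outbound and inbound POTS peers described above might look like (the dial-peer tags and descriptions are my own; the SIP peers with preference 2 aren’t shown):

```
! Outbound to the PSTN via the FXO port
dial-peer voice 10 pots
 description Outbound PSTN
 preference 1
 destination-pattern 1[2-9].[1-9].......
 port 1/1/0
 forward-digits all
!
! Inbound: match any called number arriving on the FXO port
dial-peer voice 20 pots
 description Inbound PSTN
 incoming called-number .
 port 1/1/0
```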

Not much to that one. We go into the port config and use the “connection plar” command with a target extension. PLAR tells the router to automatically forward to an extension when the line goes off-hook. So when this port gets an incoming call from the PSTN (which takes it off-hook), it will instantly forward it to extension 5001. We’ve also used “caller-id enable”, which is pretty self-explanatory; it enables incoming caller ID on this FXO port.
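The port config in question, using the extension 5001 mentioned above:

```
voice-port 1/1/0
 ! Forward incoming (off-hook) calls straight to extension 5001
 connection plar 5001
 caller-id enable
```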

That’s all for this one. This could be the first(ish) of many voice-related posts. I’ll be moving through the voice exams (hopefully quickly) in the next few months, and if there is interest, I can try to do some posts on things I’m learning/studying. Real voice (CUCM) can be tough to blog about because it’s mostly GUI based, which requires (lots of) screenshots. Making 11ty screenshots for every post could get old quick. Post in the comments if you’d like to see voice topics, and if you have anything specific you’d like to read about.

Disclaimer: I am, by no means, a voice guy (yet?), so if you see any errors please let me know in the comments. I can say that this works, but I wouldn’t doubt if it’s not the “best” way.

Another quick one. Today I’m going to cover a simple, but very useful OSPF command: “show ip ospf rib”. This command is similar to “show ip route ospf”, but goes a bit deeper.

If you’ve ever done a routing protocol migration, you know how important it can be to see each protocol’s full routing table. Much of the time AD makes this difficult. Administrative Distance (AD) is the believability of a routing protocol on a Cisco device. The default AD values are:

Route Source          Default Distance
Connected Interface   0
Static Route          1
EIGRP Summary         5
eBGP                  20
Internal EIGRP        90
IGRP                  100
OSPF                  110
IS-IS                 115
RIP                   120
EGP                   140
ODR                   160
External EIGRP        170
iBGP                  200
Unknown               255

Lower is better. If a router has identical routes from RIP and OSPF, the OSPF routes will be added to the table. If it’s EIGRP versus OSPF, EIGRP will win.

Scenario:

Company ATN Solutions is migrating from EIGRP to OSPF. They’ve chosen to run both protocols simultaneously, while leaving the AD values at the default. This allows both protocols to co-exist without affecting the routing domain: EIGRP routes will stay in the table due to EIGRP’s lower AD. I’m not going through the migration steps or really any detail about how this would be performed; I’m just using this to demonstrate the command.

During this migration, we’ll need to verify that all existing EIGRP prefixes are also being learned by OSPF (we’ll use process number 200). If we were masochists, we could look at the LSDB to determine this, but that’s not really ideal. So we’ll use the “show ip ospf 200 rib” command. First we’ll look at the existing RIB:
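For reference, the output of “show ip ospf 200 rib” is shaped roughly like this – the router ID, prefixes, costs, and next hops here are all made up for illustration:

```
R1#show ip ospf 200 rib

            OSPF Router with ID (10.0.0.1) (Process ID 200)

OSPF local RIB
Codes: * - Best, > - Installed in global RIB

*>  10.1.1.0/24, Intra, cost 2, area 0
      via 10.0.12.2, FastEthernet0/0
*   10.2.2.0/24, Intra, cost 3, area 0
      via 10.0.12.2, FastEthernet0/0
```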

And there it is. We see that OSPF is learning the same prefixes as EIGRP. The output is similar to “show ip bgp” in that * = Best, and > = Installed. We could now, theoretically, feel comfortable in taking the next step on our migration path, maybe raising EIGRP’s AD to make OSPF more preferred.

That’s all for today. Another quick post to make up for my hiatus. Post any questions in the comments.

Dropping in to do a quick post today. Sorry for the ridiculous lack of content lately. I’ve been busy with finding/changing jobs and new responsibilities and all that.

Today I’m going to cover “object groups” on ASAs. I was never a big fan of these, which I realized had a lot to do with inheriting ones written by others rather than writing them myself. It takes a while (for me, at least) to wrap your head around what the person before you was trying to accomplish. This is what put me off object groups. But I discovered that if I write them myself, I love them. They can be hugely useful, and they’re even available in IOS now (as of 12.4(20)T). Here’s an example of when they’d be used:

Scenario:

We need to allow several hosts (192.168.1.100-105) to access a group of servers (192.168.2.10-15) on multiple ports (21, 22, 25, 80, 443). Without object groups, this would produce a pretty lengthy ACL. First I’ll do the object group config, then I’ll show what it would look like with normal ACL entries.

Pretty simple. We create some object groups matching IPs for the hosts and servers, then we match ICMP traffic and various TCP ports. Notice that there are two object group types used here, the first is “network”. This type allows us to specify IPs and subnets. The second type is “service”. This type allows us to match different ports and protocols.

Amazingly, we only need one line. We’ve configured an ACL line with three object groups. Notice that the ports actually come first, which threw me a bit when I first saw object groups in action. Other than that, everything is relatively normal. We need to specify “object-group” before each one, and as usual, it’s source, then destination.
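A sketch of the whole thing (group names are my own, and the exact service-object syntax varies a bit by ASA version). The service group replaces the protocol field of the ACE, which is why the ports come first:

```
object-group network HOSTS
 network-object host 192.168.1.100
 network-object host 192.168.1.101
 network-object host 192.168.1.102
 network-object host 192.168.1.103
 network-object host 192.168.1.104
 network-object host 192.168.1.105
!
object-group network SERVERS
 network-object host 192.168.2.10
 network-object host 192.168.2.11
 network-object host 192.168.2.12
 network-object host 192.168.2.13
 network-object host 192.168.2.14
 network-object host 192.168.2.15
!
object-group service APP-SERVICES
 service-object icmp
 service-object tcp eq 21
 service-object tcp eq 22
 service-object tcp eq 25
 service-object tcp eq 80
 service-object tcp eq 443
!
! One line: protocol/ports first, then source, then destination
access-list inside-in extended permit object-group APP-SERVICES object-group HOSTS object-group SERVERS
```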

Now let’s look at part of the “show access-list” output. This will show us what the firewall sees and matches, and also what we were saved from typing manually:

I’m not pasting all 252 lines, that would just be a waste of space. You get the idea though, the firewall is showing us what our single ACE actually does. All those rules come from our one line. That’s the power of object groups.

Just a short one today. Again, sorry for the lack of posts. Hopefully I can get back to posting regularly. I hope this all made sense. If you have any questions, please post in the comments.

It’s Jared from CCNPJourney.com. Colby had asked me a couple weeks ago if I would be interested in posting some articles on his blog as he’s been fairly busy lately, and of course I said yes. So I thought for my introductory post on the blog I would do a brief write-up on how to use Iperf!

For starters here’s a little bit of info on Iperf. Iperf is a tool that many system/network admins use to measure the bandwidth on a network, as well as the quality of the path on that network. It is capable of generating traffic using TCP and UDP. The TCP and UDP tests are useful for performing the following kinds of tests:

- Latency (response time or RTT): can be measured with the Ping utility.

- Jitter: can be measured with an Iperf UDP test.

- Datagram loss: can again, be measured with an Iperf UDP test.

- Bandwidth: can be measured with an Iperf TCP test.

Iperf also allows you to run simultaneous and bi-directional tests. Developers in the community have also created a GUI for Iperf called Jperf, a Java-based GUI that allows you to save settings and more easily make changes to them. For more information on Iperf, head over to the Wiki page or their page at SourceForge.

Now let’s get down to business…

Let’s start out by downloading Iperf. In this example I will be using the Windows version of Iperf, but feel free to use what you choose – the switches are the same regardless of the OS. You will need Iperf on one machine for the “client” role and on another for the “server” role.

Once you have put the files on each machine we can begin. Start out by opening a command prompt and then navigating to the folder you have Iperf stored in. You will then enter the commands below, on the server and client respectively to begin your first basic Iperf test!

Server

Enter the command “iperf.exe -s”, without the quotes, to start Iperf in server mode (indicated by “-s”).

The screenshot above shows what you will be presented with after you’ve started Iperf in server mode. It shows you the port that was automatically chosen (which can be manually changed), as well as the TCP window size, which again is chosen by default based on the OS, but can be changed.

Client

To start Iperf in client mode (using no arguments) enter the command “iperf.exe -c x.x.x.x”, where “x.x.x.x” equals the IP address of your Iperf server in the above step.

In the screenshot above you can see the final results of a basic Iperf test. Again presenting you with the TCP Window size and port that were chosen by default. You will also see the resulting bandwidth calculation as well as the amount of data transferred during the test.
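The jitter and datagram-loss tests mentioned earlier use UDP mode instead. A quick sketch (the bandwidth and duration values are just examples):

```
:: Server side: listen for UDP tests
iperf.exe -s -u

:: Client side: 10-second UDP test at 1 Mbit/s;
:: the report includes jitter and datagram loss
iperf.exe -c x.x.x.x -u -b 1M -t 10
```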

That about sums up this post on Iperf. It’s really a very basic program that can help a lot in day to day troubleshooting. I plan to make another post on how to use Jperf as well, so keep an eye out for that!

Hope you enjoyed!

Jared Miller has been in IT since he started school in 2007. He is currently an Information Systems Analyst at a local hospital and holds the CCNA certification and is beginning to study for the CCNA Voice and CCNP exams, in that order.

I said “it means there’s a loop, give me the switch IP”. Then I began the mission of tracking down the loop. This was a pretty large site, but luckily I only had to go through a couple switches. Unfortunately this happened a couple hours ago and I didn’t save my work so we won’t be able to go through the real steps.

To track down a loop, you start with the “show mac-address-table address [flapping mac]” command:

We see that the MAC is coming in on port gi2/2 and gi2/4. One port will lead us to where that MAC is plugged in and the other will lead us to the loop. Pick a port and start working through. This is where CDP comes in handy:
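The chase at each hop boils down to two commands (the MAC here is hypothetical; older platforms use “show mac-address-table” with the hyphens):

```
! Which ports is the flapping MAC being learned on?
show mac address-table address 0011.2233.4455

! What's on the other end of each of those ports?
show cdp neighbors gigabitEthernet 2/2 detail
show cdp neighbors gigabitEthernet 2/4 detail
```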

Next we move to that switch and so on and so forth. Eventually we will come to the switch with the loop. In this case one of our switches had a little workgroup switch plugged into two ports, in two separate VLANs, which is why it wasn’t caught by STP.

This was a short one, just quickly posting up a scenario I ran into today. Let me know if it needs more information or I left anything out.

Update: I’m including the error %SW_MATM-4-MACFLAP_NOTIF in this post, which is essentially the same issue.

Today’s topic is HSRP (Hot Standby Router Protocol). HSRP is a Cisco-proprietary “First Hop Redundancy Protocol”, typically used for redundancy at the first hop from a client segment. Two or more routers in a group share a virtual IP address; one router is active at a given time and replies to ARP requests for the virtual IP. In this example, we have R1 and R2 in standby group 100 with a virtual IP of 192.168.100.1. This IP will be the default gateway for all hosts in VLAN 100. Here is the topology:

This is a basic topology, both R1 and R2 have connections to the internet. They are running HSRP on their FastEthernet 0/0 interfaces. Here’s the basic HSRP config:
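A minimal version of that config (the physical interface addressing is hypothetical):

```
! R1
interface FastEthernet0/0
 ip address 192.168.100.2 255.255.255.0
 standby 100 ip 192.168.100.1

! R2
interface FastEthernet0/0
 ip address 192.168.100.3 255.255.255.0
 standby 100 ip 192.168.100.1
```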

The main command we’ll use with HSRP is “show standby”. It gives us quite a bit of information: the group number (100), that R1 is the active router in the group, information about state changes, the VIP, timers, other useful details, and the priority, which we’ll talk about next.

HSRP routers use “priority” to determine which router should be active; the default is 100. We’ll set R1’s priority to 110, forcing it to be the active router. We will also use interface tracking, which tells the router to decrement its priority if a tracked interface goes down. Here we’ll track both routers’ Fa0/1 interfaces, which connect them to the internet. We will also enable preemption, which causes the router with the highest priority to become active. Here’s the config:
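On R1 that would look roughly like this (the decrement value of 20 is my own choice; it just needs to drop R1 below R2’s default of 100):

```
! R1: preferred active router
interface FastEthernet0/0
 standby 100 ip 192.168.100.1
 standby 100 priority 110
 standby 100 preempt
 ! Drop priority by 20 (to 90) if the internet-facing link fails
 standby 100 track FastEthernet0/1 20
```

R2 would carry the same tracking and preempt commands at its default priority, so it can take over when R1’s priority drops below 100.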