As it turns out, although voice isn’t my career focus, I decided to help out a team member and took ownership of an issue that came up. It was a phone that wasn’t working, one that was attached to an ATA 190. After I tracked down the device I typed its IP address into my web browser and found it in a Recovery Firmware state, what we traditional Cisco route/switch folks would consider a loose equivalent of “ROMMON.” In this case it is important to note that since the device wasn’t functioning on proper firmware and was in recovery mode, the IP address of the ATA is actually in the Data VLAN at this point, NOT the Voice VLAN.

Well this isn’t good. My first reaction is to look at what firmware the functioning ones are on and log into CCO to download the firmware. While that’s happening I think to myself…MATT!!! You don’t know this stuff…So I do some digging. Turns out, the ATA in firmware recovery won’t accept the .bin.sgn file. I perused some forums and found that you need to download the zip and do some hex editing. It wasn’t the clearest post in the world but I figured it out and thought I’d screenshot and share it with you all.

On Mac this file automagically unzips once it reaches my Downloads folder; however, you may have to unzip it manually. Once it’s unzipped we are going to open the file with a hex editor. I quickly downloaded Hex Fiend for the job on my laptop, which brings us to the next step.

Step 2: Open the *.bin.sgn file in your hex editor.

Step 3: Now we get to edit this lovely little file in our hex editor. To do so we are going to search for the string 00 0d 0d 0d.

Step 4: Select everything from the beginning of the file up to and including 00 0d 0d 0d and mash that delete key!

Step 5: Verify that the characters just after 00 0d 0d 0d are now at the beginning of the file.

Step 6: Once that is verified, do a SAVE AS on the file and give it the same filename without the .sgn extension.

Step 7: Return to the ATA’s webpage and upload the newly created file and hit Start on the upgrade. The ATA should bounce and reboot onto the newly uploaded firmware and should now successfully be in the voice subnet.
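For anyone who would rather not hex-edit by hand, steps 2 through 6 can be sketched in a few lines of Python. This is strictly my own unofficial take on the forum procedure: it assumes the 00 0d 0d 0d marker appears exactly once, and the function names are mine.

```python
# Unofficial sketch of the forum procedure: strip everything up to and
# including the 00 0d 0d 0d marker from a .bin.sgn image. Names and
# assumptions are my own, not from Cisco documentation.
MARKER = b"\x00\x0d\x0d\x0d"

def strip_signature(data: bytes) -> bytes:
    """Return the firmware bytes with the signed header removed (steps 3-5)."""
    idx = data.find(MARKER)
    if idx == -1:
        raise ValueError("marker 00 0d 0d 0d not found; is this really a .bin.sgn?")
    # Keep only the bytes just after the marker, per step 5.
    return data[idx + len(MARKER):]

def convert(src_path: str, dst_path: str) -> None:
    """Read the .bin.sgn file and write the stripped image (step 6's SAVE AS)."""
    with open(src_path, "rb") as src:
        data = src.read()
    with open(dst_path, "wb") as dst:
        dst.write(strip_signature(data))
```

Run something like convert("firmware.bin.sgn", "firmware.bin") against a copy of the download and upload the result. As always with recovery firmware, proceed at your own risk.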

Note: if you upload a different version than CUCM is expecting, the device may take quite some time (as I’ve noticed with these 190s) to “upgrade” before it will register. Do yourself a favor: turn on logging in the ATA, monitor the files being downloaded, and do not bounce the device. There isn’t much of a status indicator to tell you it’s upgraded except watching the logs at debug level.

This year I was unable to make it to Cisco Live U.S. for a variety of reasons. Sometimes the stars don’t align and you can’t make logistics work, or maybe financials just fall short. That doesn’t mean you can’t “go” to CLUS, even if it’s remotely and in spirit. Trust me, if the spirit’s right it’s an exhausting week even when you aren’t there.

I dedicated a lot of effort this year into “attending” even though I was remote just shy of a couple thousand miles away. Here’s how I did it!

I guess to start I should say I may have gone a little overboard this year. My pup and I got geared up in last year’s swag to get in the mood for the Sunday opening Tweetup. Feel free to shame me for being a little too excited about an event I wasn’t even at, but come on, I had a blast all week.

First and foremost, ciscolive.com was a go-to center for information. It had details about keynotes, speakers, etc., along with a great list of events and which ones were being broadcast. This allowed me to put highlight broadcasts in my calendar, such as the opening keynote and of course the closing keynote. The key to these keynotes was participating online with friends, colleagues, and event staff by means of Spark rooms, Ryver rooms, Facebook, Twitter, and occasionally even good ol’ text messages.

To further the live video information firehose we had the likes of some very awesome podcasters that seemed to take over CLUS this year. It was awesome being able to see outside parties’ insights into the event and not only talk technology, but also just talk about the experience of being there and get a public view of it all. Thanks to Network Collective, TechFieldDay, Whiskey and Wireless, and all of the others that took over and recorded podcasts or delivered live streaming events. To keep with an important theme in this post on how I was able to stay involved: all of these podcasts have a great Twitter presence, which allowed easy interaction from remote during their recordings.

Of course, I have to mention Twitter and how relevant it is to the Cisco Live experience. I must preface it, though, with the fact that whether in person or remote, during or after Cisco Live, Twitter is the go-to channel for communicating with friends, colleagues, and like-minded geeks throughout the year. If it weren’t for them, the remote or in-person experience just wouldn’t be the same. Throughout the week the entire crew running @Ciscolive, the folks in the Social Media Hub chasing down #CLUS, and everyone else interacting on Twitter were total rockstars. I had near-real-time communication with various people all day and all night. They never missed a beat, and because of that I felt as involved as if I were having those conversations in person.

Thanks to Twitter and a few creative people there was the fun and excitement of getting a selfie with “Figus” and The Cisco Live Hat. I have some pretty cool friends (Brennan and Stewart) who helped me out through the use of technology to get my very own selfie with them both. Just a little bit of fun being at the conference, all the way from my house. Such a cool creation that sparked the conference in person and remote. Thanks for the magic Rob!

There was a large theme this year at Cisco Live. Spark! I’ve been in Spark rooms for a while now. Occasional video calls with friends from last year to keep in touch. The Cisco Champion room where collaboration is such a wonderful thing. This year, I was lucky enough to be on a Spark call with Marena when the one and only Silvia noticed and the two of them managed to get me on a Spark Board. Here I was, thinking I was on a small video call but nope, I was on a big screen smack dab in the middle of the conference. What a surprise! It was really cool to be able to utilize the technology to literally “be there” on such a large scale. I guess that’s one way to attend remotely.

Then there was the CAE. Last year I witnessed quite a few people make the good ol’ call a friend during the concert to let them experience the fun. This year, my friends did that for me when I couldn’t make it. Such a great way to wrap up an exhausting week of keeping up with technology and session discussions online, watching keynotes and using technology to visit the conference live and in (video) person!

I guess I should summarize how I found it possible to attend the conference even when I wasn’t there.

Follow Ciscolive.com, Cisco Live on Facebook, Cisco Live on Twitter. Follow them on everything you can and you’ll catch great information as well as broadcasts and hot topics around the conference.

Follow the great podcasters and live video presenters. They are at the conference doing what they do so you can see the community’s response, what they think is relevant, and how the conference feels in person.

Engage on Twitter. I can’t stress this enough. People tweet what session they are going to, interesting facts about the session, responses to the session, and everything in between. Scott McDermott and Daniel Dib did an absolutely fantastic job this year providing this insight.

Most of all, have fun! Insert yourself. Participate in hash tags with vendors, Cisco, podcasters, the whole nine yards! Talk with your friends and colleagues that are there via any means possible. Spark, Slack, Twitter, Facebook, etc.

Say you have a network that currently has an MPLS WAN from your HQ to all of your branches. You want to migrate these MPLS connections into a DMVPN design and, in doing that, you would like to move the MPLS links into a Front Door VRF (FVRF). This move presents a challenge in regard to the routing tables and when to move the headend.

The topology used in the following demonstration is below:

A common approach would be to migrate all of the spokes into their new VRF and then flash-cut the headend into its VRF. However, there is an inherent risk in this approach. While you can pre-stage the DMVPN tunnels on all of the spokes, the migration process would look similar to this:

At this point our spokes should be pre-staged and we will migrate the headend. Here is where the challenge comes in. If there is an issue with the headend or the spoke configurations, we could be in a situation where all spokes are down for an extended period of time while we figure out where the routing issue lies. In my opinion a better approach is to pre-stage and cut over the headend first. With proper planning we can then migrate spokes individually, one at a time, while maintaining full connectivity.

Here is our challenge scenario:

At our headend our datacenter core has an OSPF peering to our MPLS CE router. Our MPLS provider peers PE-CE with eBGP. Mutual OSPF-to-BGP redistribution is occurring on the CE router. We want to make this CE router our DMVPN headend with EIGRP as the overlay protocol, allowing summarization to the spokes for a Phase 3 DMVPN. I’ve worked out the following strategy in such a way as to allow our headend router to be cut over into a “pre-staged” DMVPN, which will allow full connectivity between migrated and unmigrated spokes utilizing our new overlay routing protocol. In this strategy we maintain the headend PE-CE eBGP peering until the very end.

Since our spokes will be using either the BGP underlay peering or the EIGRP overlay peering at any given time as the primary means of learning prefix destinations, redistribution will be required at the headend between BGP & OSPF, as well as EIGRP & OSPF. The following outlines the redistribution requirements.

All BGP routes will be redistributed into OSPF. This is already occurring due to the MPLS eBGP peering with the SP and OSPF being the primary infrastructure routing protocol at the headend.

EIGRP will also be redistributed into OSPF as once a spoke has been migrated to DMVPN the headend will be learning the respective spokes routes via EIGRP.

OSPF is redistributed into BGP to facilitate the headend subnets reaching non-migrated sites.

EIGRP will need to be redistributed into BGP to facilitate non-migrated sites learning prefixes of migrated sites. Since EIGRP has a more preferred AD than OSPF, the headend will see migrated spokes as EIGRP routes, not OSPF routes. Care will be taken to not leak the summary address(es) or the overlay tunnel address into BGP, which would cause tunnel issues.

First we will look at the headend routing table before we stage and perform the cutover:

The first part of the strategy is going to be planning the route-map and prefix-lists needed for the FVRF-to-Global and Global-to-FVRF route table leaking on the headend CE. In this case I am going to simply allow all prefixes to be leaked between the two, creating a simple prefix-list and route-map configuration:
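For illustration, a permit-everything leak policy along those lines might look like this (the prefix-list and route-map names are placeholders of my own):

ip prefix-list LEAK-ALL seq 5 permit 0.0.0.0/0 le 32
!
route-map GLOBAL-TO-FVRF permit 10
 match ip address prefix-list LEAK-ALL
!
route-map FVRF-TO-GLOBAL permit 10
 match ip address prefix-list LEAK-ALL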

Next we will pre-stage the VRF configuration to allow the leaks between the two routing tables, as well as pre-stage the default route required for the FVRF. In this case, the default route will point to what is currently the eBGP neighbor. In labbing, both the rd as well as the route-targets were required for this to function properly:
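A sketch of that VRF pre-stage, assuming a VRF named FVRF-MPLS, RD/route-target 65000:1, a PE next hop of 192.0.2.1, and leak route-maps named GLOBAL-TO-FVRF / FVRF-TO-GLOBAL (all placeholder values of my own, not the exact lab config):

vrf definition FVRF-MPLS
 rd 65000:1
 address-family ipv4
  route-target export 65000:1
  route-target import 65000:1
  import ipv4 unicast map GLOBAL-TO-FVRF
  export ipv4 unicast map FVRF-TO-GLOBAL
 exit-address-family
!
ip route vrf FVRF-MPLS 0.0.0.0 0.0.0.0 192.0.2.1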

At this point we will add the VRF aware configuration for eBGP. This requires adding the existing neighbor underneath the address-family ipv4 vrf command. Below is the finished output of our BGP configuration at this point:
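Roughly, the VRF-aware peering looks like this (the local AS 65000, provider AS 65100, PE address 192.0.2.1, and VRF name are placeholders of my own):

router bgp 65000
 address-family ipv4 vrf FVRF-MPLS
  redistribute ospf 1 match internal external 1 external 2
  neighbor 192.0.2.1 remote-as 65100
  neighbor 192.0.2.1 activate
 exit-address-family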

Now I am going to pre-stage the CE tunnel interface. For simplicity in this post I am going to leave off crypto. Below is the basic DMVPN headend tunnel configuration. Notice the tunnel vrf command. This allows this tunnel interface to be pre-staged for the first spoke migration. Also take special note that at this time redirects are turned off. They will be turned on after the final spoke has been migrated:
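As a reference sketch (tunnel addressing, NHRP network-id, source interface, and VRF name are placeholder values of my own):

interface Tunnel0
 ip address 10.255.0.1 255.255.255.0
 no ip redirects
 ip nhrp map multicast dynamic
 ip nhrp network-id 1
 tunnel source GigabitEthernet0/0
 tunnel mode gre multipoint
 tunnel vrf FVRF-MPLS
! note: ip nhrp redirect is intentionally absent until the last spoke is migrated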

The next step is to pre-stage the overlay routing protocol on the CE. In this case we are going to use EIGRP and summarize the address space down the tunnel. For simplicity I have left off items such as authentication that would traditionally be used in a production deployment:
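A minimal sketch of the overlay, assuming EIGRP AS 42, a 10.255.0.0/24 tunnel subnet, and a 10.0.0.0/8 summary toward the spokes (all values are placeholders of my own):

router eigrp 42
 network 10.255.0.0 0.0.0.255
 redistribute ospf 1 metric 100000 100 255 1 1500
!
interface Tunnel0
 ip summary-address eigrp 42 10.0.0.0 255.0.0.0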

Now we need to create our protection route-map for redistributing EIGRP into BGP. Afterward we will redistribute EIGRP into both BGP and OSPF. The following prefix-list denies the overlay tunnel prefix as well as the summary prefix.
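A sketch with placeholder values of my own (10.255.0.0/24 as the tunnel subnet, 10.0.0.0/8 as the summary, EIGRP AS 42, BGP AS 65000):

ip prefix-list DENY-OVERLAY seq 5 deny 10.255.0.0/24
ip prefix-list DENY-OVERLAY seq 10 deny 10.0.0.0/8
ip prefix-list DENY-OVERLAY seq 15 permit 0.0.0.0/0 le 32
!
route-map EIGRP-TO-BGP permit 10
 match ip address prefix-list DENY-OVERLAY
!
router bgp 65000
 address-family ipv4 vrf FVRF-MPLS
  redistribute eigrp 42 route-map EIGRP-TO-BGP
!
router ospf 1
 redistribute eigrp 42 subnets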

We are now ready to migrate the CE MPLS interface into the FVRF. This will naturally cause a short outage while the new VRF aware BGP peering comes up and routes are leaked into the global routing table. After this is done however, the CE router will be able to route traffic for both migrated and non-migrated spokes. That is, MPLS native spokes and DMVPN over FVRF MPLS Spokes:

As you can see and are likely aware, once you add the interface to the VRF the IP address is removed, the tunnel interface will go down and then up, and the original BGP peering goes down. After a short time the new VRF aware BGP peering comes up.

We are now ready to cut over our first spoke router. With the headend already migrated to an FVRF model, and the tunnel interface pre-staged along with the overlay routing and redistribution, we should be able to migrate a spoke and still maintain full connectivity to non-migrated spokes. To start, we will pre-stage the VRF, tunnel, and EIGRP on Branch 1’s spoke. Again, the VRF’s default route will point to the current eBGP peer of that spoke. Mutual redistribution will occur between OSPF and EIGRP.
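A spoke-side sketch with placeholder values of my own (the VRF name, EIGRP AS, addressing, spoke PE next hop, and the headend NBMA address would all come from your own design):

vrf definition FVRF-MPLS
 rd 65000:11
 address-family ipv4
 exit-address-family
!
ip route vrf FVRF-MPLS 0.0.0.0 0.0.0.0 192.0.2.9
!
interface Tunnel0
 ip address 10.255.0.2 255.255.255.0
 ip nhrp network-id 1
 ip nhrp nhs 10.255.0.1 nbma 192.0.2.2 multicast
 tunnel source GigabitEthernet0/0
 tunnel mode gre multipoint
 tunnel vrf FVRF-MPLS
!
router eigrp 42
 network 10.255.0.0 0.0.0.255
 redistribute ospf 1 metric 100000 100 255 1 1500
!
router ospf 1
 redistribute eigrp 42 subnets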

We are now ready to move the spoke’s PE-facing interface into the VRF. The tunnel should come up on its own and we should maintain full connectivity. Once done, we will look at the spoke routing table, as well as the HQ core switch, to verify the routes are as expected.

The following output shows the configuration of the PE-CE interface on the Branch 1 spoke as well as its routing table after the tunnel and EIGRP adjacency come up:

Now both branches only have a summary route via EIGRP, and their local OSPF routes. No BGP routes show up in the routing table. If we look at the headend CE we will notice that it, too, has only EIGRP and OSPF routes, with the exception of the BGP underlay routes for the point-to-points between each PE and CE:

Armed with this information, we are safe to remove the BGP configuration at all locations, relying solely on the default routes within the FVRF and obtaining routing only from the overlay in the global RIB.

Once all of the BGP configuration is removed we will apply NHRP redirects to the headend, and NHRP shortcuts to the spokes. This will allow dynamic spoke-to-spoke tunnels. We will then confirm full connectivity in our new FVRF DMVPN over MPLS. The first trace is from Branch 2 to Branch 1, showing the normal path before the spoke-to-spoke tunnel. The second trace shows the same source and destination after the spoke-to-spoke tunnel, proving spoke-to-spoke connectivity. The last trace is from Branch 2 to the headend.

As this strategy shows, it is possible to set up an existing MPLS WAN to allow for a slow and methodical migration of spokes as time permits. By taking care with redistribution and carefully planning and staging the headend as your first migration, you allow spokes to come up while maintaining connectivity between migrated and non-migrated spokes.

First of all, a disclaimer. I am NOT a programmer. I promise this could probably be cleaned up considerably by someone that actually does programming. Also, it may require some tweaking to work on your system. This was tested on Mac 10.12.3 and SecureCRT 8.1.

I’ve always loved using SecureCRT. I often find myself needing to add anywhere from a small to a large number of sessions to my list, especially in my current role. I remembered from a past role where I used Windows as my primary OS (work issued) that I had discovered a forum with a Python and VBS script to import sessions from a CSV. Now that I am running on Apple, I sought out that old forum and grabbed the Python script. Drats!!! The Python script doesn’t work on my new version of SecureCRT for Mac (8.1). Then I started thinking: most of the time clients give me a nice spreadsheet of IP addresses, so why not write my own that uses Excel? So here it is!

First off, you need to install xlrd into Python. I used easy_install, but pip may be an option for you as well.

Second, create a folder for the script and its template file used to import sessions. This is where you will place the two files below, named SecureCRTImport.py and template.ini.

Yesterday morning I opened up my Spark app and was surprised to see I was added to the Cisco Champions room. I checked my e-mail and saw nothing. I knew it was being announced soon due to some Twitter chatter. After validating with members it was true: I was selected as a 2017 member of Cisco Champions. I’m going to say I’m blown away even still today. I am absolutely honored to be part of such an amazing group of individuals. It has caused me to sit back and think about how I even came to know the people I look up to. So how did it start?

My path to working with Cisco has the typical I.T. back story. To start, I grew up around technology. My father has over 30 years in with the same telco, so I was already around the Central Office with its big networking and telephone switches. I took it for granted back then, but being around that technology shaped the love for gadgets and tech that I have today. In fact, being around the telco industry is what caused me to get my Associate’s degree in network administration after high school. My father also taught me the importance of continued education. To this day, at the humble age of 63, he is studying for his CCNA Wireless simply due to his passion for learning and interest in wireless technologies. I’m glad he’s passed that desire to learn on to me. Thanks Dad!

During my time at my community college studying for my degree I came across my most influential teacher. He ran the entire program and curriculum for the Network Administration degree. He was a seasoned veteran in running computer networks. He passed on some of my favorite concepts about learning that I still use to this very day. However, the part I reflect on most was the day in class he went around and asked every student how far they wanted to go in their career. He asked things such as “Do you want to be a bench tech?” and “Do you want to be a help desk / support center technician?” Most importantly, I recall him coming up to me and asking “Would you want to run a network?” I remember my answer so vividly I laugh at it to this day. My response: “I would never want that much responsibility. It could cost companies way too much if I screwed something up.” Thanks to his dedication to helping students learn, gain confidence, and carry that forward throughout their careers, I can only say this: Thank you Ken!

Skipping all of the food industry and physical labor jobs after college, I’m going to jump to when I started retail at Staples. I started in my home town as simply an electronics sales associate. I wanted to be a technician though. I was lucky enough to have some very supportive managers and co-workers who were willing to push me to learn and back me in studying for CompTIA’s A+ during bits of downtime at work. This was a major step in my career path. Thank you Don, Jeff, Dan, Todd, Scott, and David!

From my home town I transferred to a Staples near my current residence. Once again, I worked with an awesome group of people who eventually made me the resident technician at the facility. It was here that I was taught how to interact with customers that needed technical advice and assistance. I learned how to simplify concepts not only for them but for myself. I was once again supported by a group of peers that were interested in seeing me expand my career where I continued to learn and study for a couple of other CompTIA certifications. Thank you for the continued support and life advice Steve!

It was at this point that I transitioned to a small healthcare provider group as a help desk technician. Here is where I first started learning anything Cisco. While answering phone calls and trouble tickets, I eventually asked to be educated on simple things such as VLAN changes and voicemail resets. I was lucky to have my fellow help desk team member support me in learning and taking on more of the networking role as opposed to answering general hardware/keyboard/mouse tickets. Pushing each other to learn and obtain certifications in our respective areas of interest (Cisco for myself and Windows Server for him), we both pushed through small staffing and political issues to progress forward. Thank you Jason for allowing me to take time to learn Cisco!

At this point in my career I had gained more and more experience in the Cisco field doing both Route/Switch work as well as Voice work. Luckily, my small 400-employee company was purchased by a local hospital system that employs nearly 8,000 individuals. I ended up on a small but highly skilled team of network engineers and voice engineers who all pushed me to get better and learn new things. This is where I was introduced to the world of SEs, PMs, AMs, Partners, VARs, etc. I was pushed to do implementations and planning, and to own changes and projects. Thanks for the push, all of the Chris’s, Lori, Mark, Jason, Matt, Travis, and all of the engineers on that infrastructure team!

It was between those last two paragraphs that I started interacting in the communities online. Most notably, I started participating on Twitter and blogging. This is where a lot of fun began. Between snarky bantering back and forth about voice and faxes with @amyengineer and single malt discussions with @silviakspiva. The help in testing implementations I got from the likes of @CdnBeacon and @th1nkdifferent. The ridiculousness of the Arby’s jokes and fry challenge with @JSDavenport, @network_phil, and @matthewnorwood. The brewery swag airdrops with @highspeedsnow and @k00laidIT. Amazing support from @MsJamieShoup with books for learning, and @matthaedo helping push me towards my CCIE dream. @joshuarkittle and @Renegade604 for being my friends during my first CLUS and pushing me to move my career forward. There are so many I can thank, for so many reasons, for being where I am today! The list goes on and on but it’s too long for a blog post.

The new year just sprang upon us. This is usually when I go through my bag and reorganize. I figured, hey, why not post what I carry? I know, it’s nothing new nor original. I’m surely not the first person to do this post. I always find it interesting, though, to see what others carry, so maybe someone is interested in my daily carry.

So, here we go. Let’s start with the top left and move through from there.

Super Glue

I always end up ripping a finger or knuckle open on something. Super glue is the go-to fix.

Again, haven’t had a reason to upgrade but love noise canceling when necessary

So where does it all go? It seems like a lot listed out but to be honest it barely fills up the backpack I carry. I currently carry an OGIO Renegade RSS. Plenty of room for more than you need. Also, before anyone asks “what? No box cutter?!”. Daily carry is a Gerber Paraframe of sorts on my person.

I’ve always wanted to find a quick way to test a multicast deployment in a Cisco environment. Many of us are already familiar with simply pinging a multicast address from an interface, and going to another router and issuing the ip igmp join-group command.

I came across a new way to test that I’ve missed over the years but has apparently been around for a while. This tool is the Multicast Routing Monitor. It has a fairly straightforward configuration and will at least give you some view into your multicast domain and its functionality.

The diagram we are working with is below. All routers have a loopback on them numbered X.X.X.X/32, where X is the router’s number. In our tests we will be sourcing multicast from a loopback and the receiver will also be a loopback. In this case R5 acts as the MRM manager, that is, the device that will be doing the monitoring and starting/stopping the test. I used an additional out-of-band device to show that this test can be performed using a router that is NOT part of the multicast routing domain. (Please note, R5 has no multicast configuration on it whatsoever.)

To start out you need to have your multicast routing domain set up. This includes turning on multicast routing and setting your desired PIM mode and associated RPs if necessary. The next few steps are rather simple.

We will be using R1 as the sender in our MRM tests. We will be using the loopback 0 interface specifically for this. The configuration ip mrm test-sender will go under the loopback 0 interface.
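In prompt form, that sender configuration is simply:

R1(config)#interface loopback 0
R1(config-if)#ip mrm test-sender

The receiving router (R4’s loopback in this topology) gets the matching ip mrm test-receiver command under its loopback interface.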

Now we will utilize R5 as the MRM manager. This is where the test and its sender and receiver lists are defined, started, stopped, and where the status report will be visible. We will start out by creating two access-lists: one for the senders, and one for the receivers. I will utilize access-lists 1 and 2 respectively.

R5

access-list 1 permit 1.1.1.1
access-list 2 permit 4.4.4.4

Now that we have our access-lists defined we can set up the initial test. I will utilize defaults and will not change the beacon intervals, hold times, or the UDP ports used. In this configuration we define the MRM manager name and its sender and receiver lists. The senders command allows options to utilize all multicast-enabled interfaces, or all test-sender enabled interfaces; we are simply going to leave this at its default, however. Our receivers configuration indicates the list of receivers, as well as the list of senders to be monitored. Per the Cisco configuration guide, both the senders configuration and the receivers configuration statement, including the senders list, are used. We also define the manager’s interface for monitoring as well as the multicast group that will be used.
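Putting that together on R5, a configuration along these lines should do it (the test name TEST1 and group 239.1.1.1 are arbitrary choices of mine):

ip mrm manager TEST1
 manager Loopback0 group 239.1.1.1
 senders 1
 receivers 2 sender-list 1

The test is then kicked off from exec mode with mrm TEST1 start, stopped with mrm TEST1 stop, and the results are viewed with show ip mrm status-report.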

If we note the highlighted lines, the first one shows us a lost packet. Correlated with that lost packet is a percentage, in this case 4 percent. This is rather typical of ping operations on Cisco devices, where we lose the first packet sent. The next highlighted line indicates that two duplicate packets (Dup:#) were noticed. This number does not count toward the % field.

We will now issue the command to look at the finished status report on R5.

Notice the status-report output is nearly identical to the log messages we received while the test was running. Lines 7 and 9 indicate our lost packet (noted in the % column) as well as the duplicate packets (not counted towards the % field).

There you have it. A quick and easy test for multicast. I would recommend digging into deeper options for more specific tests. However, this is another tool you can use in the case you don’t have access to both the sending and receiving ends of the network to send pings and join the correct igmp group.

I came across a paragraph in an older book in regards to EIGRP operation. As I read it I was kind of dumbfounded. To be honest I didn’t believe it at first, so of course I had to lab it to see if it was true. It turns out that it is in fact the way EIGRP operates in this very specific circumstance. I had never seen it before in some of my favorite books nor through my favorite video training vendors. So my finding is this: in a very specific scenario, EIGRP will advertise static routes into EIGRP as internal routes without any redistribution statements.

So I’ll admit that sentence reads a little funny. Of course it’s internal if it isn’t redistributed. Unfortunately I can’t think of a better way to summarize this scenario of static routes showing up as EIGRP routes without somehow mentioning redistribution.

The scenario goes like this: if an interface is matched by the EIGRP network statement, and a static route’s destination also matches the EIGRP network statement, the route will be advertised into EIGRP ONLY IF the static route points out the matched interface without including a next hop. In other words, the static route’s destination and outgoing interface must both match the EIGRP network statement, while the next hop is left out of the static route statement.

The topology is simple for this example. There are three routers, with only two being relevant for output. In this case R2 is simply there to provide an up/up interface for R1’s static routes to point out of.

EIGRP Static Advertisements

Let’s start by configuring the R1-R2 link with a /24 subnet as 172.16.12.0/24.

Now we will bring up EIGRP AS 42 on R1. In this case I am going to use a classful network statement. I will show non-classful output at the end of this post. The important part comes down to the network statement matching. In this case R2’s network statement and EIGRP configuration are irrelevant so long as the adjacency forms. The same holds true for the R1-R3 EIGRP adjacency and subnet. For brevity I will leave those configurations out.

Now we add the magic statement. I am adding a static route which points out of an interface but not towards a next hop. This occurs on R1. The magic in this statement is that the interface’s IP address, as well as the destination of the route, are both matched by the EIGRP network statement. Please note that this requires auto-summary to be turned off, as we are using discontiguous classful networks.
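In prompt form, the combination looks something like this (the 172.16.100.0/24 destination and Ethernet0/0 are placeholders of my own; the key is that both fall under the 172.16.0.0 network statement):

R1(config)#router eigrp 42
R1(config-router)#no auto-summary
R1(config-router)#network 172.16.0.0
R1(config-router)#exit
R1(config)#ip route 172.16.100.0 255.255.255.0 Ethernet0/0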

If we go over to R3 we will now see these two routes in the routing table as internal EIGRP routes. Again this indicates that it isn’t truly redistribution however, we are logically “redistributing” the static routes into EIGRP.

If we look at this new route on R1 specifically we will see that the routing table sees it as a “connected” route to the EIGRP process. Also note that it states it is “redistributing” via our EIGRP process.

Now, if we modify the static route on R1 to point towards a next hop as opposed to out an interface, we will see that the route is no longer “advertised” by EIGRP as an internal route. In fact, it isn’t advertised at all. Again, using “redistribution” loosely, the EIGRP process does not redistribute the static route into its process. (The lack of advertisement occurs with a static route pointing to an interface AND a next hop, or to just a next hop.)

With this change made we can look again at R1’s version of the route. In this case it sees the route as static via a next hop, as opposed to static, connected, advertised and redistributed by EIGRP 42.

As you can see, in this very specific scenario, a static route pointed out an interface, where both the destination of the route and the interface it points out of are matched by the EIGRP network statement, will be advertised as an internal EIGRP route as if it were connected.

To verify this operation with non-classful network statements I confirmed using the same network of 172.16.0.0/24 subnetted into further networks. The output is below. In this case the interface address was unchanged.

This lab will cover topics 5.5.a, 5.5.b, and 5.5.c (HSRP Priority, Preemption, and Version) from the Cisco Certified Network Associate (CCNA) blueprint. It will test your understanding and knowledge of configuring HSRP on Cisco IOS devices. Please use the initial configurations as a template for your lab, utilizing whatever console means you have (GNS3, physical gear, VIRL, etc.).

In this lab we will configure the First Hop Redundancy Protocol called Hot Standby Router Protocol (HSRP). This is a two-part lab. In the first part we will configure “legacy” HSRP. In part two we will configure HSRPv2. The initial config files contain the starting configs that will be used for both labs. They set up routing and DHCP for you.

Part1:
Configure R1 Eth1/0 with address 192.168.12.3
Configure R2 Eth1/0 with address 192.168.12.2
Configure HSRP on R1 and R2 using the Virtual IP address of 192.168.12.1
Ensure PC can obtain a DHCP address and ping 3.3.3.3 with either R1 or R2 failing.

Part2:
Rebuild HSRP using group number 4000
Use Virtual IP address of 192.168.12.1
R1 should be active whenever it is online. If it is to fail and come back it should take over as the active forwarder.
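If you get stuck, one possible Part 2 solution sketch for R1 looks like this (priority 110 is an arbitrary value above the default of 100; note that group 4000 is only valid once standby version 2 is enabled, since HSRPv1 groups stop at 255):

R1(config)#interface ethernet 1/0
R1(config-if)#standby version 2
R1(config-if)#standby 4000 ip 192.168.12.1
R1(config-if)#standby 4000 priority 110
R1(config-if)#standby 4000 preempt

R2 gets the same standby version 2 and standby 4000 ip 192.168.12.1 commands, left at the default priority.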

As indicated in the directions, we will exclude R2’s Ethernet 0/1 address from any created DHCP pools and create a pool to supply addresses to the 12.1.2.0/24 subnet, utilizing 12.1.2.2 as the default gateway. This configuration will be done on R3.
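On R3 that could look like the following (the pool name is my own placeholder):

R3(config)#ip dhcp excluded-address 12.1.2.2
R3(config)#ip dhcp pool BRANCH-12
R3(dhcp-config)#network 12.1.2.0 255.255.255.0
R3(dhcp-config)#default-router 12.1.2.2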

With this configuration in place, we will debug R1 with debug dhcp detail, which will enable DHCP client messages. On R3 we will debug DHCP server packets to verify its DHCP pool functionality. After debugs are enabled, we will enable R1 Ethernet 0/1 as a DHCP client.

To fix this issue we will add the necessary command on R2 for relaying DHCP requests to the server. This command is applied on the incoming interface for DHCP discovery messages. In the case of R2 it will be on the Ethernet0/1 interface.

R2(config)#int e0/1
R2(config-if)#ip helper-address 23.2.3.3

Now that we have the helper-address in place we will bring up the R1 interface again with debugs running on all three routers.

R1’s debug is shown below. We can see the router issues DHCP Discover messages out its interface, ultimately coming up with an address of 12.1.2.4 and a default gateway of 12.1.2.2.

On R2, with DHCP server debugs on, we can see R2 setting the GIADDR value to the interface the DHCP Discover came in on. This is relayed to the address listed in the helper-address configuration and is used to help identify the correct DHCP pool to pick an address from.

On R3 we can see the DHCP Discover message being received, and it is indicated that it came in through the relay address of R2’s interface. The DHCP server utilizes this information to select an address from a pool and send it back to the relaying router as unicast. R2 then sends it to the appropriate client, and this process repeats through the DORA operation.

We can now use R1 to verify our configuration is successful. With everything in place we should see a default route obtained from the default-router command in the DHCP pool, pointing to R2’s interface on Ethernet0/1. We can also ping R3’s 23.2.3.3 address.