I am trying to upgrade a Sup2T from 15.0 to 15.1 (same feature set). I have the new image in bootflash, I have changed the boot system statement to boot from the new image, and the config register is 0x2102. However, every time it reboots it loads the old image. [code]
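In cases like this the usual culprit is a stale `boot system` entry earlier in the startup config (IOS tries the entries in order), or the config never being saved, so the BOOT variable in NVRAM still points at the old image. A hedged check, with placeholder image names:

```
! See what the switch will actually try to boot
Router# show bootvar

! Remove any leftover entry for the old image; entries are tried in order,
! so an old statement listed first wins (image names are placeholders)
Router(config)# no boot system flash bootflash:s2t54-ipservicesk9-mz.SPA.150-1.SY1.bin
Router(config)# boot system flash bootflash:s2t54-ipservicesk9-mz.SPA.151-1.SY.bin
Router(config)# end

! The BOOT variable is only rewritten when the config is saved
Router# write memory
Router# show bootvar
```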

Today we have two VSS pairs, and we are about to buy another two 6500 chassis to build a new VSS. Our current chassis have Supervisor 720s, and the new ones will probably have Sup2Ts.

We have a network design that allows us to run on a single chassis per VSS if we have a hardware failure. This means that in a true worst-case scenario we would need to move hardware around to keep three systems up and running (that would mean multiple failures across all systems... so a really, really worst-case scenario).

So here is my first question: can I run a VSS with different supervisors in the chassis? Second question: if I want to upgrade a VSS from Sup720 to Sup2T, can I run two supervisors per chassis (quad-supervisor VSS) where one supervisor is a Sup720 and the other is a Sup2T?

Doing a bug scrub on our dual-core, dual-Sup720 6500s tonight. We are going from s72033-advipservicesk9_wan-mz.122-33.SXI4a.bin to SXI9. I want a second set of eyes on my script, since I have not done this for about a year and a half. Following this doc: [URL]
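For comparison, a minimal version of that script might look like the following; the TFTP server address is hypothetical, and the image names are taken from the post:

```
! Stage the new image on both supervisors
Router# copy tftp://192.0.2.10/s72033-advipservicesk9_wan-mz.122-33.SXI9.bin bootflash:
Router# copy tftp://192.0.2.10/s72033-advipservicesk9_wan-mz.122-33.SXI9.bin slavebootflash:

! Verify the MD5 against the hash published on cisco.com
Router# verify /md5 bootflash:s72033-advipservicesk9_wan-mz.122-33.SXI9.bin

! Repoint the boot variable, save, and confirm before reloading
Router(config)# no boot system flash bootflash:s72033-advipservicesk9_wan-mz.122-33.SXI4a.bin
Router(config)# boot system flash bootflash:s72033-advipservicesk9_wan-mz.122-33.SXI9.bin
Router(config)# end
Router# write memory
Router# show bootvar
```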

My task is to upgrade a couple of 6500 series switches: a 6513 with Sup720/MSFC3 (WS-SUP720) and Policy Feature Card 3 (WS-F6K-PFC3B) installed. How do I upgrade those switches if they are in SSO redundancy mode with two sups installed?

I understand that it is good practice to connect to the MSFC3 via console and upgrade it first; is this correct?

I also have to upgrade some 6509s, but I can only test on one 6509-E. How do I get everything up to date? [code]
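One common flow for a dual-sup SSO upgrade, sketched with a placeholder image name (this is a rough outline, not the full documented eFSU procedure): stage the image on both sups, repoint boot, reload the standby, switch over, then reload the remaining sup. While the two sups run different versions, the chassis falls back to RPR rather than SSO:

```
! Image name is a placeholder; stage it on the standby sup too
Router# copy bootflash:new-image.bin slavebootflash:

Router(config)# boot system flash bootflash:new-image.bin
Router(config)# end
Router# write memory

! Reload the standby so it comes up on the new code (RPR while versions differ)
Router# redundancy reload peer

! Move traffic onto the upgraded sup; the old active then reloads on new code
Router# redundancy force-switchover
```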

We have 70 VLANs and 90+ closet switches (2900) connecting to the core switches. We have two WLCs connected to the core switch. We also have a 1 x 1 connection to a VSS switch, which in turn connects to our Server Co-Location data center using an IPsec/GRE tunnel.

Our routing protocol is EIGRP. Our VTP domain at the Server Co-Location is separate from our location “A” campus. I was wondering what the best way is to migrate our core switches at the location “A” campus.

The requirement is to replace these switches with minimum downtime.

I have tried to test copy tftp: numerous times with no success. I believe it is failing because my laptop's Ethernet port is in VLAN 62 and the TFTP process operates in a different IP space. I am using gig 7/1 and configuring my laptop NIC for x.x.x.254 mask 255.255.255.0. I can ping from the laptop to the gateway, and I can ping from the switch to my laptop using ping vrf production x.x.x.254. Can you tell me what VLAN I need to put my laptop connection in, or if there is something else I need to change to make TFTP work on VLAN 62? Does TFTP only work in VLAN 1, or can it be changed?
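TFTP is not tied to VLAN 1. On 12.2SX a common fix is to source the copy from the SVI in the VRF the laptop sits behind; the interface name and file name below are assumptions based on the post:

```
! Source TFTP from the VLAN 62 SVI so the copy follows the "production" VRF path
Router(config)# ip tftp source-interface Vlan62

! Then pull the file from the TFTP server running on the laptop
Router# copy tftp://x.x.x.254/c6500-image.bin bootflash:
```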

We are set up like a hotel-style workers' camp. We have wings full of rooms and residents, with 3750 stacks in them. Those switches connect back to our core 6500s. The network is mostly all Layer 3; interfaces are routed with IPs.

When it was built, before my time, they included an ACL for each wing so that residents couldn't access internal devices (e.g., SSH to the 6500), but I've come to notice it's not working.
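For reference, a hedged sketch of that kind of per-wing ACL (names and addresses are hypothetical), together with a counter caveat that often explains zero deny hits on this platform: on the PFC, hardware-switched packets do not increment the counters shown by `show ip access-lists`, so a deny can be working while appearing unused.

```
! Hypothetical wing ACL: block the internal management range, allow the rest
Router(config)# ip access-list extended WING-A-IN
Router(config-ext-nacl)# deny ip any 10.0.0.0 0.0.255.255
Router(config-ext-nacl)# permit ip any any

Router(config)# interface GigabitEthernet1/1
Router(config-if)# ip access-group WING-A-IN in

! Software counters only reflect CPU-processed packets; check the TCAM instead
Router# show tcam interface GigabitEthernet1/1 acl in ip
```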

I see hits on the ACL for accepts, but nothing is hitting the deny rule at the top. Here is the configuration below:

While googling I came across documents that say OTV (Overlay Transport Virtualization) is supported on the Cat 6500. Is there any authoritative information on whether OTV is supported on the Cat 6500, especially with the Sup-720B? FYI, Cisco Feature Navigator does not mention it.

We are getting ready to start testing quad-sup VSS for our production VSS environments. We have done the research, and per the documentation it seems pretty straightforward.

I want to make sure that the dual-to-quad VSS migration is easily done across our multiple VSS setups, and I am curious whether those who have done this already ran into any gotchas during the turn-up of the ICS sup.

Also, could we install an ICS in just a single chassis instead of one in both chassis of the VSS?

In one of our environments we have all single-homed devices going to VSS switch 1 and only dual-homed devices going to switch 2, so it may be desirable to install an ICS only in switch 1 of the VSS.

The problem is that I am not able to configure Multilayer Switching (MLS) (mls rp ip) in global config. Although "mls" is visible in the config menu, when I enter "mls ?" the router prompts "unrecognized command".

I have a question I am unsure of. On the 6500 I know I can set mls qos trust to CoS or DSCP; since I don't have any trunks configured on that switch where I want to trust CoS, most of my ports trust DSCP instead. The question is: will packets coming in or going out at L3 with the ToS bits set get placed in the correct in/out queue? For example, if a packet comes in on a port with mls qos trust dscp and has the ToS set to XX, will this XX get mapped to the correct CoS value based on the default DSCP-to-CoS map and end up going out the correct queue that handles that specific CoS number?
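As I understand it, that is how the maps chain on this platform: with `mls qos trust dscp` the internal DSCP is taken from the ingress DSCP, and the egress CoS (which selects the output queue) is derived from the DSCP-to-CoS map. A hedged way to verify, with a placeholder interface:

```
! Show the DSCP-to-CoS map in effect (defaults unless remapped)
Router# show mls qos maps dscp-cos

! Example: explicitly map EF (DSCP 46) to CoS 5 (this matches the default)
Router(config)# mls qos map dscp-cos 46 to 5

! Confirm which egress queue a given CoS lands in on the port
Router# show queueing interface GigabitEthernet1/1
```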

I mainly ask because I saw the following on the Cisco site, and again, I am using DSCP trust and not CoS.

I have a server with two uplinks to a pair of 6500s (non-VSS). The server is a member of VLAN 100; sw1 is the active HSRP router while sw2 is the HSRP standby. How can I make this server forward traffic on both links? The server admin told me only one link is active (green) on the server while the other link is orange.

I am looking for a way to see the packets that are matched by certain ACLs in a CoPP policy map. I have read that it is not a good idea to add the log keyword at the end of an ACL when that ACL is used for CoPP. I initially tried to use a logging policy map, but the 6500 on 12.2SX doesn't support this.

How can I see the source/destination IPs for a certain class in a CoPP policy map?
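A couple of hedged options on 12.2SX (the ACL name below is hypothetical); note that on the PFC the per-ACE counters only increment for software-processed packets:

```
! Per-class conform/drop counters for the CoPP policy
Router# show policy-map control-plane

! Per-ACE match counters for the ACL referenced by the class
Router# show ip access-lists COPP-MGMT

! To see the actual source/destination IPs of CPU-bound traffic, capture it
Router# debug netdr capture rx
Router# show netdr captured-packets
```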

We have an existing network with a core 6500 VSS connecting four buildings with 4500 chassis, under which a number of L2 switches are connected. Currently we are using RSTP in a ring for redundancy, but we want to use OSPF in the LAN for faster convergence. All the VLANs are created on the 6500.

We've been mocking up a test lab to test VSS on two 6500s. Each 6500 has one Sup720 and a 6708-10GE blade, and we've established the two 10GE links between the two chassis: the first from each chassis' sup and the second from each 6708. My question is, what happens when the supervisor fails on one of the chassis?

I am looking for some best-practice and useful logging commands on the 6500 and 3750 platforms. Some of them I have listed below. Are there any important ones I am missing? Also, what logging level is recommended for the buffer, and what logging level for the syslog server?
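A hedged baseline that is commonly used (the server address is a placeholder): buffer at informational (6) so detail is kept locally, trap at notifications (5) so the syslog server isn't flooded:

```
! Millisecond timestamps make cross-device correlation much easier
Router(config)# service timestamps log datetime msec localtime show-timezone

! Local buffer: reasonably deep, at informational
Router(config)# logging buffered 64000 informational

! Syslog server at notifications and above (address is a placeholder)
Router(config)# logging host 192.0.2.50
Router(config)# logging trap notifications
Router(config)# logging source-interface Loopback0

! Keep the console quiet during incidents
Router(config)# no logging console
```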

I have a 6500 that keeps booting into ROMMON, even though it has a valid image file on the sup-bootdisk and the config register is 0x2102. When it reloads I see this on the screen: [code] The repeating error seems to be "error - on read during ELF program load". I can boot the system off a CF card, but I'd like to use the sup-bootdisk.
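That read error during ELF load usually points at a corrupted image file or failing flash rather than a boot-variable problem. A hedged check after booting from the CF card (the image name is a placeholder):

```
! Compare against the MD5 published on cisco.com
Router# verify /md5 sup-bootdisk:s72033-advipservicesk9_wan-mz.122-33.SXI4a.bin

! If the hash doesn't match, re-copy; if copies keep failing, reformat first
Router# format sup-bootdisk:
Router# copy disk0:s72033-advipservicesk9_wan-mz.122-33.SXI4a.bin sup-bootdisk:
```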

I have searched all the Cisco pages about quad-supervisor VSS support with the Sup2T, even release notes, Q&As, etc., but so far I have not found anything pro or con. The customer uses the newest IOS, 15.1.1-SY. The customer already runs several systems with quad-sup Sup720s and has experience with them. The customer's current state is: with quad-sup Sup2T, the second Sup2T in each VSS chassis drops into ROMMON. Without VSS, the same Sup2T comes up as either active or standby!