Ask the Expert: Nexus 5000, 3000 and 2000 Series

Welcome to the Cisco Support Community Ask the Expert conversation. This is an opportunity to learn about design, configuration, and troubleshooting of the Cisco Nexus 2000, 3000, and 5000 Series with Cisco expert Lucien Avramov. Lucien Avramov is a technical marketing engineer in the Server Access Virtualization Business Unit at Cisco, where he supports the Cisco Nexus 5000, 3000 and 2000 Series. He holds several industry certifications, including CCIE #19945 in Routing and Switching, CCDP, DCNIS, and VCP #66183.

Remember to use the rating system to let Lucien know if you have received an adequate response.

Lucien might not be able to answer each question due to the volume expected during this event. Remember that you can continue the conversation on the Data Center sub-community discussion forum shortly after the event. This event lasts through October 21, 2011. Visit this forum often to view responses to your questions and the questions of other community members.

We have a small server room that requires 10 Gig connectivity, plus several 6513s. A Nexus 7000 doesn't seem ideal for this location, so I decided to use a pair of 3064 switches as the aggregation point for approximately ten 6500s via 2 x 1 Gig uplinks. The 6500s are for dense user access requirements. I would have used the 5596 switches as an alternative to the 3064s for the fiber aggregation, but I was told they will not support NetFlow in the routing module, nor is it on the roadmap.

The 5596 switches will serve 10 Gig access for my virtual servers. If the NetFlow feature is offered down the road, I may use the 5500s instead.

We have a couple of Nexus 5010s and are low on ports. I was considering the 6-port 10 Gig expansion module, but on the other hand I can get many extra ports through a 10 Gig FEX instead. Are there any drawbacks to this? How many FEXes can a 5010 handle, and will a FEX add some extra latency compared with the built-in ports?

You can have up to 12 Fabric Extenders on the 50x0 family. So if you go with ten 2232s, that will give you up to 320 10GE ports, provided you tolerate a 4:1 oversubscription. There are no drawbacks as far as capability: a 10GE Fabric Extender port will give you the same 10GE Ethernet or FCoE capability as a 10GE port on the Nexus 5000. It will add around 800 ns of latency. Do you have specific latency requirements?
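For reference, here is a minimal sketch of attaching a 2232 to a Nexus 5000 over a port-channel fabric link; the FEX number (100) and uplink ports (e1/1-2) are illustrative values, not from the discussion above:

    feature fex
    fex 100
      description rack1-2232
    ! fabric uplinks from the 5010 to the 2232, bundled in a port-channel
    interface ethernet 1/1-2
      channel-group 100
    interface port-channel 100
      switchport mode fex-fabric
      fex associate 100
    ! the FEX host ports then appear as ethernet 100/1/1 - 100/1/32
    interface ethernet 100/1/1
      switchport access vlan 10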

Well, specific latency requirements: just as fast as possible. We are running iSCSI instead of FCoE (or FC), so "as fast as possible" might be the best answer here...

My concern is just that I have a pretty fast cut-through switch here, and if I add some extra latency with the FEX instead of using the built-in ports, it might have been better to buy the expansion module instead. I don't expect that I am going to use all the ports in the FEX (we are cabling directly from the Nexus to a 10 Gig pass-through module of a Dell M1000 blade center, so we have 10 Gig all the way through to the servers), but one never knows; better to have too many ports than to be low on ports.

The latency on the 5010 is around 3.4 usec; a FEX will add about 0.8 usec if you put 10G ports on the FEX instead of the 5010, so that would be a choice for you to make. There is a trade-off: port count / port cost versus port latency.

Another question: I have added a vPC between two Nexus 5010s, where I have a 20G peer link between them and a mgmt interface acting as the keepalive link. Now I have added another vPC between a remote pair of Nexus switches (configured similarly) and have tested whether this can be used as failover between two sites. It works fine. But what happens if my keepalive link fails (it uplinks to a third switch where we run all the management links)?

OK, now I am trying to understand what you are telling me. The secondary will suspend its vPC ports; this means that the primary will continue to forward packets and the other Nexus, the secondary, will suspend its vPC, similarly to running into a blocked port (STP). So in my case here, with a vPC between a local and a remote pair of Nexus switches, I will only run on one link?
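For reference, a minimal sketch of the peer-link / peer-keepalive setup being discussed, assuming the keepalive runs over mgmt0 in the management VRF; the domain number and IP addresses are illustrative only. The state of the keepalive and the vPC role can then be checked with show vpc peer-keepalive and show vpc role.

    feature vpc
    vpc domain 1
      peer-keepalive destination 10.0.0.2 source 10.0.0.1 vrf management
    ! 20G peer link between the two 5010s
    interface port-channel 10
      switchport mode trunk
      vpc peer-link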

I'd suggest working with our TAC engineers and escalation team to find exactly which issues you were running into, and you can have them also involve me so I can work with you on a get-well plan. There were reasons for the deferral, and we had introduced new platforms and new software features that required a few releases in the 5.0(3) code train. In general, due to the high feature velocity, the latest version is the greatest, here for instance 5.0(3)N2(2), but I would suggest first identifying precisely the root cause of your current issue(s).

Hi Lucien! What is the maximum number of hardware port-channels on the Nexus 5548 and 5596? If I remember correctly, it was 16 on the 5010 and 5020. Is the hardware port-channel allocation logic also the same as on the 5010 and 5020?
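As a side note, the port-channel resources on a given platform and release can also be checked from the CLI: show port-channel summary is always available, and show port-channel capacity (where the release supports it) reports the total, used, and free port-channels.

    show port-channel summary
    show port-channel capacity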

We have just put a new core switch up for a customer: currently one 5548 feeding L2 into several 2960S switches, as well as four FEX 2248s (with 2x10GE each) for data center access in two locations. The 5548 and two 2248s are in one data center; the other two 2248s are in the second DC. Early next year the customer will be adding a second 5548 for full redundancy (currently the servers in the DCs are connected to both 2248s in the DC, which doesn't help if the 5548 crashes or dies). I've seen docs about doing a dual 5548 configuration with the FEXes connected to two N5Ks. What do I need to watch out for when extending the current configuration? How is the configuration of the N5Ks handled with respect to the FEX ports? Manually, I assume?

When you extend the configuration, you will be able to dual-home the FEXes to the Nexus 5000s, each FEX being connected to both upstream Nexus 5000s. This is what we call a vPC design. Depending on the number of free ports on your 2960s / 2248s, you could first configure both Nexus 5000s in vPC, then dual-home one FEX on each side in vPC at a time: migrate the server ports to the other FEX or to the 2960, then when the FEX comes back online in the vPC topology, move the servers to this FEX and upgrade the other one in each site. This would minimize the downtime during the design topology change.
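For reference, a minimal sketch of the dual-homed (vPC) FEX attachment described above, configured identically on both Nexus 5548s once the vPC domain, peer-keepalive, and peer link are in place; the FEX number (101) and uplink ports (e1/9-10) are illustrative only:

    feature fex
    fex 101
      description DC1-2248-A
    interface ethernet 1/9-10
      channel-group 101
    interface port-channel 101
      switchport mode fex-fabric
      fex associate 101
    ! the vpc keyword on the fabric port-channel is what dual-homes the FEX
      vpc 101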

Here is a document about quick start on vPC, a design guide, and an operations guide:

Hello, I have a question about vPCs, relating to being visible to a third switch via port channel. Correct me if I am wrong, and please expand on this statement: "Allows a single device to use a port channel across two upstream devices." I am trying to grasp the main concept of vPCs and how to verify them in a live environment on the involved devices, assuming it spans more than two Nexus switches.

vPC always works on a pair of Nexus switches (7Ks, 5Ks, 3Ks). The pair of switches will be configured as vPC peers and will then appear as a "single" logical switch from the point of view of the device you port-channel to them (one or more links to each).
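To verify a vPC in a live environment, the standard NX-OS show commands below, run on each of the two vPC peers, display the peer status, the vPCs with their member ports, and the consistency parameters:

    show vpc
    show vpc consistency-parameters global
    show port-channel summary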

I have one problem. I have two N5Ks with a single Nortel core switch, and my uplinks are 1G ports. I was able to ping outside the network, but when accessing data it would not work. If I removed the redundant link, it worked fine. I have now upgraded the firmware from 5.0(2) to 5.0(3), and everything is working fine. What could the problem have been? Can you please explain?

Even though it is not a vPC forum, I'll try this question: how do I verify load balancing between the Nexus boxes in a vPC domain? I have a pair of Nexus 5020s that form a vPC domain; this vPC is interconnected to another pair of Nexus boxes that form another vPC domain. These two vPC domains are interconnected with two 10 Gig links, and I hoped that I could somewhere/somehow verify the load balancing and see how many links participate in this vPC, etc. As far as I can see, it is a bit cumbersome to verify which links participate in a vPC and what the traffic is; the boxes exchange a lot of information about the state. Are there some commands here I don't know about, or something on the roadmap?

Ti, from a server or a switch attached in a vPC to both Nexus switches, this is the principle of hashing: the device will hash to either one of its links going to the Nexus switches. When a packet comes through the Nexus switches toward this end device, whichever switch received the packet will send it out on its own link to that device.
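To see how traffic is actually spread across the member links, the standard NX-OS commands below can help; port-channel 20 is an illustrative number for the vPC port-channel toward the other pair. show port-channel load-balance shows the hashing fields in use, and show port-channel traffic reports the share of traffic carried by each member link:

    show port-channel load-balance
    show port-channel traffic interface port-channel 20
    show port-channel summary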

Hi again Lucien. Is there a reason why you don't "promote" a vPC solution between two pairs of Nexus switches there? In all the design guides I have seen, vPC is used to bundle channels from FEXes to hosts, but today many blade servers are equipped with 10 Gig interfaces, so it makes more sense to me to look at vPC between pairs of Nexus boxes instead. In our case we are using vPC to implement a DCI with four 10G dark-fiber connections (distance 50 km) between four pairs of Nexus boxes (two pairs for "user data" and two pairs for iSCSI). Any comments on this? Are there any pitfalls?

In the case of 10 GE servers, you have two options: either you go to the Nexus switch directly, or you can also use the 2232 Fabric Extender if you like. No real comments on the dark-fiber connection. Are you going through a DWDM provider to interconnect?

Well, I am not that familiar with the different technologies on the fiber connection. We get four 10G fiber connections from the service provider, and I try not to worry about how they are connected (but as far as I can guess from what they have put into our rack, we are going through DWDM).

But I have put these 2 x 2 10G fiber connections into two pairs of Nexus 5Ks, which carry iSCSI on one set of 10G fibers and "data" on the other set. I have tried to isolate each pair of Nexus switches in a separate STP domain as described in the Cisco DCI 2.0 document (BPDU filter on each of the vPCs), and this also looks OK. Testing failover from one link to the other with a ping is almost 100%. But what do I have to be aware of in this setup? Everything looks OK, so what I am looking for is: are there pitfalls, or what do I have to be aware of? Tuning, or "not to do"...

For FCoE, the distance between N5Ks for synchronous storage is 3 km. If you are running Ethernet-only types of traffic (including NFS / iSCSI), then there are no specific caveats to interconnecting your data centers.
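For reference, the BPDU-filter approach mentioned above (isolating each site's STP domain across the DCI vPC, in line with the Cisco DCI design guidance) is applied on the DCI port-channel; a minimal sketch, where the port-channel number and VLAN list are illustrative only:

    interface port-channel 30
      description DCI vPC to remote site
      switchport mode trunk
      switchport trunk allowed vlan 100-110
      spanning-tree port type edge trunk
      spanning-tree bpdufilter enable
      vpc 30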

We are using Nexus 7K and 5K in our lab. I have created a new VLAN in the AGG (N7K VDC), configured HSRP, and created the VLAN interface for routing as well. Now the problem is that traffic on the existing VLANs is going via the N5K from UCS without any problem, but the newly created VLAN is not passing traffic. The pin group, VLAN, and uplink are configured correctly in the UCS FI (6140XP), and I have also added the new VLAN to the existing port channels.

NOTE: If I connect an uplink from the FI to the N7K, all the traffic goes through. But via the N5K, the new VLAN does not.
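A common cause is that the new VLAN is missing from the VLAN database or from the trunk allowed list on the N5K in the path. The standard NX-OS checks below can confirm this; VLAN 200 and port-channel 1 are illustrative values:

    show vlan id 200
    show interface trunk
    show interface port-channel 1 trunk
    show spanning-tree vlan 200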

Short question which I cannot find a 100% answer to on the websites: are the N7Ks compatible with the N2K FEX, and if so, also the 10 Gig ones? And if so, are there any limitations? What about when using the L3 features?

The MD5 is correct, and we have upgraded the same OS on a different N5K, where it is working fine. It is very difficult to get downtime for a reload. Please find the logs. Also, we have tried with different NX-OS versions.