
Everyone is aware of the benefits of restricting L2 broadcast domains to the top-of-rack switches (route when you can, switch when you must).
In a leaf/spine network, routing can be extended all the way down to the leaf (TOR) switches, with point-to-point routed links to each of the spines. Traffic is hashed across the spines via ECMP on a per-flow basis.
There are some interesting demonstrations of deploying BGP in the data center as an IGP; of course, OSPF and IS-IS can be used as well.
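To give a flavor of what BGP-as-an-IGP looks like on a leaf, here is a minimal sketch in FRRouting syntax (the ASN, router ID, and interface names are made up, and this is not OmniSwitch configuration):

router bgp 65101
 bgp router-id 10.0.0.101
 neighbor swp1 interface remote-as external
 neighbor swp2 interface remote-as external
 address-family ipv4 unicast
  redistribute connected
  maximum-paths 2
 exit-address-family

With unnumbered eBGP sessions to each spine and redistribute connected, every leaf advertises its local server subnets, and maximum-paths turns the equal-cost spine paths into ECMP next hops.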

In the test setup that I use, VLANs are restricted to the top-of-rack switch. The TOR acts as the default gateway for the servers connected to it, so there is no need for VRRP or additional spanning tree configuration.
These local VLANs are redistributed to the spine (core) via L3 routing adjacencies.
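In OmniSwitch terms, the setup looks roughly like the sketch below. The VLAN ID, addresses, and the choice of OSPF as the IGP are my own assumptions for illustration; verify the exact syntax against your AOS release.

vlan 20 [server VLAN, local to this TOR]
vlan 20 members port 1/1/1-16 untagged
ip interface vlan-20 address 10.1.20.1 mask 255.255.255.0 vlan 20 [the TOR is the default gateway]
ip load ospf
ip ospf interface p2p-spine1 area 0.0.0.0 [routed point-to-point uplink toward a spine, IP interface defined separately]
ip ospf interface p2p-spine1 admin-state enable
ip ospf admin-state enable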

Most overlay technologies (SPB, TRILL, VXLAN) have to solve the problem of the flooding domain: how to create an E-LAN or a tree such that BUM traffic is flooded efficiently and only to those endpoints that need it.

Since VXLAN does not have a control plane (unlike SPB/TRILL, which use IS-IS), PIM Bidir is used; it is the most efficient option for the (*,G) joins that gateways need in order to reflect dynamic VM presence.
There is another mode of running VXLAN, called head-end replication, which does not need a PIM deployment.
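As a concrete illustration of the two BUM-handling modes, here is what they look like with Linux iproute2 (not OmniSwitch syntax; the VNIs, multicast group, and VTEP addresses are made up):

ip link add vxlan0 type vxlan id 1000 group 239.1.1.1 dev eth0 dstport 4789 [multicast mode: BUM frames are flooded to a multicast group, for which PIM builds the distribution tree]
ip link add vxlan1 type vxlan id 1001 dev eth0 dstport 4789 [head-end replication: no group configured]
bridge fdb append 00:00:00:00:00:00 dev vxlan1 dst 192.0.2.2 [the VTEP unicasts a copy of each BUM frame to each statically listed remote VTEP]
bridge fdb append 00:00:00:00:00:00 dev vxlan1 dst 192.0.2.3

Head-end replication trades PIM in the underlay for extra replication load on the source VTEP, which is why it tends to suit smaller deployments.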

In future posts, I will demonstrate how the OmniSwitch OS6900 series of switches acts as a VXLAN gateway (a hardware VXLAN tunnel endpoint).

These tunnel endpoints/VXLAN gateways are configured at the edges (leaf switches). The core does not need to support any new features beyond Layer 3 routing and, optionally, PIM Bidir, since it does not terminate tunnels or perform any encapsulation.

Now that you have learned how to configure the Alcatel-Lucent OmniSwitch series of data center switches for FCoE storage support, here is a quick note on the remaining pieces.
1) Configuring the converged network adapters (CNAs) on ESXi

Depending upon the features required on the ESXi hosts, the appropriate device drivers have to be downloaded and installed. It is straightforward to find the driver software at either VMware (for ESXi) or the network adapter vendor's website (for Windows/Linux, etc.).
Installation via the command line is done through the esxcli software vib install command. A reboot is necessary in most cases.
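For example, assuming the driver VIB has been copied to the host (the path and file name are placeholders):

esxcli software vib install -v /tmp/cna-driver.vib
reboot
esxcli software vib list [after the reboot, verify that the driver VIB is listed]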

It is also worth investigating whether remote management of the CNAs (either through a vCenter plugin or a standalone application) will make life easier.

Setting up Fibre Channel storage on the array is a matter of a few steps. Note that the configurations I perform may be neither accurate nor complete for a production deployment; they are listed here only to show the full sequence of steps for an FCoE storage proof of concept. A sketch of the array-side commands follows the list.
2.a) Configure the storage array:
Create volume groups
Create volumes within the newly created volume group
Create hosts by registering their host port identifiers (WWPNs)
Create a host group if required.
2.b) Map volumes to hosts/host groups for use in I/O operations and associate each with a logical unit number (LUN)
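On a NetApp array (which is where the datastore in step 4 lives), the 7-mode ONTAP equivalents of the steps above look roughly like this; the volume name, size, group name, and WWPN are placeholders:

lun create -s 100g -t vmware /vol/ds_vol/lun0 [create a VMware-type LUN inside an existing volume]
igroup create -f -t vmware esx_hosts 10:00:00:00:c9:xx:xx:xx [FCP initiator group containing the CNA's WWPN]
lun map /vol/ds_vol/lun0 esx_hosts 0 [present the LUN to the host group as LUN 0]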

3) Setting up Fibre Channel switch (Brocade/QLogic) zoning

In order for the ESXi hosts to see the datastores, another important task (which can easily be overlooked) is putting the targets and initiators in the same zone so that they can see each other.

Here is the link that I found useful for performing the zoning configuration on the Brocade SilkWorm Fibre Channel switch:

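In essence, zoning on Brocade FOS boils down to the following (the alias and zone names as well as the WWPNs are placeholders):

alicreate "esx1_cna", "10:00:00:00:c9:xx:xx:xx" [alias for the initiator WWPN]
alicreate "netapp_ctl1", "50:0a:09:8x:xx:xx:xx:xx" [alias for the target WWPN]
zonecreate "z_esx1_netapp", "esx1_cna; netapp_ctl1" [put initiator and target in the same zone]
cfgcreate "dc1_cfg", "z_esx1_netapp" [add the zone to a zoning configuration]
cfgenable "dc1_cfg" [activate and save the configuration]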
4) The next step is to add new datastores. The options are either Disk/LUN or NFS. Choose the LUN that you originally created on the NetApp storage device. The datastore thus created should be visible from all the ESXi hosts that have access to the storage device; this is essential for VM migration.
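If the LUN does not appear right away, rescan the storage adapters from the host:

esxcli storage core adapter rescan --all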

5) Bringing it all together
Via vCenter, you can create a new VM on any of the hosts that has access to the newly created datastore.

Creating a Lossless Infrastructure on the OS6900/OS10K in Support of FCoE/Fibre Channel Storage

Fibre Channel storage systems are inherently lossless. In support of converged networks (FCoE), a lossless infrastructure must be created over traditionally lossy Ethernet fabrics.
TCP recovers from loss via retransmissions and acknowledgements; applications using UDP have to implement their own recovery mechanisms.
FCoE, since it runs over plain Ethernet, has to depend on Priority Flow Control (PFC), Enhanced Transmission Selection (ETS), and the Data Center Bridging eXchange protocol (DCBx) to create this lossless behavior.
If you want to learn more, go to the IEEE webpage:

The OmniSwitch series of switches supports data center bridging protocols via the application of profiles at a port level.
In essence, the profiles are pre-defined templates which enable lossless behavior for a particular priority (at the ingress), provide a bandwidth guarantee for a class of traffic at the egress,
and enable auto-configuration of the local port and its peers via LLDP/DCBx negotiations. Both the earlier CEE and the current IEEE modes of DCBx are supported.

By setting the PFC and ETS willing flags appropriately, it is possible to steer the negotiations so that the entire fabric ends up consistent.

In most cases, the servers are willing and configure themselves to send FCoE traffic at a particular 802.1p priority based on LLDP negotiation with the switch.

Let's take a predefined template: DCP-8. DCP-8 is the default profile on all ports at bootup. It classifies traffic into 8 different traffic classes and schedules them with strict priority.
However, all traffic classes are non-lossless, so frames can be dropped when there is congestion.
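A custom profile can be derived from DCP-8 by importing it and marking the FCoE priority lossless. A minimal sketch, assuming AOS 7-style syntax (the profile name dcp-9 and traffic class 3 are illustrative; verify the exact commands against your release's CLI guide):

qos qsp dcb dcp-9 import qsp dcb dcp-8 [clone the default profile]
qos qsp dcb dcp-9 tc 3 pfc flow-type ll [make traffic class 3 lossless for FCoE]
show qos qsp dcb dcp-9 [verify the new profile]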

Now that the custom profile is created, we need to apply it to all the edge-facing ports on the core.
The core must also be configured as non-willing, so that it does not change its PFC/ETS behavior based on received LLDP packets.
The edge should remain willing. This ensures that the configuration is applied at a central place (the core) and pushed down to the edges.
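Roughly, again assuming AOS 7-style syntax with an illustrative port range:

qos qsi port 1/1/1-10 qsp dcb dcp-9 [apply the custom profile to the edge-facing core ports]
qos qsi port 1/1/1-10 dcb dcbx pfc willing no [core ignores PFC configuration advertised by peers]
qos qsi port 1/1/1-10 dcb dcbx ets willing no [core ignores ETS configuration advertised by peers]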

As PFC/ETS must be configured per link, we have to do the same on the edge port connected to the server. In addition, we have to configure the LLDP Application Priority TLV.
This is the information the server acts upon to configure lossless behavior and start the FCoE discovery process.
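A sketch of the server-facing edge-port configuration; the LLDP command names here are from memory of the AOS guide and the priority value 3 is an assumption, so double-check both:

qos qsi port 1/1/18 qsp dcb dcp-9 [same lossless profile on the server-facing port]
lldp port 1/1/18 tlv application enable [advertise the Application Priority TLV]
lldp port 1/1/18 tlv application fcoe priority 3 [the willing CNA configures itself to send FCoE at priority 3]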

Please refer to the earlier post for the steps to configure the NPIV gateway. Once that is complete, the following should be configured to allow initiators connected to edge switches to log in to the Fibre Channel fabric.

fcoe port 1/1/18 role edge [configure the port connected to the server as an edge port; this enables dynamic ACLs, which add a level of security to the initiator-target communication]
fcoe linkagg 22-23 role mixed [core-facing ports can be configured as mixed/trusted/FCF-only/E-Node-only based on their location in the network relative to the FCF or the ENodes]
vlan 252 members port 1/1/18 tagged [tag both the edge- and network-facing ports with the FCoE VLAN]
vlan 252 members linkagg 22-23 tagged

Here are some useful validation commands to check that the OS6900 is indeed acting as an NPIV gateway:

DC-EDGE-103-> show fcoe fcf
FCF-MAC             VLAN   Config   Sessions   A-bit   MaxFrmVer   Priority
--------------------+------+--------+----------+-------+-----------+---------
E8:E7:32:36:1E:F6    252    Npiv     4          1       no          0

Note that the switch has started to act as an FC Forwarder and sends advertisements on the FCoE VLAN.

DC-EDGE-103-> show module status
                           Operational
Chassis/Slot    Status     Admin-Status    MAC
---------------+----------+---------------+-------------------
1/CMM-A         UP         POWER ON        e8:e7:32:36:1e:f5
1/SLOT-1        UP         POWER ON        e8:e7:32:36:1e:fc
1/SLOT-2        UP         POWER ON        e8:e7:32:94:68:14