RackSwitch G8264CS

This document is being moved! In September 2015, this document will be moved to lenovopress.com and will no longer be available on the IBM Redbooks site. Please bookmark the URL http://lenovopress.com/TIPS0970 to access this specific document both today and after the migration in September.

Abstract

Many clients successfully use Ethernet and Fibre Channel connectivity from their servers to their LAN and SAN. These clients are seeking ways to reduce the cost and complexity of these environments by using the capabilities of Ethernet and Fibre Channel convergence.

The RackSwitch™ G8264CS top-of-rack switch offers the benefits of a converged infrastructure. As part of its forward-thinking design, this switch has flexibility for future growth and expansion. This switch is ideal for clients who are looking to connect to existing SANs and clients who want native Fibre Channel connectivity, in addition to support for such protocols as Ethernet, Fibre Channel over Ethernet (FCoE), and iSCSI.

Note: This Product Guide describes the models of the RackSwitch G8264CS that support Networking OS up to version 7.x. For Lenovo RackSwitch G8264CS models that support Networking OS version 8.x onwards, see the Lenovo Press Product Guide Lenovo RackSwitch G8264CS.

The RackSwitch™ G8264CS top-of-rack switch (shown in Figure 1) offers the benefits of a converged infrastructure. As part of its forward-thinking design, this switch has flexibility for future growth and expansion. This switch is ideal for clients who are looking to connect to existing SANs and clients who want native Fibre Channel connectivity, in addition to support for such protocols as Ethernet, Fibre Channel over Ethernet (FCoE), and iSCSI.

The RackSwitch G8264CS simplifies deployment with its innovative and flexible Omni Port technology. The 12 Omni Ports on the G8264CS give clients the flexibility to choose 10 Gb Ethernet, 4/8 Gb Fibre Channel, or both for upstream connections. In FC mode, Omni Ports provide convenient access to FC storage. The Omni Port technology that is provided on the G8264CS helps consolidate enterprise storage, networking, data, and management onto a simple-to-manage, efficient, and cost-effective single fabric. Also, the G8264CS can be used to create 252-node PODs or clusters with Flex System Interconnect Fabric.

The part numbers to order the switch and additional options are shown in Table 1.

Table 1. Part numbers and feature codes for ordering

Description                                              | Part number | Feature code (MTM 7309-HCK) | Feature code (MTM 7309-HCM)

Switch
RackSwitch G8264CS (Rear to Front)                       | 7309DRX     | A3FL | None
RackSwitch G8264CS (Front to Rear)                       | 7309DFX     | None | A3FM

Miscellaneous options
Console Cable Kit Spare                                  | 90Y9462     | A2MG | A2MG
Adjustable 19" 4 Post Rail Kit                           | 00D6185     | A3KP | A3KP
Recessed 19" 4 Post Rail Kit                             | 00CG089     | None | A51M
Switch Seal Kit                                          | 00Y3001     | None | A4WX
iDataPlex Rail Kit                                       | 90Y3535     | None | A1SZ
Air Inlet Duct for 483 mm RackSwitch                     | 00D6060     | A3KQ | None
Hot-Swappable, Rear-to-Front 750W CFF Power Supply Spare | 00D5858     | A2X7 | None
Hot-Swappable, Front-to-Rear 550W CFF Power Supply Spare | 00D5961     | None | A3FN
Hot-Swappable, Rear-to-Front Fan Assembly Spare          | 00D6071     | A54K | None
Hot-Swappable, Front-to-Rear Fan Assembly Spare          | 00D6073     | None | A54J

The part numbers for the G8264CS switches include the following items:

One RackSwitch G8264CS with two power supplies and four fan assemblies (rear-to-front airflow or front-to-rear airflow)

Generic Rack Mount Kit (2-post)

Console Cable Kit that includes:

RJ-45 (plug) to RJ-45 (plug) serial cable (1 m)

Mini-USB to RJ-45 (jack) adapter cable (0.2 m) with retention clip

DB-9 to RJ-45 (jack) adapter

Warranty Flyer

Important Notices Flyer

Documentation CD-ROM

Note: Power cables are not included and must be ordered separately (see Table 2 for details).

The G8264CS switch supports up to two redundant hot-swap AC power supplies (550 W for front-to-rear airflow or 750 W for rear-to-front airflow; two power supplies come standard with the switch) and up to four redundant hot-swap fan assemblies (four fan assemblies come standard with the switch). Spare power supplies and fan assemblies can be ordered, if required. Each Power Supply Spare option contains one hot-swap power supply (rear-to-front or front-to-rear), and each Fan Assembly Spare option contains one hot-swap fan assembly (rear-to-front or front-to-rear).

The G8264CS switch also comes standard with the Console Cable Kit for management through a serial interface. Spare serial management cables can be ordered, if required. The Console Cable Kit Spare option contains the following items:

RJ-45 (plug) to RJ-45 (plug) serial cable (1 m)

Mini-USB to RJ-45 (jack) adapter cable (0.2 m) with retention clip

DB-9 to RJ-45 (jack) adapter

The G8264CS switch supports an optional adjustable 19-inch 4-post rack installation kit, part number 00D6185. Optionally, the Air Inlet Duct, part number 00D6060, can be ordered with the G8264CS (rear-to-front airflow) switch for 4-post rack installations with the Adjustable 4-post Rail Kit (00D6185).

The G8264CS (front-to-rear airflow) switch also supports an optional recessed 19-inch 4-post rack kit (00CG089), together with the Switch Seal Kit (00Y3001), which are used when the switch is installed in the Intelligent Cluster™ Rack (MT 1410), Enterprise Rack (MT 9363), or PureFlex® System Rack (MT 9363) with NeXtScale™ System. The G8264CS (front-to-rear airflow) switch also supports the 4-post iDataPlex® rack kit (90Y3535), which is used when the switch is installed in the iDataPlex Rack.

The G8264CS switch ships standard without any AC power cables. Table 2 lists the part numbers and feature codes to order the power cables (two power cables are required per switch).

With the flexibility of the G8264CS switch, clients can take advantage of the technologies that they require for multiple environments:

For 1 GbE links, clients can use RJ-45 UTP cables up to 100 m with 1000BASE-T SFP transceivers. Clients that need longer distances can leverage the 1000BASE-SX SFP transceivers, which can drive distances up to 220 meters by using 62.5 µm multimode fiber and up to 550 meters with 50 µm multimode fiber, or the 1000BASE-LX transceivers that support distances up to 10 kilometers by using single-mode fiber (1310 nm).

For 10 GbE links, clients can use direct-attached copper (DAC) SFP+ cables for in-rack cabling and distances up to 7 m. These DAC cables have SFP+ connectors on each end, and they do not need separate transceivers. For longer distances, the 10GBASE-SR transceiver can support distances up to 300 meters over OM3 multimode fiber or up to 400 meters over OM4 multimode fiber with LC connectors. The 10GBASE-LR transceivers can support distances up to 10 kilometers on single-mode fiber with LC connectors. For extended distances, the 10GBASE-ER transceivers can support distances up to 40 kilometers on single-mode fiber with LC connectors.

To increase the number of available 10 GbE ports, clients can split out four 10 GbE ports for each 40 GbE port using QSFP+ DAC Breakout Cables for distances up to 5 meters. For distances up to 100 m, optical MTP-to-LC break-out cables can be used with the 40GBASE-SR4 transceiver, but Lenovo does not supply these optical breakout cables.

For 40 GbE to 40 GbE connectivity, clients can use the affordable QSFP+ to QSFP+ DAC cables for distances up to 7 meters. For distances up to 100 m, the 40GBASE-SR4 QSFP+ transceiver can be used with OM3 multimode fiber, or up to 150 m with OM4 multimode fiber, with MTP connectors. For distances up to 10 km, the 40GBASE-LR QSFP+ transceiver can be used with single-mode fiber with LC connectors.

For 8 Gb or 4 Gb FC links (supported on Omni Ports only), you can use 8 Gb FC SW optical SFP+ transceivers plus LC fiber optic cables for distances up to 150 m with 50 µm multimode fiber or up to 21 m with 62.5 µm multimode fiber. For longer distances, the 8 Gb FC LW optical transceivers can support up to 10 km on single-mode fiber with LC connectors. These transceivers can operate at 4 Gb or 8 Gb speeds.
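
These media choices reduce to a simple lookup of port speed against required link distance. The following Python sketch is illustrative only; it encodes the distance limits from the paragraphs above, and all names in it are hypothetical rather than part of any Lenovo tool:

    # Illustrative media selector; distance limits (in meters) are taken
    # from the cabling options described in this guide.
    MEDIA_OPTIONS = [
        ("1GbE",  "1000BASE-T SFP with RJ-45 UTP", 100),
        ("1GbE",  "1000BASE-SX over 62.5 um MMF", 220),
        ("1GbE",  "1000BASE-SX over 50 um MMF", 550),
        ("1GbE",  "1000BASE-LX over single-mode fiber", 10_000),
        ("10GbE", "SFP+ DAC cable", 7),
        ("10GbE", "10GBASE-SR over OM3 MMF", 300),
        ("10GbE", "10GBASE-SR over OM4 MMF", 400),
        ("10GbE", "10GBASE-LR over single-mode fiber", 10_000),
        ("10GbE", "10GBASE-ER over single-mode fiber", 40_000),
        ("40GbE", "QSFP+ DAC cable", 7),
        ("40GbE", "40GBASE-SR4 over OM3 MMF", 100),
        ("40GbE", "40GBASE-SR4 over OM4 MMF", 150),
        ("40GbE", "40GBASE-LR over single-mode fiber", 10_000),
        ("8GFC",  "8 Gb FC SW SFP+ over 50 um MMF", 150),
        ("8GFC",  "8 Gb FC SW SFP+ over 62.5 um MMF", 21),
        ("8GFC",  "8 Gb FC LW SFP+ over single-mode fiber", 10_000),
    ]

    def pick_media(speed, distance_m):
        """Return the first listed option that covers the required distance."""
        for s, media, limit_m in MEDIA_OPTIONS:
            if s == speed and distance_m <= limit_m:
                return media
        raise ValueError(f"no listed option for {speed} at {distance_m} m")

    print(pick_media("10GbE", 250))  # 10GBASE-SR over OM3 MMF
    print(pick_media("40GbE", 120))  # 40GBASE-SR4 over OM4 MMF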

The traditional approach of segmenting storage and data traffic has certain advantages, such as traffic isolation and independent administration. Nevertheless, it also poses several disadvantages, including higher infrastructure costs, complexity of management, and under-utilization of resources. Clients must invest in separate infrastructures for LAN, SAN, and interprocess communications (IPC) fabrics, including host adapters, cables, switching, routers, and other device-specific equipment.

The RackSwitch G8264CS is considered particularly suited for these clients:

Clients who want to implement a converged infrastructure with FCoE, where the G8264CS acts as a Full Fabric FC/FCoE switch for end-to-end FCoE configurations or as a Fibre Channel Forwarder (FCF) NPV Gateway that breaks out FC traffic for native Fibre Channel SAN connectivity.

Clients who want to reduce TCO and improve performance while maintaining high levels of availability and security.

Clients who want to avoid or minimize oversubscription, which can result in congestion and loss of performance.

The RackSwitch G8264CS offers the following features and benefits:

Lowers the total cost of ownership (TCO) with consolidation

By consolidating LAN and SAN networks and converging to a single fabric, clients can reduce the equipment that is needed in their data centers. This benefit significantly reduces the costs that are associated with energy and cooling, management and maintenance, and capital costs.

Improves performance and increases availability

The G8264CS is an enterprise-class and full-featured data center switch that offers high-bandwidth performance with 36 1/10 Gb SFP+ connections, 12 Omni Ports that can be used for 10 Gb SFP+ connections, 4/8 Gb Fibre Channel connections, or both, plus four 40 Gb QSFP+ connections. The G8264CS switch delivers full line rate performance on Ethernet ports, making it an ideal choice for managing dynamic workloads across the network. This switch also provides a rich Layer 2 and Layer 3 feature set that is ideal for many of today’s data centers. Combined with redundant hot-swappable power and fans, along with numerous high availability features, this switch comes fully equipped to handle the demands of business-sensitive traffic.

High performance

The 10 Gb/40 Gb switch provides the best combination of low latency, non-blocking line-rate switching, and ease of management. It has a throughput of up to 1.28 Tbps.
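
The 1.28 Tbps figure can be sanity-checked directly from the port inventory listed earlier. A minimal sketch, assuming the usual convention that switching capacity counts both directions of full-duplex traffic:

    # Aggregate switching capacity from the G8264CS port inventory (Gbps).
    sfp = 36 * 10    # 36 SFP+ ports at 10 GbE
    omni = 12 * 10   # 12 Omni Ports at up to 10 GbE each
    qsfp = 4 * 40    # 4 QSFP+ ports at 40 GbE

    one_way = sfp + omni + qsfp    # 640 Gbps
    capacity = 2 * one_way         # full duplex: 1280 Gbps = 1.28 Tbps
    print(one_way, capacity)       # 640 1280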

Lower power and better cooling

The G8264CS uses as little as 330 W of power, which is a fraction of the power consumption of most competitive offerings. Unlike side-cooled switches, which can cause heat recirculation and reliability concerns, the front-to-rear or rear-to-front cooling design of the G8264CS switch reduces the costs of data center air conditioning by having airflow match the servers in the rack. In addition, variable speed fans help to automatically reduce power consumption.
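
To put the 330 W figure in perspective, a rough annual energy estimate can be derived from it. This sketch is illustrative only; the electricity rate is an assumed placeholder, not a figure from this guide:

    # Rough annual energy estimate from the 330 W typical draw cited above.
    power_w = 330
    hours_per_year = 24 * 365
    kwh_per_year = power_w * hours_per_year / 1000   # about 2891 kWh
    rate_per_kwh = 0.10   # assumed placeholder rate, not from this guide
    print(f"{kwh_per_year:.0f} kWh/year, ~${kwh_per_year * rate_per_kwh:.0f}/year")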

Support for Virtual Fabric

The G8264CS can help customers address I/O requirements for multiple NICs while reducing cost and complexity. By using Virtual Fabric, you can carve a physical dual-port NIC into multiple vNICs (between 2 and 8 vNICs) and create a virtual pipe between the adapter and the switch for improved performance, availability, and security. Support for FCoE is also important to note, as two vNICs can be configured as CNAs to allow for additional cost savings through convergence.
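
The carving arithmetic can be illustrated with a short validation sketch. This is not switch or adapter configuration syntax; the four-vNICs-per-port split follows from the 2 - 8 vNICs per dual-port NIC noted above, and the 100 Mbps allocation granularity is an assumption about Virtual Fabric, not a statement from this guide:

    # Sketch: validate a vNIC bandwidth plan for one 10 Gb physical port.
    PORT_GBPS = 10.0
    MAX_VNICS_PER_PORT = 4   # 8 vNICs per dual-port NIC, split across 2 ports

    def validate_vnic_plan(shares_gbps):
        if not 1 <= len(shares_gbps) <= MAX_VNICS_PER_PORT:
            raise ValueError("a 10 Gb port carves into at most 4 vNICs")
        if sum(shares_gbps) > PORT_GBPS:
            raise ValueError("vNIC shares exceed the 10 Gb physical port")
        for share in shares_gbps:
            # 0.1 Gb (100 Mbps) steps: an assumed granularity
            if round(share * 10) != share * 10:
                raise ValueError(f"{share} Gb is not a 100 Mbps multiple")

    # Example: one vNIC configured as a CNA for FCoE (4 Gb) plus
    # three data vNICs (2 Gb each).
    validate_vnic_plan([4.0, 2.0, 2.0, 2.0])
    print("plan OK")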

VM-aware networking

VMready software on the switch simplifies configuration and improves security in virtualized environments. VMready automatically detects virtual machine movement between physical servers and instantly reconfigures the network policies of each VM across VLANs to keep the network up and running without interrupting traffic or impacting performance. VMready works with all leading VM providers, such as VMware vSphere, Citrix Xen, IBM PowerVM, and Microsoft Hyper-V.

Layer 3 functionality

The G8264CS includes Layer 3 functionality, which provides security and performance benefits because inter-VLAN traffic stays within the switch. This switch also provides a full range of Layer 3 protocols, from static routes to dynamic routing protocols, such as Open Shortest Path First (OSPF) and Border Gateway Protocol (BGP), for enterprise customers.

Seamless interoperability

The G8264CS switches perform seamlessly with other vendors' upstream switches.

Fault tolerance

The G8264CS switches learn alternative routes automatically and perform faster convergence in the unlikely case of a link, switch, or power failure. The switch uses proven technologies, such as L2 trunk failover, advanced VLAN-based failover, VRRP, and Hot Links.

Multicast support

These switches support IGMP Snooping v1, v2, and v3 with 2K IGMP groups. They also support Protocol Independent Multicast (PIM), such as PIM Sparse Mode or PIM Dense Mode.

Transparent networking capability

With a simple configuration change to easy connect mode, the RackSwitch G8264CS becomes a transparent network device that is invisible to the core. This mode eliminates network administration concerns about Spanning Tree Protocol configuration and interoperability and about VLAN assignments, and it avoids possible loops. By emulating a host NIC to the data center core, the switch accelerates the provisioning of VMs by eliminating the need to configure the typical access switch parameters.

12 Omni Ports, each of which can operate as a 10 Gb Ethernet port (with support for 10GBASE-SR, 10GBASE-LR, 10GBASE-ER, or 10 GbE SFP+ DAC cables) or as an auto-negotiating 4/8 Gb Fibre Channel port, depending on the SFP+ transceiver that is installed in the port. SFP+ modules and DAC cables are not included and must be purchased separately (see Table 3).
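
Because each Omni Port's personality follows the transceiver that is installed, uplink planning amounts to budgeting the 12 ports between Ethernet and FC roles. A minimal sketch (a hypothetical planning helper, not switch configuration syntax):

    # Sketch: budget the 12 Omni Ports between 10 GbE and 4/8 Gb FC uplinks.
    OMNI_PORTS = 12

    def plan_omni_ports(fc_uplinks, eth_uplinks):
        if fc_uplinks + eth_uplinks > OMNI_PORTS:
            raise ValueError("only 12 Omni Ports are available")
        return {"fc": fc_uplinks, "ethernet": eth_uplinks,
                "unused": OMNI_PORTS - fc_uplinks - eth_uplinks}

    # Example: 8 ports to the FC SAN, 4 kept as 10 GbE uplinks.
    print(plan_omni_ports(fc_uplinks=8, eth_uplinks=4))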

The RackSwitch G8264CS comes with a standard 3-year hardware warranty with Next Business Day (NBD), 9x5, Customer Replaceable Unit (CRU) warranty service from Lenovo. Software Upgrade Entitlement is based on the switch’s warranty or post warranty extension and service contracts. Optional warranty and maintenance upgrades are available for the G8264CS switch through Lenovo Services:

Warranty service upgrades (3, 4, or 5 years)

24x7 onsite repair with 2-hour target response time

24x7 onsite repair with 4-hour target response time

9x5 onsite repair with 4-hour target response time

Maintenance (post-warranty) service offerings (1 or 2 years)

24x7 onsite repair with 2-hour target response time

24x7 onsite repair with 4-hour target response time

9x5 onsite repair with 4-hour target response time

9x5 onsite repair with next business day target response time

Lenovo warranty service upgrade offerings are country-specific; each country might have its own service types, service levels, response times, and terms and conditions. Not all warranty service offerings might be available in a particular country.

The following configurations are expected to be among the popular configurations that clients are likely to implement. Not all of these configurations are available at the time of writing. For more information, speak to a Lenovo sales representative or Lenovo Business Partner. For examples of officially tested configurations, see the System Storage Interoperation Center (SSIC) at the following website: http://ibm.com/systems/support/storage/ssic/interoperability.wss

Lenovo provides extensive FCoE testing to deliver network interoperability. For a full listing of supported FCoE and iSCSI configurations, see the System Storage Interoperation Center (SSIC) at the following website: http://ibm.com/systems/support/storage/ssic

Leveraging an existing LAN

Figure 4 shows how a client with existing SAN switches can use the G8264CS to simplify its rack environments by deploying 10 Gb Ethernet in the rack between the System x or ThinkServer® servers and the G8264CS. The client can leverage Ethernet in the rack while breaking out the FC connections at the top of the rack to connect to the existing SAN switches, and then on to the client's storage devices.

Figure 5 shows an example of how a client might further simplify its data center by using Ethernet more extensively before connecting to its existing SAN switches. This example shows how clients can use the RackSwitch G8124E to simplify their rack environments with Ethernet only. Ethernet can run to the end of the row, or closer to the client's storage, where the client can install the G8264CS, which breaks out the FC connections to connect to the existing SAN switches, and then on to the client's storage devices.

Figure 6 shows an example of how a client might further simplify their data center by removing the FC SAN switching fabric and implementing an end-to-end FCoE configuration. This example shows how clients can use the RackSwitch G8264 to simplify their rack environments with Ethernet only. Ethernet can run to the end of the row, or closer to the client's storage, where the client can install the G8264CS to connect directly upstream to a Storwize V3700/V7000 by using simpler Ethernet connectivity.

Figure 6. Leveraging Ethernet further in an existing data center with end-to-end FCoE

Figure 7 shows an example of how a client can use a BladeCenter environment. The client reduces costs inside the chassis with a single adapter in the blade, with a 10 Gb Ethernet adapter only in the chassis. The client can use the G8264CS at the top of the rack or somewhere else in the data center, breaking out the FC connections and connecting to the existing SAN switches, and then to the storage devices.

Figure 7. Leveraging a BladeCenter environment

Table 8 summarizes the supported components.

Table 8. Components

Adapter                          | NIC mode    | FCoE switch | SAN switch             | Storage target                                                                  | OS levels
Emulex VFA 2 (Adapter + FoD Key) | pNIC, vNIC2 | G8264CS     | Cisco SAN, Brocade SAN | Storwize V3700, Storwize V7000, SAN Volume Controller, DS3K/5K, DS8K, XIV, Tape | Win2008, WS2012, ESX 4/5, RHEL 5/6, SLES 10/11

Leveraging a Flex System environment: NPIV to FC SAN

Figure 8 shows an example of how a client can use the G8264CS for convergence in a Flex System environment. This approach can help a client significantly reduce costs inside the chassis by using a single adapter in the compute node with CNA functionality and a 10 Gb Ethernet module, such as the SI4093/EN4093/EN4093R, in the chassis (no FC adapter or switches necessary). The client can then use the G8264CS at the top of the rack or somewhere else in the data center, breaking out the FC connections and connecting to the existing Brocade or Cisco SAN switches, and then on to the storage devices.

Figure 9 shows an example of how a client can use the G8264CS for convergence in a Flex System environment to connect directly into their storage by using FCoE. This approach can help clients significantly reduce costs by removing the need for FC SAN switches between the G8264CS and the storage. In the chassis, you simply have a single adapter in the compute node that uses CNA functionality, plus a 10 Gb Ethernet module, such as the SI4093/EN4093/EN4093R (no FC adapter or switches necessary). The client can then use the G8264CS at the top of the rack or elsewhere in the data center and then connect directly into the storage device.

Note: FLOGI is used to obtain a routable FCID for use in the FC frame exchange between the G8264CS and the Storwize V7000. The switch provides the FCID during a FLOGI exchange.

Figure 9. Leveraging G8264CS for end-to-end FCoE in a Flex System environment

With the growth of cloud, media applications, mobile connections, and big data, client IT departments are faced with many new requirements. Flex System Interconnect Fabric is designed to meet these needs by providing a simple Ethernet fabric cluster that accelerates deployment, simplifies management, enables dynamic scalability, and increases reliability, availability, and security in medium to large-scale POD deployments. This solution offers a solid foundation of compute, network, storage, and software resources in a Flex System POD.

The key I/O components of this solution are a pair of RackSwitch G8264CS switches. One G8264CS acts as the center of intelligence and provides all direction and updates to the redundant G8264CS and to the 2 - 18 Flex System SI4093 System Interconnect Modules. By using Flex System x222 compute nodes, clients can easily set up a single chassis and then scale up to nine chassis to build a 252-node POD or cluster. In addition to the automated capabilities for adding chassis after the initial setup of the first, clients can exploit the acquisition and operation cost savings of converging Ethernet and Fibre Channel traffic within the POD or cluster, while still being able to simply connect into their existing upstream networks.
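
The 252-node figure follows from the chassis math: the x222 is a double-dense compute node (two independent servers per bay) and each Flex System Enterprise Chassis has 14 node bays. A quick check:

    # Where the 252-node POD figure comes from.
    chassis_per_pod = 9    # up to nine chassis per Interconnect Fabric POD
    bays_per_chassis = 14  # node bays in a Flex System Enterprise Chassis
    nodes_per_bay = 2      # x222 packs two servers into one bay

    print(chassis_per_pod * bays_per_chassis * nodes_per_bay)  # 252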

Figure 10 shows the Flex System Interconnect Fabric using nine chassis with SI4093 modules and a pair of RackSwitch G8264CS switches connecting to a client's existing LAN and SAN, which could be a Brocade switch or a Cisco MDS switch.

Figure 10. Flex System Interconnect Fabric using nine chassis

The solution components that are used in the scenario that is shown in Figure 10 are listed in Table 11.

Table 11. Building a Flex System Interconnect Fabric POD using FCoE (Figure 10)

Diagram reference number | Description - part number - quantity
1                        | RackSwitch G8264CS - 7309DRX - 2 per POD

For upstream connections to the LAN, simply leverage the SFP+ or QSFP+ ports and the appropriate cables and transceivers. For upstream connections to the Brocade/Cisco FC SAN, leverage the Omni Ports and the 8 Gb FC transceivers.

Special Notices

This material has not been submitted to any formal IBM test and is published AS IS. It has not been the subject of rigorous review. IBM assumes no responsibility for its accuracy or completeness. The use of this information or the implementation of any of these techniques is a client responsibility and depends upon the client's ability to evaluate and integrate them into the client's operational environment.