[updated 10.11.2012] In many data centers, rack servers allow organizations to keep server and networking responsibilities separate. When blade servers are introduced into an environment, however, the server and network admins' roles start to blur. Should the server admin have to learn networking, or should the network admin have to learn blade servers? Some blade server environments use pass-thru modules instead of network I/O modules. Pass-thru modules are easy to use and pass the networking upstream to the switch via a 1-to-1 port connection. This approach lets the networking admin maintain ownership, but there are no cable savings within the blade infrastructure since each server port requires a connection to the external top-of-rack switch. Network I/O modules used within the blade chassis reduce external cabling and provide local switching, enabling blade servers to communicate easily within the chassis; however, admins are then forced to learn each other's roles, which is a great practice but adds complexity to blade server implementations. What if we could take the ease of a pass-thru module and combine it with the ability to communicate locally? Now you can, with the introduction of the Dell PowerEdge M I/O Aggregator.

Attributes

The PowerEdge M I/O Aggregator comes out of the box with 40 x 10GbE ports: 32 internal ports and 8 external ports. Its external capabilities can also be extended through 2 optional FlexIO modules, available in several options.

The PowerEdge M I/O Aggregator is fully IEEE DCB compliant for converged I/O, supporting iSCSI, NAS, converged Ethernet and Fibre Channel-based storage applications. Here are the performance details for this module:

MAC addresses: 128K

Switch fabric capacity: 1.28 Tbps (full-duplex)

Forwarding capacity: 960 Mpps

Link aggregation: Up to 16 members per group, 128 LAG groups

Queues per port: 4 queues

VLANs: 4094

Line-rate Layer 2 switching: all protocols, including IPv4

Packet buffer memory: 9MB

CPU memory: 2GB
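As a back-of-the-envelope sanity check (my arithmetic, not the datasheet's), the quoted forwarding capacity lines up with the fabric capacity: 1.28 Tbps full duplex is 640 Gbps per direction, i.e. 64 x 10GbE ports, and 64 ports at the 64-byte line rate of 10GbE comes to roughly the 960 Mpps figure above.

```python
# Back-of-the-envelope check: does ~960 Mpps forwarding capacity
# match a 1.28 Tbps (full-duplex) switch fabric? Not vendor math,
# just standard Ethernet framing arithmetic.

FRAME_BYTES = 64       # minimum Ethernet frame size
OVERHEAD_BYTES = 20    # preamble (8 bytes) + inter-frame gap (12 bytes)
PORT_GBPS = 10

# Line-rate packet rate for one 10GbE port in one direction:
pps_per_port = PORT_GBPS * 1e9 / ((FRAME_BYTES + OVERHEAD_BYTES) * 8)
print(round(pps_per_port / 1e6, 2))   # ~14.88 Mpps per port

# 1.28 Tbps full duplex = 640 Gbps per direction = 64 x 10GbE ports:
ports = 640 // PORT_GBPS
print(round(ports * pps_per_port / 1e6))  # ~952 Mpps, close to the 960 Mpps spec
```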

[updated info 10.11.2012] One of the best features of the Dell PowerEdge M I/O Aggregator is that, out of the box, it comes enabled with instant “plug and play” connectivity to Dell and multi-vendor networks. The PowerEdge M I/O Aggregator ships with all VLANs enabled on all ports, with the option to set specific VLANs. It is also designed to be “no touch,” with iSCSI DCB and FCoE settings downloaded from the top-of-rack switch through the DCBx protocol. The Dell PowerEdge M I/O Aggregator supports DCB (the PFC, ETS and DCBx protocols), converged iSCSI with EqualLogic and Compellent (supports iSCSI TLV) and FCoE transit to a top-of-rack switch via FIP Snooping Bridge.

Since this is a “first look” this is all I can reveal at this time. Stay tuned – more updates to come shortly.

Kevin Houston is the founder and Editor-in-Chief of BladesMadeSimple.com. He has over 15 years of experience in the x86 server marketplace. Since 1997 Kevin has worked at several resellers in the Atlanta area, and has a vast array of competitive x86 server knowledge and certifications as well as an in-depth understanding of VMware and Citrix virtualization. Kevin works for Dell as a Server Sales Engineer covering the Global 500 market.

Just heard about a “limitation”: it supports up to 16 x 10GbE external ports using breakout cables. Physically, up to 24 x 10GbE external ports are possible with the two integrated QSFP+ ports and two dual-port QSFP+ modules using QSFP+-to-4xSFP+ breakout cables.

No, 2 x QSFP+ integrated plus 2 x QSFP+ in each FlexIO bay is a total of six QSFP+ ports. 6 × 4 is still 24 in my book. But that configuration is not supported.

This is what I got from the PG team regarding the “Up to 16 external 10GbE ports (4 QSFP+ ports with breakout cables)” wording in the I/O guide:

“Max 16 external 10GbE ports is correct on the IOA. … . It’s counterintuitive, but while you can physically add 2 QSFP modules for a total of 6 QSFP ports and what seems to be 24x10GbE ports using breakout cables… only a max of 16 10GbE ports are addressable regardless of which FlexIOs you choose.”

I apologize – your math was correct. For some reason, I was calculating 1 port per FlexIO module, not 2. Therefore, if you had 2 x FlexIO modules with QSFP ports, you would have 6 QSFP+ ports total per PowerEdge M I/O Aggregator. I’m not sure if there is an external limit of 16 ports or not, but realistically, more than 16 wouldn’t be relevant, because then it turns into a pass-through module. Anyway – sorry for the miscalculation. I’ll look into this and let you know.

I know there is a limit of 16 x 10GbE ports, since that was the answer the PG team gave me when I wondered if the “Up to 16 external 10GbE ports” wording in the I/O guide was a miscalculation. Their answer, which I quoted above, explained that it’s a limitation on the addressability of more than 16 x 10GbE external ports. Feel free to verify this.

Since the I/O Aggregator has 32 internal ports, it can never be a simple pass-through. It should also be a cheaper option than the Force10 MXL once 4 x 10GbE mezzanines are released.

Andreas – realized I never got back to you on why the I/O Aggregator supports 16 x 10GbE. The reason is that the I/O Aggregator supports a single LAG group, and a LAG only supports 16 x 10GbE uplinks; therefore the marketing material shows 16 x 10GbE uplinks as the “max.” Hope this makes sense.
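To summarize the port arithmetic worked out over this thread, here is a toy sketch using the figures quoted in the comments (2 integrated QSFP+ ports, 2 FlexIO bays with dual-port QSFP+ modules, 4-way breakout cables, and a single 16-member LAG). The constant names are mine, not Dell's:

```python
# Physical vs. addressable external 10GbE ports on the M I/O Aggregator,
# using the numbers discussed in the comment thread above.

INTEGRATED_QSFP = 2    # QSFP+ ports fixed on the base module
FLEXIO_BAYS = 2        # optional FlexIO module bays
QSFP_PER_FLEXIO = 2    # dual-port QSFP+ FlexIO module in each bay
BREAKOUT = 4           # QSFP+-to-4xSFP+ breakout cable fans each QSFP+ into 4
LAG_GROUPS = 1         # the IOA puts all uplinks into a single LAG
MEMBERS_PER_LAG = 16   # maximum members per LAG group

# What you can physically cable up:
physical_10gbe = (INTEGRATED_QSFP + FLEXIO_BAYS * QSFP_PER_FLEXIO) * BREAKOUT
print(physical_10gbe)      # 24 ports cabled

# What the single-LAG design can actually address:
addressable_10gbe = min(physical_10gbe, LAG_GROUPS * MEMBERS_PER_LAG)
print(addressable_10gbe)   # 16 ports usable
```

The gap between the two numbers (24 physical vs. 16 addressable) is exactly the "counterintuitive" limitation the PG team described.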