400 Gbit Ethernet: The Next Leap

Our insatiable appetite for network bandwidth will lead to the development of 400 Gbit/s data links.

Networking now permeates every aspect of our world, and our reliance on it keeps growing. The needs of local area, data center, access, and metropolitan area networks are constantly expanding, and we're seeing a rapid proliferation of new bandwidth-hungry applications. As a result, bandwidth demand is rising across numerous applications and markets.

As these bandwidth requirements press the limits of Ethernet networking, equipment manufacturers must stay ahead of the curve by introducing network devices capable of higher speeds. With demand growing for mobile access, video, connected devices, and data, equipment manufacturers must move beyond today's 40 Gbit/s and 100 Gbit/s capabilities for network and data center providers.

According to the IEEE 802.3 Ethernet Bandwidth Assessment Ad hoc, industry bandwidth requirements are continuing to grow at an exponential pace. At that pace, networks will need to support terabit-per-second capacities by 2015 and 10 Tbit/s capacities by 2020.

How did we get here?

In May 2013, recognizing this growth and foreseeing the need for a new Ethernet speed rate, the IEEE 802.3 working group formed the IEEE 802.3 400 Gbit/s Ethernet (400 GbE) Study Group. When the working group last addressed the need for a new Ethernet speed rate, two rates were created: 40 GbE, which was intended to provide a path for servers, and 100 GbE, which was targeted at network aggregation applications.

In May 2014, the study group received the "Task Force" designation and met for the first time at the IEEE 802.3 May 2014 Interim Session. There, it began work on defining the 400 GbE standard for enabling high-bandwidth solutions for web-scale data centers, video distribution infrastructures, service providers, and new application areas. The new standard will reach data-transfer speeds of 400 Gbit/s, which is fast enough for 50,000 simultaneous high-definition Netflix video streams.
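A quick back-of-the-envelope check shows where the 50,000-stream figure comes from: dividing the link rate evenly across that many streams implies a per-stream budget of 8 Mbit/s, a plausible bitrate for high-definition video. A minimal sketch of the arithmetic:

```python
# Sanity check: 400 Gb/s split evenly across 50,000 simultaneous
# streams leaves 8 Mb/s per stream (ignoring protocol overhead).
link_bps = 400e9       # 400 Gbit/s link
streams = 50_000       # simultaneous HD streams claimed
per_stream_mbps = link_bps / streams / 1e6
print(f"{per_stream_mbps:.1f} Mb/s per stream")  # 8.0 Mb/s per stream
```

Real links would deliver somewhat less per stream once framing and protocol overhead are accounted for, but the order of magnitude holds.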

Just five years ago, 10 GbE was sufficient for most data, 40 GbE was the newcomer, and 400 GbE wasn't even on our radar. Based on Moore's Law, the industry expected 40 GbE to meet data center needs through 2014. Fast forward to today, and it looks like that prediction was right, but demand keeps growing.

The fact that 40 GbE has become commonplace, and 100 GbE is no longer considered cutting-edge, means the move toward 400 GbE has begun. At this stage, the 400 GbE standard is just starting to be worked out. It will be a few years before we see wholesale approval and acceptance of 400 Gbit/s links, but that doesn't mean development is on hold. The development of 400 GbE capability is clearly becoming a reality, and to stay ahead of demand, equipment manufacturers must begin designing and testing the network devices that will deliver these higher speeds.

Looks like the oddity of 40 Gb/s Ethernet, which doesn't fit the powers-of-10 scheme Ethernet has used in the past for speed increments, is being perpetuated. It would seem more logical to introduce a 500 Gb/s version instead of 400 G, which would then give you an easy path to the 1 Tb/s level.

The use of 400 Gb/s is really an interim technology until 1 Tb/s links are developed. Of course, you could argue that every technology is an interim until something better comes along.

Understood the point about the interim step, and that ultimately everything is an interim step. But as you can see in my previous post, 400 G does not fall in SONET/SDH's sequence of 4X speed increments, so it should not be in the sequence of speed steps.

The 40 G step was motivated by a step up the ladder of SONET/SDH speeds. There is no step at 400 G up that same SONET/SDH ladder, if things progress as they have in the past. If you're creating a new step on the ladder, why not choose one that's more suitable to Ethernet?

There is still a desire by many carriers to use OTN as the lower-layer transport, and keeping Ethernet aligned (as a multiple of 4) makes it nice to have an OTN container that an Ethernet payload will fit into.

Okay, so here's the point I was trying to make. If the motivation is to use a SONET/SDH container for Ethernet carriage on the WAN, which I think we have agreed here is a motivation, then is there a SONET/SDH container expected at 400 Gb/s?

To understand the progression of SONET/SDH, I find it easiest to use the SDH levels and then compute the SONET equivalent. This makes it more obvious why certain STS levels have been called "dormant." SDH levels go up in this sequence: 1, 4, 16, 64, 256, 1024, 4096, ... In other words, you multiply each SDH STM level by 4 to reach the next speed increment. STS refers to the SONET convention, STM to SDH, and the two standardize on the same speeds. So here is the progression:

STS-3 = STM-1 = 155.52 Mb/s

STS-12 = STM-4 = 622.08 Mb/s

STS-48 = STM-16 = 2,488.32 Mb/s

STS-192 = STM-64 = 9,953.28 Mb/s

STS-768 = STM-256 = 39,813.12 Mb/s

STS-3072 = STM-1024 = 159,252.48 Mb/s

STS-12288 = STM-4096 = 637,009.92 Mb/s

Where is that 400 G?
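The ladder above is easy to reproduce mechanically, since each rung is just 4x the previous one starting from STM-1 at 155.52 Mb/s. A minimal sketch confirming that 400 Gb/s is not a rung:

```python
# The SONET/SDH speed ladder: each STM level is 4x the previous one,
# starting from STM-1 = 155.52 Mb/s (STS-3 in SONET terms).
STM1_MBPS = 155.52

def sdh_ladder(steps=7):
    """Yield (STM level, STS level, rate in Mb/s) for each 4x step."""
    stm = 1
    for _ in range(steps):
        yield stm, 3 * stm, stm * STM1_MBPS
        stm *= 4

rates = []
for stm, sts, rate in sdh_ladder():
    rates.append(rate)
    print(f"STS-{sts} = STM-{stm} = {rate:,.2f} Mb/s")

# 400,000 Mb/s falls between STM-1024 (159,252.48 Mb/s) and
# STM-4096 (637,009.92 Mb/s) -- it is not on the ladder.
assert all(abs(r - 400_000) > 1 for r in rates)
```

The output matches the progression listed above, and the final check makes the commenter's point explicit: 400 G lands between two rungs, not on one.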

So, if the sequence of SONET/SDH speed steps is not being followed anymore, after the 40 G level, which I can understand, why not instead choose speed steps that make more sense for the traditional Ethernet speed increments?

40G was an Ethernet anomaly, motivated by STS-768/STM-256. What's the excuse for 400G? Multilane Ethernets in the past have been based either on 10X pipes or on 2.5/25/250X pipes.

This goes back to commercial and technical feasibility. There are many issues in just moving to 50 Gb/s SERDES; one of the major considerations is that, even today, there is no instrumentation to measure 50 Gb/s signaling very well. Another example: just the move from 10 Gb/s SERDES to 25 Gb/s SERDES required a next-generation, more expensive printed circuit board material called Megtron 6, which is 2-3x more expensive than FR4. Moving to even higher SERDES rates might mean using Teflon-based materials (at 10-15x the cost of FR4) or optical circuits, which can be even more expensive than Teflon-based materials.
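The lane-count pressure behind this trade-off can be made concrete with a small illustrative sketch. The per-lane rates below are nominal only (real interfaces add FEC and encoding overhead, which this ignores):

```python
# Illustrative: how many electrical SERDES lanes a 400 Gb/s port
# needs at each per-lane signaling rate (nominal rates, no overhead).
PORT_GBPS = 400

for lane_gbps in (10, 25, 50, 100):
    lanes = PORT_GBPS // lane_gbps   # rates chosen divide 400 evenly
    print(f"{lane_gbps:>3} Gb/s lanes: {lanes} needed")
# ->  10 Gb/s lanes: 40 needed
# ->  25 Gb/s lanes: 16 needed
# ->  50 Gb/s lanes: 8 needed
# -> 100 Gb/s lanes: 4 needed
```

Forty 10 Gb/s lanes per port is impractical, which is why each Ethernet speed jump tends to force a SERDES rate jump, and with it the board-material and instrumentation costs described above.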

If anyone just monitors his or her inbox, for every genuine email there are almost 20 spam and uninvited emails pushed into the mailbox.

If we extrapolate this, almost 95% of email internet traffic is junk.

A similar scenario can be found in other traffic: video downloads by individuals, the same old jokes making the rounds of message boxes and overfilling them, gigabytes and gigabytes of emails lying in everybody's mailboxes unread and undeleted just because users are too lazy to do some housekeeping on a regular basis, thousands of photos on each Facebook page lying unseen, and what not.

Isn't it an utter misuse of the available resources, just because the users are getting them free (free email accounts with unlimited storage, free FB accounts again with unlimited storage)?

And here we are, scratching our heads as to how to get to the next level of bandwidth, the next level of storage, the next level of data centers.

Isn't this the right time to make people understand the value of the resources they are using (wasting is the right word) by making such services chargeable?

I am sure the internet bandwidth available today would be much more than sufficient if we did away with these free services on the internet.

True! Facebook, Google, and others are driving the demand for higher bandwidth. 400GE can be built today (relatively speaking) using existing key components that are also commercially feasible (i.e., have material costs that result in affordable Ethernet products). Jumping to 1 Tb/s any time before about CY2020 is not commercially feasible. It is important to remember that the IEEE develops standards that have to meet all five criteria: Broad Market Potential, Compatibility, Distinct Identity, Technical Feasibility, and Economic Feasibility. 400GE can meet all five; 1 Tb/s Ethernet in the present day cannot.