Networking in the data center is evolving at an accelerating pace. The four-year cycles we saw in the transition from 10 Gbps to 25 Gbps are shrinking, and the 100 Gbps to 400 Gbps port cycle will occur even faster. The market will move from 56 Gbps SERDES to 112 Gbps SERDES in less than two years. There are a number of reasons why we are in the midst of more rapid technology transitions, but they can be summed up as more intelligent and efficient infrastructure: the introduction of the Smart NIC, together with years of operational data, is allowing the cloud providers to run at extremely high rates of utilization, which in turn is driving network bandwidth and topologies to evolve.

There were several key takeaways from the OIF 100G workshop in Santa Fe. First, the cloud providers are pushing all their suppliers to ship 56 Gbps SERDES in high volume today and to move to 112 Gbps SERDES as quickly as possible; they will adopt 112 Gbps SERDES before 2021 if the industry can supply enough volume. Second, there are a number of ways to get the industry there faster and at volume, including the use of gearboxes and retimers to reuse existing optics. What this means for the industry is tremendous opportunity.

Also interesting was the continued discussion of different port densities. It is possible to increase the density of a 1RU switch or a line card from 32 ports (25.6 Tb/s) to 36 ports (28.8 Tb/s). While there is work to be done on certain optics reaches and power budgets, the prospect of higher port density is promising. It also opens the door to a debate over what constitutes a top-of-rack switch versus an aggregation or end-of-row switch. Some cloud providers could choose the more traditional 48-port 100 Gbps switch with 400/800 Gbps uplinks instead of using a splitter cable. By moving the top-of-rack switch to the middle of the row, we could also see some unique deployments as 100 Gbps server connectivity arrives. 100 Gbps switches can also make their way into the enterprise as core/aggregation boxes and into traditional service providers, both of which will help drive additional port demand.
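The density figures above are simple multiplication; a quick sketch (illustrative Python, assuming 800 Gbps per port as implied by the totals) confirms them:

```python
# Aggregate switch capacity for a given faceplate port count (illustrative).
def aggregate_tbps(ports: int, port_speed_gbps: int) -> float:
    """Total switch capacity in Tb/s for a given port count and per-port speed."""
    return ports * port_speed_gbps / 1000

print(aggregate_tbps(32, 800))  # 25.6 Tb/s
print(aggregate_tbps(36, 800))  # 28.8 Tb/s
```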

As we look at demand in 2020/2021, it is also important to remember the size of the cloud, especially the US top five hyperscalers (Amazon, Apple, Facebook, Google, and Microsoft), which grew their data center equipment CAPEX in aggregate by 32% in 2017. It is likely that in three years (2020), their networking spend will be nearly twice what it was in 2017. With optics pricing and the increased use of DCI, networking spend could be even higher.
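As a rough sanity check on the doubling claim (compound-growth arithmetic only, not sourced data): doubling in three years corresponds to about a 26% annual growth rate, while three years at the 32% rate observed in 2017 would yield roughly a 2.3x multiple:

```python
# Compound-growth arithmetic behind the "nearly twice" claim (illustrative).
years = 3

# Annual growth rate needed to double spend in three years:
cagr_to_double = 2 ** (1 / years) - 1
print(f"{cagr_to_double:.1%}")  # ~26.0% per year

# Growth multiple if the 32% rate seen in 2017 were sustained:
multiple_at_32pct = 1.32 ** years
print(round(multiple_at_32pct, 2))  # ~2.3x
```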

2018 is off to an impressive start, with many 400 Gbps announcements and likely another record year of data center networking growth. With shipments of 400 Gbps starting in late 2018 and widespread adoption in 2019, it is important to start looking at what comes next in 2H19 and 2020. All current 400 Gbps announcements are based on 56 Gbps SERDES, i.e., eight lanes of 50 Gbps. This is an interim technology: the next important step, already demonstrated both electrically and optically, is single-lane 100 Gbps via a 112 Gbps SERDES. 400 Gbps ports will ultimately come in two waves, and the second wave is the more important one, serving as the enabler of a key building block for the market.
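The lane arithmetic behind the two waves can be sketched as follows (illustrative Python; the lane rates here are the 50 and 100 Gbps payload rates, not the 56/112 Gbps SERDES line rates):

```python
# Electrical lane count for a given port speed (illustrative).
def lanes_needed(port_gbps: int, lane_gbps: int) -> int:
    """Number of electrical lanes needed to reach a given port speed."""
    return port_gbps // lane_gbps

print(lanes_needed(400, 50))   # 8 lanes (56 Gbps SERDES era)
print(lanes_needed(400, 100))  # 4 lanes (112 Gbps SERDES era)
print(lanes_needed(800, 100))  # 8 lanes make an 800 Gbps port practical
```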

112 Gbps SERDES will be the next big building block for data center networks, and it is coming sooner rather than later. First, hyperscalers will adopt it as the path to 800 Gbps and beyond. Second, and shortly thereafter, enterprise networks, such as the campus core, and telco networks, such as backhaul, will benefit from the technology. 56 Gbps SERDES lacks these additional market drivers and is more of an incremental technology. In many ways 112 Gbps SERDES is like 28 Gbps SERDES: it will see widespread adoption beyond the hyperscalers.

The ability to use a gearbox and/or retimers to reuse existing optics, and the ability to rethink how a switch is built, give the market multiple paths to serial 100 Gbps. OFC 2018 also highlighted that multiple vendors in the ecosystem are looking to move quickly in this direction. Bringing the entire supply chain along will help mitigate the early supply shortages seen with 28 Gbps SERDES in 2016 and 2017. Keep in mind that hyperscalers buy in units of 100K or 1M at a time, so early volumes need to be large, with a strong set of suppliers underneath.

Many factors in the data center have caused bandwidth to increase more rapidly over the past several years. Hyperscalers, using a combination of hardware acceleration (Smart NICs) and software (SDN), are able to achieve higher utilization of their infrastructure. At the same time, hyperscalers are in the early stages of micro data center buildouts, DCI deployments, and Artificial Intelligence and Machine Learning offerings, all of which will quickly consume the currently available networking pipes. The increased demand from these new types of applications will require hyperscalers to move more quickly to next-generation speeds, something already visible in supply chain conversations about how fast higher-speed offerings are hitting the market.

It is likely that by the end of 2022, half the bandwidth shipping in the data center switching market will come from 112 Gbps SERDES based products. With the hyperscalers nearly twice the size they are today, it becomes very clear that the market is eagerly awaiting this next-generation technology.

Credo is a provider of high-performance mixed-signal integrated circuits used in high-bandwidth applications, ranging from cloud-scale data centers to high-performance computing to enterprise networks. Founded in 2008, our technologies enable optimized solutions that meet leading-edge speed, power, and signal-processing requirements.