This migration ultimately will spur an Ethernet switch refresh cycle and generate an estimated $50 billion in revenue for suppliers over the next five years. However, the 10Gbps transition has been slow and is following a different path from that of 1Gbps.

This article describes the current status of the 10Gbps server migration and explores the factors impacting it. We then discuss our view on the catalysts as well as the timing for 10Gbps to become the mainstream server network choice.

10Gbps server network ports have been shipping in material quantities for more than five years. Nevertheless, in mid-2012, we estimate that 10Gbps ports were integrated on less than 20% of servers, whereas 1Gbps ports had reached a server penetration rate of well over 50% at the same point in their ramp. Why has the migration to 10Gbps been so slow? We believe it is due to five factors:

" Price too high: Based on scenarios of typical data center deployments, connecting servers at 10Gbps Ethernet rather than 1Gbps provides a price advantage only when servers need more than five 1Gbps connections (Figure 1). However, there is a catch. Those scenarios assume 10Gbps Ethernet switches in the core. The use of 40Gbps and 100Gbps speeds -- which are not yet widely available -- may shift the point of price parity.

" 10G Base-T not yet ready: 10G Base-T technology, which allows 10Gbps Ethernet to run over twisted pair, cannot currently provide a low-enough level of power consumption and latency. For example, the current 10G Base-T power consumption approximates 2.0 Watt per port for a distance of 10 meter, and we believe it needs to be less than 1.0 Watt as it is with 1G Base-T. An alternative technology, the SFP+ DAC (Direct Attached Copper) has been invented to overcome the 10G Base-T issues, but it is very expensive for mass deployment.

" Continued interest in 1Gbps: The 1Gbps speed is still attractive for IT managers who want to isolate certain types of traffic, such as management, for better bandwidth control or security reasons. We have observed that many Romley-based server offerings by major vendors -- HP, IBM and Dell -- are integrating either a quad port 1Gbps connection or a mix of two ports 1Gbps and two ports 10Gbps, which is a testament to continued customer interest in 1Gbps.

" Choice of protocol: IT managers are still evaluating the 10Gbps Fibre Channel over Ethernet (FCoE) to converge their Storage Area Network (SAN) and Ethernet traffic, which might be delaying 10Gbps purchases.

" Choice of brand: While the 1Gbps shipments are dominated by two vendors, Broadcom and Intel, the 10Gbps market is attracting new entrants that have loyal customer bases, such as Fibre Channel server connectivity vendors Emulex and QLogic, and InfiniBand server connectivity vendor Mellanox. Hence, server vendors have to find some ways to accommodate various customers' brand requirements.

10Gbps server migration: Catalysts and outlook

Given the wide range of choices and the uncertainties about user needs and wants, no single solution is likely to satisfy the requirements of all customers. Hence suppliers are hesitant to pursue the classic integrated approach of soldering a LAN chip onto the server motherboard -- called LAN on motherboard (LOM) -- that would otherwise facilitate an early migration to 10Gbps. Instead, major server vendors such as HP, IBM and Dell are using modular integrated network cards, called daughter adapters or modular LOM, to help IT managers through the transition by providing a choice of speed, type of network connectivity (10G Base-T or SFP+ DAC), protocol and brand.

While these new adapters provide more flexibility to end users, they come at a premium: 10Gbps daughter adapters cost two to three times more than their LOM counterparts. We estimate the price of a 1Gbps server LOM port at around $7, a 10Gbps LOM port at about $24, and a 10Gbps daughter adapter port at over $65 (note that these are prices to the server vendor, not the end user).
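The daughter-adapter premium follows directly from the per-port estimates above. A quick check of the arithmetic (using the article's estimated prices to the server vendor):

```python
# Per-port price estimates to the server vendor, from the article.
lom_1g = 7.0          # 1Gbps LOM port
lom_10g = 24.0        # 10Gbps LOM port
daughter_10g = 65.0   # 10Gbps daughter adapter port

# Daughter adapter vs. 10Gbps LOM: falls in the "two to three times" range.
premium = daughter_10g / lom_10g
print(f"Daughter adapter premium over 10Gbps LOM: {premium:.1f}x")  # 2.7x

# For context, the 10Gbps LOM port itself carries a premium over 1Gbps LOM.
print(f"10Gbps LOM premium over 1Gbps LOM: {lom_10g / lom_1g:.1f}x")
```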

Given the daughter adapters' price premium, the question is whether 10Gbps LOM server connectivity is required for a mass migration to 10Gbps. Certainly, a move toward LOM would accelerate the 10Gbps transition. However, we believe daughter adapters may persist in the market for two or more years. We also believe that a mass migration to 10Gbps could still occur without a full conversion to LOM, since the server network port comprises less than 10% of the total price of server connectivity, as illustrated in Figure 2.

We believe the vendor decision to switch from a daughter adapter to a true LOM will depend on several considerations:

" Uncertainties about users' requirements. We believe users' move towards a more common and converged set of choices will favor server vendors' transition towards a LOM solution.

" The ability to justify the price premium of daughter adapters, which again depends on users' need for flexibility.

" The degree of data center consolidation and the move to the cloud. It is not yet clear whether enterprises and small and midsize businesses (SMBs) will keep their own data centers or outsource them to service providers. We believe an outsourcing model would favor a move toward true LOM, as cloud providers usually can specify their needs and buy LOM in bulk.

Next, we believe the maturity of 10G Base-T technology is a key catalyst for a 10Gbps mass migration. SMBs are very dependent upon 10G Base-T, as they will continue to have mixed rack environments with an installed base of servers using 1G Base-T. Without 10G Base-T, smaller IT shops would require two switches, which is not optimal. Larger enterprises tend to purchase servers in racks, so interoperability with an installed base of servers is less critical. We currently expect to see major improvements in 10G Base-T technology with the next-generation 28 nm PHYs, planned for the 2014-2015 time frame.

Furthermore, we believe the price per 10Gbps switch port must come down to propel migration towards 10Gbps. As illustrated in Figure 2, switches and optics comprise the vast majority of the price per 10Gbps server connectivity. We expect a strong price-per-port decline from today's 10Gbps switch products to those that will be shipping in the first quarter of 2013. We anticipate an Ethernet switch refresh cycle at the beginning of 2013 based on Broadcom's Trident II silicon, which will enable higher switch port density and result in a price-per-port decline of more than 10%.

Now the question is whether 40Gbps and 100Gbps are needed in the core to aggregate 10Gbps servers. We don't think they are critical, because IT managers are changing their network architectures to match current traffic needs: they are moving away from a three-tier network architecture, with two tiers of modular switches in the core, toward a flatter design that aggregates fixed top-of-rack switches.

Older architectures were designed to move traffic from servers to end users (referred to as north-south traffic). Current architectures flatten the number of tiers to accommodate machine-to-machine traffic (referred to as east-west traffic), which now comprises the bulk of the bandwidth used in data centers. The change in traffic flow topology means less traffic flows through the core of most data centers, removing some of the legacy requirements for higher-speed switching cores to drive adoption of server networking speed upgrades.

The Dell'Oro Group predicts a mass migration of servers to 10Gbps ahead of a core switch migration to higher speeds. We expect 10Gbps server connectivity to out-ship 1Gbps sometime between 2014 and 2015 (Figure 3), which would coincide with the 10G Base-T maturity. We do not anticipate that the higher-speed 40Gbps and 100Gbps core switches will eclipse 10Gbps for many years.