Inphi Debuts Data Center Interconnect Gamechanger

ANAHEIM, Calif. -- OFC 2016 -- In collaboration with its customer Microsoft, Inphi has released a reference design for a 100G interconnect developed for the specific purpose of stitching together regional data centers so that they can, when necessary, act just like a hyper data center.

The approach enables data centers within 80km of each other to be connected almost directly switch-to-switch, running Ethernet essentially directly on DWDM, with the only equipment between the switches being the necessary amplifiers and multiplexers/demultiplexers.

In other words, the technology will, over distances of 80km or less, obviate the need for the kind of dedicated data center interconnect systems that have been developed by the likes of Ciena, Infinera and others. The approach will, of course, also eliminate the extra interconnect otherwise needed to integrate those systems. Furthermore, data center operators will save on the energy they will no longer have to draw to power those systems.

Microsoft plans to begin deploying the technology at scale in the second half of the year, the company's senior director of network architecture, Jeff Cox Sr., told Light Reading.

Three years ago, Microsoft projected how its data center needs would develop and determined that a need for inexpensive intermediate-distance 100G would arise, but discovered that no such thing existed and nobody was working on it. There were 100G standards and technologies (such as IEEE 802.3ba's CR4, SR4 and LR4 variants) for point-to-point links of 10km and shorter -- essentially for links within data centers. There were 100G standards and technologies (OIF, coherent QPSK, 16-QAM) for long-haul connections of 100km or more. But for traversing intermediate distances, from a few kilometers to about 80km, the only standardized options were 10G.

Microsoft could not use the 100G coherent solutions for long-haul interconnect in these intermediate-distance applications because those solutions wouldn't scale, were too large, and too expensive, Cox said.

Meanwhile, the industry was fixated on replacing 10G links with 40G technologies. (Google was among those who demanded 40G).

Microsoft realized that if it wanted 100G for intermediate distances by the time 2016 rolled around, it would have to spur the development itself. So in 2013, Microsoft put out an RFI for an inexpensive 100G optical interconnect that would operate in the range between 10km and 100km. It chose to work with Inphi Corp. because Inphi's proposal seemed the most straightforward solution, Cox said.

Inphi has now delivered on its proposal. Its ColorZ reference design relies on silicon photonics to reach 100G for 80km DWDM data center interconnect (DCI). The interconnect is implemented in the standard QSFP28 form factor. (Inphi noted that 100G DWDM interconnect in QSFP28 is otherwise still unavailable.)

ColorZ delivers up to 4 Tbit/s of bandwidth over a single fiber, Inphi said. It uses four-level pulse amplitude modulation (PAM4) signaling, a relatively new modulation technique the IEEE originally adopted for 400G Ethernet and which has rapidly gained favor in other Ethernet implementations. It was Inphi's PAM4 work that originally caught Microsoft's attention.
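The appeal of PAM4 is easy to sketch: with four amplitude levels instead of two, each symbol carries two bits, doubling the bit rate at a given baud rate relative to binary NRZ. Here is a minimal illustration using a generic Gray-coded level mapping -- an assumption for demonstration purposes, not ColorZ's actual line coding:

```python
# Generic PAM4 mapping: each symbol carries 2 bits as one of 4 levels.
# Gray coding (00 -> -3, 01 -> -1, 11 -> +1, 10 -> +3) is commonly used
# so adjacent levels differ by a single bit; this is an illustrative
# sketch, not Inphi's specified encoding.
GRAY_MAP = {(0, 0): -3, (0, 1): -1, (1, 1): 1, (1, 0): 3}

def pam4_encode(bits):
    """Map an even-length bit sequence to PAM4 amplitude levels."""
    assert len(bits) % 2 == 0, "PAM4 consumes bits two at a time"
    return [GRAY_MAP[(bits[i], bits[i + 1])] for i in range(0, len(bits), 2)]

# Eight bits become four symbols: the same symbol rate carries
# twice as many bits as NRZ would.
symbols = pam4_encode([0, 0, 0, 1, 1, 1, 1, 0])
print(symbols)  # [-3, -1, 1, 3]
```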

The reference design has a few more registers than a standard module, explained Inphi CTO Radha Nagarajan. Switch suppliers will consequently need to develop a driver for ColorZ, but having to write new drivers is standard operating procedure for most new interconnect anyway, he said.

Since ColorZ is implemented in standard modules, the parts plug directly into standard 100G switches. The first two switch vendors ready to work with the new modules are Cisco and Arista; the first amplifier vendor to work with the parts is ADVA. Microsoft and Inphi said they expect other vendors in both categories to follow.
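For scale, the 4 Tbit/s per-fiber figure quoted above works out to dozens of 100G wavelengths sharing one fiber via DWDM. A back-of-the-envelope check (illustrative arithmetic only; the article does not give ColorZ's actual channel plan or grid spacing):

```python
# Rough capacity check: how many 100G DWDM channels fill 4 Tbit/s?
# Assumes one 100G channel per QSFP28 module, which the article implies
# but does not spell out.
fiber_capacity_gbps = 4000   # 4 Tbit/s per fiber, per Inphi
channel_rate_gbps = 100      # one 100G channel per module

channels = fiber_capacity_gbps // channel_rate_gbps
print(channels)  # 40 wavelengths multiplexed onto one fiber
```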

Cox, who works in Microsoft's Azure operation, said that while Microsoft needs to have data centers in major metro areas, some metros simply aren't large enough to generate the demand that would justify the expense of building a hyper data center there. Hence the need to stitch together smaller regional data centers into a "virtual" hyper data center.

Inphi quoted ACG Research projecting that the global market for optical DCI will increase from $1.1 billion in 2014 to $4.7 billion in 2019, a compound annual growth rate of 44.9%.
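As a sanity check on the quoted figure, compound annual growth rate is (end/start)^(1/n) − 1 over n compounding intervals. Using the rounded $1.1B and $4.7B endpoints, treating 2014-2019 as four intervals gives about 43.8%, near the quoted 44.9% (the small gap presumably reflects rounding in ACG's underlying numbers):

```python
# Compound annual growth rate from endpoint revenues:
# CAGR = (end / start) ** (1 / n) - 1 over n compounding intervals.
start, end = 1.1, 4.7  # ACG Research figures, in billions of dollars

cagr_4 = (end / start) ** (1 / 4) - 1  # 2014 -> 2019 as four intervals
cagr_5 = (end / start) ** (1 / 5) - 1  # ...or as five full years

print(f"{cagr_4:.1%}, {cagr_5:.1%}")  # roughly 43.8%, 33.7%
```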

Irresponsible Hype
"Gamechanger" is such an overused description in most cases, but to use it on a product that has hardly been proven to actually work is nothing short of editorial hype. Thanks LR for not disappointing as usual.

After more industry meetings, we believe Microsoft is likely to deploy Inphi's direct-detect solution in shorter links of 10 miles or less, associated with the large data center campuses it is planning to connect in a mesh.

Re: More on this
"In other words, the technology will, over distances of 80km or less, obviate the need for the kind of dedicated data center interconnect systems that have been developed by the likes of Ciena, Infinera and others." So it turns out Inphi's design "obviates" very little if any Infinera gear. Thanks for the correction Mr. Santo.

Re: More on this
When you see the revenues that Inphi is expected to realize from this, it looks even less spectacular. This is an excerpt from Barron's:

Douglas Freedman with Sterne Agee CRT, who rates Inphi his "top pick" among small-cap names, writes today that the partnership is "a positive both financially and strategically" and that it "could represent an incremental revenue of $8M and $25M in F2016 and F2017, respectively."

More on this
Keep in mind that this is for a specific distance segment. The technology for interconnect within the data center -- <2km -- is more or less settled for now. The technology for >100km is more or less settled for now. Inphi's tech is for distances in between, and the applications for those distances are specific and (at least at the moment) limited: the big one for Microsoft is connecting regional data centers. Microsoft will still need to connect its mega data centers, and not much is going to change there. The new Inphi tech is less important to companies that rarely do anything but mega data centers (or hyper data centers, pick your terminology), such as Facebook and Google.

We mentioned Infinera, but the company says it doesn't actually sell many DCI boxes in this particular application. The companies that do sell boxes for this application will see diminishing sales, since DCI boxes will be obviated only here; they will continue to be required in other applications.

Not a commercialized product yet? Yes, true. But I cannot imagine why Microsoft would tell me they are definitely deploying this technology at scale in the second half of this year unless they were already fully convinced it will work.

Re: Stock positions
Sorry if I touched a nerve. The hyperbole seemed overboard. I like to see how products perform in the marketplace, BEFORE I call them gamechangers and bombshells. Let's touch base at the end of next year to see how things went.