Intel: the future of electronics is a hybrid silicon “laser device”

Intel has demonstrated a 50Gbps fiber optic interconnect using the company's …

Intel announced today that it has reached a milestone in its efforts to replace copper wiring with light by creating a stable, 50Gbps link between two devices using fiber optics. Dubbed "silicon photonics," the technology is the basis for a fiber optic interconnect that can theoretically be scaled to 1Tbps for device-to-device and wide-area networking connections. Those same innovations could one day be used to replace copper interconnects in electronic systems.

One of the key innovations that drives this technology is research conducted in concert with UC Santa Barbara to develop hybrid silicon lasers. Using a unique process to bond indium phosphide to silicon, along with carefully etched gratings formed in silicon waveguides, designers are able to create variable-wavelength solid state laser emitters merely by manipulating the etching pattern.

The light generated by these tiny lasers can be guided along etched channels called waveguides and modulated with tiny silicon modulators, which have ramped from a few hundred megahertz at the beginning of the millennium to current versions fast enough to transmit 40Gbps on a single channel. Separate channels are muxed together and sent along a single fiber optic cable. On the receiving end, the composite light is demuxed into its separate wavelengths, and each one is fed into a separate photodetector for decoding.

The entire transmit and receive modules are single chips with special aligning pins to connect the fiber at each end. The current implementation uses four hybrid silicon lasers at different wavelengths, each encoded at 12.5Gbps, for a total of 50Gbps throughput. In the lab, the link was tested continuously for 27 hours, transferring over a petabyte of data with nary an error. Intel says this translates to a bit error rate of less than 3×10⁻¹⁵.
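For the curious, the demo figures roughly hang together. Here's a quick back-of-the-envelope sketch in Python; the zero-error confidence bound is a standard statistical rule of thumb, not Intel's stated methodology:

```python
# Sanity-check the demo figures: 4 wavelengths x 12.5 Gbps over a 27-hour run.
channels = 4
per_channel_gbps = 12.5
link_bps = channels * per_channel_gbps * 1e9      # 50 Gbps aggregate

test_seconds = 27 * 3600
bits_sent = link_bps * test_seconds               # ~4.9e15 bits over the run

# With zero observed errors, the statistical "rule of three" gives a ~95%
# confidence upper bound on the bit error rate of about 3 / bits_sent.
ber_upper_bound = 3 / bits_sent                   # ~6e-16, under Intel's 3e-15
print(f"{bits_sent:.2e} bits sent, BER upper bound ~{ber_upper_bound:.1e}")
```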

The bandwidth can be scaled by increasing the number of lasers operating at distinct wavelengths, or by increasing the encoding speed. Bumping the number of channels to 25 and the encoding rate to 40Gbps would yield speeds of 1Tbps. Intel believes this could be pushed even further, but to give you an idea of what kind of data you can shuffle around at these speeds, consider this: at 50Gbps, in less than one second you could transfer an entire HD feature-length film; at 1Tbps, you could move as many as three whole seasons of Law & Order. At that speed, you could download the entire printed collection of the Library of Congress in under two minutes.
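To make the scaling arithmetic concrete, here's a short sketch; the file sizes are illustrative assumptions (a ~6GB 720p H.264 film, a commonly cited ~10TB estimate for the Library of Congress print collection), not figures supplied by Intel:

```python
# Scaling arithmetic for the WDM link: independent wavelengths simply add.
def seconds_to_transfer(size_bytes, rate_bps):
    return size_bytes * 8 / rate_bps

demo_rate   = 4 * 12.5e9    # 50 Gbps: the demo's 4 wavelengths at 12.5 Gbps
scaled_rate = 25 * 40e9     # 1 Tbps: 25 wavelengths at 40 Gbps each

print(seconds_to_transfer(6e9, demo_rate))       # ~0.96 s: an HD film in under a second
print(seconds_to_transfer(10e12, scaled_rate))   # ~80 s: ~10 TB in under two minutes
```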

The company plans to ramp up the tech for volume production, and expects to have it "widely deployed" by 2015. "We've demonstrated that, in principle, we have all the pieces," said Dr Mario Paniccia, Intel Fellow and director of the company's Photonics Technology Lab, during a press conference this afternoon. "We need to develop a high-volume manufacturing process, and optimize power, packaging, and assembly. The process technology is CMOS-like, which is within our core competency. Our biggest challenge is testing its reliability, but we don't foresee any fundamental issues."

"This is the future path for increasing bandwidth," he added.

Intel CTO and director of labs Justin Rattner said the company isn't just eyeing high-end interconnect applications, either. "Current tech is about $100 per port; we're looking to take this down to $1 per port."

We asked how Intel's silicon photonics tech relates to its upcoming LightPeak standard. "Current LightPeak uses discrete optical components, and that's currently a 10Gbps tech," Rattner told Ars. "Scaling that up to higher speeds is going to be very challenging. Using multiple VCSELs would drive cost up. The second or third generation of LightPeak would be an excellent candidate for silicon photonics."

While silicon photonics is currently being used for device-to-device connections, the technology could be used to move data from one chip to another inside a PC or mobile device, for instance. "The real benefit will be realized when we begin to integrate photonic devices together, which can enable things that weren't possible or practical before," Rattner said.

Is it just me, or does "Silicon Photonics" feel like it's pulled out of a SciFi novel, but in a good way? PCI Express is 128 Gbps, so this technology is within reach of one of the fastest on-board copper interconnects.

Funny, I never heard of "USC Berkeley". I'm familiar with this research, and I think you probably mean UCSB, or University of California, Santa Barbara. Nice job giving credit to not just one, but _two_ different schools.

Edit: To clarify, the article originally attributed the research to a "USC Berkeley". As of this edit it had been changed to "UC Berkeley" which, while the name of an actual university, is still _the_wrong_university_.

Our town already has fiber (just outside Edmonton, Alberta, Canada). But this tech would allow even faster fiber. Currently the fiber offered here is 15Mb/s.

Unfortunately not all fiber is created equal, and just like we have to keep upgrading twisted-pair copper with each new 'Category' of bandwidth specs, the same will likely happen with fiber media. The good news is that the headroom for each generation of fiber generally offers a lot more bandwidth than for each new generation of copper. (Similar to how Ethernet speed is increasing in powers of ten, and we're currently in a zone with some legs to it - 10Gbps Ethernet, going on 100G).

Funny, I never heard of "USC Berkeley". I'm familiar with this research, and I think you probably mean UCSB, or University of California, Santa Barbara. Nice job giving credit to not just one, but _two_ different schools.

Edit: To clarify, the article originally attributed the research to a "USC Berkeley". As of this edit it had been changed to "UC Berkeley" which, while the name of an actual university, is still _the_wrong_university_.

You're right, it's UCSB. I never got to finish my final edits before my Comcast went out earlier this evening, but it's been corrected now. Thanks for the sharp eye, though.

What chance do the rest of us have when even Ars and Intel get bits and bytes mixed up? The article makes claims of what you can transfer at 50Gbps and 1Tbps, which can only be achieved at 8 times that speed. The animation was discussing scaling up to 1 'terabit of data per second', while displaying a speedometer with 1TB/s.

Our town already has fiber (just outside Edmonton, Alberta, Canada). But this tech would allow even faster fiber. Currently the fiber offered here is 15Mb/s.

Unfortunately not all fiber is created equal, and just like we have to keep upgrading twisted-pair copper with each new 'Category' of bandwidth specs, the same will likely happen with fiber media. The good news is that the headroom for each generation of fiber generally offers a lot more bandwidth than for each new generation of copper. (Similar to how Ethernet speed is increasing in powers of ten, and we're currently in a zone with some legs to it - 10Gbps Ethernet, going on 100G).

I'm not sure that's proper fiber, either. They mostly use fiber for the main connections and then copper out to customers. There's probably one fiber link to the town, and then you have a modem and a bunch of copper. Seems like a waste of money (fiber is expensive) to put full fiber into your house and only get 15Mb/s. -_-

One thing I've always wondered... how do they handle tens of Gb/s of data at either end of these connections if they can't make transistors go much faster than ~4GHz stable? Generally speaking, I mean, not specific to this article.

Is it just me, or does "Silicon Photonics" feel like it's pulled out of a SciFi novel, but in a good way? PCI Express is 128 Gbps, so this technology is within reach of one of the fastest on-board copper interconnects.

Yeah, like "robotics", he he he? This is so cool. I hope I live long enough to have a motherboard and video card that run on light instead of copper.

One thing I've always wondered... how do they handle tens of Gb/s of data at either end of these connections if they can't make transistors go much faster than ~4GHz stable? Generally speaking, I mean, not specific to this article.

That's a good question. I presume that before transmission and after decoding it's largely processed in parallel. For instance, if you're writing the incoming data to memory (or cache), there's no need for any one transistor to touch all the incoming data.

That still leaves the question of how it's encoded/decoded for the laser, but I imagine it's a combination of working in parallel and using simple, fast circuits. Someone more expert than me can hopefully weigh in on this.
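For the curious, here's a minimal sketch of the serializer/deserializer (SerDes) idea being described, assuming a 64-bit parallel datapath; the width is illustrative, since the article doesn't say what Intel actually uses:

```python
# Toy serial-to-parallel conversion: a 12.5 Gbps bitstream grouped into
# 64-bit words only needs parallel logic clocked at roughly 195 MHz.
SERIAL_RATE_BPS = 12.5e9    # one wavelength's line rate, from the article
WORD_WIDTH = 64             # assumed datapath width, not Intel's actual design

parallel_clock_hz = SERIAL_RATE_BPS / WORD_WIDTH
print(f"parallel clock: {parallel_clock_hz / 1e6:.0f} MHz")   # ~195 MHz

def deserialize(bits, width=WORD_WIDTH):
    """Group a serial bitstream into parallel words."""
    return [bits[i:i + width] for i in range(0, len(bits), width)]

words = deserialize([1, 0] * 64)    # 128 toy bits -> two 64-bit words
print(len(words), len(words[0]))    # prints: 2 64
```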

What chance do the rest of us have when even Ars and Intel get bits and bytes mixed up? The article makes claims of what you can transfer at 50Gbps and 1Tbps, which can only be achieved at 8 times that speed. The animation was discussing scaling up to 1 'terabit of data per second', while displaying a speedometer with 1TB/s.

I never got it mixed up—I consistently referred to bits per second (bps).

The information about what could be transferred at that speed was provided by Intel. In one second you could transfer 50Gb, or 6.25GB, which seems more than sufficient for a 720p HD movie encoded in H.264. At 1Tbps, in one second you could transfer 128GB. Assuming 22 episodes at 44 minutes each and a 2GB/hr average encoding rate, three seasons would be 96.8GB. If you assume a 3GB/hr average, that's 145.2GB, which is pretty close. So those seem like reasonable estimates.
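If you want to check the math yourself, here's the arithmetic; the episode count, runtime, and encoding rates are the assumptions stated above:

```python
# Bits-to-bytes check for the figures above.
gb_per_sec_50g = 50 / 8        # 6.25 GB transferred per second at 50 Gbps
gb_per_sec_1t  = 1000 / 8      # 128 GB per second at 1 Tbps

season_hours = 22 * 44 / 60    # ~16.1 hours of video per season
low  = 3 * season_hours * 2    # ~96.8 GB for three seasons at 2 GB/hr
high = 3 * season_hours * 3    # ~145.2 GB for three seasons at 3 GB/hr
print(gb_per_sec_50g, gb_per_sec_1t, round(low, 1), round(high, 1))
```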

It does seem like Intel's video made a flub, but we can definitely tell bits from bytes here in the Orbiting HQ.

This is huge... I've been using fiber optics in my music equipment for years, and I've been wondering when I would see it deployed in next-generation computers.

As far as handling all that data: each stream is a completely separate stream of 1s and 0s. I'm assuming each discrete wavelength of light used to send data has a detector on the other end that reads only that specific wavelength, so each detector interprets just one stream of 1s and 0s. For each added wavelength you have to add one more encoder and decoder, which means it's highly parallel, with each stream independent of the data in the other streams. It's much the same way the airwaves are divided by frequency, so that each channel gets its own slice of spectrum to transmit uninterrupted data.
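A toy model of that separation, with made-up wavelength values (the article doesn't list the actual grid):

```python
# Toy wavelength-division multiplexing: each bitstream rides its own
# wavelength, and a detector tuned to one wavelength sees only its bits.
streams = {
    1270: [1, 0, 1, 1],    # wavelength in nm -> that channel's bits (assumed values)
    1290: [0, 0, 1, 0],
    1310: [1, 1, 0, 1],
    1330: [0, 1, 1, 1],
}

def mux(channels):
    """Combine all per-wavelength streams onto one 'fiber'."""
    return [(wl, bit) for wl, bits in channels.items() for bit in bits]

def demux(fiber, wavelength):
    """Recover a single stream by filtering on its wavelength."""
    return [bit for wl, bit in fiber if wl == wavelength]

fiber = mux(streams)
assert demux(fiber, 1310) == streams[1310]   # each stream stays independent
```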

Of course, the next level would be muxing and demuxing multiple light frequencies, which would require much faster transistors to take advantage of. It will be a while, of course, because we can hardly keep up with computing this flood of data as it is.

It's gonna be interesting to see computer systems that utilize this new tech. Imagine having a general-purpose RAM bank that supplies memory for all the computers in your house, much the way we use NAS for hard drive storage.

How is this news? Ultra long-haul fiber-optic transport systems from Alcatel-Lucent can currently provide up to 128 channels each of 100 Gbps transmission over standard singlemode fiber (10 micron diameter). Is this advance from Intel just using smaller lasers? Or am I missing something here...

One thing I've always wondered... how do they handle tens of Gb/s of data at either end of these connections if they can't make transistors go much faster than ~4GHz stable? Generally speaking, I mean, not specific to this article.

This tech is not for long distances, so it won't replace your internetz. It's meant for going from chip to chip over a distance of inches or feet. It could simplify board design by requiring only a single optical wire instead of many matched-length copper traces. Latency might be much better too, depending on the speed of the specialized components they mentioned.

How is this news? Ultra long-haul fiber-optic transport systems from Alcatel-Lucent can currently provide up to 128 channels each of 100 Gbps transmission over standard singlemode fiber (10 micron diameter). Is this advance from Intel just using smaller lasers? Or am I missing something here...

Yes, smaller. Also, as mentioned above in the thread, it can be fabricated on a standard silicon wafer, so it can be integrated into an IC. This allows the optics to be added without requiring much more expensive wafers.

So instead of a dedicated optical I/O chip or chips, it's small enough to put all the optics on the main chip itself.

Oh, and no, this is not designed to replace that long haul equipment you refer to. Much lower power involved here.