This site may earn affiliate commissions from the links on this page. Terms of use.

IBM has announced a breakthrough in the field of silicon photonics — the first fully integrated wavelength multiplexed chip. This new device is designed to enable the manufacture of 100Gb/s optical transceivers and allow both the optical and electrical components to exist side-by-side on the same package. This type of on-die integration is critical to the long-term deployment of optical technology over short distances. But why deploy silicon photonics in the first place — and why has it taken decades of work from companies like IBM and Intel, with seemingly so little to show for it?

Silicon photonics — the long-term copper replacement

In theory, silicon photonics could solve some major problems associated with the continued use of copper interconnects. One of the biggest problems with copper wire is that it doesn’t scale nearly as well as other vital parts of a modern CPU. Past a certain point, it becomes physically impossible to make copper wires any smaller without compromising their performance and/or lifespan. In theory, optical interconnects could transmit data for far less power while simultaneously moving information much more quickly.

Silicon, unfortunately, is a poor native medium for optical devices. Because silicon has an indirect bandgap (making it an inefficient light emitter) and the scales of manufacturing are so different (optical waveguides and other components are far larger than the silicon CMOS devices they interact with), designing solutions that can scale effectively and affordably, integrate into existing CMOS manufacturing, and rely on silicon rather than costly alternative materials like gallium arsenide has proven extremely difficult.

The reason so many companies have pushed to bring this technology to market, despite the slow pace of progress, is that silicon photonics is generally believed to be necessary for exascale-level computing. Right now, copper and fiber typically split the transmission market by distance. Short-run cables between servers or racks tend to use copper, while longer distances rely on fiber.

The chart above is from an Intel presentation on silicon photonics, but it illustrates the cost and power consumption targets that manufacturers are trying to hit. The long-term roadmaps for silicon photonics enable bandwidth and energy-per-bit figures that no copper signaling can match. Bringing power down from 75 picojoules to 250 femtojoules per bit is a roughly 300-fold reduction, between two and three orders of magnitude.
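To put those targets in perspective, here is a quick sanity check of the numbers quoted above. The 100Gb/s line rate comes from the transceiver target mentioned at the top of the article; treat this as an illustrative calculation, not a spec:

```python
import math

# Energy-per-bit figures quoted above
copper_energy = 75e-12    # 75 picojoules per bit (copper signaling today)
optical_energy = 250e-15  # 250 femtojoules per bit (long-term optical target)

ratio = copper_energy / optical_energy
print(f"Reduction: {ratio:.0f}x (~{math.log10(ratio):.1f} orders of magnitude)")
# -> Reduction: 300x (~2.5 orders of magnitude)

# At the 100Gb/s line rate targeted for these transceivers, energy per bit
# translates directly into link power:
line_rate = 100e9  # bits per second
print(f"Copper:  {copper_energy * line_rate:.3f} W per link")   # -> 7.500 W
print(f"Optical: {optical_energy * line_rate:.3f} W per link")  # -> 0.025 W
```

In other words, an optical link could move the same traffic as a copper one for tens of milliwatts rather than several watts, which is why the power argument dominates datacenter-scale planning.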

As part of that effort, IBM’s research teams have worked to settle on the right process node for their circuit designs. This slide from 2012 shows how the 90nm – 65nm range represents the “sweet spot” for these kinds of circuits. While we’re used to smaller nodes offering substantial benefits for traditional CPU transistors, other kinds of components don’t see the same gains from scaling to smaller process geometries. IBM’s documentation refers to sub-100nm manufacturing, implying that the company standardized on 90nm or 65nm.

IBM isn’t giving timelines for when we might see more devices shipping with on-chip silicon photonics, but we can predict how the technology will roll out. Current cutting-edge designs put the optical components on the same physical package as the CPU, or at the edge of a motherboard. This makes the hardware useful for server-to-server links or possibly for peripheral connections. We expect to see silicon photonics roll out first in the HPC and scientific computing industries, where the sheer scale of many build-outs makes power conservation critical and government grants are available to ease the cost of initial deployment.

After decades of work, silicon photonics might seem like just another pie-in-the-sky idea that sounds great on paper and never pans out — but from HP to Intel to IBM, progress is happening in this field. Hardware may not roll out today or next year, but optical signaling is going to play a part in computing’s future — in the datacenter, even if nowhere else.

aww cmon guys! don’t be rude… at least it’s not his mother’s son’s cousin that bought those monitors after a $20,000-a-month pay-check ^^

eonvee375

don’t listen to them dude ^^ these are nice

BtotheT

75 picojoules vs 0.25 picojoules — big news in the next wave of transmission efficiency; now we need equally efficient computation gains somehow (Multi-Layer Laser Memory Crystals/Entanglement/Carbon Nano).
It’s funny to think that if our species sticks around long enough, the current era will look the way the 40s-50s look presently.

Matt Menezes

Yeah, I really get excited thinking what this place will be like in 2050 – if I make it that far.

BtotheT

I’m not sure if excitement is how I look at it.
Science both enables and enslaves the minds of man, with progress comes growth to both facets.

Joel Detrow

Well, for starters, don’t invest in beachfront property, ’cause a lot of it might be gone by 2050.

james johnson

And so Skynet begins.

Brian Klock

Holographic Crystal Storage! Where is it??

David in Texas

It’s been shipping for years. The problem is that modern SSDs are faster and cheaper.

Brian Klock

Talking about InPhase Technologies and the drive they developed that looked essentially like a DVD inside a disc caddy? That can’t be called Holographic Crystal Storage… that would be Holographic Disc Storage. Holographic Crystal Storage is close to a cube in shape and reads massive amounts of data in parallel, which I don’t think anyone has developed yet. That tech was bought by hVault and turned into a data archival system which works off of a library of up to 540 discs, or daisy-chained and boosted to 1620 discs. http://www.techradar.com/us/news/computing-components/storage/whatever-happened-to-holographic-storage-1099304/2

Holographic Crystal Storage simply isn’t workable. Imagine a petabyte in something the size of a sugar cube. If people actually want to get the data in and out, then they’re going to need to design an I/O bus capable of getting enough information in and out to make it worthwhile.

The speed of light is just too slow. The problem isn’t capacity, it’s getting info to/from it. Put a petabyte of data in there, and a bus that can only read/write 0.0001% of it before maxing out makes the device unusable.

The bottleneck is the I/O BANDWIDTH, not the size of the unit. Light just isn’t fast enough to make this a workable product.

This is why people use RAID subsystems: not only for redundancy (who wants all their eggs in one holographic crystal?), but also because smaller units give you the benefit of improved throughput via parallel I/O operations.
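A rough back-of-envelope calculation makes the throughput argument above concrete. The 1 GB/s per-channel bandwidth and the channel counts below are assumed, illustrative numbers, not the specs of any real holographic device:

```python
# All figures here are illustrative assumptions, not specs of a real device.
PETABYTE = 1e15    # bytes of stored data
channel_bw = 1e9   # assume 1 GB/s per read channel

# One channel: a full read of the crystal takes over a week
hours = PETABYTE / channel_bw / 3600
print(f"  1 channel: {hours:7.1f} hours")  # -> 277.8 hours (~11.6 days)

# Striping across parallel channels (the RAID argument above) is the only
# way that much capacity becomes usable:
for channels in (8, 64, 256):
    hours = PETABYTE / (channel_bw * channels) / 3600
    print(f"{channels:3d} channels: {hours:7.1f} hours")
```

The point stands regardless of the exact numbers: capacity grows with the cube of the medium’s dimensions, while readout bandwidth grows only with the number of parallel channels you can engineer, so I/O is the bottleneck.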

Brian Klock

Yeah, I mean from the beginning when they announced it (way back around 1990?) I understood that the read-write mechanism of something as extremely advanced as that, that has to process a massive number of bits in parallel, is going to be a major challenge to develop. I guess even after more than 2 decades no one has met the challenge.

I sort of thought the possibility was there… there are all sorts of high-speed data buses such as in modern supercomputers, so at least speaking about today, I would think the only challenge is to re-engineer the same technology to be more parallel. Which is no small challenge, but hopefully within our grasp finally? I would think.

David in Texas

Fair enough; 20 years ago was a different time. Today the problem is not saving data in a small amount of space. (Well, heat is a pesky problem, and so is error correction, but ignore those for the moment.)

The problem is reading/writing to it so we don’t have to queue up I/Os and wait. The bigger the repository of data, the more I/O requests it is expected to have, and the greater need for faster pipes.

As I said, light is too slow for mass quantities of data throughput. What we NEED is quantum entanglement for I/O. This will allow us to transfer data from A->B over an unlimited distance in exactly … wait for it … ZERO seconds.

This may be hundreds of years away, but if we could have a storage device that uses quantum entanglement then we would have infinite data transfer speed. Now THAT would be interesting.

Brian Klock

All I really want to say about this sort of thing is … impress me.

cityguyusa

It seems logical because of the heavy use of silicon in electronics, but what about graphene and some of these newer ideas?
