Back at IDF 2010, we wrote about Intel Light Peak nearing its eventual launch in 2011. Back then, the story was a 10 Gbps or faster physical link tunneling virtually every protocol under the sun over optical fiber. Though an optical physical layer provided the speed, in reality the connector and physical layer weren't as important as the tunneling and signaling going on atop them. The dream was to provide a unified interface with enough bandwidth to satisfy virtually everything desktop users need at the same time - DVI, HDMI, DisplayPort, USB, FireWire, SATA, you name it. Daisy chain devices together, and connect everything with one unified connector and port. At IDF, we saw it moving data between an Avid HD I/O box and a Western Digital external RAID array while simultaneously outputting audio and video over HDMI. Intel also had another live demo running at over 6.5 Gbps.

That dream lives on today, but sans optical fiber and under a different name. Intel's codename "Light Peak" is now Thunderbolt. In addition, instead of optical fiber, ordinary copper does an adequate job until suitably cheap optical components are available. It's a bit tough to swallow that optical fiber for the desktop still isn't quite ready for mainstream consumption - issues like bend radius and proper connectors were already mitigated - but copper is good enough in the meantime. Thunderbolt launched with the 2011 MacBook Pro, and though the interface isn't Apple exclusive, it will likely not see adoption in the PC space until 2012.

Although Thunderbolt in its launch incarnation is electrical, future versions will support optical connections. When the transition to optical takes place, legacy electrical devices will work through cables with an electro-optical transceiver at each end, so there won't be any need for two separate kinds of cables. The optical version of Thunderbolt is allegedly coming later this year.

Thunderbolt shares the same connectors and cabling with Mini DisplayPort; however, Thunderbolt cables have different, tighter design requirements to fully support Thunderbolt signaling. DisplayPort is an interesting choice since it's already one of the fastest (if not the fastest) desktop interfaces, topping out at 17.28 Gbps in DisplayPort 1.2 at lengths under 3 meters. At longer distances, physics rears its ugly head, and throughput drops off over electrical links. Of course, the eventual advantage of moving to photons instead of electrons is greater distance without picking up much latency.
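The 17.28 Gbps figure falls out of simple arithmetic: DisplayPort 1.2 runs four lanes at the HBR2 rate of 5.4 Gbps each, and 8b/10b line coding leaves 80% of that raw rate for payload. A quick sanity check:

```python
# Where DisplayPort 1.2's 17.28 Gbps figure comes from: four lanes at the
# HBR2 rate of 5.4 Gbps each, with 8b/10b coding leaving 80% for payload.

lanes = 4
hbr2_gbps = 5.4          # raw per-lane signaling rate
coding_efficiency = 0.8  # 8b/10b line coding

effective = lanes * hbr2_gbps * coding_efficiency
print(f"{effective:.2f} Gbps")  # 17.28 Gbps
```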

Thunderbolt is dual-channel, with each channel supporting 10 Gbps of bidirectional bandwidth. That’s a potential 20 Gbps of upstream and 20 Gbps of downstream bandwidth. The connection supports a daisy chain topology, and Thunderbolt also supports power over the cable, 10W to be precise. We aren't sure at this time what the breakdown on voltage/amperage is though.
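The aggregate numbers are easy enough to check, along with a hypothetical transfer-time example (the 50 GB file here is just an illustration; real-world throughput will be lower after protocol overhead):

```python
# Back-of-the-envelope Thunderbolt bandwidth figures (numbers from the text;
# real-world throughput will be lower due to protocol overhead).

GBPS = 1e9  # bits per second

channels = 2
per_channel_gbps = 10  # each channel: 10 Gbps in each direction
aggregate_each_way_gbps = channels * per_channel_gbps

print(f"Aggregate per direction: {aggregate_each_way_gbps} Gbps")

# Hypothetical example: copying a 50 GB file over one 10 Gbps channel
file_bits = 50 * 8e9
seconds = file_bits / (per_channel_gbps * GBPS)
print(f"50 GB over one channel: {seconds:.0f} s")  # 40 s, ignoring overhead
```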

Back when it was Light Peak, the goal was to tunnel every protocol under the sun over a common fast link. Multiplex everything together over one protocol-agnostic link, and then you could drop relevant data for each peripheral at each device in the daisy chain. Up to 2 high-resolution DisplayPort 1.1a displays and 7 total devices can be daisy chained. Thunderbolt instead carries just two protocols - DisplayPort and PCI Express. Tunnel a PCIe lane over the link, and you can dump it out on a peripheral and use a local SATA, FireWire, USB, or Gigabit Ethernet controller to do the heavy lifting. Essentially any PCI Express controller can be combined with the Thunderbolt controller to act like an adapter. If you want video from the GPU, a separate dedicated DisplayPort link will work as well. Looking at the topology, an x4 PCI Express link is required in addition to a direct DisplayPort connection from the GPU.
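The tunneling idea can be sketched with a toy model. To be clear, this is purely illustrative and not Intel's actual protocol - the frame format, device names, and forwarding logic are all invented for the example - but it captures how tagged DisplayPort and PCIe traffic can share one link, with each device in the chain consuming its own frames and forwarding the rest:

```python
# Illustrative sketch (NOT Intel's protocol): frames tagged as DisplayPort
# or PCIe travel down a daisy chain; each device consumes frames addressed
# to it and forwards the rest downstream.

from dataclasses import dataclass

@dataclass
class Frame:
    protocol: str   # "DP" or "PCIe"
    target: str     # hypothetical device name
    payload: bytes

class Device:
    def __init__(self, name, downstream=None):
        self.name = name
        self.downstream = downstream
        self.received = []

    def accept(self, frame):
        if frame.target == self.name:
            self.received.append(frame)    # drop this device's data off here
        elif self.downstream:
            self.downstream.accept(frame)  # forward along the chain

# Host -> RAID array (PCIe-tunneled storage) -> display (DisplayPort sink)
display = Device("display")
raid = Device("raid", downstream=display)

raid.accept(Frame("PCIe", "raid", b"SATA block"))
raid.accept(Frame("DP", "display", b"pixel data"))

print([f.protocol for f in raid.received])     # ['PCIe']
print([f.protocol for f in display.received])  # ['DP']
```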

Apple learned its lesson after FireWire licensing slowed adoption - the Thunderbolt port and controller specification are entirely Intel’s. Similarly, there’s no per-port licensing fee or royalty for peripheral manufacturers to use the port or the Thunderbolt controller. iFixit beat Anand and me to tearing down the 2011 MacBook Pro (though I did have one open, and was hastily cramming my OptiBay+SSD and HDD combo inside) and already got a shot of Intel's Thunderbolt controller, which itself is large enough to be unmistakable:

In addition, you can still plug ordinary Mini DisplayPort devices into Thunderbolt ports and just drive video if you so choose.

Though there aren’t any Thunderbolt compatible peripherals on the market right now, Western Digital, LaCie, and Promise have announced storage solutions with Thunderbolt support. Further, a number of media creation vendors have announced or already demonstrated support, like the Avid HD I/O box we saw at IDF.

Thunderbolt already faces competition from 4.8 Gbps USB 3.0, which has seen considerable adoption on the PC side. The parallels between USB 2.0 / FireWire and USB 3.0 / Thunderbolt are difficult to ignore, and ultimately peripheral availability and noticeable speed differences will sell one over the other in the long run. Moving forward, it'll be interesting to see Thunderbolt finally realize the "light" part of Light Peak's codename, and exactly how that transition works out for the fledgling interface.


107 Comments

Is there any reason that USB 3.0 couldn't run over Thunderbolt? Since it's using a PCI Express link, all you'd need is a chipset built into a Thunderbolt dock of some sort, and you should be able to get at least 2 USB 3.0 ports that could run saturated without slowing each other down.

Plus, I also think that we'll be seeing some PCI-Express add-on cards this year as this technology is VERY attractive to the production professional who needs to deal with a lot of data at once. I would assume other connectors which allow a direct connection to a computer instead of over a network cost MUCH more than what Thunderbolt does.

On paper it's definitely possible. And this would help explain why we didn't see any external PCIe setups this year at CES. However there are some caveats.

1) Intel hasn't told us what the latency is like. A 3 meter cable isn't very long, but PCIe is a very fast connection - a single lane runs at 5 GHz. Without knowing much about the link layer, I can note that at 5 GHz, even light travels only 6 cm per clock cycle. For the copper implementation of Thunderbolt this would be worse. A reasonable guess would be that you're looking at up to an additional 50 clocks, on top of the latency between the Thunderbolt controller and the CPU.

The point of this being that I'm not sure what the tolerance is for PCIe latency with today's GPUs. It may not work out of the box if the GPU expects a response within X cycles.
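The cable-latency estimate above can be worked through numerically. The 0.7c velocity factor for copper is an assumed typical value, not a Thunderbolt spec figure:

```python
# Working through the cable-latency estimate: at PCIe 2.0's 5 GHz signaling
# rate, how many clock cycles does one trip down a 3 m cable cost? Assumes
# signals propagate at ~0.7c in copper (a typical velocity factor; the
# exact number depends on the cable).

c = 3.0e8    # speed of light, m/s
f = 5.0e9    # PCIe 2.0 signaling rate, Hz
cable_m = 3.0

light_per_cycle_m = c / f                  # ~0.06 m -> "6 cm per cycle"
cycles_at_c = cable_m / light_per_cycle_m  # 50 cycles if the signal moved at c
cycles_copper = cable_m / (0.7 * c / f)    # ~71 cycles at 0.7c

print(f"{light_per_cycle_m * 100:.0f} cm per cycle at c")
print(f"{cycles_at_c:.0f} cycles one way at c, ~{cycles_copper:.0f} in copper")
```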

2) Performance will be impacted. PCIe 2.0 x2 is 8 Gbps, so a single Thunderbolt channel is only a bit more. You can run a GPU on such a setup, but you will be severely bandwidth limited, and this of course gets worse the faster the GPU. As a result performance would not be comparable to desktop parts. For something like a Radeon 5770 (I throw this out as it's a reasonable upgrade for most notebook GPUs), I'd guess it would achieve 75% of the performance, but that's very much a shot in the dark.

So in short, yes, it should be possible. If not with this generation of GPUs then with the next one. But with a copper connector it's going to be bandwidth limited.

You have 2 channels though, which gives you PCIe x4 bandwidth. Testing has found that in most games an x4 link only results in a few percent penalty. There are exceptions like MS Flight Simulator X that take major hits, but very few other games do.
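Assuming the figures in these comments (PCIe 2.0 at 5 GT/s per lane, with 8b/10b coding costing 20% of the raw rate), the lane-count comparison works out as follows:

```python
# Effective PCIe 2.0 bandwidth per lane count vs. Thunderbolt channels.
# PCIe 2.0 signals at 5 GT/s per lane; 8b/10b line coding leaves 4 Gbps
# of payload bandwidth per lane.

per_lane_gbps = 5 * 8 / 10  # 4 Gbps effective per PCIe 2.0 lane

for lanes in (1, 2, 4):
    print(f"PCIe 2.0 x{lanes}: {per_lane_gbps * lanes:.0f} Gbps")

# One Thunderbolt channel is 10 Gbps (a bit more than x2's 8 Gbps);
# both channels together are 20 Gbps (a bit more than x4's 16 Gbps).
```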

While bandwidth might not be impaired by such a link, latency might very well be. And the result of increased latency might be much more visible than a 10% loss of framerate (or there might be no issues at all). And this might even change from one application to another.

I wonder why Tom's stopped testing below x4. While not as realistic a test case, it covered people using desktop GPUs on laptops (via a breakout box and a 1x PCIe ribbon cable) and showed worst-case performance.

OTOH IIRC that test showed a much larger x4 penalty than had previously been seen in the 280 or 8800 tests.

For quick mental checks, think about how large an antenna would have to be to pick up a 60 meter wave with any efficiency. An even easier thing to remember is that the wavelength of a 1 GHz wave is about a foot.
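That rule of thumb is just wavelength = c / frequency; a quick check confirms both numbers in the comment:

```python
# The commenter's rule of thumb: wavelength = c / frequency. A 1 GHz signal
# has a ~30 cm (about one foot) wavelength; a 60 m wave corresponds to 5 MHz.

c = 3.0e8  # speed of light, m/s

def wavelength_m(freq_hz):
    return c / freq_hz

print(f"1 GHz -> {wavelength_m(1e9) * 100:.0f} cm")  # ~30 cm, about a foot
print(f"5 MHz -> {wavelength_m(5e6):.0f} m")         # 60 m
```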