How did they do it? By using an optical silicon chip called a “time telescope”, an idea first developed over 20 years ago by Brian Kolner at Hewlett-Packard. The basic idea of a time lens is to stretch or compress a wave in time, rather than in space.

A long 10-GHz pulse carrying bits of data and a much shorter laser pulse carrying no information pass through a waveguide on the chip.

A race is then set up between the halves of the pulse, with the back speeding up and the front slowing down as it passes through an optical fibre …

… Just as paired lenses form an optical telescope, combining two of these temporal lenses creates a time telescope that can take a standard 10 GHz pulse and create an "image" of it.

That jams the same information into a pulse just one twenty-seventh as long.
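As a rough back-of-the-envelope check, compressing a pulse by a factor of 27 shrinks the time slot per bit, and thus raises the effective data rate, by the same factor. The figures below are illustrative arithmetic only, not numbers from the article:

```python
# Illustrative sketch: a time telescope with temporal "magnification"
# 1/27 shortens the pulse, and hence the time slot per bit, 27x.
# The compression factor and 10 GHz rate come from the story; the
# rest is assumed for the example.

COMPRESSION = 27             # reported compression factor
input_rate_ghz = 10.0        # original data rate, GHz

# Each bit occupies 1/rate seconds; compressing the pulse 27x
# shrinks the bit slot 27x, raising the effective rate 27x.
input_bit_slot_ps = 1e3 / input_rate_ghz           # 100 ps per bit
output_bit_slot_ps = input_bit_slot_ps / COMPRESSION
output_rate_ghz = input_rate_ghz * COMPRESSION

print(f"bit slot: {input_bit_slot_ps:.1f} ps -> {output_bit_slot_ps:.2f} ps")
print(f"effective rate: {input_rate_ghz:.0f} GHz -> {output_rate_ghz:.0f} GHz")
```

That is, a 100-picosecond bit slot squeezes down to under 4 picoseconds, for an effective rate of 270 GHz.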

More importantly, it also does this “in an energy-efficient manner, because the only power required is that needed to run the laser,” according to Technology Review.

I’ll leave it to the engineers and physicists to argue how plausible this is. In any case, the obvious commercial barrier is that existing optical networks still rely on electronics to maintain signal integrity over long distances and to meet the five-nines availability expectations of carriers and their customers. And it’s typically the electronic component that serves as the bottleneck in those networks.

(It’s also worth noting that the MIT story doesn’t mention the length of the fiber or the impact of a time lens on latency, and that the Nature Photonics article describing the details is subscription-only.)

But Alexander Gaeta, professor of applied and engineering physics at Cornell University, who co-developed the device, told Technology Review that silicon-based lasers (a technology just under four years old) and modulators have proven they can hold their own in the optical playground. If electronic components can be made from the same material, that could clear the way for an integrated chip that would enable the electronics to keep up with the photons, he says.