1.5 terabits per second, with only the laws of physics slowing it down.


Researchers say they have created fiber cables that can move data at 99.7 percent of the speed of light, all but eliminating the latency plaguing standard fiber technology. There are still data loss problems to be overcome before the cables could be used over long distances, but the research may be an important step toward incredibly low-latency data transmissions.

Although optic fibers transmit information using beams of light, that information doesn't actually go at "light speed." The speed of light, about 300,000 km/s, is the speed light travels in a vacuum. In a medium such as glass, it goes about 30 percent slower, a mere 200,000 km/s.
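The slowdown follows directly from the refractive index n of the medium, since light propagates at v = c/n. A quick sanity check of the figures above (the index values are typical approximations, not taken from the paper):

```python
# Propagation speed of light in a medium: v = c / n, where n is the
# refractive index (n = 1 in a vacuum).
C = 299_792_458  # speed of light in vacuum, m/s

def speed_km_s(n):
    """Propagation speed in a medium with refractive index n, in km/s."""
    return C / n / 1000

print(round(speed_km_s(1.5)))     # silica glass, n ~ 1.5 -> roughly 200,000 km/s
print(round(speed_km_s(1.0003)))  # air, n ~ 1.0003 -> about 99.97% of c
```

Swapping the glass core for air moves n from about 1.5 to about 1.0003, which is why a mostly-air core can approach the vacuum speed; the small remaining gap comes from the fibre structure itself.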

The research team from the University of Southampton in England solved this problem by taking the glass out of the glass fiber. This results in a "hollow-core photonic-bandgap fibre," which is made mostly of air yet still allows light to follow the path of the cable when it twists and turns.

The methods used by the researchers result in data loss of 3.5 dB/km, an impressively low number considering its incredibly low latency. However, that data loss is still too high for long-range communications. For now, these cables won't be used to wire up Internet Service Provider networks or for transatlantic cables.
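To see why 3.5 dB/km rules out long hauls, note that attenuation in decibels accumulates linearly with distance, so the surviving power fraction is 10^(-dB/10). A rough comparison (the 0.2 dB/km figure for conventional long-haul fiber is the target the researchers cite below):

```python
# Fraction of optical power remaining after `km` kilometres of fiber
# with attenuation `loss_db_per_km`:  P_out / P_in = 10 ** (-dB_total / 10)

def power_remaining(loss_db_per_km, km):
    return 10 ** (-loss_db_per_km * km / 10)

print(power_remaining(3.5, 10))  # hollow core: only ~0.03% survives 10 km
print(power_remaining(0.2, 10))  # conventional fiber: ~63% survives 10 km
```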

The cable uses wide-bandwidth channels to send 37 streams of 40 gigabits per second each, with an aggregate transmission capacity of 1.48Tbps. Even with the current rate of data loss, the researchers say the cable technology is adequate for "short reach low-latency applications," such as future exaflop-speed supercomputers and "mega data centres."

"For longer transmission distances, additional work is needed to further reduce surface scattering loss and to achieve the sub-0.2 dB/km," the researchers wrote.

UPDATE: Although this wasn't described in the paper, one of the researchers told ExtremeTech that the cable's throughput actually goes up to 73.7Tbps, "using wave division multiplexing (WDM), combined with mode division multiplexing, to transmit three modes of 96 channels of 256Gbps."
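Both throughput figures are simple products of the channel counts and per-channel rates quoted above:

```python
# Aggregate capacity in Tbps from the configurations described above.
base_tbps = 37 * 40 / 1000      # 37 streams x 40 Gbps each
wdm_tbps = 3 * 96 * 256 / 1000  # 3 modes x 96 channels x 256 Gbps each

print(base_tbps)  # 1.48
print(wdm_tbps)   # 73.728, reported as 73.7Tbps
```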

Promoted Comments

This is interesting. What sort of latency difference would we see, provided they work out the kinks, in something like the trans-atlantic data latencies? I know London to New York is important because of the stock exchanges, such that they're even building that new line to shave a few milliseconds off.

The article claims optic fibre (which is mostly what we use now?) travels at around two thirds the speed of light in a vacuum. This technology boosts that up almost to the full "speed of light".

So, you'd see about a 30% reduction in latency.

However, processing on endpoints and at every router in between would remain the same, so the actual real world improvement would be less than 30%.

I don't think it will be cost effective for long cables anytime in the foreseeable future. 30% just isn't good enough, and we could achieve similar results just by routing the cables in a more direct path.

But data-centres are another story. A 30% increase in speed for two servers sitting right next to each other would be a great speed boost, and this technology is probably much more realistic at short cable lengths.

This is interesting. What sort of latency difference would we see, provided they work out the kinks, in something like the trans-atlantic data latencies? I know London to New York is important because of the stock exchanges, such that they're even building that new line to shave a few milliseconds off.

Ars, I'm disappointed in you. Propagation delay affects latency, not bandwidth. There's plenty of standard fiber tech that can do 1.48Tbps; that's pretty much irrelevant to the story. It's not the selling point of this tech. The selling point is shaving a few ns, or maybe 1ms, off an extremely long run.

High frequency traders are probably getting excited. Every one else, not so much.

This is interesting. What sort of latency difference would we see, provided they work out the kinks, in something like the trans-atlantic data latencies? I know London to New York is important because of the stock exchanges, such that they're even building that new line to shave a few milliseconds off.

The article claims optic fibre (which is mostly what we use now?) travels at around two thirds the speed of light in a vacuum. This technology boosts that up almost to the full "speed of light".

So, you'd see about a 30% reduction in latency.

However, processing on endpoints and at every router in between would remain the same, so the actual real world improvement would be less than 30%.

I don't think it will be cost effective for long cables anytime in the foreseeable future. 30% just isn't good enough, and we could achieve similar results just by routing the cables in a more direct path.

But data-centres are another story. A 30% increase in speed for two servers sitting right next to each other would be a great speed boost, and this technology is probably much more realistic at short cable lengths.

I thought the laws of physics were more "guidelines" than actual rules.

Seriously though... I don't quite get it. Is the cable something like a hollow mirrored pipe, nothing down the middle and reflective on the inner surface?

Sort of. Ever notice that if you look at a window at an angle it can reflect light? That's due to refraction. Past a certain angle, when light hits a change in material, refraction causes a complete reflection (total internal reflection).

A fiber has two materials, a core and a clad; in this case the core is air. Light goes through the core, bouncing off the clad. The light doesn't really travel slower, it travels a greater distance because it's bouncing back and forth, thus seeming slower than it should be.

IANAN(etwork)E(ngineer), but I understand it as the sender of the data packet has to wait for a reply from the receiver before it will send the next data packet.

I have no clue if this is actually an issue with fiber or not, but latency and bandwidth are not strictly orthogonal.

There are various different techniques so it depends on exactly what's going on (eg: a webpage download is very different to a skype call) but for almost all traffic the recipient only sends occasional messages back to the sender saying "I've received this much so far" and the sender uses those messages to guess how much bandwidth it should use (too much bandwidth means some of the data will not be received by the recipient, not enough bandwidth is slow).

Basically the idea is to gradually increase the bandwidth until you detect packets are not reaching the destination and then you drop back a little bit. Because other people's internet traffic changes how much bandwidth is available to you, the connection is constantly fluctuating between too fast and too slow.

Lower latency will allow the sender to be more accurate in guessing how much bandwidth is the right amount, but a 30% improvement in latency won't have any appreciable effect.
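The probe-and-back-off behaviour described above is the additive-increase/multiplicative-decrease (AIMD) idea behind TCP congestion control. A toy sketch (the capacity and step values are made up for illustration, not taken from any real stack):

```python
# Toy AIMD loop: grow the sending rate linearly until "loss", then halve.
def aimd(capacity, steps, increase=1.0):
    rate, history = 1.0, []
    for _ in range(steps):
        if rate > capacity:   # loss detected: packets stopped arriving
            rate /= 2         # multiplicative decrease: back off
        else:
            rate += increase  # additive increase: probe for more bandwidth
        history.append(rate)
    return history

rates = aimd(capacity=20.0, steps=60)
# After the initial ramp, the rate saw-tooths between roughly capacity/2
# and capacity: constantly fluctuating between too fast and too slow.
print(min(rates[25:]), max(rates[25:]))
```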

Usually when advertising the speed of a connection, you advertise its theoretical speed, not its real world speed, because the real world speed is being adjusted every few milliseconds by software while the theoretical speed is what the hardware is actually capable of transmitting.

Your PC might have a gigabit connection to the router, but if it sends data at gigabit speeds, the router is only going to send 10 megabits or whatever out onto the internet. The remaining 990 megabits will be thrown away, so your PC really needs to send data at 10 megabits even though the only connection it can see is gigabit.

IANAN(etwork)E(ngineer), but I understand it as the sender of the data packet has to wait for a reply from the receiver before it will send the next data packet.

I have no clue if this is actually an issue with fiber or not, but latency and bandwidth are not strictly orthogonal.

You are talking about correcting for errors. To explain this in simple terms, you can send enough data packets before getting back a response to get rid of the bandwidth limitations imposed by the "reply" that you mentioned.

This is interesting. What sort of latency difference would we see, provided they work out the kinks, in something like the trans-atlantic data latencies? I know London to New York is important because of the stock exchanges, such that they're even building that new line to shave a few milliseconds off.

The article claims optic fibre (which is mostly what we use now?) travels at around two thirds the speed of light in a vacuum. This technology boosts that up almost to the full "speed of light".

So, you'd see about a 30% reduction in latency.

However, processing on endpoints and at every router in between would remain the same, so the actual real world improvement would be less than 30%.

I don't think it will be cost effective for long cables anytime in the foreseeable future. 30% just isn't good enough, and we could achieve similar results just by routing the cables in a more direct path.

But data-centres are another story. A 30% increase in speed for two servers sitting right next to each other would be a great speed boost, and this technology is probably much more realistic at short cable lengths.

In short runs the difference would be well under a nanosecond, which in CPU time is less than one cycle even at 1GHz. Not really worth it.

The speed of light in a fiber is something like 66% of the speed of light in a vacuum. The speed of light in air is close to the speed in a vacuum.

Suppose a fiber from San Francisco to Japan is around 7,500 miles or 12,000 km.

In a fiber, the delay of the fiber alone will be length/(0.66 × c) ≈ 60ms. Round trip would be twice that. In a vacuum it would be 40ms, or 20ms less.
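The comment's arithmetic, spelled out (the 12,000 km path and the rounded 300,000 km/s are the comment's own assumptions):

```python
# One-way propagation delay in ms over `km` kilometres at a given
# fraction of the (rounded) vacuum speed of light.
C_KM_S = 300_000  # km/s, rounded as in the comment above

def one_way_ms(km, fraction_of_c):
    return km / (fraction_of_c * C_KM_S) * 1000

print(one_way_ms(12_000, 0.66))  # standard fiber: ~60.6 ms
print(one_way_ms(12_000, 1.0))   # hollow core at essentially c: 40.0 ms
```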

Traders can make billions by getting their trade in first after learning some key information, which is why all these extremely low latency routers are now being sold (in the nanoseconds). Saving 20ms, or 40ms round trip is a comparatively enormous amount of time, and gives a company's computer a chance to make quite a few trades before the competition.

It also gives you the chance to get in the first shot when playing WoW.

Ars, I'm disappointed in you. Propagation delay affects latency, not bandwidth. There's plenty of standard fiber tech that can do 1.48Tbps; that's pretty much irrelevant to the story. It's not the selling point of this tech. The selling point is shaving a few ns, or maybe 1ms, off an extremely long run.

High frequency traders are probably getting excited. Every one else, not so much.

Isn't that why the headline talks about the 99.7% of c, the first paragraph talks exclusively of latency, the second and third paragraphs talk exclusively of speed of light issues, the fourth mentions the science/engineering, the fifth talks about latency (again), and it's not until the sixth paragraph that it mentions bandwidth, when giving a broad outline of the test set-up that was being used?

Six paragraphs in feels pretty consistent with "not the selling point of the tech". I disagree that it's "pretty much irrelevant", since describing the experimental set-up is important. But the article is clearly putting the focus on latency/propagation, and I don't know why you think otherwise.

This is interesting. What sort of latency difference would we see, provided they work out the kinks, in something like the trans-atlantic data latencies? I know London to New York is important because of the stock exchanges, such that they're even building that new line to shave a few milliseconds off.

The article claims optic fibre (which is mostly what we use now?) travels at around two thirds the speed of light in a vacuum. This technology boosts that up almost to the full "speed of light".

So, you'd see about a 30% reduction in latency.

However, processing on endpoints and at every router in between would remain the same, so the actual real world improvement would be less than 30%.

I don't think it will be cost effective for long cables anytime in the foreseeable future. 30% just isn't good enough, and we could achieve similar results just by routing the cables in a more direct path.

30% for the transoceanic hop is huge, actually, and there are already transatlantic links that are basically following great circle routes, meaning they can't be made any more direct without boring five thousand kilometre tunnels.

But data-centres are another story. A 30% increase in speed for two servers sitting right next to each other would be a great speed boost, and this technology is probably much more realistic at short cable lengths.

In short runs the difference would be well under a nanosecond, which in CPU time is less than one cycle even at 1GHz. Not really worth it.

A quarter of a nanosecond starts to add up when you see that benefit a few hundred thousand times every second.

A lot of datacentres cannot use internal hard drives for various reasons. Connecting to your primary hard drive over a network cable causes performance issues that this would help solve.

Traders can make billions by getting their trade in first after learning some key information, which is why all these extremely low latency routers are now being sold (in the nanoseconds). Saving 20ms, or 40ms round trip is a comparatively enormous amount of time, and gives a company's computer a chance to make quite a few trades before the competition.

I think once a higher speed trans-ocean link is available, all such users will buy access, thus negating any advantage.

However, the real money is to be made in real estate. If you can get your office at the optimal distance from the exchange (or whatever), you may well have an advantage. Unless the exchange does something like the Frankfurt Stock Exchange, and enforce matched latency to all users.

Last I heard, people were putting their high frequency trading logic on the network card's firmware, so they didn't have waste time moving the data over to the CPU to be processed.

IANAN(etwork)E(ngineer), but I understand it as the sender of the data packet has to wait for a reply from the receiver before it will send the next data packet.

I have no clue if this is actually an issue with fiber or not, but latency and bandwidth are not strictly orthogonal.

IA(what passes for)ANE

Yes and no.

TCP, the most widely used protocol on the internet, does require acknowledgement of data but it isn't as bad as "send one packet, wait for one response." TCP has a "window" that it will use; if it uses up this window without getting acks it won't send any further packets.

So, your effective bandwidth on a TCP connection is directly related to the latency of the connection.

However, there are other protocols (UDP being the next most popular) that don't have the same semantics and/or do a better job of dealing with high latency.
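That window/latency relationship has a simple closed form: a sender with a window of W unacknowledged bytes can move at most W bytes per round trip, so throughput is capped at W / RTT. A sketch with illustrative numbers (the 95 ms RTT echoes a figure quoted elsewhere in this thread; the 64 KiB window is classic TCP without window scaling):

```python
# Maximum TCP throughput for a given window and round-trip time:
# at most `window_bytes` can be in flight per RTT.
def max_throughput_mbps(window_bytes, rtt_ms):
    return window_bytes / (rtt_ms / 1000) * 8 / 1e6

# 64 KiB window over a 95 ms round trip:
print(max_throughput_mbps(65_535, 95))        # ~5.5 Mbps, whatever the link speed
# Cut the RTT by 30% and the same window sustains ~43% more:
print(max_throughput_mbps(65_535, 95 * 0.7))  # ~7.9 Mbps
```

This is why reducing RTT raises effective TCP throughput even when raw link capacity is unchanged: a 30% latency cut lets the same window sustain about 1/0.7 ≈ 1.43x the throughput.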

But data-centres are another story. A 30% increase in speed for two servers sitting right next to each other would be a great speed boost, and this technology is probably much more realistic at short cable lengths.

In short runs the difference would be well under a nanosecond, which in CPU time is less than one cycle even at 1GHz. Not really worth it.

A quarter of a nanosecond starts to add up when you see that benefit a few hundred thousand times every second.

A lot of datacentres cannot use internal hard drives for various reasons. Connecting to your primary hard drive over a network cable causes performance issues that this would help solve.

But how can it add up? This is latency we are talking about, not bandwidth. If the improvement in transmission latency is less than a single CPU cycle, then the data will be processed at the same time it would have been anyway, so you haven't really sped anything up.

I see far more value in shaving tens of ms off a long link than shaving inconsequential amounts off a short link.

Isn't that why the headline talks about the 99.7% of c, the first paragraph talks exclusively of latency, the second and third paragraphs [...]

But on the Ars homepage the entire summary of the article talks exclusively about bandwidth and implies the laws of physics make it impossible to increase bandwidth beyond 1.5 terabits per second:

Quote:

Fiber cables made of air move data at 99.7 percent the speed of light
1.5 terabits per second, with only the laws of physics slowing it down.

The summary is meant to be read in the context of the headline. To me, this makes fairly clear that it's a statement of propagation delays/latency, and it's talking about literally being slowed down by physics (as in, the signal cannot move at c because you can only move at c in a total vacuum).

30% for the transoceanic hop is huge, actually, and there are already transatlantic links that are basically following great circle routes, meaning they can't be made any more direct without boring five thousand kilometre tunnels.

It looks like the difference between c and .7c from New York to London is 18.6 ms versus 26.5 ms. Given that I'm getting 95ms real world round trip latency 8 ms each way is a pretty significant chunk.
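Those figures are consistent with a New York to London great-circle distance of roughly 5,570 km (an assumed value, not stated in the comment):

```python
# One-way propagation delay New York to London at c and at 0.7c.
C_KM_S = 299_792          # speed of light in vacuum, km/s
NY_LONDON_KM = 5_570      # approximate great-circle distance (assumption)

def one_way_ms(km, fraction_of_c):
    return km / (C_KM_S * fraction_of_c) * 1000

print(round(one_way_ms(NY_LONDON_KM, 1.0), 1))  # ~18.6 ms at c
print(round(one_way_ms(NY_LONDON_KM, 0.7), 1))  # ~26.5 ms at 0.7c
```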

I thought the laws of physics were more "guidelines" than actual rules.

Seriously though... I don't quite get it. Is the cable something like a hollow mirrored pipe, nothing down the middle and reflective on the inner surface?

Sort of. Ever notice that if you look at a window at an angle it can reflect light? That's due to refraction. Past a certain angle, when light hits a change in material, refraction causes a complete reflection (total internal reflection).

A fiber has two materials, a core and a clad; in this case the core is air. Light goes through the core, bouncing off the clad. The light doesn't really travel slower, it travels a greater distance because it's bouncing back and forth, thus seeming slower than it should be.

No, the light really does travel slower in glass. In fact, the difference in speed in different materials is why light bends when it crosses between materials. In fiber optics the light really only bounces around a little bit, and even then only in multi-mode fiber. The single-mode fiber used in long-haul networks has a small enough core that photons can really only take one path down the fiber. The dispersion that results from traveling down multi-mode fiber is why the distance limitations are so low at high speeds: with 10Gbps on 50 µm multi-mode you can only go a couple hundred meters, while on single-mode you can go 1,000km without regeneration (though you still need amplification at that distance).

The summary is meant to be read in the context of the headline. To me, this makes fairly clear that it's a statement of propagation delays/latency, and it's talking about literally being slowed down by physics (as in, the signal cannot move at c because you can only move at c in a total vacuum).

Let's not forget that light always moves at c, the speed of light. It propagates slower in a medium (air, glass) because it is being absorbed and re-emitted. The refractive index of air is about 1.0003 while glass is around 1.5, so by having a core of air the light propagates faster through the cable because it is being absorbed and re-emitted far less often.

What does the speed of light have to do with bits per second? Nothing surely.

Also, how does the speed of light in glass/air compare to the speed over copper cables? A bit of research says it depends on the grade of copper. I wonder how fast copper is in theoretical/laboratory experiments similar to this one.

I'm... I'm a little scared here.

Copper = Electrical Impulses transmitted through copper wire. This requires heavily insulated lines to diminish loss and repeaters to boost the signals over long distances.

Glass Fiber = Light transmission via a focusing source (the head end) through the cable, with far less resistance and extremely fast transmission rates. Because the cost of fiber is high, the overall use of fiber is limited. Something Verizon discovered when they made a 100% fiber network and then got the bill (thus why FiOS is no longer expanding until they can get enough money to start up again...).

Hollow Fiber, depending on the materials, may not only be faster, but due to the lack of a glass core may actually be cheaper than fiber optic connections.

The speed of light/electricity is extremely important for data transmission, as it is what the data is carried on. Rather than electricity, fiber optic uses light to carry those bits to your house/network. If those bits can move faster, then bandwidth and transmission rates go up. So the speed of light has a lot to do with the speed of your network.

Even with all these "100% Fiber" networks, that speed is measured outside your house, before it hits the hybrid fiber/coax devices that translate the light back to electrical impulses for devices like modems to use. So in the end you're still using coax; it's that in-between portion that speeds things up, though. (Granted, you're not going to get a full 1.48Tbps.)

But how can it add up? This is latency we are talking about, not bandwidth. If the improvement in transmission latency is less than a single CPU cycle, then the data will be processed at the same time it would have been anyway, so you haven't really sped anything up.

I see far more value in shaving tens of ms off a long link than shaving inconsequential amounts off a short link.

First of all, you're assuming a CPU cycle is 1GHz. A Xeon CPU runs at up to 4.4GHz these days.

Second, the CPU does not sit around waiting for the network link to respond, it goes on to do something else. Perhaps it will fire off another few thousand network requests before getting a response to the first one.

But wouldn't every curve lead a photon to interact with a reflective surface and slow down? If you had a straight shot through a vacuum you'd have 100% speed of light. If you have a straight shot through air apparently there are relatively few molecules of (what substance? O2 and N? or some special gas?)... so it's very fast.

But what about all those refractive events every time a photon hits the walls of the cable? Does the number of refractive incidents created by curves and twists impact the transit time?

What really happens when a photon is reflected, and how is that different from when it transits a material?

Glass Fiber = Light transmission via a focusing source (The Head End) through the cable with far less resistances and extremely fast transmission rates. Because the cost of fiber is high, the overall use of fiber is limited. Something Verizon discovered when they made a 100% fiber network and then got the bill (Thus why FiOS is no longer expanding until they can get enough money to start up again...).

The cost of laying fibre is almost all in the installation, not the materials. Fibre is cheap. Digging up roads and installing street cabinets is not.

But wouldn't every curve lead a photon to interact with a reflective surface and slow down? If you had a straight shot through a vacuum you'd have 100% speed of light. If you have a straight shot through air apparently there are relatively few molecules of (what substance? O2 and N? or some special gas?)... so it's very fast.

But what about all those refractive events every time a photon hits the walls of the cable? Does the number of refractive incidents created by curves and twists impact the transit time?

What really happens when a photon is reflected, and how is that different from when it transits a material?

In multimode fibres, the photons hit the wall and are internally reflected, which causes some loss of signal and slower propagation.

In single mode fibres, the thing is a waveguide, so it works "as if" the photons go right down the middle, without any reflections at all. This is why single mode fibres have substantially lower losses.
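The slower propagation in multimode fibre can be pictured with simple geometry: a ray zig-zagging at angle θ from the fibre axis covers 1/cos(θ) times the axial distance, arriving correspondingly later. A sketch (the 8-degree angle is purely illustrative):

```python
import math

def path_penalty(theta_deg):
    """Extra path-length factor for a ray bouncing at theta_deg
    from the fibre axis: 1 / cos(theta)."""
    return 1 / math.cos(math.radians(theta_deg))

# Even a shallow 8-degree bounce stretches the path, and hence the
# transit time, by about 1 percent:
print(round((path_penalty(8) - 1) * 100, 2))
```

A single-mode waveguide avoids this penalty entirely, which is the "as if the photons go right down the middle" behaviour described above.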

I thought the laws of physics were more "guidelines" than actual rules.

Seriously though... I don't quite get it. Is the cable something like a hollow mirrored pipe, nothing down the middle and reflective on the inner surface?

Sort of. Ever notice that if you look at a window at an angle it can reflect light? That's due to refraction. Past a certain angle, when light hits a change in material, refraction causes a complete reflection (total internal reflection).

A fiber has two materials, a core and a clad; in this case the core is air. Light goes through the core, bouncing off the clad. The light doesn't really travel slower, it travels a greater distance because it's bouncing back and forth, thus seeming slower than it should be.

So the diameter of the fiber cables must be precisely calculated? It can't be too large, or it will take the light longer to bounce from one wall to the other, and it can't be too small either, since that would mean more bounces on the way to the receiving end. Does anyone know the exact diameter of these experimental fiber cables? Is bigger better, or does it have to be exactly the right size?

Glass Fiber = Light transmission via a focusing source (the head end) through the cable, with far less resistance and extremely fast transmission rates. Because the cost of fiber is high, the overall use of fiber is limited. Something Verizon discovered when they made a 100% fiber network and then got the bill (thus why FiOS is no longer expanding until they can get enough money to start up again...).

Fiber itself is not very expensive. With the way copper prices are going, it won't be long before fiber is cheaper on a materials basis; if it's not already. The expensive part is running the cables from the CO (or neighborhood head end) to the home. It's really easy to do with new construction; you're either going to run copper wire or you're going to run fiber (and Verizon is still expanding FiOS FTTH -- but only in new construction.)

The industry estimates for a true, nationwide fiber-to-the-curb deployment are somewhere in the neighborhood of $100-200 billion. That's why the telecoms aren't exactly quaking in their boots at Google Fiber: it's a massively expensive undertaking, especially when there's no guarantee that the homes you wire up will subscribe.

abhi_beckert wrote:

The article claims optic fibre (which is mostly what we use now?) travels at around two thirds the speed of light in a vacuum. This technology boosts that up almost to the full "speed of light".

So, you'd see about a 30% reduction in latency.

However, processing on endpoints and at every router in between would remain the same, so the actual real world improvement would be less than 30%.

I did some back of the envelope calculations on the "youtube" thread the other day. On a reasonably well-run network, about 30% of my latency was attributable to the speed of light and the rest to other factors (presumably buffering and forwarding).

But data-centres are another story. A 30% increase in speed for two servers sitting right next to each other would be a great speed boost, and this technology is probably much more realistic at short cable lengths.

In short runs the difference would be well under a nanosecond, which in CPU time is less than one cycle even at 1GHz. Not really worth it.

A quarter of a nanosecond starts to add up when you see that benefit a few hundred thousand times every second.

A lot of datacentres cannot use internal hard drives for various reasons. Connecting to your primary hard drive over a network cable causes performance issues that this would help solve.

If data center engineers were really concerned about 0.25ns, they would just use shorter fiber cables. 0.25ns is about two inches in a standard optical fiber, and I'm pretty confident there is a lot more slack than that in the fibers at data centers.

xwindowsjunkie wrote:

So why bother with filling it with air? Take the air out and get the last little 0.3 percent.

Install the cable. Vacuum out all the air, fuse a transceiver chip to the center of the bore. Long empty tube = empty space, vacuum. Speed of light restored.

I think you have it backwards. Why bother vacuuming out all the air for a 0.3% latency improvement? That's a lot of additional complexity for not a lot of gain.

Pubert wrote:

Pfft. Quantum entangled transmission is coming. It'll render all of this obsolete. Betcha!

Quantum entangled communication still relies on a conventional communication channel. In fact, many of the entanglement experiments were done over optical fibers.

Entangled communication still requires some sort of signal to re-establish quantum connections when they get disentangled (is that a word?). Use the hyper-speed NO-Fiber cable to reacquaint the qubits at each end. Then we'll be talking about re-establishment speeds.