1.5 terabits per second, with only the laws of physics slowing it down.


Researchers say they have created fiber cables that can move data at 99.7 percent of the speed of light, all but eliminating the latency plaguing standard fiber technology. There are still data loss problems to be overcome before the cables could be used over long distances, but the research may be an important step toward incredibly low-latency data transmissions.

Although optic fibers transmit information using beams of light, that information doesn't actually go at "light speed." The speed of light, about 300,000 km/s, is the speed light travels in a vacuum. In a medium such as glass, it goes about 30 percent slower, a mere 200,000 km/s.
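To put that difference in concrete terms, here is a minimal back-of-the-envelope sketch in Python. The ~5,600 km New York-to-London cable distance is an assumed round figure, not a number from the paper:

```python
C_VACUUM_KM_S = 300_000  # speed of light in a vacuum, km/s

def one_way_latency_ms(distance_km: float, fraction_of_c: float) -> float:
    """Propagation delay alone; ignores routing and endpoint processing."""
    return distance_km / (C_VACUUM_KM_S * fraction_of_c) * 1000

ROUTE_KM = 5_600  # assumed round figure for a New York-to-London cable path

print(f"{one_way_latency_ms(ROUTE_KM, 0.70):.1f} ms")   # ~26.7 ms in conventional glass fiber
print(f"{one_way_latency_ms(ROUTE_KM, 0.997):.1f} ms")  # ~18.7 ms at 99.7% of light speed
```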

The research team from the University of Southampton in England solved this problem by taking the glass out of the glass fiber. This results in a "hollow-core photonic-bandgap fibre," which is made mostly of air yet still allows light to follow the path of the cable when it twists and turns.

The researchers' methods result in signal loss of 3.5 dB/km, an impressively low figure for a hollow-core fibre, but still too high for long-range communications. For now, these cables won't be used to wire up Internet Service Provider networks or for transatlantic links.

The cable uses wide-bandwidth channels to send 37 streams of 40 gigabits per second each, with an aggregate transmission capacity of 1.48Tbps. Even with the current rate of data loss, the researchers say the cable technology is adequate for "short reach low-latency applications," such as future exaflop-speed supercomputers and "mega data centres."

"For longer transmission distances, additional work is needed to further reduce surface scattering loss and to achieve the sub-0.2 dB km," the researchers wrote.

UPDATE: Although this wasn't described in the paper, one of the researchers told ExtremeTech that the cable's throughput actually goes up to 73.7Tbps, "using wave division multiplexing (WDM), combined with mode division multiplexing, to transmit three modes of 96 channels of 256Gbps."
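Both throughput figures check out as straightforward multiplication; a quick sanity check:

```python
# 37 streams of 40 Gbps each, per the paper
print(37 * 40)        # 1480 Gbps, i.e. ~1.48 Tbps

# 3 modes x 96 WDM channels x 256 Gbps, per the update
print(3 * 96 * 256)   # 73728 Gbps, i.e. ~73.7 Tbps
```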


131 Reader Comments

This is interesting. What sort of latency difference would we see, provided they work out the kinks, in something like the trans-atlantic data latencies? I know London to New York is important because of the stock exchanges, such that they're even building that new line to shave a few milliseconds off.

The article claims optic fibre (which is mostly what we use now?) travels at around two thirds the speed of light in a vacuum. This technology boosts that up almost to the full "speed of light".

So, you'd see about a 30% reduction in latency.

However, processing on endpoints and at every router in between would remain the same, so the actual real world improvement would be less than 30%.

I don't think it will be cost effective for long cables anytime in the foreseeable future. 30% just isn't good enough, and we could achieve similar results just by routing the cables in a more direct path.

But data-centres are another story. A 30% increase in speed for two servers sitting right next to each other would be a great speed boost, and this technology is probably much more realistic at short cable lengths.

Yes, you nailed it. I'm glad your post was highlighted. One place where I could see technology like this used is cabling for multi-shelf supercomputing systems that are very sensitive to latency.

It is unlikely to see wider use on general Internet paths, because fiber today supports around 5Tbps of data on DWDM over huge distances without amplification in the middle, and that wouldn't be possible with this technology. Any latency gained by not using glass would be lost to the much more frequent need for amplification equipment to recondition and retransmit the data.

So far, wouldn't the loss on this prevent it from having a use? If you can't use it for long distances due to greater loss, then you're never going to get much of a total drop in latency from using it over short distances while the long hops are still glass.

Engineer: We need to replace our fiber cable with hollow-core cable to get a 30% improvement.
Stock Trader: That will be great - worth $30 million a month.
CFO: How much will it cost?
Engineer: Our solid-core cable costs $2/foot; this new hollow-core cable costs $4/foot. It will cost us $10 million to replace all the current cable.
Stock Trader: Great, we will make it up in a third of a month.
CFO: Let me run the numbers to verify that.
CEO: You mean I am paying more for cable that has less in it?
Marketing Manager: I wish I had thought of that.

Engineer: We need to replace our fiber cable with hollow-core cable to get a 30% improvement.
Stock Trader: That will be great - worth $30 million a month.

I doubt stock traders would be all that interested in this, since high frequency traders are typically located in the same buildings or blocks as markets, and thus their round-trip latency is not strongly impacted by speed of light limitations.

It's mostly companies that need to send lots of data between continents, gamers, datacenters, etc. that would be interested in this.

davolfman wrote:

So far, wouldn't the loss on this prevent it from having a use? If you can't use it for long distances due to greater loss, then you're never going to get much of a total drop in latency from using it over short distances while the long hops are still glass.

Yes it would. Your intuition is correct. However, the theoretical losses from fibers made this way are an order of magnitude less; the difference is mostly due to the difficulty of manufacturing them. In principle, with enough engineering, the losses could be brought under control.

Everyone remembers when being an "HPB" in a game of Quake was an instant clue that you were playing from far away. More direct fiber runs and higher-speed interfaces on core hardware have helped a lot in the last 15 years, but one of the largest gains came from better fiber amplifier technology. The older amplifiers were basically "repeaters," seeing a pulse arrive and sending one out. The newer ones inject light into the fiber, which is sent on its way by the arriving remote pulse of light.

In a car race context, it would be the equivalent of replacing the traditional "pit stop" refueling with a guy spraying gasoline out of a hose across pit row; as you drive through it, your car picks up the fuel without losing any velocity (and just as importantly, not bursting into flames).

Looking inside one, it's basically a few hair-like loops of fiber with a laser the size of an AA battery clipped to it. Amazingly simple, but of all the technologies I've worked with, it's the one I consider closest to magic.

I doubt stock traders would be all that interested in this, since high frequency traders are typically located in the same buildings or blocks as markets, and thus their round-trip latency is not strongly impacted by speed of light limitations.

If you have lower latency between two markets than anyone else you can arbitrage between them before anyone else knows there is a price mismatch. Such trading drove a new transatlantic cable recently (one that took a shorter route).

First of all, you're assuming a CPU cycle is 1GHz. A Xeon CPU runs at up to 4.4GHz these days.

Not assuming anything, just using order-of-magnitude numbers rather than exact ones. In reality, the latency for a NAS-type situation is limited by the disk and the communication protocol, not the physical network layer. Even fast SSDs have latency in the microsecond range; decreasing the transmission latency by nanoseconds isn't going to give any noticeable improvement.

Okay, so instead consider applications that don't have a slow disk in the way, like a memory-mirroring sort of setup. Then the DDR latency is the limiting factor, which is some small double-digit multiple of the FSB clock cycle. So you are looking at around 10s of nanoseconds of latency there. Decreasing fiber latency by fractions of a nanosecond still won't help.
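To make the orders-of-magnitude argument concrete, here is a small sketch with assumed round numbers (none of these figures come from the article):

```python
# Assumed round numbers illustrating where latency actually goes.
latencies_ns = {
    "fast SSD access":      100_000,  # ~100 microseconds
    "DRAM first word":           11,  # low tens of nanoseconds
    "2 m of glass fiber":        10,  # light at ~0.68c
    "2 m of hollow-core":         7,  # light at ~0.97c
}
for name, ns in sorted(latencies_ns.items(), key=lambda kv: -kv[1]):
    print(f"{name:>20}: {ns:>7,} ns")
```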

I was actually thinking of a database server, where you might spend $30,000 or so on your RAM cache.

Most operations won't touch the disks and it's pretty well optimised to respond quickly.

I doubt stock traders would be all that interested in this, since high frequency traders are typically located in the same buildings or blocks as markets, and thus their round-trip latency is not strongly impacted by speed of light limitations.

If you have lower latency between two markets than anyone else you can arbitrage between them before anyone else knows there is a price mismatch. Such trading drove a new transatlantic cable recently (one that took a shorter route).

Sure, but such a cable helps everyone on the internet, not just specific traders, so there is no way for any one person to get an advantage and thus change the status quo. If you shift everyone's latency down by a constant offset, it's basically irrelevant as far as traders are concerned, since no one gains any advantage.

All traffic on the internet between NY and London isn't going to flow over Hibernia Atlantic's new cable. Only those willing to pay a premium for the reduced latency are going to use it, and that's going to include every trader who wants to arb between NY and London.

The cost of laying fibre is almost all installation, not materials. Fibre is cheap. Digging up roads and installing street cabinets is not.

I think the endpoints are also expensive, or perhaps that has changed now?

Short-haul stuff (<30km or so) can use generic 1310nm SFPs that don't cost much more than the usual in-building multimode stuff we're all used to (under US$1,000 per end).

The price of DWDM gear has fallen, but not nearly as quickly as the cost of fiber in metro areas. While I have seen it a few times at large end-users, it's still pretty complicated, with support getting expensive. I've found it cheaper to buy additional fiber and use generic single-wavelength transmission.

This results in a "hollow-core photonic-bandgap fibre," which is made mostly of air

So what comprises the non-air bit?

Very thin glass walls. A few years ago, those 'photonic' cables were all the rage, because people thought the increased efficiency had to do with some interference effect happening in the core of the cable, until someone pointed out it was just the increased difference in index of refraction between the core and the outer wall. The small holes you see are usually so large that any interference effect with the wavelengths used in optical communication (around 1400nm) is out of the question.

Engineer: We need to replace our fiber cable with hollow-core cable to get a 30% improvement.
Stock Trader: That will be great - worth $30 million a month.

I doubt stock traders would be all that interested in this, since high frequency traders are typically located in the same buildings or blocks as markets, and thus their round-trip latency is not strongly impacted by speed of light limitations.

The problem is that you can't be in two different markets at the same time. If you want to trade in London's market using information received in NY, you actually have an advantage being in NY. We are talking about program trading, where they make trades that result in small gains, but make them incredibly huge and incredibly fast, so that you still make a lot of money in the end. If you have a few milliseconds' advantage, you can make trades in NY based on information that won't exist in London for 60ms, and make huge purchases.

Here's a Wired article that talks about this: http://www.wired.com/business/2012/08/f ... ading/all/

Project Express is spending $300 million to cut 5ms off the NY-to-London fiber route just for this purpose. You can bet these people are looking at this air-core fiber very closely.

What does the speed of light have to do with bits per second? Nothing surely.

Also, how does the speed of light in glass/air compare to the speed over copper cables? A bit of research says it depends on the grade of copper. I wonder how fast copper is in theoretical/laboratory experiments similar to this one.

Transmission density (bits per second) is unrelated to transmission speed (the propagation of the wavefront).

Speed in optical cable is approx. 0.68c.
Speed in copper cable is approx. 0.67c.
Speed in this new tech is approx. 0.97c.

This new tech offers approx. 142% of the propagation speed, which translates to a reduction in propagation delay of 30% (the faster rate takes 70% of the propagation time of the slower rate). So if the delay on copper or optical is 10 nanoseconds, then the delay on the new tech is 7 nanoseconds: a reduction of 3 nanoseconds in latency with no other changes.
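A minimal sketch verifying those ratios:

```python
old, new = 0.68, 0.97  # fractions of c, per the figures above

print(f"{new / old:.3f}")          # ~1.426: the "approx 142%" speed ratio
print(f"{old / new:.3f}")          # ~0.701: the new tech takes ~70% of the time
print(f"{10 * old / new:.1f} ns")  # a 10 ns glass-fiber delay becomes ~7.0 ns
```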

You can get more bandwidth with optical cable than with copper cable at similar diameters. This increased bit density is what gives optical the edge in bits per second. (The bit density is affected by the number of bitstreams that can be transmitted in parallel.)

This is interesting. What sort of latency difference would we see, provided they work out the kinks, in something like the trans-atlantic data latencies? I know London to New York is important because of the stock exchanges, such that they're even building that new line to shave a few milliseconds off.

The article claims optic fibre (which is mostly what we use now?) travels at around two thirds the speed of light in a vacuum. This technology boosts that up almost to the full "speed of light".

So, you'd see about a 30% reduction in latency.

However, processing on endpoints and at every router in between would remain the same, so the actual real world improvement would be less than 30%.

I don't think it will be cost effective for long cables anytime in the foreseeable future. 30% just isn't good enough, and we could achieve similar results just by routing the cables in a more direct path.

But data-centres are another story. A 30% increase in speed for two servers sitting right next to each other would be a great speed boost, and this technology is probably much more realistic at short cable lengths.

I disagree. I recall a recent article where trading firms were going to pay a premium for lower-latency connections over a new transatlantic cable. I also have clients who moved closer to the NYSE datacenter, or into it, for millisecond improvements in latency, since high-frequency trading demands fast access. If someone manages to get a transatlantic cable run that reduces latency by even 5%, someone will pay to use it.

I thought the laws of physics were more "guidelines" than actual rules.

Seriously though... I don't quite get it. Is the cable something like a hollow mirrored pipe, nothing down the middle and reflective on the inner surface?

Yes. Although the "mirrored" surface is a change in refractive index that causes the light to "bounce" instead of crossing the boundary. Normal optical cable uses this effect also to keep the light in the waveguide portion of the cable. Air does not slow the light as much as glass does, so an air filled waveguide does not slow the signal as much as a glass filled waveguide does.

I thought the laws of physics were more "guidelines" than actual rules.

Seriously though... I don't quite get it. Is the cable something like a hollow mirrored pipe, nothing down the middle and reflective on the inner surface?

Sort of. Ever notice that if you look at a window at an angle, it can reflect light? That's due to refraction. Beyond a certain angle, when light hits a change in material, refraction causes a complete reflection.

A fiber has two materials, a core and a cladding. In this case the core is air; light goes through the core, bouncing off the cladding. The light doesn't really travel slower than light speed; it travels a greater distance because it's bouncing back and forth, thus seeming slower than it should be.

So the diameter of the fiber must be precisely calculated? It can't be too large, or it will take the light longer to bounce from one mirror to the other, and not too small either, since that causes the light to bounce more on its way to the receiving end? Does anyone know the exact diameters of these experimental fiber cables? Is bigger better, or does it have to be exactly the right size?

There is no mirror per se. The difference between the refractive indices of the adjacent materials (core, cladding) creates an interface which promotes total internal reflection (by reducing the critical angle).
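For conventional solid-core fiber, the critical angle falls straight out of Snell's law; here is a small sketch with illustrative (assumed) refractive indices. Note that this total-internal-reflection mechanism is exactly what a hollow core can't use, since air's index is lower than the cladding's, which is why the photonic-bandgap structure discussed further down is needed:

```python
import math

def critical_angle_deg(n_core: float, n_clad: float) -> float:
    """Angle from the normal beyond which total internal reflection occurs.
    Only defined when n_core > n_clad."""
    return math.degrees(math.asin(n_clad / n_core))

# Illustrative indices for a doped-silica core and pure-silica cladding (assumed values)
print(critical_angle_deg(1.470, 1.455))  # ~81.8 degrees: only grazing rays stay trapped
```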

Why couldn't this be used in undersea cables? They currently use repeaters that are basically a section of laser material that is stimulated by a light source to just below lasing threshold. An incoming pulse of light pushes the material above that threshold, very briefly. The result is regeneration of the incoming light pulses.

I presume the air in the cable core is dry & dust-free. I would like to see a comparison of transmissivity of such air compared to that of the high-purity glass in ordinary undersea cables. Clean dry air is pretty darn transparent.

What does the speed of light have to do with bits per second? Nothing surely.

That's right. This is all about latency, not bandwidth. Latency matters because it's (a part of) the time taken from clicking your mouse, to seeing the result on your screen.

abhi_beckert wrote:

Also, how does the speed of light in glass/air compare to the speed over copper cables? A bit of research says it depends on the grade of copper. I wonder how fast copper is in theoretical/laboratory experiments similar to this one.

The speed of electrical signals in copper is comparable to that of light in a fibre-optic cable, but depends on the insulator. So, it ranges from 0.6c in a co-axial cable to almost the speed of light (in a bare cable). The problem with copper is bandwidth.

This is interesting. What sort of latency difference would we see, provided they work out the kinks, in something like the trans-atlantic data latencies? I know London to New York is important because of the stock exchanges, such that they're even building that new line to shave a few milliseconds off.

The article claims optic fibre (which is mostly what we use now?) travels at around two thirds the speed of light in a vacuum. This technology boosts that up almost to the full "speed of light".

So, you'd see about a 30% reduction in latency.

However, processing on endpoints and at every router in between would remain the same, so the actual real world improvement would be less than 30%.

I don't think it will be cost effective for long cables anytime in the foreseeable future. 30% just isn't good enough, and we could achieve similar results just by routing the cables in a more direct path.

But data-centres are another story. A 30% increase in speed for two servers sitting right next to each other would be a great speed boost, and this technology is probably much more realistic at short cable lengths.

On the other hand, the latency caused by the cable becomes significantly less relevant as the distance between the nodes decreases. What percentage of the latency between two servers sitting next to each other can be blamed on the slow cable between them, as opposed to all the processing?

This is interesting. What sort of latency difference would we see, provided they work out the kinks, in something like the trans-atlantic data latencies? I know London to New York is important because of the stock exchanges, such that they're even building that new line to shave a few milliseconds off.

The article claims optic fibre (which is mostly what we use now?) travels at around two thirds the speed of light in a vacuum. This technology boosts that up almost to the full "speed of light".

So, you'd see about a 30% reduction in latency.

However, processing on endpoints and at every router in between would remain the same, so the actual real world improvement would be less than 30%.

I'm not sure this makes sense. The article didn't say there was any extra overhead associated with using these over glass fiber, so the processing and routing times should be similar. It doesn't make sense that the processing time should be any higher, and if the processing time is the same, why wouldn't you see the 31% improvement in performance?

I wonder if there is any concern using these as undersea cables. I do not know how deep undersea cables run, but they must be down at least two atmospheres' worth of pressure. So the cores would not only need air, they would need clean, compressed air. Depending on the structure, a loss of pressure could result in collapse of the air core. I guess this is just another engineering hurdle they will have to overcome.

Traders can make billions by getting their trade in first after learning some key information, which is why all these extremely low latency routers are now being sold (in the nanoseconds). Saving 20ms, or 40ms round trip is a comparatively enormous amount of time, and gives a company's computer a chance to make quite a few trades before the competition.

I think once a higher speed trans-ocean link is available, all such users will buy access, thus negating any advantage.

Obviously, but in that situation, no one will be able to afford to not use it.

However, processing on endpoints and at every router in between would remain the same, so the actual real world improvement would be less than 30%.

[..]

But data-centres are another story. A 30% increase in speed for two servers sitting right next to each other would be a great speed boost, and this technology is probably much more realistic at short cable lengths.

Is it me, or is there a contradiction in your post? The shorter the "cable" distance, the smaller (in absolute terms) the gain, since the 30% saving is proportional; therefore the higher (proportionately) the impact of non-cable delays, like processing, gate switching, etc.

So I don't see a gain for data centers either, which I suspect (I could be wrong) are more constrained by bandwidth than by sub-ns latencies. In any case, data centers still rely on data stored in hard drives, SSDs, or even RAM, all of which have data access latencies far higher, I think, than what a 30% speed-of-light improvement could shave off of a short cable.

Where I do see a possible gain is supercomputing, HPC stuff. There, they build massive custom interconnects where latency is absolutely critical. Sure, the cables are still short (similar to the data center case), but the latency impact is arguably higher.

This results in a "hollow-core photonic-bandgap fibre," which is made mostly of air

So what comprises the non-air bit?

Very thin glass walls A few years those 'photonic' cables were all the rage because people thought the increased efficiency had to do with some interference effect happening in the core of the cable, until someone said it was just the increased difference in index of refraction between the core and the outer wall. The small holes you see are usually so large that any interference effect with the wavelengths used in optical communication (around 1400 nm) is out of the question.

This is interesting. What sort of latency difference would we see, provided they work out the kinks, in something like the trans-atlantic data latencies? I know London to New York is important because of the stock exchanges, such that they're even building that new line to shave a few milliseconds off.

The article claims optic fibre (which is mostly what we use now?) travels at around two thirds the speed of light in a vacuum. This technology boosts that up almost to the full "speed of light".

So, you'd see about a 30% reduction in latency.

However, processing on endpoints and at every router in between would remain the same, so the actual real world improvement would be less than 30%.

I'm not sure this makes sense. The article didn't say there was any extra overhead associated with using these over glass fiber, so the processing and routing times should be similar. It doesn't make sense that the processing time should be any higher, and if the processing time is the same, why wouldn't you see the 31% improvement in performance?

Because the total system latency is the sum of the routing, processing, and transmission latencies, reducing only one of those components by 30% doesn't reduce the overall system latency by 30%.
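A toy breakdown makes the point; the split between components here is invented purely for illustration:

```python
# Invented example split of end-to-end latency (fractions of the total).
routing, processing, transmission = 0.20, 0.30, 0.50

new_total = routing + processing + transmission * 0.70  # only the cable gets 30% faster
print(f"{new_total:.0%} of the original latency")        # 85%, i.e. a 15% end-to-end gain
```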

So why bother with filling it with air? Take the air out and get the last little 0.3 percent.

Install the cable. Vacuum out all the air, fuse a transceiver chip to the center of the bore. Long empty tube = empty space, vacuum. Speed of light restored.

If it's just air, there is no longer a waveguide. The physics of this fiber is different from that of a standard glass fiber with a high-index core, which works by total internal reflection.

This fiber contains the light in the core using a photonic bandgap structure: light within a certain wavelength range will not propagate through the structure, which means it cannot leak out of the core (in the ideal case). The fiber is not ideal, though, and is hard to manufacture compared to a typical glass fiber, so imperfections allow leakage and cause the reported loss. You won't see this replacing 'standard' fiber anytime soon.

Your logic fails due to the logic of people with money. In 2012, $1.5 billion was spent on a cable between London and Tokyo. There were other paths available, but this cable reduced latency by 60ms.

abhi_beckert wrote:

Ravenhaft wrote:

This is interesting. What sort of latency difference would we see, provided they work out the kinks, in something like the trans-atlantic data latencies? I know London to New York is important because of the stock exchanges, such that they're even building that new line to shave a few milliseconds off.

The article claims optic fibre (which is mostly what we use now?) travels at around two thirds the speed of light in a vacuum. This technology boosts that up almost to the full "speed of light".

So, you'd see about a 30% reduction in latency.

However, processing on endpoints and at every router in between would remain the same, so the actual real world improvement would be less than 30%.

I don't think it will be cost effective for long cables anytime in the foreseeable future. 30% just isn't good enough, and we could achieve similar results just by routing the cables in a more direct path.

But data-centres are another story. A 30% increase in speed for two servers sitting right next to each other would be a great speed boost, and this technology is probably much more realistic at short cable lengths.

There is another issue - it's air. At 2.5 miles below the ocean surface, the very properties that give this technology its advantage are compressed until the outer walls of the cable collapse so tightly together as to nearly bond molecularly.

I just can't see how this technology could ever become a transatlantic fiber. Perhaps it could survive hanging a safe distance below surface buoys, out of range of deep-sea fishing hooks, but it would still be risky for anything dragging a deep line. It sounds ideal, though, for close-proximity interface lines between racks in a mega-data center. I'd also be interested in the heating/cooling properties if the ambient air is 120+ degrees.

There is another issue - it's air. At 2.5 miles below the ocean surface, the very properties that give this technology its advantage are compressed until the outer walls of the cable collapse so tightly together as to nearly bond molecularly.

The choice of gas is not actually important; you just need something with a relatively low refractive index. If you need to operate at pressures beyond which nitrogen maintains a low index, you can just use something else with a higher vapor pressure (for example, helium). This sounds hard, but remember the holes are measured in nanometers, so the volume of gas is actually quite reasonable.
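To see how little gas is involved, a back-of-the-envelope sketch; the bore size is an assumed figure for illustration, not from the paper:

```python
import math

core_radius_m = 5e-6  # assumed ~10 micrometre bore, for illustration only
length_m = 1_000.0    # one kilometre of fibre

volume_m3 = math.pi * core_radius_m**2 * length_m
print(f"{volume_m3 * 1e6:.3f} mL of gas per km")  # ~0.079 mL: a trivial volume either way
```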

diverbuoy wrote:

It sounds ideal, though, for close-proximity interface lines between racks in a mega-data center. I'd also be interested in the heating/cooling properties if the ambient air is 120+ degrees.

What does the speed of light have to do with bits per second? Nothing surely.

Also, how does the speed of light in glass/air compare to the speed over copper cables? A bit of research says it depends on the grade of copper. I wonder how fast copper is in theoretical/laboratory experiments similar to this one.

I'm... I'm a little scared here.

Copper = Electrical Impulses transmitted through copper wire. This requires heavily insulated lines to diminish loss and repeaters to boost the signals over long distances.

Glass fiber = light transmission via a focusing source (the head end) through the cable, with far less resistance and extremely fast transmission rates. Because the cost of fiber is high, the overall use of fiber is limited - something Verizon discovered when they made a 100% fiber network and then got the bill (thus why FiOS is no longer expanding until they can get enough money to start up again...).

Hollow fiber, depending on the materials, may not only be faster but, due to the lack of a glass core, may actually be cheaper than conventional fiber optic connections.

The speed of light/electricity is extremely important for data transmission, as it is what the data is carried on. Rather than electricity, fiber optics use light to carry those bits to your house/network. If those bits can move faster, then bandwidth and transmission rates go up. So the speed of light has a lot to do with the speed of your network.

Even with all these "100% Fiber" that's measured outside your house before it hits the Hybrid Fiber/Coax devices that translate the light back to electrical impulses for devices like modems and such to use. So in the end you're still using Coax - it's that in between portion that speeds things up though. (Granted you're not going to get a full 1.46tbps.)

It isn't all hybrid fiber/coax. The ONT boxes (Optical Network Terminals) merely bridge the fiber optic connection to either MoCA (Multimedia over Coax Alliance... a stupid name for the medium, but it is basically ethernet over coax) or some variety of ethernet over "twisted pair" cabling: what one would normally refer to as 10BASE-T, 100BASE-TX, or 1000BASE-T (10Mb, 100Mb, or 1000Mb/1Gb).

In the case of my house, mine is set up as 1000BASE-T, i.e. I have my connection to my ONT box over Cat5e, connected to the RJ-45 port on the ONT. I do have a MoCA bridge elsewhere within my network to translate over my coax so that my TV set-top box can talk to the internets... but that is because Verizon is stupid/incompetent/couldn't care less and can't/won't enable the RJ-45 port on their set-top boxes, so you MUST use coax/MoCA if you want your set-top box to get guide information and video on demand.

This is not the default configuration anymore (it was very early in the FiOS roll-out), but there is nothing beyond running a cable and a phone call to Verizon stopping any customer from running ethernet over Cat cabling rather than over coax.

In my case I did actually notice a slight speed and latency improvement moving to Cat5e and my own router (Netgear 3500L) instead of coax and the Actiontec abortion, I mean router/MoCA bridge. It ain't much, but I am averaging about 2ms less to local connections (local being within about 300 miles of me), and I am hitting around 81Mbps up and 36Mbps down instead of the average of 76Mbps up and 35Mbps down.

This is about 30 different tests over several different days between the two.

In all likelihood most of that comes down to the router difference rather than the medium between the ONT and the router/LAN... but since your MoCA router options are basically two, you don't really have the option of plugging in a "better" one if you stay with coax/MoCA.

This is interesting. What sort of latency difference would we see, provided they work out the kinks, in something like the trans-atlantic data latencies? I know London to New York is important because of the stock exchanges, such that they're even building that new line to shave a few milliseconds off.

The article claims optic fibre (which is mostly what we use now?) travels at around two thirds the speed of light in a vacuum. This technology boosts that up almost to the full "speed of light".

So, you'd see about a 30% reduction in latency.

However, processing on endpoints and at every router in between would remain the same, so the actual real world improvement would be less than 30%.

I don't think it will be cost effective for long cables anytime in the foreseeable future. 30% just isn't good enough, and we could achieve similar results just by routing the cables in a more direct path.

But data-centres are another story. A 30% increase in speed for two servers sitting right next to each other would be a great speed boost, and this technology is probably much more realistic at short cable lengths.

On the other hand, the latency caused by the cable becomes significantly less relevant as the distance between the nodes decreases. What percentage of the latency between two servers sitting next to each other can be blamed on the slow cable between them, as opposed to all the processing?

3.3ns per meter of distance would be the induced latency. Which admittedly isn't much, but if you are talking about cabling up a supercomputer, that can be pretty significant.
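For what it's worth, 3.3ns/m is the figure for light at essentially vacuum speed; a conventional run at 0.7c works out closer to 4.8ns/m. A one-liner to check both:

```python
for f in (0.997, 0.70):  # hollow-core vs. conventional fiber, as fractions of c
    print(f"{1 / (3e8 * f) * 1e9:.1f} ns per meter at {f:.1%} of c")
# 3.3 ns/m and 4.8 ns/m respectively
```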

To quote the Wikipedia article: "Column Address Strobe (CAS) latency, or CL, is the delay time between the moment a memory controller tells the memory module to access a particular memory column on a RAM memory module, and the moment the data from the given array location is available on the module's output pins. In general, the lower the CAS latency, the better."

CAS 9 DDR3-1600 memory has an actual first-word latency of 11.25ns. Then you have to add in physical distance, memory controller latency, and what have you to get the true latency. From what I know of it, the memory controller doesn't add a huge amount, and the RAM is generally located pretty darned close physically, so distance adds little. So call it 11.25ns.
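That 11.25ns figure is just the CAS count divided by the memory's I/O clock; a quick sketch:

```python
def first_word_latency_ns(transfer_rate_mt_s: int, cas_cycles: int) -> float:
    """CAS cycles at the I/O clock, which runs at half the DDR transfer rate."""
    io_clock_mhz = transfer_rate_mt_s / 2
    return cas_cycles / io_clock_mhz * 1_000  # cycles / MHz -> nanoseconds

print(first_word_latency_ns(1600, 9))  # 11.25 ns for CAS 9 DDR3-1600
print(first_word_latency_ns(1600, 8))  # 10.0 ns for the CAS 8 example further down
```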

Now for a supercomputer, assuming 70% of c for the cabling and ignoring any network controller latency (which you would need to figure in): access to main memory might be 11.25ns, but suppose the server is trying to access memory on another server, located physically 1 meter away in an adjacent rack, for a computation just completed, and it has to wait for that transaction before doing the next operation. Assuming the information being returned is extremely small (say 8 bytes, i.e. one word), the difference between accessing local main memory and the server in the adjacent rack might be 5ns, an increase of about 40%.
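Putting those same assumptions into numbers (11.25ns local DRAM latency, 0.7c cabling, network controller lag ignored as the comment says):

```python
LOCAL_DRAM_NS = 11.25  # CAS 9 DDR3-1600 first-word latency, from above

def cable_delay_ns(meters: float, fraction_of_c: float = 0.70) -> float:
    """One-way propagation delay over the interconnect cabling."""
    return meters / (3e8 * fraction_of_c) * 1e9

neighbor = LOCAL_DRAM_NS + cable_delay_ns(1.0)   # server one rack (~1 m) away
print(f"{neighbor - LOCAL_DRAM_NS:.1f} ns extra "
      f"({neighbor / LOCAL_DRAM_NS - 1:.0%} over local DRAM)")  # ~4.8 ns, ~42%
print(f"{cable_delay_ns(20.0):.0f} ns")          # ~95 ns to the far end, 20 m away
```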

Of course, network controller lag and the lag of the memory, memory controller, etc. in the other machine is going to introduce a heck of a lot more lag than the physical distance... but what if you needed to grab a transaction from the other end of the supercomputer, which might be 20m away? That is 100ns of lag. Now THAT is an appreciable amount, especially if you are wasting CPU cycles sitting idle waiting for a result.

It isn't simply bandwidth within connections on a supercomputer that impacts how fast it can compute things: you also have software design (get things as distributed as possible, relying as little on other CPUs' datasets and computations as you possibly can), CPU/memory speeds, and the latency of the connections between physical nodes. Being able to reduce the physical-distance latencies by 30% could make a pretty significant difference in speed.

Heck, even the distance of the RAM on a board can impact the efficiency of a processor a fair amount. A 4GHz processor would wait something like 40 or so clock cycles to retrieve something from main memory with CAS 8 memory (DDR3-1600), not including memory controller latency and physical distance. A physical distance of 60mm (2.5 inches) adds an extra clock cycle to the lag, which granted is not a lot, but it is an increase of about 2.5% in overall wasted clock cycles waiting on main memory. Yes, lots is done in the L1, L2, and L3 caches (for those that have them), but those are pretty limited in size compared to main memory. Anything that can increase the efficiency helps.

If you look at GPGPU computing in combination with the CPU, the distance between a discrete GPU and the CPU likely adds at least 3-5 clock cycles of latency compared to a GPU on die with the CPU. That isn't even taking into account having to run through the northbridge and any latency it might add, or the fact that an on-die iGPU can potentially share cache with the CPU, or the issue of non-unified memory between the dGPU and the CPU, which can mean transmitting something to the CPU and then into main memory, and possibly having to re-access it from main memory afterward.
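The 60mm figure is easy to sanity-check if you assume an on-board signal speed of roughly half of c (an assumption; actual trace propagation speeds vary):

```python
CPU_GHZ = 4.0
SIGNAL_FRACTION_OF_C = 0.5  # assumed propagation speed along board traces

def extra_cycles(distance_m: float) -> float:
    """Clock cycles a 4 GHz CPU spends covering the one-way trace distance."""
    delay_s = distance_m / (3e8 * SIGNAL_FRACTION_OF_C)
    return delay_s * CPU_GHZ * 1e9

print(f"{extra_cycles(0.060):.1f}")  # ~1.6 cycles for 60 mm, in line with the comment
```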

Moving to optical interconnects ON BOARD a computer could even speed things up. Not likely to happen any decade soon, but it might. Beyond potential advances in bandwidth, which would be the primary motivating factor, reduced latency would not be a terrible thing either.