Technology for quantum key distribution helps make sense of single photons.

The goal of new telescopes is usually to resolve more detail at greater distances. The most straightforward way to do this is simply to make telescopes larger. Unfortunately, the sheer weight of the mirrors makes this a path fraught with difficulty. Despite that, there are consortia attempting to do just this.

An alternative is to combine the light from different telescopes. The interference between the light fields captured by separate telescopes creates an image with details that would ordinarily require a telescope with a diameter similar to the separation between the telescopes. This sounds simple, but the light fields are very weak, and the losses involved in transporting and combining them limit separations to a few hundred meters today. A trio of Canadian researchers has proposed using technologies being developed for quantum key distribution to greatly extend the allowable distance between telescopes.

How does interference help?

Imagine we have a pair of objects very close to each other but at a great distance from our two telescopes. According to the individual telescopes, there is only a single object, because neither has sufficient resolving power to separate the pair. When we use the two telescopes together, though, things change. The light that travels to the first telescope covers a slightly different distance than the light that travels to the second telescope. That difference depends on the direction from which the light comes, which is slightly different for the two objects. When the light from the two telescopes is mixed, the brightness of the pattern depends on that path difference: if it is an integer number of wavelengths, we get a bright pattern, and as we move away from that magic number, the pattern dims and eventually disappears.

When we mix the light from the two telescopes, we can introduce an additional delay to the light from one telescope so that we can choose to get a bright pattern. In our case, with two objects, each one produces its own repeating pattern of bright and dark fringes as the delay is varied. The mixture of those two patterns tells us that there are two objects, not one. It also tells us the angular separation of the two objects. Hence our two (rather poor) telescopes have resolved objects that would normally require a much more expensive telescope.
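The delay-scanning picture above can be sketched numerically. This is a toy model, not the researchers' setup: the baseline, wavelength, bandwidth, and source angles below are all invented for illustration. With broadband light, each source produces a fringe "packet" centered where the instrumental delay cancels that source's geometric path difference, so two unresolved sources show up as two packets.

```python
import numpy as np

# Illustrative numbers (all assumed): two point sources too close for a
# single small telescope to resolve, observed with two telescopes 100 m
# apart in broadband visible light.
lam = 550e-9            # central wavelength, m
dlam = 50e-9            # bandwidth, m
coherence_len = lam**2 / dlam   # ~6 microns: width of each fringe packet
baseline = 100.0        # telescope separation, m
theta = [0.0, 2.0e-7]   # source directions, rad (~0.04 arcsec apart)

delay = np.linspace(-10e-6, 40e-6, 20000)  # scanned extra path in one arm, m

def packet(th):
    # Net path difference for a source at angle th: the geometric term
    # baseline * th (small-angle) minus the instrumental delay.
    x = baseline * th - delay
    envelope = np.exp(-(x / coherence_len) ** 2)  # finite-bandwidth envelope
    return 1 + envelope * np.cos(2 * np.pi * x / lam)

intensity = sum(packet(th) for th in theta)

# Each packet peaks where delay = baseline * theta_i, so the distance
# between the two bright regions encodes the angular separation:
separation = baseline * (theta[1] - theta[0])   # 2e-5 m between packet centres
```

Scanning the delay thus does double duty: it finds the bright fringes, and the spacing of the two packets gives the angle between the objects.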

That's already awesome—why add quantum-y stuff?

Assuming that the light from stars is purely classical, I was a little puzzled at how the physics behind quantum key distribution might help in this respect. It turns out that as you crank up the resolution of the telescope system, you see more features, but the total amount of light doesn't increase that much, so the total number of photons per mode (or feature) goes down until it is much less than one. This makes starlight very much like a quantum light source.
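The "photons per mode" claim is easy to check with the standard Bose-Einstein occupation for thermal light; the only assumption below is a Sun-like photosphere temperature of 5,800 K.

```python
import math

# Mean photon number per mode of thermal (blackbody) light:
# n = 1 / (exp(h*nu / (k*T)) - 1)
h = 6.626e-34   # Planck constant, J s
k = 1.381e-23   # Boltzmann constant, J/K

def photons_per_mode(freq_hz, temp_k):
    return 1.0 / (math.exp(h * freq_hz / (k * temp_k)) - 1.0)

# Green light (~550 nm, i.e. ~545 THz) from a 5,800 K stellar surface:
n_optical = photons_per_mode(5.45e14, 5800)   # roughly 0.01 photons per mode
```

At around a hundredth of a photon per mode, starlight in the optical really does behave like the single-photon sources of quantum optics.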

But with so few photons, it's impossible to build up the interference patterns used to separate objects. Indeed, every photon counts, so the last thing you want to do is throw them into an optical fiber and have them absorbed by a defect in the fiber. Instead, the researchers have proposed using entangled photons to perform the interference measurement at each telescope without having to transport the starlight.

Quantum entanglement

Quantum entanglement is one of the most misused concepts around. Entanglement is delicate, rare, and short-lived. At its heart, quantum entanglement is nothing more or less than a correlation between two apparently separate quantum objects. Having discovered that, you might ask "so what is all the fuss about?" The answer lies deep in quantum mechanics.

The basic idea is that, somewhere between the two telescopes, we generate entangled photons on Earth and send them to a randomly chosen telescope over a fiber-optic cable whose length can be varied. At each telescope, the generated photon is mixed with the starlight on a beam splitter. Half the time, the stellar photon and the generated photon go to the same telescope, and the result is thrown away. The other half of the time, the two photons go to different telescopes, resulting in one of two detectors clicking at each telescope. The interference pattern is built up by looking for simultaneous clicks at pairs of detectors as a function of the fiber-optic cable length.
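A Monte Carlo cartoon of the coincidence measurement looks like the following. To be clear, this is a sketch and not the researchers' protocol: the 50/50 beam-splitter fringe formula is textbook single-photon interference, but the telecom wavelength is an assumption and the fiber's refractive index is ignored for simplicity.

```python
import numpy as np

# Cartoon: the phase between the stellar photon and the Earth-made photon
# is set by the extra fiber length, and the probability of a simultaneous
# click at a given detector pair oscillates as (1 + cos(phi)) / 2.
rng = np.random.default_rng(0)

lam = 1550e-9                                # assumed telecom wavelength, m
fiber_lengths = np.linspace(0, 2 * lam, 40)  # scanned extra fiber, m
trials_per_setting = 5000

coincidence_fraction = []
for extra in fiber_lengths:
    phi = 2 * np.pi * extra / lam            # phase from the fiber delay
    p_same = 0.5 * (1 + np.cos(phi))         # same-pair coincidence probability
    clicks = rng.random(trials_per_setting) < p_same
    coincidence_fraction.append(clicks.mean())

# coincidence_fraction traces out the fringe (1 + cos(phi)) / 2:
# maximal when the fiber delay is an integer number of wavelengths.
```

Accumulating coincidence counts while scanning the fiber length recovers the same fringe that direct mixing of the starlight would give, without the starlight ever leaving its telescope.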

This scheme has the advantage that the photons sent between the telescopes are generated here on Earth and can easily be replaced. Indeed, single photons can currently be sent reliably over distances of 70-100 km, a massive increase over the few-hundred-meter limit in stellar interferometry. But the scheme automatically throws away half of the stellar light—enough to make any astronomer cry. And single-photon sources are simply not very good at the moment; they don't produce usable photons at the rate required. Indeed, the researchers acknowledge this and calculate that the single-photon source would need to produce photons at about 150 GHz to reach a sensitivity comparable to today's stellar interferometers. For reference, current single-photon sources are in the MHz range.
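To put those two rates side by side (both numbers are from the article):

```python
# Required vs. available single-photon rates
required_rate = 150e9   # photons per second (~150 GHz, per the proposal)
current_rate = 1e6      # photons per second (MHz-class sources today)
shortfall = required_rate / current_rate   # sources need to improve ~150,000x
```

A five-order-of-magnitude gap is why the authors frame this as a long-term technology driver rather than a near-term instrument.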

So why is this interesting if it's so far away from a practical implementation? The technology that the team is proposing is exactly what people are working on in quantum information technology, but the requirements are much more stringent for stellar interferometry. These ideas, then, act as a technology driver for those working on the practical side of quantum information technology.

This is probably a dumb question, but I've always wondered why the interference has to be done in real time? Why not have the data recordings synchronized to some extremely accurate atomic clock, the same kind and tuned the same at each location. The data would later be combined, using the sync codes, and could be done at leisure. This means the separation of the telescopes could be much greater and you don't have to worry about photons arriving at the detector at the same moment. You also don't need to introduce exotic methods such as quantum entanglement. I must be missing something here, seems like too simple of a solution.

I think that for this to work, you would need to know the instantaneous phase information of the light at all times over the course of the integration. For a single telescope, all that is recorded is an image of the integrated photon intensity with no easy way to obtain the phase information (in fact, I believe you need interferometry to even obtain the phases). As mentioned in the article, interferometry works by adjusting two (or more) telescope beam path lengths such that the relative phases are just right to produce a desirable interference pattern, which is then recorded and used in part to determine the separation of two otherwise unresolvable objects. Beam path adjustments need to be accomplished in real time due to the changing position of the object on the sky (telescopes need to follow and adjust the paths in order to keep the path length difference constant) as well as instrument flexure from changing environmental conditions. We're talking micron-level precision, depending on the wavelength of the light. And the only way to monitor the relative phase information is to actually interfere the light before it reaches the detector.

Edit: I guess I'm saying what about "asynchronous phase detection for interferometry". If you can capture a large enough amount of phase data on the front end it might work. I did a google search but didn't have any hits specifically about this in the first few pages.

The interference pattern is built up by looking for simultaneous clicks at pairs of detectors as a function of the fiber optic cable length.

How do they know it's simultaneous if there's a distance separating the telescopes? Or is there some minimum distance before relativity starts to become an issue?

I think the photon relays are equidistant between the detectors and the interferometer. Because the speed of light is constant, and the distance between the telescopes is known, they can tell which photons are simultaneous vs which are not.

This is probably a dumb question, but I've always wondered why the interference has to be done in real time?...

This is not a dumb question at all. It turns out recording the light phase at multiple telescopes is exactly what is done at radio frequencies, and is why they can use telescopes located all over the world (they literally record the information to disk with an atomic timestamp and FedEx the disks to a central location to interfere later).

While there are technical reasons this is easier at radio, it turns out there is a quantum limit too. Radio relies on the number of photons per quantum mode being high (as deftly mentioned in the article) and the photons clumping (Bose-Einstein statistics). As one gets to infrared and optical, the number of photons per mode for a typical astronomical source falls below one. In this limit it is more effective (sensitive) to interfere photons first and then record them, vs. recording their phase and then interfering. Basically, there is a transition at about one photon per mode: below it, you combine the photons and then detect; above it, you can detect the photon phase first and interfere later.
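The radio/optical split described here drops straight out of the standard Bose-Einstein occupation for thermal light, n = 1/(exp(h*nu/(k*T)) - 1); the Sun-like temperature of 5,800 K below is an assumption for illustration.

```python
import math

h, k = 6.626e-34, 1.381e-23   # Planck and Boltzmann constants, SI units

def n_per_mode(freq_hz, temp_k=5800.0):
    # expm1 keeps the low-frequency (radio) limit numerically accurate
    return 1.0 / math.expm1(h * freq_hz / (k * temp_k))

n_radio = n_per_mode(1.4e9)      # ~1e5 photons/mode: record phase, interfere later
n_optical = n_per_mode(5.45e14)  # ~0.01 photons/mode: interfere first, then detect
```

Tens of thousands of photons per mode at 1.4 GHz versus a hundredth of a photon per mode at optical frequencies is the transition the comment describes.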

As one gets to infra-red and optical, the number of photons per mode for a typical astronomical source falls below one.

Is this a limitation of our collecting instruments (the telescope mirrors) or is this a result of the sources themselves (e.g. we are just too far away to get sufficient numbers of photons)?

I wish I had more than a cursory understanding of quantum mechanics. I think I grasped your overall point but I was unsure what a quantum mode was. A google search led me here: http://en.wikipedia.org/wiki/Normal_mode but the math was a bit too heavy. I only got as far as differential equations in college and didn't do so well on my first shot, and that was a decade ago. If I could do it all again I'd be way more serious in my studies and try to pursue a career in physics...

"For those of you who have absolutely no understanding of Quantum anything, this is another problem that has largely been solved by Magic. LOOK AT THIS PICTURE!!!!!!!!!!"

Tho seriously...if I read it right ... they use two equally powerful telescopes at slightly different distances from the object to be viewed, delay the passage of light thru one of them in order to create a brighter, clearer image than either of the single telescopes could create....

I know I am missing something here because that sounds more or less like a fancy version of a how a pair of eyes work.

Anyone able to dumb it down more for me? I was under the impression we already did this with regular earthbound telescopes and radio scopes, minus the slow down the light trick....

EDIT:

Ok, re-read it. We are doing what we thought; it's just that the system to do it loses light and resolution, more or less.

So they are using entangled particles to measure the light that comes thru, and compare them..

Chris Lee / Chris writes for Ars Technica's science section. A physicist by day and science writer by night, he specializes in quantum physics and optics. He lives and works in Eindhoven, the Netherlands.