Miniature bouncing tennis balls reveal cellular interiors

Imaging a cell by floating a particle through it and seeing what it bounces off.

I admit it, I love my job(s). I love doing science, and I love reporting science. In particular, I love it when my expectations are confounded, as they recently were by a paper I read. Going in, I was expecting to see some nice results, not to learn anything truly new, since the authors have been working on this for a long time.

What I found were results that are still a bit preliminary. But the authors also introduced me to a whole new idea, one that everyone but me probably knew about. It turns out that there are people who use the random motion of little beads in cells to map out cellular interiors. Simply introducing quantum goodness to the measurement process can draw the inside of a cell with a precision that blows away most optical microscopes. Add in a dash of over-enthusiasm on the part of the authors, and you get a bit of research I can really get excited about.

Randomness generates a map

Imagine that you want to explore a house. A house consists of rooms with furniture, light fittings, curtains, and other things. Unfortunately, you are not allowed to look inside the house, but you can track magic tennis balls. These tennis balls bounce around inside the house constantly. When they land on something soft, like a bed, or get tangled up in the curtains, they still bounce, but the bounces are much smaller. Hard objects make for big bounces.

As you track the average location of the tennis balls, you map out the walls, beds, curtains, doors, and windows (the tennis balls that exit the window never return, as is often the case in real life). By looking at the statistics of the tennis balls' motion on short time scales, you can tell when a ball was in proximity to soft furnishings versus hard walls. With good enough measurements, you could distinguish rooms with carpeted floors from those with hardwood floors.

It turns out that you can do the same thing in a cell. A small bead will be subject to the same forces that drive Brownian motion. As a result, it will diffuse around the cell, bouncing off membranes, passing through gaps, and getting trapped in confined spaces. By analyzing the motion of the particle at different timescales, you can figure out the mechanical environment that the bead finds itself in. When it is surrounded by lots of proteins, it will feel confined and diffuse more slowly, while in open space (surrounded by water) it will diffuse faster. This analysis can then be mapped back to cellular components, creating an image of the cellular interior.
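The short-timescale analysis described above can be sketched in a few lines of code. This is a toy model, not the authors' analysis: the diffusion coefficients, time step, and window size below are all made-up numbers chosen to show how the local mean squared displacement of a bead's track reveals whether it was in a crowded or an open region.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical parameters: a bead diffusing in 1D through a "crowded"
# region (small diffusion coefficient D) and an "open" region (large D).
dt = 1e-3                      # time step, seconds (illustrative)
n = 20000                      # number of steps
D_crowded, D_open = 0.1, 1.0   # diffusion coefficients, um^2/s (illustrative)

# Build a trajectory: first half crowded, second half open.
# For 1D Brownian motion, each step has variance 2*D*dt.
D = np.where(np.arange(n) < n // 2, D_crowded, D_open)
steps = rng.normal(0.0, np.sqrt(2 * D * dt))
x = np.cumsum(steps)

def local_diffusion(x, dt, window=500):
    """Estimate D along the track from the short-time mean squared
    displacement <dx^2> = 2*D*dt, averaged in a sliding window."""
    dx2 = np.diff(x) ** 2
    kernel = np.ones(window) / window
    msd = np.convolve(dx2, kernel, mode="valid")
    return msd / (2 * dt)

D_est = local_diffusion(x, dt)
print(D_est[:100].mean())   # close to D_crowded: the bead felt confined here
print(D_est[-100:].mean())  # close to D_open: the bead was in open space
```

Mapped back onto positions in the cell, estimates like these are what get turned into an image of the interior.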

It sounds great, but, in practice, you would be waiting a very long time to get that image. To speed things up, you trap the particle using a laser. The particle will still jiggle about, and the size of the jiggling motion still reflects the cellular environment, but you get to set the average position using the laser beam. To map out the cell, you simply raster scan the laser over the cell, and at each location, calculate the statistics of the particle's random motion. This doesn't take nearly so long as relying on purely random motion.
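The raster-scan idea can be sketched the same way. Again, this is a cartoon, not the researchers' setup: the "cell" below is just a synthetic 2D map in which a central crowded blob damps the trapped bead's jiggle, and the image is simply the variance of that jiggle at each trap position.

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical "cell": a 2D stiffness map where a central blob lets the
# trapped bead jiggle less than the open surroundings (arbitrary units).
ny, nx = 32, 32
yy, xx = np.mgrid[0:ny, 0:nx]
crowded = ((xx - 16) ** 2 + (yy - 16) ** 2) < 8 ** 2
jiggle_amp = np.where(crowded, 0.2, 1.0)

# Raster scan: park the trap at each pixel, record the bead's random
# motion for a while, and store the variance of that motion as the pixel.
samples_per_pixel = 200
image = np.empty((ny, nx))
for i in range(ny):
    for j in range(nx):
        jiggle = rng.normal(0.0, jiggle_amp[i, j], samples_per_pixel)
        image[i, j] = jiggle.var()

# Low variance marks the crowded blob; high variance marks open space.
print(image[16, 16], image[0, 0])
```

The quality of the image comes down to how well each pixel's variance can be estimated, which is why tracking accuracy is the whole game.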

The key to this technique, though, is determining the particle position as accurately as possible as a function of time.

A quantum locator

To understand how the researchers improved their position detector, we need to understand a little bit about light. Light always comes in something called a mode. A mode defines the electric field amplitude, phase, and polarization as a function of time and space. When light encounters the bead in the optical trap, it scatters—and by scattering, we mean that some of the light goes from one mode to another. Our positional accuracy is determined by how well we can separate out the different modes and detect them.

This job is made easier because not every aspect of the mode is sensitive to the particle position, so we only really care about the spatial mode—that is, the shape of the laser beam after it is scattered. In principle this sounds easy. We make a light beam that is in a single spatial mode. When this scatters from the particle, other modes show non-zero intensity, making our measurement incredibly sensitive (detecting light against a dark background is always the best measurement).

This is where quantum mechanics gives us a good kicking. The amplitude and phase of the electromagnetic field form a pair that are subject to the Heisenberg uncertainty limit, meaning that we cannot know both to arbitrary accuracy. So when a laser produces light, it has a certain amount of natural noise in the phase and amplitude. The amplitude noise shows up as light in different spatial modes, reducing the accuracy of our position sensor.

What quantum mechanics taketh away, it also gives back, but in the most difficult manner possible. The joint uncertainty in the phase and amplitude cannot go below a certain limit, but that doesn't mean the noise has to be split evenly between them: you can make one property much quieter than an even split would allow. The price you pay is that the other property becomes very noisy. Light prepared this way is called squeezed light.
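In the standard textbook treatment (one common convention; others rescale the bound), this trade-off between the two field quadratures looks like the following:

```latex
% Uncertainty relation for the field quadratures X_1 (amplitude-like)
% and X_2 (phase-like):
\Delta X_1 \, \Delta X_2 \ge \tfrac{1}{4}

% Coherent (ideal laser) light splits the noise evenly:
\Delta X_1 = \Delta X_2 = \tfrac{1}{2}

% Squeezed light keeps the product at the limit but redistributes it;
% r > 0 is the squeezing parameter:
\Delta X_1 = \tfrac{1}{2} e^{-r}, \qquad \Delta X_2 = \tfrac{1}{2} e^{+r}
```

The product of the uncertainties never drops below the bound; squeezing just moves the noise into the quadrature you don't care about.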

The researchers, from Australia, used a light source that was amplitude squeezed, meaning that the spatial mode was much cleaner than that from even a good laser, boosting the signal-to-noise ratio of the particle tracking (this comes at the cost of the light's phase, which becomes very noisy).

At the end of the day, the results were not too impressive. The researchers showed that, for particles in water, they could resolve the position some 14 percent better than is possible using an ordinary laser. In real terms, that 14 percent corresponds to a positional accuracy of around 10 nm, which is about as good as some popular advanced microscopy techniques, and much better than ordinary imaging. Unfortunately, they don't actually do any imaging, because their experimental setup limited them to 1D line scans.

As a result, this paper has the dubious distinction of being the first imaging paper I have read that didn't have any images.

The big deal comes from the prospects. Squeezing is measured by comparing the actual noise to that of a coherent light source, one for which the noise in the phase and the noise in the amplitude are both as small as quantum mechanics allows. In this experiment, the noise in the amplitude was reduced by a factor of two. However, in other experiments, squeezing has reduced the noise much further. Should those sources be implemented in an imaging setup, we could expect resolution improvements well beyond those reported here.

Unfortunately, there is a long way to go before that occurs, because the timescale over which the squeezing occurs matters. On the microsecond scale, the amplitude noise can be reduced by a factor of 10 or more, but diffusion in the cell happens on the timescale of milliseconds, and it is much more difficult to get significant squeezing at these longer timescales. The fact that the researchers managed a factor of two is impressive. I am sure things will get better, but I don't imagine that this will happen very soon.
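For readers used to seeing squeezing quoted in decibels, those factors convert straightforwardly (the factors here refer to noise variance, as is the usual convention):

```latex
% Squeezing in dB from the noise-variance ratio:
S_{\mathrm{dB}} = -10 \log_{10}\!\left(\frac{V_{\mathrm{squeezed}}}{V_{\mathrm{coherent}}}\right)

% A factor of two, as in this experiment:
-10 \log_{10}(1/2) \approx 3\,\mathrm{dB}

% A factor of ten, as achieved at microsecond timescales:
-10 \log_{10}(1/10) = 10\,\mathrm{dB}
```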

Then there is the question of interpreting the results. Let's imagine that you can get 0.1 nm resolution. A small protein has a diameter of ~3 nm, so you are forced to interpret the results in terms of not just the position of the protein, but also its orientation and shape. Yet the measurement actually yields a single number: the diffusion rate. Interpreting a highly variable environment in terms of a single number (even one that varies in space) seems a situation ripe for confusion.

Chris Lee / Chris writes for Ars Technica's science section. A physicist by day and science writer by night, he specializes in quantum physics and optics. He lives and works in Eindhoven, the Netherlands.