
Abstract

In super-resolution imaging techniques based on single-molecule switching and localization, the time to acquire a super-resolution image is limited by the maximum density of fluorescent emitters that can be accurately localized per imaging frame. In order to increase the imaging rate, several methods have recently been developed to analyze images with higher emitter densities. One powerful approach uses methods based on compressed sensing to increase the analyzable emitter density per imaging frame by several-fold compared to other reported approaches. However, the computational cost of this approach, which uses interior point methods, is high, and analysis of a typical 40 µm × 40 µm field-of-view super-resolution movie requires thousands of hours on a high-end desktop personal computer. Here, we demonstrate an alternative compressed-sensing algorithm, L1-Homotopy (L1H), which can generate super-resolution image reconstructions that are essentially identical to those derived using interior point methods in one to two orders of magnitude less time, depending on the emitter density. Moreover, for an experimental data set with varying emitter density, L1H analysis is ~300-fold faster than interior point methods. This drastic reduction in computational time should allow the compressed sensing approach to be routinely applied to super-resolution image analysis.
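
In the noiseless limit, the reconstruction problem described in the abstract can be sketched as basis pursuit: minimize the L1 norm of the emitter amplitudes subject to the measured image being reproduced exactly, with non-negative amplitudes. For non-negative variables this is a linear program. The toy 1D point-spread-function matrix, grid sizes, and the use of SciPy's `linprog` below are illustrative assumptions for this sketch, not the paper's implementation (which analyzes 2D data with CVX and L1-Homotopy).

```python
import numpy as np
from scipy.optimize import linprog

# Toy 1D model of the reconstruction step (the paper works on 2D images).
n_coarse, up, sigma = 8, 8, 1.0
fine = (np.arange(n_coarse * up) + 0.5) / up      # candidate emitter positions (px)
coarse = np.arange(n_coarse) + 0.5                # camera pixel centers (px)
# Columns of A: Gaussian PSF at each up-sampled position, sampled at camera pixels.
A = np.exp(-(coarse[:, None] - fine[None, :]) ** 2 / (2 * sigma ** 2))

x_true = np.zeros(fine.size)
x_true[[20, 27]] = 1.0                            # two emitters < 1 camera pixel apart
b = A @ x_true                                    # the measured low-resolution frame

# Basis pursuit as a linear program: minimize 1^T x  s.t.  A x = b, x >= 0.
res = linprog(c=np.ones(A.shape[1]), A_eq=A, b_eq=b, bounds=(0, None))
x_hat = res.x                                     # sparse high-resolution reconstruction
```

Because the L1 norm of a non-negative vector is just the sum of its entries, the objective reduces to a linear cost, which is why interior-point LP solvers apply to this problem at all.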

Figures (8)

Fig. 1 Schematic depiction of STORM imaging using compressed sensing. A subset of fluorescent
emitters (red dots; top left) from a labeled sample (bottom left) is activated
stochastically. The activated emitters are close enough that the individual emitters are not
distinguishable in a 4 pixel × 4 pixel conventional image (top left). Using compressed
sensing, a high resolution grid of fluorophore locations is reconstructed (top right) from
the low resolution image (top left) under the constraint that this reconstruction contains
the smallest number of emitters that can reproduce the measured conventional image up to a
given accuracy. This process is repeated and the individual reconstructed frames are summed
to produce the final high resolution reconstruction (lower right) of the original sample.

Fig. 2 One-dimensional (1D) illustration of the properties of the solution space exploited by
L1-Homotopy. (a) A simulated 1D image with two emitters before (blue) and after (red)
convolution with a Gaussian point-spread-function. (b) Results of the L1H analysis of the
image shown in (a) displayed as a kymograph of the amplitude of each up-sampled pixel, i.e.
potential fluorophore location (rows), as a function of the homotopy parameter,
λ. Note that as λ decreases, increasingly
favoring accuracy over sparsity, the single initial peak splits into two peaks, representing
two fluorophore localizations. (c) The amplitudes of two adjacent up-sampled pixels, i.e.
potential emitter locations, as a function of λ for the left emitter (top) and the right
emitter (bottom). The amplitudes of these pixels, depicted by red, blue, green and black
symbols, correspond to the pixels marked by red, blue, green and black arrows in (b). These
amplitudes are piecewise-linear functions of λ. The
break-points (dashed grey lines), where the slopes change, correspond to the addition,
removal, or movement of a possible emitter, i.e. where the amplitude of a pixel changes from
zero to non-zero or vice versa (marked by gray circles). Note that “movement”
of an emitter from one up-sampled pixel to another is actually accomplished by removing it
from the solution and then adding another emitter at a new location.
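
The behavior this caption describes can be reproduced with a toy non-negative lasso: sweeping the penalty λ downward grows the set of non-zero (active) pixels, splitting one broad peak into the underlying emitters. The sketch below simply re-solves from scratch at each λ with a projected soft-thresholding loop; L1-Homotopy instead follows the piecewise-linear solution path between break-points directly. All sizes and the solver are illustrative assumptions.

```python
import numpy as np

# Toy 1D setup: two emitters < 1 camera pixel apart, blurred by a Gaussian PSF.
n_coarse, up, sigma = 8, 8, 1.0
fine = (np.arange(n_coarse * up) + 0.5) / up
coarse = np.arange(n_coarse) + 0.5
A = np.exp(-(coarse[:, None] - fine[None, :]) ** 2 / (2 * sigma ** 2))
x_true = np.zeros(fine.size)
x_true[[20, 27]] = 1.0
b = A @ x_true

def nn_lasso(lam, n_iter=5000):
    """Non-negative lasso, min 0.5*||Ax-b||^2 + lam*||x||_1 over x >= 0,
    by projected iterative soft-thresholding (re-solved from scratch at
    each lam, unlike the homotopy path-following the paper uses)."""
    step = 1.0 / np.linalg.norm(A, 2) ** 2        # 1 / Lipschitz constant
    x = np.zeros(A.shape[1])
    for _ in range(n_iter):
        x = np.maximum(x - step * (A.T @ (A @ x - b) + lam), 0.0)
    return x

# As lam decreases, accuracy is favored over sparsity and the active set grows:
for lam in (1.0, 0.1, 0.01):
    active = np.flatnonzero(nn_lasso(lam) > 1e-3)
    print(f"lambda = {lam:<4}: active up-sampled pixels {list(active)}")
```

Printing the active set at each λ is a coarse, re-solved version of the kymograph in panel (b): the same pixels appear and disappear as λ crosses the break-points of the solution path.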

Fig. 3 Analysis of a sub-region of the image by L1-Homotopy. The wide-field fluorescence image
detected by the camera is first divided into partially overlapping sub-regions of 7 ×
7 camera pixels in size (a camera pixel is represented by a black box). One such region is
shown here. In the reconstruction of this region, we allow emitters to be placed within an 8
× 8 camera pixel space, corresponding to a 1/2 pixel extension in each direction from
the 7 × 7 camera pixel region, and the emitter positions are allowed on a finer,
8-fold up-sampled grid (red dots). To further limit errors that might arise due to edge
effects, only emitter localizations located within a central 5 pixel × 5 pixel block
defined by the thicker black line are kept. These non-overlapping, 5-pixel-wide,
up-sampled images are then stitched together to form the final STORM image.
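
The tiling scheme in this caption reduces to simple index arithmetic: pad the frame, step in `keep`-pixel strides, analyze a `region`-pixel patch, and retain only the central block of each up-sampled result. A minimal sketch, with a hypothetical `solve` callback standing in for the per-patch compressed-sensing reconstruction:

```python
import numpy as np

def tile_and_stitch(image, solve, region=7, keep=5, up=8):
    """Tile a camera frame into overlapping region x region patches whose
    central keep x keep blocks tile the frame exactly, run `solve` on each
    patch, and stitch the kept up-sampled blocks into one image.
    `solve` maps a patch to an up-sampled array of shape (region*up, region*up).
    Assumes the frame size is a multiple of `keep`."""
    h, w = image.shape
    margin = (region - keep) // 2                 # 1 camera pixel per side here
    padded = np.pad(image, margin)                # zero-pad to handle frame edges
    out = np.zeros((h * up, w * up))
    for i in range(0, h, keep):
        for j in range(0, w, keep):
            patch = padded[i:i + region, j:j + region]
            hi = solve(patch)                     # per-patch reconstruction
            core = hi[margin * up:(margin + keep) * up,
                      margin * up:(margin + keep) * up]
            out[i * up:(i + keep) * up, j * up:(j + keep) * up] = core
    return out

# Sanity check with a trivial "solver" that up-samples by pixel repetition:
up = 8
frame = np.random.rand(10, 10)
repeat = lambda p: np.kron(p, np.ones((up, up)))
hires = tile_and_stitch(frame, repeat)
```

With this trivial solver the stitched output equals a direct 8-fold up-sampling of the whole frame, which confirms that the overlap and crop arithmetic is self-consistent.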

Fig. 4 L1-Homotopy produces nearly identical reconstructions to interior point methods. (a-c;
left) Simulated, high-density, 7 × 7 camera pixel single-frame conventional image. The
locations of individual emitters are marked with the red crosses. (a-c; middle) The CVX
reconstruction of the emitter locations. (a-c; right) The L1H reconstruction of the same
images. The arrows in (c) highlight a rare difference between the two solutions. (d)
Histogram of the distance between an emitter in the CVX reconstruction and the nearest
emitter in the L1H reconstruction for the same simulated data. The histogram is plotted for
three different emitter densities. (e) The average distance between every emitter in the CVX
solution to the nearest emitter in the L1H solution as a function of emitter density. (f) The
average fractional difference in the number of emitters found in the CVX solution and the L1H
solution as a function of emitter density. (g) The average percent difference between the L1
norm of the CVX solution and the L1H solution. (h) The average percent difference between the
residual image error of the CVX and the L1H solutions. (e)–(h) Both the mean (blue)
and median (red) are provided. Error bars represent standard deviation measured directly
(blue) or estimated from the inter-quartile range (red).
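
The mismatch metric of panels (d) and (e), the distance from each emitter in one reconstruction to the nearest emitter in the other, can be sketched as follows (the point lists below are hypothetical, not data from the paper):

```python
import numpy as np

def nearest_distances(ref, other):
    """For each point in `ref` (N x 2), the distance to the nearest point
    in `other` (M x 2). Brute force, fine for per-frame emitter counts."""
    d = np.linalg.norm(ref[:, None, :] - other[None, :, :], axis=-1)
    return d.min(axis=1)

# Hypothetical emitter lists from two reconstructions of the same frame,
# in up-sampled-pixel units.
cvx = np.array([[1.0, 1.0], [4.0, 5.0], [9.0, 2.0]])
l1h = np.array([[1.1, 1.0], [4.0, 5.2], [8.0, 2.0]])
dists = nearest_distances(cvx, l1h)
print(dists.mean())                               # average mismatch, here ~0.433
```

Note that the metric is asymmetric: distances from the CVX emitters to the L1H reconstruction need not equal the distances in the reverse direction, which is why the figure fixes one solution as the reference.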

Fig. 5 Comparison of the reconstructed emitter density and localization error derived from L1H,
CVX, a single-emitter fitting algorithm, and a multi-emitter fitting algorithm. (a) The
density of reconstructed emitters as a function of the density of simulated emitters for a
single-emitter fitting algorithm (green squares), a multi-emitter fitting algorithm (blue
pluses), L1H (red crosses), and CVX (black diamonds). The dashed black line has a slope of
1. (b) The XY localization error for each algorithm labeled as in panel (a). The two panels
in (a) and (b) cover different density ranges.

Fig. 6 L1-Homotopy reconstructs images with a substantially higher speed than interior point
methods, but is slower than the emitter fitting algorithms. (a) The average analysis time for
a 256 × 256 camera pixel image as a function of emitter density for the two compressed
sensing algorithms CVX (black diamonds) and L1H (red crosses) as well as a multi-emitter
fitting algorithm (blue pluses) and a single-emitter fitting algorithm (green squares). (b)
The ratio of the analysis time per frame for CVX to L1H as a function of emitter density.

Fig. 7 L1-Homotopy analysis of experimental STORM data. (a) A sub-area of a single frame of a
high-emitter-density STORM data set acquired from Alexa-647-labeled microtubules in BS-C-1
cells. Individual molecules found with L1H (green circles), CVX (blue crosses), and a
single-emitter localization algorithm (red crosses) are plotted. (b) Average analysis time
per frame (256 × 256 camera pixel) for L1H and CVX estimated from the analysis time
for the first 10 frames of a 5000-frame STORM movie of microtubules in a BS-C-1 cell. CVX
takes 340-fold longer than L1H to analyze these frames. (c) The full reconstructed STORM
image of this data set using a single-emitter localization algorithm. (d) The same data set
reconstructed with L1H. The red arrows and arrowheads indicate two regions with high and
low microtubule density, respectively. (e) A zoom-in of the area outlined by the red box in
(c). (f) A zoom-in of the same area outlined by the red box in (d). The image reconstructed
by the L1H compressed sensing algorithm is much smoother than that by the single-emitter
localization algorithm because roughly 4-fold more fluorophores are localized by the L1H
algorithm. Scale bars are 10 µm in (c, d) and 1 µm in (a, e, f).

Fig. 8 A comparison of different analysis methods on simulated STORM data. (a) A single frame of
the STORM movie. (b) The true locations of the emitters used for the simulation. (c) The
STORM image reconstructed using a single-emitter fitting algorithm that can only handle
sparse emitter densities. (d) The STORM image reconstructed using a multi-emitter fitting
algorithm, DAOSTORM, that can handle moderate-to-high emitter densities. (e) The STORM image
reconstructed using the deconSTORM algorithm. (f) The STORM image reconstructed using the L1H
algorithm. All scale bars are 500 nm. The simulated STORM movie had an average density of 5
emitters/µm² and was 200 frames long.