Using Adaptive Optics to Perform Deep-Tissue Cell Imaging

Monday, 01 July 2013

In biological imaging, the object of interest is not always in plain sight. This is especially true when performing microscopy of cells buried deep within tissues. The tissues covering the cells tend to bend and scatter the light passing through them, which prevents acquiring sharp images of the cells below. To overcome this, researchers at a leading science campus developed a novel way to use adaptive optics in microscopy.

Adaptive Optics Imaging

Figure 1. A simplified schematic of the adaptive optics microscope. The Vision Development Module processes data and outputs adaptive optics parameters to the SLM via its video out port. Multiple DAQ modules are connected via an RTSI cable, increasing channel count without sacrificing tight synchronization.

Adaptive optics is a technique for correcting imaging aberrations. It is most famously used in astronomy, where ground-based telescopes use deformable mirrors to compensate for the atmosphere bending and scattering starlight. In biological microscopy, the technique can be adapted to correct for biological aberrations: many images are first taken to characterize the aberration that each part of the light beam experiences as it travels through the sample. A spatial light modulator (SLM) then divides the light beam into segments and steers each segment back into sharp focus.
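The segment-by-segment characterization described above can be sketched in Python. Everything here is a toy illustration — the segment count, candidate tilt range, and Gaussian sharpness metric are made up for the example, and the actual instrument software was written in LabVIEW. The sketch scans candidate SLM tilts for each pupil segment and keeps the tilt that maximizes an image-sharpness score:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical setup: the pupil is divided into 16 SLM segments, each with
# an unknown wavefront tilt introduced by the overlying tissue.
N_SEGMENTS = 16
true_tilt = rng.uniform(-1.0, 1.0, N_SEGMENTS)  # unknown aberration per segment

def image_sharpness(correction):
    """Toy stand-in for a sharpness metric computed from a trial image:
    residual aberration degrades focus, so the score peaks when the
    applied correction matches the true tilt in every segment."""
    residual = true_tilt - correction
    return np.exp(-np.sum(residual ** 2))

# Characterize each segment independently: apply candidate tilts on the SLM,
# take a trial image for each, and keep the tilt that sharpens the image most.
candidates = np.linspace(-1.0, 1.0, 41)
correction = np.zeros(N_SEGMENTS)
for seg in range(N_SEGMENTS):
    trial = correction.copy()
    scores = []
    for c in candidates:
        trial[seg] = c
        scores.append(image_sharpness(trial))
    correction[seg] = candidates[np.argmax(scores)]

# The assembled correction pattern is what would be written to the SLM.
print(np.max(np.abs(correction - true_tilt)))  # residual tilt error
```

With a candidate grid spacing of 0.05, the recovered correction lands within half a grid step of each segment's true tilt.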

To build the microscope's control platform, the researchers engaged Coleman Technologies Inc. (CTI). CTI is a National Instruments Alliance Partner with extensive expertise in scientific application development, serving numerous universities, foundations, and medical device and pharmaceutical companies. With several PhD scientists on staff, CTI is experienced in developing advanced imaging systems and custom image analysis software.

Synchronization, Speed, and Growth

The two-photon microscope creates images by raster scanning the focus of a laser in a 2D plane within the sample (Figure 1). Each position of the laser during the raster scan corresponds to one pixel in the final image, and the value of each pixel corresponds to the fluorescence detected from the sample at that position. To measure the fluorescence, the microscope uses confocal geometry and photomultiplier tubes (PMTs). To scan many images per second, we needed to synchronize the laser control and PMT data acquisition at megahertz sampling rates.
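Because the DAQ clock ties every PMT sample to a known laser position, the sample index alone determines which pixel a sample belongs to. The following Python sketch shows that mapping with made-up frame dimensions and simulated data; it is an illustration of the principle, not the instrument's actual acquisition code:

```python
import numpy as np

# Hypothetical parameters, not the instrument's actual settings.
WIDTH, HEIGHT = 8, 4           # pixels per line, lines per frame
SAMPLES_PER_PIXEL = 4          # PMT samples averaged into one pixel

# Simulated PMT record for one frame. In the real system these samples
# arrive from the S Series device at megahertz rates, locked to the scan.
n_samples = WIDTH * HEIGHT * SAMPLES_PER_PIXEL
pmt = np.arange(n_samples, dtype=float)  # stand-in for fluorescence samples

# Bin consecutive samples into pixels, then reshape the pixel stream
# into the 2D image traced out by the raster scan.
image = pmt.reshape(HEIGHT, WIDTH, SAMPLES_PER_PIXEL).mean(axis=2)
print(image.shape)  # (4, 8)
```

Averaging several samples per pixel is one common way to trade scan speed for signal-to-noise; the essential point is that no per-sample bookkeeping is needed once acquisition and scanning share a clock.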

Two NI PCI-6115 and one NI PCI-6110 S Series DAQ devices proved to be a good match for this task. These devices deliver true simultaneous output, which was needed to control laser intensity and position and to trigger the signal recovery electronics. They also offer true simultaneous input, which was needed to read data from the PMTs at the required rates. In addition, the large onboard memory of the S Series devices was especially valuable: complex output waveforms were stored directly on the DAQ device prior to each scan, guaranteeing deterministic output instead of streaming waveform points through the limited bandwidth of the PCI bus during a scan.
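The kind of output waveform stored onboard can be sketched as follows. This is a hypothetical example — the frame size, voltage range, and channel layout are invented for illustration, and the real waveforms were generated by the LabVIEW program — but it shows the idea of precomputing one finite buffer per analog output channel for a whole frame:

```python
import numpy as np

# Hypothetical scan geometry and galvo command range.
WIDTH, HEIGHT = 256, 256       # pixels per line, lines per frame
V_MIN, V_MAX = -5.0, 5.0       # galvo command voltage range

# Fast axis: one left-to-right sweep per line, repeated for every line.
x_line = np.linspace(V_MIN, V_MAX, WIDTH)
x = np.tile(x_line, HEIGHT)

# Slow axis: one voltage step per line, held for the whole line.
y = np.repeat(np.linspace(V_MIN, V_MAX, HEIGHT), WIDTH)

# Laser intensity channel: full power everywhere for simplicity
# (a real scan might blank the beam during flyback).
intensity = np.full(WIDTH * HEIGHT, 1.0)

# One finite buffer per analog output channel, downloaded to the device
# once per scan rather than streamed over the PCI bus mid-scan.
scan_buffer = np.stack([x, y, intensity])
print(scan_buffer.shape)  # (3, 65536)
```

Downloading the whole buffer up front is what lets the device clock out samples deterministically, independent of host-side bus traffic.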

Figure 2. A LabVIEW program used to control all microscope operations.

The ability to link multiple S Series DAQ devices over the RTSI bus was also vital. RTSI simplified synchronization to the point where several DAQ devices behaved almost indistinguishably from a single device. Increasing the channel count was as simple as adding another device, which gave the researchers the luxury of expanding the microscope's capabilities as needed.

To run the hardware, we developed a custom LabVIEW program (Figure 2). This program runs the two-photon microscope, analyzes the acquired data, and optimizes the adaptive optics to correct for the inherent imaging aberrations of the samples. LabVIEW excelled as a development environment by simultaneously providing the following:

• High-performance data analysis with native multicore support;

• Rapid prototyping and algorithm configuration for adaptation to new and changing requirements;