Download

The integration of the Multiview Reconstruction and the BigDataViewer is available through the Fiji Updater. Simply update Fiji and the Multiview Reconstruction pipeline will be available under 'Plugins ▶ Multiview Reconstruction ▶ Multiview Reconstruction Application'. The source code is available on GitHub; please also report feature requests & bugs there.

Introduction & Overview

The Multiview Reconstruction software package enables users to register, fuse, deconvolve and view multiview microscopy images (first box). The software is designed for lightsheet fluorescence microscopy (LSFM, second box), but is applicable to any three- or higher-dimensional imaging modality, such as confocal timeseries or multicolor stacks.

Interactive viewing and annotation of the data is provided by integration with Tobias Pietzsch's BigDataViewer. Both projects share a common XML data model to describe multiview datasets.

History

This software package is the successor to the SPIM Registration package. While SPIM Registration will remain available in Fiji for the time being, support will mostly be offered for this new package only. It provides all the functionality of SPIM Registration, but is much more flexible and supports many more types of registration, fusion and data handling.

Examples

What does 'multiview' mean exactly?

When we speak of multiview datasets we generally mean that there are n image stacks per timepoint, which may differ in:

Channels

Illumination Directions

Rotation Angles

Note that even if n=1, i.e. there is only one stack per timepoint, this software can be used to stabilize the timeseries (drift correction).

Two examples of multiview datasets reconstructed with this software package are shown as YouTube videos below. Both datasets were registered using the bead-based registration (Nature Methods, 7(6):418-419) and the Multiview Deconvolution (Nature Methods, 11(6):645-648), which are part of this software package. Please check out the Citation section for information on how to cite this software package.

The first video shows a developing Drosophila embryo expressing His-YFP in all cells. The entire embryogenesis was acquired using the Zeiss Demonstrator B. The top row shows the multiview deconvolution of this seven-view dataset, the lower row the content-based fusion.

Lightsheet fluorescence microscopy

Lightsheet fluorescence microscopy entered the world of modern biology in 2004 when Selective Plane Illumination Microscopy (SPIM) was published. It allows in toto imaging of large specimens by acquiring image stacks from multiple angles with high spatial and temporal resolution. Many impressive variations and extensions have been published, some under new names or entirely new naming schemes:

OpenSPIM

Digital scanned laser light-sheet fluorescence microscopy (DSLM)

DSLM and structured illumination (DSLM-SI)

Two-photon versions of SPIM and DSLM

MuVi-SPIM

Two-photon light sheet microscopy (2P-LSM)

Bessel-Beam illumination and DSLM

Ultramicroscopy

Orthogonal-plane fluorescence optical sectioning microscopy (OPFOS)

Multidirectional selective plane illumination microscopy (mSPIM)

Thin-sheet laser imaging microscopy (TSLIM)

...

During the 2012 Lightsheet fluorescence microscopy meeting in Dublin, organized by Emmanuel Reynaud, it was decided by vote to summarize most of these developments under the name Lightsheet fluorescence microscopy (LSFM).

The second video shows a fixed C. elegans larva at the L1 stage expressing Lamin-GFP and stained with Hoechst. The four-view dataset was acquired using the Zeiss Lightsheet Z.1 microscope. It illustrates the increase in resolution that can be achieved by combining multiview imaging with multiview deconvolution. The top row shows one of the four input views, the bottom row the result of the multiview deconvolution.

Detailed Tutorials

Using this software package consists of several steps. Please note that the software is flexible; the order below is just a suggestion for a more-or-less standard use case.

Once the dataset is defined, you might want to resave all the image data as HDF5 (to be able to view it using the BigDataViewer) or simply as TIFF to enable fast loading of the image data. Also note that these two formats are the only ones that allow extending the dataset/XML with newly fused data.

Based on an existing dataset definition (XML file), the first step is typically to find interest points in the images that will be used for registration. In this step it is possible to look for multiple types of detections, e.g. fluorescent beads, nuclei or membrane markers. All of them can be stored in parallel.
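Interest points such as beads or nuclei are blob-like intensity maxima, which are commonly found with a difference-of-Gaussian (DoG) filter followed by local-maximum detection. The following toy sketch (2D for brevity; function and parameter names are illustrative, not the plugin's API) shows the principle:

```python
import numpy as np
from scipy.ndimage import gaussian_filter, maximum_filter

def detect_blobs(image, sigma1=2.0, sigma2=3.0, threshold=0.01):
    """Toy difference-of-Gaussian blob detector (illustrative sketch).

    Blobs of radius ~sigma1 respond strongly to the difference of two
    Gaussian-smoothed copies of the image; local maxima of that response
    above a threshold are reported as interest points.
    """
    dog = gaussian_filter(image, sigma1) - gaussian_filter(image, sigma2)
    # a pixel is a peak if it equals the maximum of its 3x3 neighborhood
    is_peak = (dog == maximum_filter(dog, size=3)) & (dog > threshold)
    return np.argwhere(is_peak)
```

In the plugin, detection runs in 3D and the sigmas/thresholds are tuned interactively on the image data; the underlying idea is the same.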

Translation-invariant matching using any kind of detections (approximate knowledge of the rotation is required, e.g. 45 degrees around the x-axis; see the Tools section for how to provide approximate transformations).

Precise matching using the Iterative Closest Point (ICP) algorithm (the dataset needs to be roughly aligned first, e.g. using any of the above methods).
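The core idea of ICP can be sketched in a few lines. The version below is restricted to a pure translation model for brevity (the plugin also supports rigid and affine models), and the function name is illustrative:

```python
import numpy as np
from scipy.spatial import cKDTree

def icp_translation(moving, fixed, iterations=20):
    """Minimal ICP sketch, restricted to a translation model.

    Alternates two steps: (1) assign each moving point to its nearest
    fixed neighbor, (2) update the translation as the mean residual of
    those assignments. A reasonable pre-alignment is required, otherwise
    the nearest-neighbor assignments are wrong and ICP converges to a
    bad local optimum -- hence the note above about rough alignment.
    """
    tree = cKDTree(fixed)
    shift = np.zeros(fixed.shape[1])
    for _ in range(iterations):
        _, idx = tree.query(moving + shift)                    # correspondence step
        shift += (fixed[idx] - (moving + shift)).mean(axis=0)  # update step
    return shift
```

With well-separated points and a small residual displacement, the first correspondence step already finds the true partners and the translation converges immediately; with denser or noisier data, more iterations refine the result.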

The different views (or timepoints) can be aligned using Translation, Rigid or Affine transformation models. We also support regularized transformation models developed by Stephan Saalfeld.

Alignment over time can be performed in different ways:

If there is no drift, every timepoint can simply be registered individually. Please note that it is possible to register each timepoint and later treat it as a rigid unit. In this way one can first register each timepoint using an affine transformation model and subsequently stabilize over time using just a translation model.

A reference timepoint that is individually registered first can serve as basis for all other timepoints. This kind of alignment usually only works with external landmarks like fluorescent beads.

Alternatively, it is possible to align timepoints using a sliding window of ±n timepoints, in which all views are matched against each other. This works with any kind of detections.
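The pair structure of such a sliding-window scheme is easy to make concrete. A small sketch (names are illustrative) that enumerates which timepoints get matched against each other:

```python
def sliding_window_pairs(n_timepoints, radius):
    """All timepoint pairs (t, t') with 0 < t' - t <= radius.

    Every timepoint is matched against its neighbors within the window,
    so small per-pair registration errors cannot accumulate over a long
    series the way purely consecutive (t, t+1) matching would allow.
    """
    return [(t, t + dt)
            for t in range(n_timepoints)
            for dt in range(1, radius + 1)
            if t + dt < n_timepoints]

# e.g. 5 timepoints, window of +-2:
# sliding_window_pairs(5, 2)
# -> [(0, 1), (0, 2), (1, 2), (1, 3), (2, 3), (2, 4), (3, 4)]
```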

In general, it is possible to stack up as many rounds of transformations as you want. They are collected in a list of transformations that is concatenated into one single affine transformation before fusion/deconvolution/viewing. A typical list of transformations looks like:
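Since each 3D affine transformation can be written as a 4×4 homogeneous matrix, concatenating a stack of them reduces to matrix multiplication. A minimal sketch (illustrative only, not the plugin's API):

```python
import numpy as np

def concatenate_affines(transforms):
    """Reduce a list of 4x4 homogeneous affine matrices to a single one.

    transforms[0] is applied to the data first, so it ends up right-most
    in the matrix product: total = t_n @ ... @ t_1 @ t_0.
    """
    total = np.eye(4)
    for t in transforms:
        total = t @ total
    return total
```

For example, a translation followed by a scaling collapses into one matrix that can be applied to every pixel coordinate in a single step, which is why stacking up many registration rounds costs nothing at fusion time.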

Once the dataset is fully aligned it can be fused or deconvolved into a single image per timepoint and channel. Deconvolution requires knowledge of the point spread functions (PSFs), which can either be extracted directly from matched beads or be provided by the user.
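The plugin implements multiview deconvolution (Nature Methods, 11(6):645-648), which jointly uses all views and their PSFs. Its single-view building block is the classic Richardson-Lucy iteration, sketched here for illustration (function name and parameters are not the plugin's API):

```python
import numpy as np
from scipy.signal import fftconvolve

def richardson_lucy(observed, psf, iterations=50):
    """Classic single-view Richardson-Lucy deconvolution (sketch).

    Multiplicative update: the current estimate is corrected by the
    back-projected ratio between the observed image and the estimate
    blurred with the PSF. All quantities stay non-negative.
    """
    psf = psf / psf.sum()
    psf_flipped = psf[::-1, ::-1, ::-1]   # mirrored PSF for back-projection
    estimate = np.full_like(observed, observed.mean())
    for _ in range(iterations):
        blurred = fftconvolve(estimate, psf, mode='same')
        ratio = observed / np.maximum(blurred, 1e-12)
        estimate *= fftconvolve(ratio, psf_flipped, mode='same')
    return estimate
```

The multiview extension combines the update contributions of several views per iteration, which is what makes the resolution gain shown in the videos above possible.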

Alternatively, there is no need to fuse the data at all: if you resaved it as HDF5, you can interactively browse it using the BigDataViewer. This also works after the registration is complete.

Tools

Beyond this suggested processing outline, many tools are available to help process multiview timelapse datasets.

This tool can be used to apply any kind of transformation to individual views, or to all views at once. This allows the user to specify a known rotation of different acquisition angles around an axis, or simply to re-orient the entire dataset after the registration is complete.

Detections identified by Detect Interest Points can be visualized. It is possible to visualize all detections, or only those that were found to correspond with other detections and were therefore used for registration. This helps to identify potential misalignments if corresponding detections are not equally distributed around the sample, as they should be. One can also load the input view at the same time to overlay the detections with the image data.

Video Tutorials & Scientific Talks

This 30-minute talk by Stephan Preibisch covers the theory behind registration of multiview lightsheet microscopy data and also briefly addresses the problem of multiview fusion & deconvolution.

Tutorial: Fiji Multiview Lightsheet Reconstruction Software

This one-hour tutorial by Stephan Preibisch covers the basic usage of this multiview reconstruction software for Fiji. Documentation, source code, bug reports and feature requests can be found on SourceForge.

Downloading example dataset

There is a 7-angle SPIM dataset of Drosophila available for download here. Other datasets can be provided upon request.

System requirements

Multiview SPIM datasets are typically rather large, so it is recommended to run the registration plugin on a computer with plenty of RAM. The minimal requirement for the example dataset is 4 GB of memory; however, we recommend a system with 16 GB or more, ideally at least 64 GB, and a CUDA-capable graphics card. You may need to increase the Fiji memory limit under Edit ▶ Options ▶ Memory & Threads.

Cluster processing

See the dedicated page describing an automated workflow for processing SPIM data from the Lightsheet Z.1 and OpenSPIM on the MPI-CBG cluster.