Longitudinal Stream (FS 5.1, 5.2, 5.3 and 6.0)

The largest differences between this release and previous versions are:

Individual time points are resampled to the base/template space for the -long runs. This reduces variability even further and simplifies many of the algorithms, as all input data are in correspondence across time.

The workflow below has not changed! All changes are internal.

Manual edits are incorporated and in many cases transferred between the cross, base and long runs.

1. Background

Compared with cross-sectional studies, a longitudinal design can significantly reduce the confounding effect of inter-individual morphological variability by using each subject as his or her own control. As a result, longitudinal imaging studies are attracting increasing interest in many areas of neuroscience. The default FreeSurfer pipeline is designed for the processing of individual data sets (cross-sectionally) and is thus not optimal for the processing of longitudinal data series. How to obtain robust and more reliable cortical and subcortical morphological measurements by incorporating the additional (temporal) information in a longitudinal data series is an active research area at the Martinos Center for Biomedical Imaging.

The longitudinal scheme is designed to be unbiased with respect to any time point (TP). Instead of initializing it with information from a specific time point, a template volume is created and run through FreeSurfer. This template can be seen as an initial guess for the segmentation and surface reconstruction. The FreeSurfer cortical and subcortical segmentation and parcellation procedure involves solving many complex nonlinear optimization problems, such as the deformable surface reconstruction, the nonlinear atlas-image registration, and the nonlinear spherical surface registration. These nonlinear optimization problems are usually solved using iterative methods, and the final results are known to be highly sensitive to the selection of a particular starting point (a.k.a. algorithm initialization). It is our belief that by initializing the processing of a new data set in a longitudinal series using the processed results from the unbiased template, we can reduce the random variation in the processing procedure and improve the robustness and sensitivity of the overall longitudinal analysis. Such an initialization scheme also makes sense because a longitudinal design is often targeted at detecting small or subtle changes. In addition to the template, new probabilistic methods (temporal fusion) were introduced to further reduce the variability of results across time points. These algorithms make it necessary to process all time points cross-sectionally first.

The longitudinal processing scheme is coded in the recon-all script via the flags "-base" (template creation) and "-long" (longitudinal runs).

3. Workflow Summary

Note: All subjects and all time points are set up in a single SUBJECTS_DIR, which will also contain the base and the longitudinal runs after processing.

Step 1. cross-sectionally process all time points with the default workflow (tpN denotes one of the time points):

recon-all -all -s <tpNid> -i path_to_tpN_dcm

Step 2. create an unbiased template from all time points for each subject and process it with recon-all:

recon-all -base <templateid> -tp <tp1id> -tp <tp2id> ... -all

This step can be started once all norm.mgz files are available from the cross-sectional processing of the individual time points (step 1). <templateid> is the name of the new average template for this subject and needs to be chosen by the user. A directory with this name will be created in $SUBJECTS_DIR. This <templateid> needs to be passed to the longitudinal processing of each time point of this subject below (step 3). Note that the '...' means that more time points can be added as "-tp <tpNid>". It is also possible to pass only a single time point (e.g. for subjects that dropped out, see below).

Step 3. longitudinally process all time points with "-long":

recon-all -long <tpNid> <templateid> -all

These runs will be much faster than the cross-sectional or template runs above. This step produces output directories named <tpNid>.long.<templateid> (to help distinguish them from the default stream).
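Putting steps 1-3 together, the whole sequence for a subject with two time points might look as follows. The subject IDs (me_tp1, me_tp2), template name (me_base) and input paths are made up for illustration; the script only prints the commands so the sequence can be inspected without running FreeSurfer:

```shell
# Sketch of the full longitudinal workflow for one subject with two time
# points. All names and paths are hypothetical placeholders.
# The commands are printed rather than executed here; run them directly
# in a real SUBJECTS_DIR.
for cmd in \
  "recon-all -all -s me_tp1 -i /data/me/tp1/001.dcm" \
  "recon-all -all -s me_tp2 -i /data/me/tp2/001.dcm" \
  "recon-all -base me_base -tp me_tp1 -tp me_tp2 -all" \
  "recon-all -long me_tp1 me_base -all" \
  "recon-all -long me_tp2 me_base -all"
do
  printf '%s\n' "$cmd"
done
```

Note that both cross-sectional runs must finish before the -base step, and the -base step must finish before the -long runs; the -long runs of different time points are independent of each other and can run in parallel.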

Step 4. Compare results from step 3, e.g. calculate differences between <tp1id>.long.<templateid> and <tp2id>.long.<templateid>. Do not compare to the template, as it is merely a blurry guess of where things are located in the specific subject, used only for the initialization of the longitudinal runs.
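For example, segmentation volumes from the longitudinal runs can be collected into one table with FreeSurfer's asegstats2table utility. The subject and template names below are hypothetical, and the command is printed rather than executed so it can be inspected without FreeSurfer installed:

```shell
# Sketch: tabulate aseg volumes from the two longitudinal runs of one
# hypothetical subject for later comparison. Printed, not executed.
cmd="asegstats2table \
 --subjects me_tp1.long.me_base me_tp2.long.me_base \
 --meas volume --tablefile aseg.long.table.txt"
printf '%s\n' "$cmd"
```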

In the following, we describe the two processing streams. We assume that the longitudinal series has time points tpN (tp1, tp2, ...), all of which have already been processed cross-sectionally with the standard FreeSurfer recon-all (step 1). First we discuss the construction of the template (step 2) and then the longitudinal processing of each tpN (step 3).

4. Creation of Template (recon-all -base)

In step 2 the unbiased template is created for each subject using information from all of its time points. The template construction can start once the norm.mgz of all time points is available from step 1. After the template is fully processed, it is used as an initial guess to initialize many steps in the longitudinal runs (step 3). Some results are not just initialized but copied directly from the subject template (brain mask, linear Talairach map, estimated total intracranial volume eTIV); the assumption is that head size does not change across time.

Special situations:

Pediatric data violate the fixed head size assumption. In the presence of substantial longitudinal growth, the current pipeline may fail; we have found, however, that it is very robust with respect to limited head size differences. For this type of data it is essential to check the template/base image for skull strip errors and meaningful surfaces. Note also that eTIV is fixed across time in the final longitudinal directories; to get eTIV measures for each time point, read them from the aseg.stats files in the cross-sectional directories.

Single time-point subjects: Because of drop-outs it can happen that some individuals were scanned at only a single time point. Instead of discarding this valuable information, it is possible (e.g. in linear mixed effects models, LME) to include these data. For this, however, it becomes necessary to run the single time points through the longitudinal stream, to ensure that those images also undergo the same processing steps. This can be done by simply passing a single time point in the base creation step. See "Statistical Analysis of Longitudinal Neuroimage Data with Linear Mixed Effects Models. J.L. Bernal-Rusiel, D.N. Greve, M. Reuter et al. NeuroImage 66:249-260, 2012" for a discussion.
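For such a subject the base and long calls might be sketched like this (the subject ID me_tp1 and template name me_base are hypothetical; the commands are printed rather than executed):

```shell
# Sketch: a subject scanned only once still gets a base and a long run,
# so its measures go through the same processing as multi-TP subjects.
# Names are hypothetical; printed, not executed.
c1="recon-all -base me_base -tp me_tp1 -all"
c2="recon-all -long me_tp1 me_base -all"
printf '%s\n' "$c1" "$c2"
```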

The following paragraphs explain in detail what happens inside recon-all -base. Only the differences from the regular cross-sectional recon-all are discussed. Unless you are interested in these details, you may stop reading here.

4.1. Template Initialization (-base-init)

The template is created in the -base-init block of recon-all. This is achieved with mri_robust_template, which constructs an unbiased mean or median (default) norm_template.mgz volume together with the transforms that align each TP's norm.mgz volume with the template (see also mri_robust_register, which is used by mri_robust_template to construct the maps). The longitudinal scheme later requires aligning the image data of each tpN to the template, so all time points are aligned to an unbiased common space with mri_robust_template.

The input --mov <tpsvols> is a list of the time points' norm.mgz files, the output --lta <tpsltas> is a list of the LTA registration files that take each TP to the template, and --template specifies the median image norm_template.mgz. The LTA maps are stored in <templateid>/mri/transforms/<tpNid>_to_<templateid>.lta. The inverse maps are also needed and are constructed from these registrations.
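The registration call can be sketched as follows. The subject IDs (me_tp1, me_tp2) and template name (me_base) are hypothetical, and the flag spellings (--mov, --template, --lta, --average, --satit) should be verified against mri_robust_template --help for your FreeSurfer version; the script only prints the command so it can be inspected without FreeSurfer installed:

```shell
# Sketch: build the unbiased median template from two time points and
# write the per-TP registrations. Names are hypothetical placeholders;
# the command is printed, not executed.
cmd="mri_robust_template \
 --mov me_tp1/mri/norm.mgz me_tp2/mri/norm.mgz \
 --template me_base/mri/norm_template.mgz \
 --lta me_base/mri/transforms/me_tp1_to_me_base.lta \
       me_base/mri/transforms/me_tp2_to_me_base.lta \
 --average 1 --satit"
# --average 1 requests the median template (the default);
# --satit auto-detects the outlier sensitivity.
printf '%s\n' "$cmd"
```

The inverse maps can then be obtained by inverting each LTA, e.g. with mri_concatenate_lta's -invert1 option (again, check your version's help output).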

Since all data sets come from the same subject, rigid registrations with 6 DOF (translation, rotation) are sufficient to obtain a good alignment between the (intensity normalized) images (i.e. norm.mgz). The registrations and their inverses will be used to transfer information between time points mutually and between time points and the template in the longitudinal stream. mri_robust_template uses robust statistics to automatically detect outliers and align the rest of the image in an optimal manner. The median template is unbiased with respect to any time point and is therefore well suited to produce initializations for several steps in the longitudinal runs (see below).

After the registrations and the norm_template.mgz volume are created, the orig.mgz images from all TPs are mapped to the template location and averaged (again the default is the median) to produce the <templateid>/mri/orig/001.mgz image. This image is then processed with the standard cross-sectional FreeSurfer stream, mainly to obtain the talairach.xfm, nu.mgz and T1.mgz images; however, the stream switches over to norm_template.mgz very early.

It is possible to check the registrations and template creation using either the norm.mgz or orig.mgz of all time points. These images are first mapped to and resampled in the space of the template using the LTA files created above.

The resampled files <tpNid>_to_<templateid>.norm.mgz can then be compared with each other or with the template's norm_template.mgz using e.g. tkmedit -f file1 -aux file2. If you want to compare aseg.mgz files, make sure you use nearest as the resample type.
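Assuming mri_convert's -at (apply transform) and -rt (resample type) options, the mapping for one hypothetical time point might be sketched as follows (the commands are printed rather than executed):

```shell
# Sketch: resample a TP's norm.mgz (default interpolation) and aseg.mgz
# (nearest-neighbor, to preserve labels) into template space for visual
# comparison. Names are hypothetical; printed, not executed.
norm_cmd="mri_convert -at me_base/mri/transforms/me_tp1_to_me_base.lta \
 me_tp1/mri/norm.mgz me_tp1_to_me_base.norm.mgz"
aseg_cmd="mri_convert -at me_base/mri/transforms/me_tp1_to_me_base.lta \
 -rt nearest me_tp1/mri/aseg.mgz me_tp1_to_me_base.aseg.mgz"
printf '%s\n' "$norm_cmd" "$aseg_cmd"
```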

Except for the following steps, the template is processed with the default cross-sectional stream:

4.2. Normalization (-normalize)

Output the control points (ctrl_vol.mgz) and bias field (bias_vol.mgz).

4.3. Skull Strip (-skullstrip)

The brainmask.mgz volumes are mapped (with nearest-neighbor interpolation) from the cross-sectional runs and averaged. This is similar to a logical OR (union), as a voxel is excluded only if it is non-brain in almost all TPs.

4.4. EM (GCA) Registration (-gcareg)

Use the norm_template.mgz instead of the nu.mgz for the Talairach registration. The resulting registration talairach.lta should be checked, as it will be used in the longitudinal stream and will influence all time points.

4.5. CA Normalize (-canorm)

Use the norm_template.mgz instead of the nu.mgz for the normalization. This ensures that the norm_template is correctly normalized.

(internal note: this step might not be necessary, should be checked if norm_template.mgz is already sufficiently normalized)

5. Longitudinal Stream (recon-all -long)

5.1. Input (-i)

Copy the orig/00?.mgz from the cross-sectional runs.

5.2. Motion Corrections (-motioncor)

Map the 00?.mgz to base space and average them there in one step to create orig.mgz. The transforms are available from the cross-sectional step (if not, e.g. because an older version or the FSL flag was used, they are recreated with mri_robust_register).

5.3. NU Intensity Correction (-nuintensitycor)

Same processing as in the cross-sectional stream.

5.4. Talairach (-talairach)

Copy the talairach.xfm from the template (keeps edits).

5.5. Normalization (-normalization)

Default: Map and use the control points (control.dat) from the cross-sectional run, if the file exists (e.g. if manual edits were made there). If the control points file already exists in the long run, it will not be overwritten (to preserve potential edits made to the long run).

5.6. Skull Strip (-skullstrip)

Copy the brainmask.mgz from the template to the current TP. Use it to mask the T1.mgz to obtain the final brainmask. Manual edits should be done in the base/template.

5.7. EM (GCA) Registration (-gcareg)

Copy the talairach.lta from the template. Edits should be done in base/template.

(internal note: in the future maybe use the nu.mgz's from all TPs to construct the registration simultaneously (in the -base))

5.8. CA Normalize (-canorm)

The normalization is initialized with the aseg.mgz of the template copied to the current TP. Thus all TPs use similar control points for the normalization.

(internal note: in the future maybe find the control points by looking at all nu.mgz simultaneously (in the -base))

5.9. CA Nonlinear Registration (-careg)

Uses the talairach.m3z from the template as initialization. It also uses different flags than the cross-sectional run.

5.10. CA Nonlinear Registration Inverse (-careginv)

5.11. Remove Neck (-rmneck)

Same processing as in the cross-sectional stream.

5.12. EM Registration (with skull but no neck) (-skull-lta)

Same processing as in the cross-sectional stream.

5.13. CA Label (-calabel)

Copy the linear transformation that maps this TP (cross-sectional) to the base/template from the template directory into the local transform directory. Then create aseg.fused.mgz by mapping and incorporating segmentation information from all TPs (probabilistic voting). Finally, the fused aseg is used to initialize mri_ca_label, which constructs the final labels. This indirectly incorporates aseg edits from the cross-sectional runs. Furthermore, the intensity scaling factors are passed from the template.

(internal note: In the future maybe labeling all TPs simultaneously?)

5.14. Normalization 2 (-normalization2)

Same processing as in the cross-sectional stream.

5.15. Mask Brain Final Surface (-maskbfs)

Changes only concern manual edits: if no brain.finalsurfs.manedit.mgz file exists, check whether it exists in the cross-sectional run of this time point and map/copy the edits from there.

5.16. WM Segmentation (-segmentation)

Changes only concern manual edits: if not edited in the -long run, copy edits and deletions from the cross-sectional run (default) or from the template (if -uselongbasewmedits is specified).

5.17. Cut/Fill (-fill)

Same processing as in the cross-sectional stream.

5.18. Tessellate (-tesselate)

Skipped, because the initial surface (orig) is taken from the template/base.

5.19. Orig Surface Smoothing 1 (-smooth1)

Skipped.

5.20. Inflation 1 (-inflate1)

Skipped.

5.21. QSphere (-qsphere)

Skipped.

5.22. Automatic Topology Fixer (-fix)

Skipped.

5.23. Final Surfaces (-finalsurfs)

Copy and use ?h.white and ?h.pial from the template to initialize the white, pial and orig surfaces in the current TP. This also ensures that all surfaces are implicitly registered (vertex numbers agree) across all time points.