MVPA Meanderings

Thursday, February 8, 2018

This is a tutorial for making a surface version of a volumetric (NIfTI) image (e.g., a ROI) for visualizing with the Connectome Workbench. This post replaces a tutorial I wrote back in 2013, and assumes a bit of familiarity with Workbench. If you're using Workbench for the first time I suggest that you complete the tutorial in this post before trying this one.

First, a warning: as advertised, the steps in this post will make a surface version of a volumetric image. However, the surface version will be an approximation, and likely only suitable for visualization purposes (e.g., making an illustration of a set of ROIs for a talk). If you have an entire dataset that you want to prepare for surface analysis (e.g., running GLMs), you need different procedures (e.g., SUMA, FreeSurfer). Again, I suggest the directions in this post (wb_command -volume-to-surface-mapping) be used cautiously, for quick visualizations, and accompanied by careful confirmation that the mapping produced a reasonable result.

needed files

Before we can make a surface version of a volumetric image, we need to know what it's aligned to, so that we can pick the proper corresponding surface template. Recall that gifti surface files (*.surf.gii) are sort of the underlay anatomy for surface images (e.g., in my Getting Started post on Workbench we load surf.gii files to get a blank brain), so we'll need gifti surface files that will work with our volumetric image (and to serve as the underlay when we're ready to plot the converted volumetric ROI).

For this demo, we'll use fakeBrain.nii.gz (it should let you download without signing in; this is the same NIfTI as shown in other posts), which is aligned to MNI. One MNI dataset with the necessary surface files is the HCP 1200 Subjects Group Average Data; this post describes the files and gives the direct ConnectomeDB download link.

The HCP 1200 Subjects Group Average download contains multiple *.surf.gii for each hemisphere, including midthickness, pial, and very_inflated. We can use any of these for visualization in Workbench, but which we pick for the volume-to-surface conversion does make a difference in what the resulting surface image will look like. It seems best to start with the midthickness surface for the conversion, then try others if the projection seems off.

using wb_command

The wb_command -volume-to-surface-mapping function does the conversion. wb_command.exe (on my Windows machine; the file extension may vary) should be in the same directory as wb_view.exe, which you use to start the Workbench GUI. Don't double-click wb_command.exe - it's a command-line program. Instead, open a command prompt and navigate to the directory containing wb_command.exe (on my machine, /bin_windows64/). If you type wb_command at the prompt it should print version and help information; if it doesn't, check that you're in the correct directory, and try ./wb_command if you're on a Linux-type system.

Now we're ready: we give the function our input NIfTI (fakeBrain.nii.gz), our surface gifti (S1200.L.midthickness_MSMAll.32k_fs_LR.surf.gii), the output file we want it to make (demoL.shape.gii), and the options for it to use (-trilinear). Since surface gifti files cover just one hemisphere, we have to run the command twice, once for each hemisphere. (I included the full path to each file below; update for your machine.)
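Assembled, the two calls might look something like the sketch below (all paths here are hypothetical; substitute your own). The loop builds each command string and echoes it, so you can check it before pasting it at the prompt.

```shell
surf_dir=d:/Workbench/HCP_S1200_GroupAvg_v1   # hypothetical path to the S1200 surface files
out_dir=d:/temp                               # hypothetical working directory

for hemi in L R; do    # surface giftis are single-hemisphere, so run the command twice
  cmd="wb_command -volume-to-surface-mapping $out_dir/fakeBrain.nii.gz $surf_dir/S1200.$hemi.midthickness_MSMAll.32k_fs_LR.surf.gii $out_dir/demo$hemi.shape.gii -trilinear"
  echo "$cmd"    # prints the command for checking; paste it at the prompt to run
done
```

The -trilinear at the end is the mapping method; the other fitting options (e.g., -enclosing) go in the same position if the projection looks off.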

We now have demoL.shape.gii and demoR.shape.gii, surface versions of fakeBrain.nii.gz, which can be viewed in Workbench (or other gifti-aware programs). Check the surface projection carefully: does the ROI align properly with the anatomy? If not, try a different .surf.gii or fitting option (e.g., -enclosing) in the wb_command call; these can make a big difference.

Below the jump are the surface images from the above commands, plotted on the S1200 midthickness in Workbench.

Wednesday, January 17, 2018

Quite a few of the posts over the last year or so have arisen from things that catch my eye as I review the SMS/MB4 images we're collecting in our ongoing project, and this is another. For quick comparison, I make files (with knitr; we may give mriqc a try) showing slices from the mean, standard deviation, and tSNR images for each participant, run, and session.

Some participants have obvious bright crescent-shaped artifacts in their standard deviation images (the examples above are from two people; both calculated from non-censored frames, after completing the HCP Minimal Preprocessing pipeline). Looking across people and runs (some participants have completed six imaging sessions over a span of months), each person either has the crescents or doesn't; their presence doesn't vary much with session (scanning day), task, or movement level (apparent or real).

They do, however, vary with encoding direction, appearing only in runs with PA phase encoding. They also seem to vary with head size: the crescents are more likely in people with smaller heads (people with larger heads seem more prone to "ripples", but that's an artifact for another day).

Playing with the contrast and looking outside the brain has convinced me that the crescents do align with the edges of ghost artifacts, which I tried to show above. These are from a raw image (the HCP Minimal Preprocessing pipelines mask the brain), so it's hard to see; I can share example NIfTIs if anyone is interested.

So, why do we have the bright ghosts, what should we do about it, and what does it mean for analysis of images we've already collected? Suggestions are welcome! For analysis of existing images, I suspect that the crescents will hurt our signal quality a little: we want the task runs to be comparable, but they're not in people with the crescents, since voxels within the crescent areas have quite different tSNR in the PA and AP runs.

Wednesday, January 10, 2018

In a few recent posts I've shown images of the mean and standard deviation (calculated across time for each voxel), for QC tests. These are easy to calculate in afni (example here), but the 3dTstat command I used includes all timepoints (TRs), unless you specify otherwise. As described previously, we've been using a threshold of FD > 0.9 for censoring high-motion frames before doing GLMs. Thus, I wanted to calculate the mean and standard deviation images only including frames that were not marked for censoring (i.e., restrict the frames used by 3dTstat). This was a bit of a headache to code up, so R and afni code are after the jump, in the hopes it will be useful for others.

Wednesday, November 29, 2017

The previous post describes a method for assigning arbitrary values to surface MMP parcels via GIfTI files. Tim Coalson kindly pointed me to another method, which I'll demonstrate here. Both methods work, but one or the other might be easier in particular situations.

In the previous post I used matlab and ran wb_command from the command prompt (on Windows, by opening a command window in the directory with wb_command.exe, then using full paths to the input and output files). Here, I use R, and call wb_command from within R via its system() function. You may need to adjust the system() call for other operating systems, or simply replace it with print() and copy-paste the full line at the command prompt.

Monday, November 27, 2017

This tutorial describes a method for plotting arbitrary per-parcel values (such as from an analysis) on the surface. For example, let's say I want to display MMP parcels 1, 10, and 15 (only) in red, or (more usefully) to assign continuous numbers to each parcel, and then display parcels with larger numbers in hotter colors.

This post describes a method using matlab and creating GIfTI files; see the next post for a method using R, wb_command functions, and creating a CIFTI file. Both methods work, but one or the other might be more convenient in particular situations.

I'll be using the MMP in this example; if you want to follow along, download a copy of the S1200 Group Average Data Release; I put mine at d:/Workbench/HCP_S1200_GroupAvg_v1/. The MMP template is named Q1-Q6_RelatedValidation210.CorticalAreas_dil_Final_Final_Areas_Group_Colors.32k_fs_LR.dlabel.nii. (If you're not sure how to use the files in the S1200 release, try this tutorial to get started.)

I have the values to assign to the parcels in a text file with 180 lines (one line for each MMP parcel). For this tutorial, let's do the simple example of assigning a color to parcels 1, 10, and 15 only. An easy way to do this is to make a text file with 1s on these rows and 0s on all the other rows. I prefer R, but since the GIfTI library is in matlab, here's matlab code for making the little text file:
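If you'd rather not open matlab just for this step, the same sort of file can be made with a shell one-liner (the output file name is hypothetical):

```shell
# write a 180-line text file: 1 on rows 1, 10, and 15; 0 on all other rows
seq 1 180 | awk '{ if ($1 == 1 || $1 == 10 || $1 == 15) print 1; else print 0 }' > parcelValues.txt
```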

You can now plot these GIfTIs in Workbench (see this post if you're not sure how); I plotted them on the S1200 Group Average (MNI) anatomy:

I clicked to put a marker in the left visual parcel. The value at this vertex is 1, as assigned (green lines). I loaded in the MMP atlas as well (blue lines), so it tells me (correctly!) that the marker is in the L_V1_ROI.

Thursday, November 9, 2017

I was encouraged by the images in the previous post: it looked like the gradient and banding ("washing out") artifacts were not visible in the MB4 people. I need to revise that a bit: I do have bands and gradients in at least some MB4 people, though to a much lesser degree than in our MB8 datasets. How much this affects our task analyses is still to be determined.

On the left is the same MB4 person and run from the previous post (27 September 2017), voxelwise standard deviation over time of a task fMRI run, after going through the HCP minimal preprocessing pipelines. I was glad to see that the vessels were brightest (as they should be), though concerned about the frontal dropout. The person on the right is another person scanned at MB4 (doing the same task; same encoding, scanner, etc.); same preprocessing and color scaling for both.

The vessels clearly don't stand out as much in the person on the right. It's hard to tell in the above image, but there's a gradient in the standard deviation and tSNR images, with better signal on the edges than in the center of the brain. Below is another run from the person on the right, tSNR calculated on the images ready to go into afni for GLM calculation (so through the HCP minimal preprocessing pipelines, plus smoothing and voxelwise normalizing). This tSNR image is shown at six different color scalings; it's easier to see interactively (download the NIfTI here), but hopefully it's clear that the darker colors (lower tSNR) spread from the center of the brain toward the edges, rather than being uniform.
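For reference, tSNR here is just the voxelwise temporal mean divided by the temporal standard deviation; a minimal afni sketch (file names hypothetical, commands printed rather than run):

```shell
run=run1_preprocessed.nii.gz    # hypothetical preprocessed functional run

cmd_mean="3dTstat -mean -prefix mean.nii.gz $run"
cmd_sd="3dTstat -stdev -prefix stdev.nii.gz $run"
cmd_tsnr="3dcalc -a mean.nii.gz -b stdev.nii.gz -expr a/b -prefix tsnr.nii.gz"

printf '%s\n' "$cmd_mean" "$cmd_sd" "$cmd_tsnr"    # print for checking; paste to run
```

(3dTstat also has a -cvarinv option that computes mean/stdev in a single step.)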

Here is the same person and run again, with the standard deviation (first two) and tSNR (right pane) of the raw (no preprocessing, first and lower) and minimally preprocessed (right two) images. I marked a banding distortion with green lines, as well as the frontal dropout. The banding is perfectly horizontal in the raw image, at 1/4 of the way up the image (see larger image), which makes sense, since this is an MB4 acquisition. I included the larger view of this raw image since all three banding artifacts are somewhat visible; in our dataset the inferior band is generally the most prominent.

The banding and gradient artifacts are certainly less visually prominent in our MB4 than our MB8 images, but they are present in some MB4 people. I haven't systematically evaluated (and probably won't be able to soon) all of our participants, so don't have a sense of how often this occurs, or how much it impacts detection of task BOLD (which is of course the key question).

Below the jump: movement regressors for the two runs in the top image. The person with the banding and gradients had very low motion, less even than the person from the September post.

These tests were inspired by the work of Benjamin Risk, particularly his illustrations of banding and poor middle-of-the-brain sensitivity in the residuals (e.g., slides 28 and 29 of his OHBM talk). He mentioned that you can see some of the same patterns in simple standard deviation (stdev) images, which are in this post.

Here is a typical example of what I've been seeing with raw (not preprocessed) images. The HCP and MB8 scans are of the same person. The HCP scan is of the WM_RL task; the other two are of one of our tasks, with runs about 12 minutes in duration (much longer than the HCP task). The MB8 and MB4 runs shown here have low overall motion.

Looking closely, horizontal bands are visible in the HCP and MB8 images (marked with green arrows). None of the MB4 images I've checked have such banding (though all the HCP ones I've looked at do); sensible anatomy (big vessels, brain edges, etc.) is brightest at MB4, as it should be.

Here are the same runs again, after preprocessing (the HCP minimal preprocessing pipelines), first with all three at a low color threshold (800), second with all three at a high color threshold (2000).

The HCP run is "washed out" at the 800 threshold, with much higher standard deviation in the middle of the brain. Increasing the color scaling makes some anatomy visible in the HCP stdev map, but not as much as in the others, and with a hint of the banding (marked with green arrows; easier to see in 3d). The MB4 and MB8 runs don't have as dramatic a "wash out" at any color scaling, with more anatomic structure visible at MB8, and especially at MB4. The horizontal banding is still somewhat visible in the MB8 run (marked with an arrow), and the MB4 run has much lower stdev at the tip of the frontal lobe (marked with a green arc). tSNR versions of these images are below the jump.

These patterns are in line with Todd et al. (2017), as well as Benjamin Risk's residual maps. I'm encouraged that it looks like we're getting better signal-to-noise with our MB4 scans (though we will be investigating the frontal dropout). Other metrics (innovation variance? GLM residuals?) may be even more useful for exploring these patterns. I suspect that some of the difference between the HCP and MB8 runs above may be due to the longer MB8 runs, but I haven't confirmed that.