This is mostly a quick note to remind myself how to do this, since I spent about 30 minutes today “remembering” (synonymous with googling) how I got this to work last time! That said: once upon a time, many of us would connect to a remote (usually Linux) server and run graphical programs over the network. That usually meant connecting to a box like so:

ssh -X -Y myuser@myserverbox.mydomain.com

Where -X and -Y denote SSH forwarding for X11 (-Y requests trusted forwarding). Side note: usually you also had to enable this on the server in /etc/ssh/sshd_config via the very readable setting “X11Forwarding”. In addition to running the programs remotely, you also had the option of choosing between direct rendering and indirect rendering. With direct rendering, the application talks straight to the graphics hardware; with indirect rendering, OpenGL commands are routed through the X server – which, over SSH forwarding, means they travel across the network. Not surprisingly, direct rendering is usually faster. For a quick refresher on what all of the general terms mean, check out this awesome post.
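For reference, the server-side bit is a one-line change (assuming the stock OpenSSH config location; restart sshd afterwards):

```
# /etc/ssh/sshd_config -- allow clients to forward X11
X11Forwarding yes
```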

Either way, we could then do things like run a remote X11 application and have it pop open on our computers:

xclock

With the advent of things like VNC and NoMachine, where we can actually view the remote computer’s desktop and interact with things, many of us transitioned away from SSH X11 forwarding. Nevertheless, some of us persist, and occasionally point out that things don’t always work the way they used to – particularly if you love Apple Macs as much as I do. Recent changes in the XQuartz project broke some of this functionality as of 2.7.9. But never fear: with the release of 2.7.10 (and now 2.7.11), you can re-enable it by running this command in the terminal:

defaults write org.macosforge.xquartz.X11 enable_iglx -bool true

While I’m not sure it’s strictly necessary, you can also export a variable to “force” indirect rendering:

export LIBGL_ALWAYS_INDIRECT=1

And finally, as you might have guessed, AFNI and SUMA can both be used with X11 forwarding! SUMA in particular seems to appreciate indirect graphics, but as always, your mileage may vary. If you find this too slow, I’d recommend mounting the remote server using FUSE (Mac version here) and SSHFS; you can then run AFNI/SUMA locally while accessing the remote data. Obviously you can also use Samba (SMB) or other networking tricks.
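A minimal SSHFS mount looks something like this (the hostname and paths are hypothetical – substitute your own):

```shell
# Mount the remote data directory locally over SSH
# (hostname and paths are hypothetical examples)
mkdir -p ~/mnt/myserverbox
sshfs myuser@myserverbox.mydomain.com:/data/study ~/mnt/myserverbox

# ...run afni/suma locally against ~/mnt/myserverbox...

# Unmount when finished (macOS; on Linux use fusermount -u)
umount ~/mnt/myserverbox
```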

For most users of imaging software, we use the skull-stripping program included in our distribution: if you’re an AFNI user, you probably use 3dSkullStrip; if you use FSL, then it’s BET; and of course Freesurfer has its own as part of recon-all (step 1). A few years ago, I advocated for using Optibet and even wrote an entirely AFNI-based version of the pipeline.

If you want all of the benefits of an Optibet-like pipeline, plus whole-brain normalization via a combination of linear and nonlinear warps, look no further than the “recently” introduced @SSwarper script in the AFNI distribution. This program takes your original T1 anatomical image and does a robust skull stripping, and also normalizes it to the MNI152 template via both a linear and a nonlinear warp. It even generates helpful images, like the one below, for double-checking the output!
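A sketch of the call, with hypothetical filenames (the MNI template ships with AFNI; check @SSwarper -help for the exact arguments in your install):

```shell
# Skull-strip and warp a T1 to MNI space in one go
# (input filename and subject ID are hypothetical)
@SSwarper                                       \
    -input  sub_001_T1w.nii.gz                  \
    -base   MNI152_2009_template_SSW.nii.gz     \
    -subid  sub_001
```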

So check it out – it’ll also make your afni_proc.py commands more efficient, because you can feed @SSwarper’s output straight to that AFNI superscript. That’s helpful if you tend to run afni_proc.py with multiple option sets: you won’t have to rerun the normalization step over and over again (which can cost up to about an hour each time)!

If you’re reading this post, either you’re a loyal fan (yay!) or you’ve run Freesurfer (possibly in parallel), processed your diffusion data (hopefully in TORTOISE) and then Tracula (also possibly in parallel), and you’re wondering where to go from here. I’m glad you’re here – though that’s probably me talking to myself, because I read my own blog to remind myself how I did things in past analyses!

So you’ve got your Freesurfer folder and your Tracula folder. The first step is to export all of your data into text files (one per tract is easiest) for all of your subjects. I tend to use a quick bash script for this, relying on the Freesurfer/Tracula tool tractstats2table to do the hard work (rename Subject* to the prefix of your subjects):
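A sketch of such a script, assuming the usual Tracula layout (one pathstats.overall.txt per tract under each subject’s dpath folder) – the folder paths and the Subject* prefix are placeholders, and you should check tractstats2table --help for the exact flags in your Freesurfer version:

```shell
#!/bin/bash
# Export Tracula's overall path statistics: one text file per tract,
# one row per subject. Paths and the "Subject*" prefix are placeholders.
cd /path/to/tracula/folder
mkdir -p output

# Loop over the tract folders found in the first subject
for tract in Subject1/dpath/*_avg33_mni_bbr; do
    tractName=$(basename ${tract})
    # Gather that tract's stats across all subjects into one table
    tractstats2table \
        --inputs Subject*/dpath/${tractName}/pathstats.overall.txt \
        --overall \
        --tablefile output/${tractName}.txt
done
```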

Now that you’ve exported all the numbers to text files, the next step is to combine those outputs into a single file for analysis. You might have already had the great idea of using “cat” to combine the files on the command line – and then been disappointed that it’s not that easy, because the header of the first column (the one with your subject numbers) is the tract name, and there’s no dedicated column for the tract name (at least not in Freesurfer 6.0).

So here’s some handy R code:

library(tidyverse) #if you don't have tidyverse, you should.
setwd('/path/to/tracula/folder/output') #folder with text files from bash script
allFiles <- list.files(pattern='\\.txt$') #get a list of all the text files
#functions make the world go 'round:
#this one reads in each file, makes a column with the tract name
#and renames the first column to "Filename"
readDTI <- function(x) {
  tmp <- read.table(x, header=TRUE)
  tmp$tract <- names(tmp)[1] #get the tract name and make it a column
  names(tmp)[1] <- 'Filename' #now make the first column a useful name
  tmp$Filename <- as.character(tmp$Filename) #useful if you want to manipulate the header later
  return(tmp)
}
#this will run your function on all the text files you found with list.files()
alltracts <- lapply(allFiles, readDTI) #lapply returns a list
#this will return a single data.frame object with all the tracts
tracula_data <- do.call("rbind", alltracts)
#this writes the whole thing to a file on your computer
write.table(tracula_data, "MyTraculaData.dat", sep="\t", row.names=F, col.names=T, quote=F)

And just like that, you have a single tab-delimited text file with all your data for 18 tracts (9 per hemisphere) for all of your subjects, ready for reading into your favorite stats program or sending to co-authors.

A while back, I posted about how to use TORTOISE 2.0/2.5 for preprocessing Diffusion Weighted Images (DWIs), create tensors, and then do blip-up blip-down correction. All of those steps relied on using a graphical user interface (GUI) through a program called IDL to tell TORTOISE what you wanted it to do. This was often a tedious process, as it required someone to click buttons for every single subject, even when all subjects had been collected in the same manner. Enter TORTOISE 3.0, a new suite of command line programs for doing what TORTOISE 2.0/2.5 did, but faster and more scriptable!

First things first: go download TORTOISE 3.0 (it’s now split into DIFF_PREP and DIFF_CALC downloads). The next thing you’ll need to do is add TORTOISE to your path. I personally like to put all of my MRI-related applications in one folder, so for me to add TORTOISE to my path (on my Mac), I would do the following:
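Something like the lines below – the ~/abin/tortoise folder names are just my layout, so point them at wherever you actually unpacked the two downloads:

```shell
# Append the TORTOISE binaries to the PATH (put these in ~/.bash_profile).
# ~/abin/tortoise is just where I keep MRI software -- adjust to taste.
export PATH=$PATH:~/abin/tortoise/DIFF_PREP/bin
export PATH=$PATH:~/abin/tortoise/DIFF_CALC/bin
```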

You’ll then need to close the shell and open a new one, or run “source ~/.bash_profile” to load the changes into your current shell. Now you’re all set – let’s look at some data! If you’re not already organizing your data according to the BIDS spec, I would highly suggest you start: it makes your file structures regular and considerably easier for new people to start working on your project. For now, let’s assume that my data is in some kind of BIDS-like format as follows:
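Something like this (the names are illustrative, not an exact BIDS tree):

```
sub_001/
├── anat/
│   ├── sub_001_T1w.nii.gz
│   └── sub_001_T2w.nii.gz
└── dwi/
    ├── sub_001_dwi.nii.gz
    ├── sub_001_dwi.bval
    └── sub_001_dwi.bvec
```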

The first thing we need to do is skull-strip our T1 and T2 images, and it generally helps to put them into some kind of axialized orientation (something approximating AC-PC alignment would be good). We can do this easily in AFNI, starting by skull-stripping both the T1 and T2 images:
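A sketch with AFNI tools, using the hypothetical filenames above – fat_proc_axialize_anat ships with AFNI’s FATCAT tools, and its reference template names can vary by version, so check its -help:

```shell
cd sub_001/anat

# Skull-strip the T1 and T2
3dSkullStrip -input sub_001_T1w.nii.gz -prefix sub_001_T1w_ss.nii.gz
3dSkullStrip -input sub_001_T2w.nii.gz -prefix sub_001_T2w_ss.nii.gz

# Axialize the stripped T2 (template name may differ in your AFNI build)
fat_proc_axialize_anat                                          \
    -inset   sub_001_T2w_ss.nii.gz                              \
    -refset  mni_icbm152_t2_relx_tal_nlin_sym_09a_ACPCE.nii.gz  \
    -mode_t2w                                                   \
    -prefix  sub_001_T2w_ss_axial
```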

You probably feel like you’ve already had your typing workout for the day! But that was just getting the files set up the way we need them! Now we’re ready to actually use TORTOISE. The initial processing in TORTOISE has two steps: 1) importing the NIFTI files, and 2) running DIFFPREP. One random bit: TORTOISE doesn’t (currently) seem to play well with gzipped files, so we expand those first:
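A sketch of those two bits (the -p value assumes an AP acquisition, and the exact option spellings may differ slightly by TORTOISE build, so check the ImportNIFTI usage message):

```shell
cd sub_001/dwi

# TORTOISE doesn't like gzipped NIFTIs, so expand first
gunzip sub_001_dwi.nii.gz

# Import the DWI into TORTOISE's format (AP phase encoding = vertical)
ImportNIFTI -i sub_001_dwi.nii      \
            -p vertical             \
            -b sub_001_dwi.bval     \
            -v sub_001_dwi.bvec
```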

Let’s review the options: -i is the input file, -p is the phase-encoding direction (AP would be vertical, LR would be horizontal), -b is the b-values file, and -v is the b-vectors file.

Now that we have the data imported, you’ll notice there’s a folder in the current directory (sub_001/dwi) named sub_001_dwi_proc. We’ll actually execute the DIFFPREP step from inside this folder (it copies some temporary files into the folder you execute it from, and I like to keep my directories as clean as possible!).
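The call itself looks something like this – the list filename is whatever ImportNIFTI generated for you, and the registration settings filename is a placeholder for whichever file you use from ~/DIFF_PREP_WORK:

```shell
cd sub_001_dwi_proc

# Motion/eddy/EPI-distortion correction, registered to the stripped T2
DIFFPREP -i sub_001_dwi.list                      \
         -o sub_001_dwi_DMC                       \
         -s ../../anat/sub_001_T2w_ss.nii         \
         --reg_settings registration_settings.dmc
```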

Again, let’s review the options: -i is the input list, -o is the output name (here I like to end with DMC to keep with the naming convention of TORTOISE 2.0/2.5), -s is my structural file (a T2 image), and --reg_settings is your registration settings file, located in your home directory’s DIFF_PREP_WORK folder. You could of course have called yours something different and be a rebel.

While the differences aren’t always astounding from just looking at the images, I feel like I have to put some brain imaging in every post, so here you go!

From here you can use AFNI’s FATCAT programs (part 3 is particularly relevant) to fit your tensors and do ROI-to-ROI analyses. You can also use Freesurfer (easily run in parallel) and Tracula (which can also run in parallel). Stay tuned: next time I’ll run through the next step, DR_BUDDI, if you have blip-up and blip-down data, and then we’ll tie in the full AFNI pipeline for more analysis possibilities.