At the Olin Center, we originally had a half dozen Solaris machines that we used for fMRI processing. Storage and processing were done on the same box, and data moved between servers only if we ran out of space on one. A couple of years ago we got a Linux cluster with 28 nodes, 8 CPUs each, plus a separate 100 TB storage array. The whole cluster is controlled by Sun Grid Engine, and I adapted our fMRI processing to run under it.

Since we've run more than 7,000 studies, with more than 30,000 functional data series, we have a lot of data to process, and automatic batch processing was the best solution. Put simply, I built a system that treats the processing of each subject as a cluster job, and a manager that submits those jobs to the cluster.
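The job-per-subject idea can be sketched as follows. This is a hedged illustration, not the actual Olin Center manager: the helper name, the per-subject script `process_subject.sh`, and the log layout are assumptions, and a real manager would hand each command list to something like `subprocess.run` to submit it via SGE's `qsub`.

```python
# Sketch of a job-per-subject submission manager for Sun Grid Engine.
# The script name and log paths are hypothetical; only the qsub flags
# (-N job name, -o/-e log redirection) are standard SGE options.
def build_qsub_command(subject_id, script="process_subject.sh"):
    """Build the qsub argument list for one subject's processing job."""
    return [
        "qsub",
        "-N", f"fmri_{subject_id}",      # job name, visible in qstat
        "-o", f"logs/{subject_id}.out",  # SGE stdout log for this job
        "-e", f"logs/{subject_id}.err",  # SGE stderr log for this job
        script,
        subject_id,
    ]

if __name__ == "__main__":
    # Dry run: print the commands instead of submitting them.
    for subj in ["sub001", "sub002", "sub003"]:
        print(" ".join(build_qsub_command(subj)))
```

Because each subject is an independent job, the cluster's scheduler handles load balancing and node failures; the manager only has to track which subjects have been submitted and which have finished.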

This allowed us to preprocess (realignment using INRIalign) and run statistics on a batch of 1,500 subjects in one week.