
When I started my master’s degree, I didn’t entirely know what I was taking on. I chose to study cognitive neuroscience because I knew it was an area which presently receives a lot of funding for PhD research, and also because it seemed like a robust, scientific approach to psychology. After a few short months I have come to discover that research in this area is something I really enjoy, and that is largely down to the opportunities to program and develop computerized research tools.

At the same time, the Raspberry Pi Foundation has announced and launched the new Raspberry Pi 2: a credit-card-sized board which can be run as a standalone personal computer. I’m not going to go into all the ins and outs of the Raspberry Pi (or Pi, for short); you can find out more by visiting their website at www.raspberrypi.org. Instead, I want to talk about what it brings to the study of psychology.

Anybody who has studied psychology as a science will know that a lot of research consists of undergraduate students sitting in dark rooms performing mundane tasks while their reaction times are measured. This has been terrific for the creators of Matlab, whose IDE has facilitated many a psychology button-pressing extravaganza. Even so, there is a new kid on the block that is gradually gathering momentum, and that is Python.

One of the modules we undertook this spring semester was programming in Python, something I hadn’t done before. I was keen to do at least some of this on the Pi, so as to justify my impulsive purchase of it. My lecturer found it kind of cute (that would be “cute”, except that he didn’t say it aloud). I persisted nonetheless, until I found that, at least for graphics, the Pi wasn’t altogether compatible with the PsychoPy module we were using. Unfortunately it does not support OpenGL graphics, so it actually struggles with button-pressing experiments. But it was a start, it whetted my appetite, and it was fun.

The real fun has commenced as I’ve begun my dissertation project. We’re doing some work on spatial cognition (how space is represented in the brain) using functional magnetic resonance imaging (fMRI), and part of this involves calculating different test statistics on 3D data on a voxel-by-voxel basis (voxel = volumetric pixel). There are lots of Python modules optimized for flattening, analyzing and reassembling these datasets, and we’re using them to build novel analyses which haven’t been done before. On this frontier of neuroimaging research, the Raspberry Pi stands gallant as my building platform and testing station for producing these scripts.
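The flatten–analyze–reassemble pattern is easy to sketch in plain NumPy. This is a minimal illustration, not our actual pipeline: random numbers stand in for a real scan (which would be loaded with something like nibabel), and the mean over time stands in for a real test statistic.

```python
import numpy as np

# Toy stand-in for a scan: a 64x64x26 volume with 20 time points of
# random data, just so the sketch is self-contained and runnable.
rng = np.random.default_rng(0)
volume = rng.normal(size=(64, 64, 26, 20))

# Flatten the three spatial dimensions into one axis of voxels...
flat = volume.reshape(-1, volume.shape[-1])   # shape (106496, 20)

# ...compute a per-voxel statistic over the time axis (the mean here,
# standing in for whatever test statistic the analysis needs)...
stat = flat.mean(axis=1)                      # shape (106496,)

# ...and reassemble the results back into a 3D statistical map.
stat_map = stat.reshape(64, 64, 26)
print(stat_map.shape)  # (64, 64, 26)
```

The same reshape-in, reshape-out pattern works for any per-voxel calculation, which is what makes these datasets so pleasant to script against.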

I’ve been using the module ‘minepy‘ to calculate the Maximal Information Coefficient (MIC) for each voxel in a set of datasets. The modules install seamlessly on the Pi from the different repositories, and scripts can be written elegantly through the ‘spyder‘ IDE. What excites me most of all is the sheer size of the data we are working with. Each dataset is 64x64x26 voxels, which means 106,496 calculations. On the Pi, each one takes about a second, and (when I close the GUI) all in all it takes about 2 hours. For me to run this on all 48 of our scans would take about 4 days. Fortunately, for processing the whole lot we can pass our script to the cluster computer at York, which (assuming it is coded correctly) should polish it all off in around 16 hours.
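Those back-of-envelope numbers are easy to check:

```python
voxels = 64 * 64 * 26         # voxels per dataset
print(voxels)                 # 106496 MIC calculations per scan

scans = 48
hours_per_scan = 2            # observed on the Pi with the GUI closed
days = scans * hours_per_scan / 24
print(days)                   # 4.0 days for all 48 scans on the Pi
```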

It looks like something out of a movie, but it’s real!

What I really like about this is how real it all is. I’ve taken on many little projects here and there over the years, but none of them have ever really meant anything. Sure, I learned a lot, but I wanted to put it into action. Here, when those lines of text fly up my screen for 2 hours, I look forward to the output for reasons beyond just knowing it worked. It also gives me the opportunity to work on my projects at home using a Linux software environment, which is great, since Python runs natively on Linux.

This also ticks another box for the Raspberry Pi Foundation. Universities and hospitals around the world have powerful workstations and supercomputers which they use to process neuroimaging data. That is very sensible, considering the sheer volumes they work with. But can it also be done on a £25 printed circuit board in Billy-whizz’s basement? Yes… it can!

I read a paper today in the British Medical Journal presenting a compelling case for a randomized controlled trial to measure the effectiveness of parachute use as a medical intervention to prevent injury when jumping from an aeroplane (Smith & Pell, 2003).

Their reasoning was as follows:

- No such trial has been done before.
- The perception of the success of the parachute is based solely on anecdote.
- Natural history studies show that free fall without a parachute does not always result in an adverse outcome, so there is not a 100% chance of mortality when not using a parachute.
- There is also not a 100% chance of survival when using a parachute.
- For any other medical intervention, not having a randomized controlled trial would be unacceptable.
- Current industry trials are subject to selection and reporting bias, as well as conflicts of interest.

All in all, there seem to be plenty of reasons why a randomized controlled trial should be carried out, so as to determine more appropriately the efficacy of the parachute… except for the fact that it would be stupid. And that leads me on to the post-mortem Atlantic salmon.

The blood-oxygen-level-dependent (BOLD) signal showed a significant change during the presentation of stimuli compared to at rest (t(131) > 3.15, p(uncorrected) < 0.001). An 81 mm³ cluster of active voxels was identified in the salmon’s brain cavity, and another smaller cluster was identified in the salmon’s dorsal spinal column.

After the same t-tests were run using Benjamini-Hochberg and family-wise corrections, both with a Type I error rate of 0.05, no significant results were found. With those error rates adjusted to 0.25, there was still no brain activity in Mr Salmon.
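For the curious, the Benjamini-Hochberg procedure itself is only a few lines of NumPy. This is my own sketch (the function and the pure-noise data are mine, not the salmon study’s), showing how an uncorrected threshold over ~100,000 noise voxels produces spurious “activity” that FDR correction removes:

```python
import numpy as np

def benjamini_hochberg(pvals, q=0.05):
    """Boolean mask of p-values surviving Benjamini-Hochberg FDR at level q."""
    p = np.asarray(pvals, dtype=float)
    m = p.size
    order = np.argsort(p)
    ranked = p[order]
    # Find the largest rank k with p_(k) <= (k/m) * q; every p-value up to
    # and including rank k is declared significant.
    below = ranked <= (np.arange(1, m + 1) / m) * q
    mask = np.zeros(m, dtype=bool)
    if below.any():
        k = np.nonzero(below)[0].max()
        mask[order[:k + 1]] = True
    return mask

# Pure-noise "voxels", as in the dead salmon: p-values are uniform when
# there is no real effect anywhere.
rng = np.random.default_rng(42)
p_noise = rng.uniform(size=106496)

uncorrected = (p_noise < 0.001).sum()          # on the order of a hundred hits
corrected = benjamini_hochberg(p_noise).sum()  # almost always zero
print(uncorrected, corrected)
```

With no true effects, thresholding at p < 0.001 still flags roughly 0.1% of the voxels, while the FDR-corrected mask nearly always comes back empty, which is exactly what happened to Mr Salmon.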

So, we have two daft studies here, both of which tickled me. They show the importance of the correct use of advanced (and less advanced) statistics. It’s important to have a firm grasp of the statistical techniques you are working with, or you can very easily end up with errors. This is especially so in the case of the salmon, where such a large number of fine-grained readings are recorded that, without an element of realism being considered during the analysis, obvious statistical errors pop up.
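To put a number on that: at an uncorrected threshold, the expected count of false positives is just the number of tests times the threshold. Using a volume the size of one of my own scans as the illustration (the salmon figures would be in the same ballpark):

```python
n_voxels = 64 * 64 * 26   # one 64x64x26 volume of independent tests
alpha = 0.001             # the salmon study's uncorrected threshold
expected_false_positives = n_voxels * alpha
print(expected_false_positives)   # about 106 voxels "active" by chance alone
```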

And of course, the same level of realism is required at the opposite end of the spectrum, when you ask whether we really need experimental control at all. I am impressed with how statistics allow researchers in the social sciences to measure population variables with precision. I don’t pretend for one minute that I have an in-depth understanding of all the methods covered even at undergraduate level, but I have learnt the value of getting the stats right. It is important to reflect properly on which statistical tools are genuinely necessary to answer the research question, and, of the ones you choose to use, it is important to understand their intricate details.