
I'm outfitting a small behavioural lab on a fiscal-austerity-induced budget. I'm considering trying out a Raspberry Pi for the individual participant systems. My tentative plan is to run Linux and code the experiments in Python.

Does anyone have any experience with this?

My primary concern is whether the processor and video capabilities of the Pi will make it accurate enough in terms of timing. Will the Pi have substantial delay/variability between a call to present complicated visual stimuli (like arrays of Gabors) and the time the stimuli actually appear on the screen? And when a keyboard or mouse button gets pressed, what will the delay/variability be between the actual response time and the recorded response time?
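
For concreteness, this is the sort of stimulus I mean. A minimal numpy sketch of a single Gabor patch (a sinusoidal carrier under a Gaussian envelope); the function name, parameter names, and defaults are just illustrative:

    import numpy as np

    def gabor(size=128, sigma=20.0, wavelength=16.0, theta=0.0, phase=0.0):
        """Return a size x size Gabor patch with values in [-1, 1]."""
        half = size // 2
        y, x = np.mgrid[-half:half, -half:half]
        # Rotate the carrier's coordinate axis by orientation theta (radians)
        xr = x * np.cos(theta) + y * np.sin(theta)
        envelope = np.exp(-(x ** 2 + y ** 2) / (2.0 * sigma ** 2))
        carrier = np.cos(2.0 * np.pi * xr / wavelength + phase)
        return envelope * carrier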

Hi Jeromy, I have indeed asked the same question on the Raspberry Pi Stack Exchange. So far it seems like the Pi should work fine, but it would be great to hear of first-hand experiences with this application... As for experiments: things like visual search and the Stroop task.
– apposite Jan 19 '13 at 13:17

I have tried to limit the parameters of the question a bit to make it more focused and answerable. If these aren't the type of characteristics you are looking for in your experiments, please don't hesitate to edit your own requirements in. I think that the question being just "what does your lab do?" makes it less constructive. As it is, it looks a bit like a "list" question now, but if you narrow down your requirements enough, it should be fine.
– Chuck Sherrington Jan 21 '13 at 19:55

Chuck, thanks for your help. I'm learning what's expected of the questions here. Your edits moved the topic away from a question about the raw capability of the system to a question about parameters. So I've edited it myself to make things a little clearer and more specific. Thanks also for the link to the dsp.se question.
– apposite Jan 21 '13 at 21:20

2 Answers

So I've had a chance to try out the RPi for this purpose. Short answer: it works great (with some limitations).

The RPi does not support OpenGL. I approached this system with the idea of using a Python environment to create and present experiments. There are two good options for this that I know of: OpenSesame and PsychoPy. PsychoPy requires an OpenGL Python backend (pyglet), so it won't run on the RPi. OpenSesame gives you the option of using the same backend as PsychoPy, but it has other options too, one of which does not rely on OpenGL (it is based on pygame). This 'legacy' backend works just fine. But the absence of OpenGL means that graphics rely solely on the 700 MHz CPU, which quickly gets overloaded by any sort of rapidly changing visual stimulus (e.g. drifting Gabors, video).
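
To make the 'legacy' path concrete, here is a minimal sketch of the plain-pygame approach that backend takes: draw to a software surface, blit, and flip. This is my own illustration, not OpenSesame's actual code, and the resolution and stimulus are arbitrary:

    import pygame

    # Plain pygame, no OpenGL: all drawing is done on the CPU.
    pygame.init()
    screen = pygame.display.set_mode((1024, 768), pygame.FULLSCREEN)

    # Pre-render the stimulus once; redrawing every frame is what
    # overloads the Pi's CPU with rapidly changing stimuli.
    stimulus = pygame.Surface(screen.get_size())
    stimulus.fill((128, 128, 128))
    pygame.draw.circle(stimulus, (255, 255, 255), (512, 384), 50)

    screen.blit(stimulus, (0, 0))
    pygame.display.flip()      # stimulus appears on the next screen refresh
    pygame.time.wait(1000)     # show for ~1 s
    pygame.quit()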

The RPi does have a very good video card (for a $25 computer) that supports OpenGL ES. Riverbank Computing provides Python bindings for OpenGL ES (pogles), so there is the possibility of hardware acceleration in Python. This has not yet been implemented in PsychoPy or OpenSesame, and it probably won't happen anytime soon, because there is currently an additional limitation on this system: there is no way to use OpenGL ES inside the Linux windowing environment (X). That will probably arrive in the medium term. But at present even a lightweight version of X on the RPi is noticeably clunky and slow (overclocking the CPU helps with this). OpenGL proper can be used on the Pi through CPU emulation (via Mesa)... but this so heavily overloads the CPU that it's effectively useless.

So the RPi is not well suited to displaying rapidly changing visual stimuli (e.g. drifting Gabors, video), and PsychoPy effectively doesn't run. But OpenSesame runs fine with the non-OpenGL 'legacy' backend. For a manual RT experiment involving the presentation of static images, this setup running on the Pi will have much the same timing resolution as the same setup running on any other computer.
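
For illustration, a bare-bones version of such a manual RT trial in pygame (again my own sketch, not OpenSesame's code; 'target.png' is a stand-in for whatever static image you present):

    import pygame

    pygame.init()
    screen = pygame.display.set_mode((1024, 768), pygame.FULLSCREEN)

    # 'target.png' is a placeholder for your static stimulus image.
    stimulus = pygame.image.load('target.png').convert()

    screen.blit(stimulus, (0, 0))
    pygame.display.flip()
    onset = pygame.time.get_ticks()   # ms since pygame.init()

    pygame.event.clear()              # drop any stale input events
    waiting = True
    while waiting:
        for event in pygame.event.get():
            if event.type == pygame.KEYDOWN:
                rt = pygame.time.get_ticks() - onset
                print('RT: %d ms (key: %s)' % (rt, pygame.key.name(event.key)))
                waiting = False
    pygame.quit()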

And it will get better, probably fairly quickly. OpenGL ES support in X should come soon, and once it is available it will be possible to use the OpenGL ES Python bindings currently under development to build backends for PsychoPy and OpenSesame. These will support fluid moving stimuli and video, and free up the CPU for other tasks. My personal hope is that this will free enough resources to let the RPi interface with other systems... like an eye-tracker computer or an EEG amp.

But for now it seems just fine for basic no-video psychophysics. And it's very, very cheap... even factoring in the cost of a small DVI monitor you should be able to get a data collection system up and running for less than 100 euro.

There will necessarily be a delay; the real questions are how long it will be and how much delay is acceptable to you.

The first question can only be answered by knowing the complexity of your program, the language used (Python is slow), and the power of your hardware (the Pi is not very powerful).

So if you have a code example, that could help. Just an idea, but you could also ask the video game community, which is good at optimizing such things ;)
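
If you want a rough empirical number rather than guesswork, you could benchmark the display loop directly on the Pi. A minimal sketch, assuming pygame is installed: alternate the screen between black and white and record the time between successive flips. Whether flip() actually blocks on the vertical refresh depends on the driver, so treat the numbers as indicative only:

    import time
    import pygame

    pygame.init()
    screen = pygame.display.set_mode((640, 480))

    intervals = []
    prev = time.time()
    for frame in range(300):
        colour = (255, 255, 255) if frame % 2 else (0, 0, 0)
        screen.fill(colour)
        pygame.display.flip()
        now = time.time()
        intervals.append(now - prev)
        prev = now
    pygame.quit()

    # The spread of the intervals is a crude upper bound on timing jitter.
    print('mean: %.1f ms' % (1000 * sum(intervals) / len(intervals)))
    print('max:  %.1f ms' % (1000 * max(intervals)))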

As for the second question, I don't know the literature on the capabilities of current production software. The psychophysics literature could help, as could asking open-source developers for more details, such as those of: