The complexity of the brain means that imaging produces vast amounts of data. Currently, neuroscientists sort the valuable data from the less valuable, sometimes weeks or even years after first gathering the images. Researchers must therefore decide on a small number of variables for testing before the actual brain scanning starts, even though they might later realise that other conditions would have been more appropriate. This leaves a relatively narrow scope of potential research questions per study, and limits how widely the findings can be generalised across patients.

Researchers can also unintentionally introduce bias into their work when searching for meaning and patterns in data; for example, studies with positive results are more likely to be published than those with negative or neutral results.

Now, the authors of a new opinion piece, published in Trends in Cognitive Sciences, argue that using machine learning whilst gathering brain data produces more accurate results far more quickly than current methods, and may counterbalance biases and other flaws that human researchers bring into a study.

Romy Lorenz, final year PhD student from Imperial’s Department of Medicine and lead author of the paper, said: “These issues could be causing the reproducibility crisis in cognitive science today, where researchers find they cannot reproduce the same results as previous studies despite following the same methods. This can even occur in well-known scientific theories where human error has crept in.”

Researchers tend to explore previously collected data in various ways, by using different analysis techniques or statistical methods, until they find a desired pattern in the data. But the authors say that analysing data in real-time is much more effective and efficient.

Lorenz said: “Ultimately, we as humans are not unbiased enough to do justice to the sheer amount of information collected by brain imaging techniques. However, using AI techniques while collecting brain data at the same time will greatly improve the reliability of the findings. It allows researchers to test many more experimental conditions in order to find the best one for a given research question. It will also mean that researchers cannot introduce flaws in the data anymore as the analysis runs in parallel with the brain scanning.”

[Image caption: In a previous study, the programme designed the best way to activate the visual area (yellow) and deactivate the auditory area (blue). This took less than six minutes per participant, compared with over two hours using conventional methods.]

Machine learning enables computers to learn from data to make predictions, which are used to inform and improve subsequent decisions. The technique is central to technology such as smartphone personal assistants like ‘Siri’, self-driving cars, and speech recognition systems.

Researchers program rules into the computer so that it can reason in a human-like way. These rules let the computer identify patterns in a dataset and then focus on those patterns without further human instruction.

In doing so, the computer can learn from the data and make decisions based on it, much as a researcher would, but far faster and without bias.
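To make this concrete, here is a simplified sketch of closed-loop condition selection, the kind of trial-and-error loop the article describes: the program keeps a running estimate of each candidate condition's effect, mostly tests the condition that currently looks best, and occasionally explores the others. This is an illustrative toy (an epsilon-greedy strategy with simulated responses), not the authors' actual algorithm; all condition names and numbers below are hypothetical.

```python
import random

def choose_condition(estimates, counts, epsilon=0.1):
    """Pick the next experimental condition: usually the current best
    (exploitation), occasionally a random one (exploration)."""
    if random.random() < epsilon or not any(counts):
        return random.randrange(len(estimates))
    return max(range(len(estimates)), key=lambda i: estimates[i])

def update(estimates, counts, condition, response):
    """Fold the newly measured response into a running average for
    that condition -- the 'learning from past experience' step."""
    counts[condition] += 1
    estimates[condition] += (response - estimates[condition]) / counts[condition]

# Hypothetical setup: four candidate stimulus conditions, where condition 2
# (unknown to the algorithm) evokes the strongest simulated brain response.
true_effects = [0.2, 0.5, 0.9, 0.4]
estimates, counts = [0.0] * 4, [0] * 4

random.seed(0)
for trial in range(200):
    c = choose_condition(estimates, counts)
    response = true_effects[c] + random.gauss(0, 0.1)  # noisy measurement
    update(estimates, counts, c, response)

best = max(range(4), key=lambda i: estimates[i])
print("Best condition found:", best)
```

Because the analysis runs while data are still being collected, each new measurement immediately influences which condition is tested next, which is why far fewer trials are needed than when every condition is tested an equal, pre-planned number of times.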

Dr Robert Leech, senior author of the study from Imperial’s Department of Medicine, said: “Much like a human, machines programmed this way can use past experiences to improve their performance in future. The big difference, however, is that the machine will be able to do this much faster.

“Importantly, the technique could be applied to many different fields. In the lab, we know it as ‘the automatic neuroscientist’, but it could easily become the automatic radiologist, the automatic psychologist and so on.”