Using AI To Image Brain Tissues In 3D At Nanometer Resolution

August 4, 2018 · 3 min read

New recurrent convolutional neural network can automatically map neurons in the brain.

It provides 10 times more accurate results than previous automated techniques.

The human brain contains more than 86 billion neurons, and a roughly equal number of other cells. Because of this complex internal structure, mapping a brain is a computationally intensive and tedious task.

High-resolution, detailed imaging of a single cubic millimeter of brain tissue can produce over 1,000 terabytes of data, and such maps help us better understand how the brain functions. Usually, the process involves imaging brain tissue in three dimensions at nanometer resolution with electron microscopy, then examining the volume to trace neurites and detect each synaptic connection.

To accelerate the process, scientists at the Max Planck Institute of Neurobiology (Germany) and Google have built a deep-learning model that can automatically map neurons in the brain, with greater accuracy than previous neural networks.

Conventional approaches work in two steps: a neural network first detects the boundaries between neurites, and a watershed-like algorithm then groups together image pixels that aren't separated by a boundary.
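The two-step pipeline can be sketched as follows. This is a toy illustration, not the authors' code: a hand-made "boundary probability map" stands in for the boundary-detecting network's output, and a simple breadth-first grouping stands in for the watershed step.

```python
# Toy sketch of the conventional two-step pipeline:
# 1) a network predicts a boundary probability for every pixel (faked here),
# 2) a watershed-like step groups pixels not separated by a boundary
#    (approximated here by breadth-first connected-component grouping).
from collections import deque

# Toy 8x8 "boundary probability map": a wall down column 4 splits the image.
H, W = 8, 8
boundary = [[1.0 if x == 4 else 0.0 for x in range(W)] for y in range(H)]

def group_pixels(boundary, threshold=0.5):
    """Assign a segment id to every non-boundary pixel (4-connectivity)."""
    labels = [[0] * W for _ in range(H)]
    next_id = 0
    for sy in range(H):
        for sx in range(W):
            if boundary[sy][sx] >= threshold or labels[sy][sx]:
                continue
            next_id += 1                      # start a new segment
            labels[sy][sx] = next_id
            queue = deque([(sy, sx)])
            while queue:                      # flood outwards from the seed
                y, x = queue.popleft()
                for dy, dx in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                    ny, nx = y + dy, x + dx
                    if (0 <= ny < H and 0 <= nx < W and not labels[ny][nx]
                            and boundary[ny][nx] < threshold):
                        labels[ny][nx] = next_id
                        queue.append((ny, nx))
    return labels, next_id

labels, n_segments = group_pixels(boundary)
print(n_segments)  # the wall splits the image into two segments
```

The weakness of this pipeline is that errors in the boundary map propagate directly into the grouping step, which is what motivates the combined approach described next.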

In 2015, the authors started working on an alternative method: a recurrent convolutional neural network (CNN) that combines these two steps. It starts from a particular pixel position and iteratively fills a region by predicting which pixels belong to the same component. Since then, they have been working to apply this CNN to large-scale datasets while maintaining high accuracy.
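The core idea — growing one object's mask outward from a seed — can be illustrated with a minimal sketch. This is not the published model: the trained recurrent CNN is replaced by a stub predictor (`same_object`, an assumption for illustration) that simply compares toy intensity values.

```python
# Hedged sketch of the seed-and-fill idea: starting from a seed pixel,
# iteratively grow a single object's mask by asking a predictor whether
# each frontier pixel belongs to the same component. A stub comparing
# toy intensities stands in for the trained recurrent CNN.
from collections import deque

# Toy image: two objects with intensities 1 and 2, background 0.
image = [
    [1, 1, 0, 2, 2],
    [1, 1, 0, 2, 2],
    [0, 0, 0, 0, 0],
]

def same_object(image, seed, pixel):
    """Stub for the network's prediction (assumption: intensity match)."""
    sy, sx = seed
    y, x = pixel
    return image[y][x] != 0 and image[y][x] == image[sy][sx]

def fill_from_seed(image, seed):
    """Iteratively grow one object's mask outward from the seed pixel."""
    H, W = len(image), len(image[0])
    mask = {seed}
    frontier = deque([seed])
    while frontier:
        y, x = frontier.popleft()
        for dy, dx in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            ny, nx = y + dy, x + dx
            if (0 <= ny < H and 0 <= nx < W and (ny, nx) not in mask
                    and same_object(image, seed, (ny, nx))):
                mask.add((ny, nx))
                frontier.append((ny, nx))
    return mask

mask = fill_from_seed(image, (0, 0))
print(len(mask))  # the four intensity-1 pixels
```

Because a single learned predictor decides membership directly, there is no separate boundary map whose errors a later grouping step could amplify.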

How Do They Measure The Accuracy Of The CNN?

The researchers developed a new metric, which they call 'expected run length', that measures how far they can trace a neuron (starting from a random point) before the algorithm makes an error.

In effect, the metric measures the amount of space between failures made by the CNN. It can be related to biological parameters, for example the average path length of neurons in various parts of the nervous system.
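A simplified reading of the metric (my interpretation, not the authors' exact definition) can be computed directly: errors split a traced neurite into error-free runs, a uniformly random starting point lands in a run with probability proportional to that run's length, and it is credited the length of its run. This makes the metric a length-weighted average, which rewards a few long error-free stretches over many short ones.

```python
# Simplified expected-run-length-style metric (an interpretation for
# illustration): the length-weighted mean of the error-free run lengths.
def expected_run_length(run_lengths):
    """Length-weighted mean of error-free run lengths (e.g. in microns)."""
    total = sum(run_lengths)
    return sum(l * l for l in run_lengths) / total

# A 100-micron neurite split by errors into runs of 60, 30 and 10 microns.
print(expected_run_length([60, 30, 10]))  # 46.0

# With no errors at all, the metric equals the full neurite length.
print(expected_run_length([100]))  # 100.0
```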

They applied the recurrent CNN to images of a zebra finch brain acquired with serial block-face scanning electron microscopy. They then used the expected run length to measure progress within a one-million-cubic-micron volume, and found that the algorithm performed far better than traditional approaches.

The researchers segmented every neuron in a small portion of the brain, fixing the algorithm's errors manually. Eventually, they were able to examine the neural connections and study how zebra finches sing and learn their songs.

They trained the CNN on thousands of 2D images, using NVIDIA Tesla GPUs and the CUDA-accelerated TensorFlow deep learning framework. When these images are stacked on top of one another, they produce a 3D picture.
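Stacking aligned 2D sections into a 3D volume is a standard array operation; a minimal illustration with toy arrays (not microscope data):

```python
# Minimal illustration of stacking aligned 2D sections into a 3D volume.
import numpy as np

# Three aligned 4x4 "sections" from consecutive z-planes (toy data).
sections = [np.zeros((4, 4), dtype=np.uint8) for _ in range(3)]

# Stacking along a new leading axis yields a (z, y, x) volume.
volume = np.stack(sections, axis=0)
print(volume.shape)  # (3, 4, 4)
```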


According to the team, it would have taken approximately 100,000 hours to manually label a sample of just one cubic millimeter. The CNN, meanwhile, trained and completed the task in 7 days, and yielded results 10 times more accurate than previous automated techniques.

What’s Next?

The researchers plan to continue improving the algorithm's performance, with the goal of developing a fully automated synapse-resolution CNN. They have made the code available on GitHub, along with a WebGL-based viewer for volumetric datasets, to help the wider research community build similar, more efficient methods.