Deep learning has led to significant advances in artificial intelligence, in part by adopting strategies motivated by neurophysiology. However, it remains unclear whether deep learning can occur in the real brain. Here, we show that a deep learning algorithm that utilizes multi-compartment neurons may help explain how the brain optimizes cost functions.
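The idea behind multi-compartment learning can be illustrated with a minimal sketch, assuming a two-compartment unit in which a basal compartment integrates feedforward input and an apical compartment integrates top-down feedback that acts as a local teaching signal. All names, dynamics, and constants below are illustrative assumptions, not the paper's actual model.

```python
# Hedged sketch of a two-compartment neuron unit. Assumptions (not from the
# source): sigmoid rate function, fixed random feedback weights, and a
# delta-rule-like local update driven by the apical compartment.
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

class TwoCompartmentNeuron:
    def __init__(self, n_inputs, n_feedback, lr=0.1):
        self.w_basal = rng.normal(scale=0.1, size=n_inputs)     # feedforward weights (learned)
        self.w_apical = rng.normal(scale=0.1, size=n_feedback)  # feedback weights (fixed)
        self.lr = lr

    def forward(self, x):
        # Basal compartment drives the somatic firing rate.
        self.x = x
        self.basal = self.w_basal @ x
        self.rate = sigmoid(self.basal)
        return self.rate

    def update(self, feedback):
        # Apical compartment combines feedback with the basal drive to form a
        # "target" rate; the mismatch with the actual rate gives a local update.
        apical = self.w_apical @ feedback
        target = sigmoid(self.basal + apical)
        self.w_basal += self.lr * (target - self.rate) * self.x
```

With positive feedback weights, repeated updates push the unit's rate toward the feedback-nudged target using only locally available quantities, which is the property that makes such schemes candidates for biologically plausible credit assignment.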

The brain’s ability to associate different stimuli is vital for long-term memory, but how neural ensembles encode associative memories is unknown. We present preliminary neurobiological evidence that a supervised learning model shapes neural population activity.

Neuroscience has focused on the detailed implementation of computation, studying neural codes, dynamics and circuits. In machine learning, however, artificial neural networks tend to eschew precisely designed codes, dynamics or circuits in favor of brute force optimization of a cost function, often using simple and relatively uniform initial architectures. Two recent developments have emerged within machine learning that create an opportunity to connect these seemingly divergent perspectives.

Igor Stagljar likens the process of commercializing his groundbreaking research into cell membrane proteins – which has yielded hundreds of new targets for drug-makers seeking cures for cancer and other deadly diseases – to building a highly automated Tesla factory.