Despite the progress made in artificial intelligence over the past few years, deep learning software still lags far behind the pattern recognition and learning capabilities of the mammalian mind. Where a human might be able to recognize an apple after seeing just a couple of apples, even the most sophisticated deep learning software has to review hundreds of thousands of apple images before it can reliably identify one.

A $28 million grant awarded by the Intelligence Advanced Research Projects Activity (IARPA) hopes to change that by studying how the brain perceives patterns and applying that research to AI software. The grant’s recipients – Harvard University’s John A. Paulson School of Engineering and Applied Sciences (SEAS), Center for Brain Science (CBS), and Department of Molecular and Cellular Biology – plan to focus their efforts on uncovering why our brains are so good at learning and pattern recognition and, from there, to design algorithms that can analyze and interpret patterns in images and text.

To accomplish this task, researchers plan to reverse engineer the brain by mapping the activity of its visual cortex as it analyzes patterns. The analysis will involve an unparalleled quantity of data – more than a petabyte, equivalent to roughly 1.6 million CDs’ worth. This wealth of information will then be used to build a 3D map of the brain and to develop computer algorithms that can function with efficiency approaching that of the mammalian brain.
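As a rough sanity check of that figure (my own back-of-the-envelope arithmetic, not from the researchers), the petabyte-to-CD comparison works out if one assumes a binary petabyte (2^50 bytes) and a standard 700 MB CD:

```python
# Back-of-the-envelope check: how many 700 MB CDs fit in a petabyte?
# Assumptions (not stated in the article): 1 PiB = 2**50 bytes, CD = 700 MB.
PETABYTE_BYTES = 2**50          # binary petabyte (pebibyte)
CD_BYTES = 700 * 10**6          # standard 700 MB data CD

cds = PETABYTE_BYTES / CD_BYTES
print(f"{cds / 1e6:.2f} million CDs")  # ≈ 1.61 million CDs
```

Under those assumptions the result lands close to the article’s "1.6 million CDs" figure; a decimal petabyte (10^15 bytes) would give closer to 1.4 million.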

Project leader David Cox (assistant professor of molecular and cellular biology, and computer science) calls the project “a moonshot challenge” and equates it to the Human Genome Project – the international scientific research effort to unravel our expansive genetic code.

IARPA is an organization within the US government’s Office of the Director of National Intelligence.

Starting in Cox’s lab, the researchers will train lab rats to recognize a number of objects on a computer screen while recording the activity of their visual neurons using laser microscopes built specifically for this project. The microscopes – built by partners at Rockefeller University – will help reveal what changes in the animals’ brains as they learn. Finally, a one-cubic-millimeter section of each rat’s brain will be removed, and ultra-thin slices will be imaged and analyzed by a molecular and cellular biology lab run by Professor Jeremy Lichtman.

Lichtman calls the effort “an amazing opportunity to see all the intricate details of a full piece of cerebral cortex.”

Once all that data is compiled, it’s computer science professor Hanspeter Pfister’s job to reconstruct the brain in three dimensions in order to understand how the visual cortex neurons connect to each other and transfer information. “We will reconstruct neural circuits at an unprecedented scale from petabytes of structural and functional data,” he says in a post on the Harvard website. “This requires us to make new advances in data management, high-performance computing, computer vision, and network analysis.”

Cox calls the project huge but emphasizes the importance of such an expansive task: “One of the most exciting things about this project is that we are working on one of the great remaining achievements for human knowledge — understanding how the brain works at a fundamental level.”

With all the connectivity data collected and the 3D models created, computer science researchers will develop algorithms for learning and pattern recognition that may be used to detect cyberattacks, read MRI scans, and even drive vehicles. But above all, this project may help intelligence agencies process and analyze the troves of data flowing through their servers.

On Monday, The White House announced plans to co-host four upcoming public workshops on various AI topics to "spur public dialogue on artificial intelligence and machine learning and identify challenges and opportunities related to this emerging technology." Spearheaded by the Office of Science and Technology Policy, the workshops will be rolled out over the next few months (May to July) and will cover topics including implications in law and government, as well as the social and economic impacts. Workshop co-hosts include academic and non-profit institutions, as well as the National Economic Council. In addition, a new National Science and Technology Council (NSTC) subcommittee on machine learning and artificial intelligence will meet for the first time next week. The NSTC is currently working to leverage AI and machine learning technology in a variety of government services.

A few weeks ago, Chinese software company Baidu released key parts of its artificial intelligence speech recognition code as open source, following in the footsteps of Facebook and Google last year.

Episode summary: This week, AI in Industry features Jeremy Barnes, Chief Architect at Element AI. Jeremy talks about the common mistakes businesses make when adopting AI to solve broad business problems. He also sheds light on the problem areas where AI adoption could raise a business’s market value, on hiring talent with the right combination of subject matter expertise and business experience, and on the business and technical aspects executives should consider before adopting AI.
