There's an ongoing debate among neuroscientists, cognitive scientists, and even philosophers as to whether or not we could ever construct or reverse engineer the human brain. Some suggest it's not possible, others argue about the best way to do it, and still others have already begun working on it.

Regardless, it's fair to say that ongoing breakthroughs in brain science are steadily paving the way to the day when an artificial brain can be constructed from scratch. And if we assume that computational functionalism holds true as a theory, the idea that our brains are a kind of computer, there are two very promising approaches worth pursuing.

Interestingly, the two approaches come from two quite different disciplines: cognitive science and neuroscience. One side wants to build a brain with code, while the other wants to recreate all the brain's important functions by emulating it on a computer. At this point it's anyone's guess which camp will get there first, if either does.


Before we take a deeper look into these two approaches, however, it's worth reviewing what our friend Alan Turing had to say about brains.

The Church-Turing thesis

Given that scientists are looking to model the human brain in a digital substrate (i.e. a computer), they have to work under a rather fundamental assumption: computational functionalism. This goes back to the Church-Turing thesis, which states that any effectively computable function can be computed by a Turing machine. And since a universal Turing machine can emulate any other Turing machine, every physically computable function can, in principle, be computed on a single general-purpose machine. So if brain activity is regarded as a function that is physically computed by brains, then it should be possible to compute it on a Turing machine, namely a computer.
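To make the thesis concrete, here is a minimal Turing machine simulator, a toy sketch assuming nothing beyond the definition above: any process that can be written as a finite table of state-transition rules can be run mechanically. This particular rule table increments a binary number.

```python
# A minimal Turing machine simulator. If a process can be written as a
# finite table of (state, symbol) -> (state, write, move) rules, a
# computer can execute it step by step.
def run_tm(tape, rules, head=0, state="start", max_steps=1000):
    cells = dict(enumerate(tape))          # sparse tape; "_" means blank
    for _ in range(max_steps):
        if state == "halt":
            break
        symbol = cells.get(head, "_")
        state, write, move = rules[(state, symbol)]
        cells[head] = write
        head += 1 if move == "R" else -1
    return "".join(cells[i] for i in sorted(cells)).strip("_")

# Rules for binary increment, with the head starting on the rightmost
# bit: turn trailing 1s into 0s (carrying), then the first 0 or blank
# into a 1.
rules = {
    ("start", "1"): ("start", "0", "L"),  # 1 + carry -> 0, keep carrying
    ("start", "0"): ("halt",  "1", "L"),  # absorb the carry
    ("start", "_"): ("halt",  "1", "L"),  # overflow into a new digit
}
print(run_tm("1011", rules, head=3))  # "1011" + 1 = "1100"
```

The point isn't the arithmetic; it's that the machine knows nothing about binary numbers. All of the "intelligence" lives in the rule table, which is exactly the sense in which a computer can, in principle, run whatever rules a brain physically implements.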

So, if you believe there's something mystical or vital about human cognition, you're probably not going to put much credence in either of these two approaches.

Or, if you believe that there's something inherently unique about intelligence that can't be translated into the digital realm, you've got your work cut out for you to explain what that is exactly — keeping in mind that any informational process is computational, including those brought about by electrical and chemical reactions. Minds are what brains do, so it's not too implausible to suggest that minds are what computers can do, too.

Rules-based artificial intelligence

One very promising strategy for building brains is the rules-based approach. The basic idea is that scientists don't need to mimic the human brain in its entirety. Instead, they just have to figure out how the "software" parts of the brain work; they need to figure out the algorithms of intelligence and the ways that they're intricately intertwined. Consequently, it's this approach that excites the cognitive scientists.

Some computer theorists insist that the rules-based approach will get us to the brain-making finish line first. Ben Goertzel is one such theorist. His basic argument is that other approaches over-complicate and muddle the issue. He likens the approach to building airplanes: we didn't have to reverse engineer the bird to learn how to fly.

Essentially, cognitive scientists like Goertzel are confident that hard-coding an artificial general intelligence (AGI) is a more elegant and direct approach. It's simply a matter of identifying and developing the algorithms sufficient for the emergence of the traits they're looking for in an AGI. They define intelligence in this context as the ability to detect patterns in the world, including in itself.

To that end, Goertzel and other AI theorists have highlighted the importance of developing effective learning algorithms. A new mind comes into the world as a blank slate, they argue, and it spends years learning, developing, and evolving. Intelligence is subject to both genetic and epigenetic factors, and just as importantly, environmental factors. It is unreasonable, say the cognitive scientists, to presume that a brain could suddenly emerge and be full of intelligence and wisdom without any actual experience.

This is why Goertzel is working to create a "baby-like" artificial intelligence first, and then raise and train this AI baby in a simulated or virtual world such as Second Life to produce a more powerful intelligence. A fundamental assumption is that knowledge can be represented in a network whose nodes and links carry "probabilistic truth values" as well as "attention values," with the attention values resembling the weights in a neural network. A number of algorithms need to be developed to make the whole system work, argues Goertzel, the central ones being a probabilistic inference engine and a custom version of evolutionary programming. Once these algorithms and associations are established, it's just a matter of teaching the AI what it needs to know.
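The knowledge-network idea can be sketched in code. The following is a deliberately simplified illustration, not OpenCog's actual API: the class names, the attention field, and the deduction formula are all invented stand-ins for the probabilistic truth values and inference engine described above.

```python
from dataclasses import dataclass, field

@dataclass
class TruthValue:
    strength: float    # probability-like estimate that a statement holds
    confidence: float  # how much evidence backs that estimate

@dataclass
class Node:
    name: str
    attention: float = 0.0               # loosely analogous to a neural-net weight
    links: dict = field(default_factory=dict)  # target name -> TruthValue

def deduce(ab: TruthValue, bc: TruthValue) -> TruthValue:
    """Toy probabilistic deduction: chain A->B and B->C into A->C."""
    return TruthValue(strength=ab.strength * bc.strength,
                      confidence=ab.confidence * bc.confidence)

# "cat is a mammal" and "mammal is an animal", each held with some
# uncertainty; the inference engine derives "cat is an animal".
cat = Node("cat", links={"mammal": TruthValue(0.98, 0.90)})
mammal = Node("mammal", links={"animal": TruthValue(0.99, 0.95)})
inferred = deduce(cat.links["mammal"], mammal.links["animal"])
```

A learning algorithm in this picture would add and revise links (and shift attention values) as the "baby" AI accumulates experience, rather than having every truth value hand-coded.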

Whole brain emulation

Neuroscientists aren't entirely convinced by the rules-based approach. They feel that something is being left out of the equation, literally. Instead, they argue that researchers should be inspired by an actual working model: our brains.

Indeed, whole brain emulation (WBE), the idea of reverse engineering the human brain, makes both intuitive and practical sense. Unlike the rules-based approach, WBE works off a tried-and-true working model; neuroscientists don't have to reinvent the wheel. Natural selection, through excruciatingly tedious trial and error, created the human brain, and all without a preconceived design. Proponents say there's no reason to believe we can't model this structure ourselves. If the brain could come about through autonomous processes, argue neuroscientists, then it can most certainly come about through the diligent work of intelligent researchers.

When talking about WBE it's important to distinguish between emulation and simulation. Emulation refers to a 1-to-1 model in which all the relevant properties of a system are reproduced. This doesn't mean re-creating the human brain exactly as it resides inside our skulls. Rather, it implies re-creating all of its properties in an alternative substrate, namely a computer system.

Moreover, emulation is not simulation. Neuroscientists are not looking to give the appearance of human-equivalent cognition. A simulation implies that not all properties of a model are present. Again, it's a complete 1:1 emulation that they're after.

A number of critics point out that we'll never completely emulate the human brain on account of the chaos and complexity inherent in such a system. Others disagree. As researchers from Oxford University have pointed out, we will not need to understand the whole system in order to emulate it. What's required is a functional understanding of all the necessary low-level information about the brain, along with knowledge of the local update rules that change brain states from moment to moment. Exactly what counts as "low-level" is still an open question, but it likely won't involve a molecule-by-molecule understanding of cognition. And as Ray Kurzweil has argued, the brain contains a great deal of redundancy; it may not be as complicated as we currently think.

In order to gain this "low-level functional understanding" of the human brain, neuroscientists will need to employ a series of interdisciplinary approaches (most of which are currently underway). Specifically, they're going to require advances in:

Computer science: The hardware component has to be vastly improved. Scientists are going to need machines with the processing power required to host a human brain. They're also going to need to improve the software component so that they can create algorithmic correlates to specific brain functions.

Microscopy and scanning technologies: Scientists need to better study and map the brain at the physical level. Brain-slicing techniques will allow them to study neural structure down to the molecular scale. Specific areas of inquiry will include molecular studies of individual neurons, the scanning of neural connection patterns, determining the function of neural clusters, and so on.

Neurosciences: Researchers need further advances in the neurosciences so that they can better understand the modular aspects of cognition and start mapping the neural correlates of consciousness (currently a very grey area).

Genetics: Scientists need to get better at reading our DNA for clues about how the brain is constructed. It's generally agreed that our DNA will not tell us how to build a fully functional brain, but it will tell us how to start the process of brain-building from scratch.

Essentially, WBE requires three main capabilities: (1) the ability to physically scan brains in order to acquire the necessary information, (2) the ability to interpret the scanned data to build a software model, and (3) the ability to simulate this very large model.
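Those three capabilities can be pictured as a pipeline. The sketch below is purely illustrative; the function names and data shapes are invented, and real scan data and update rules would be incomparably richer:

```python
# A toy sketch of the three WBE capabilities chained together:
# scan -> interpret -> simulate. Everything here is a stand-in.

def scan_brain(tissue):
    """Capability 1: acquire raw structural data (here, connection pairs)."""
    return {"connections": list(tissue)}

def build_model(scan_data):
    """Capability 2: interpret the scan into a software model."""
    return dict(scan_data["connections"])

def simulate(model, active, steps):
    """Capability 3: run the model forward using a local update rule."""
    for _ in range(steps):
        active = {model.get(node, node) for node in active}
    return active

# A three-neuron chain: activity in A propagates to B, then to C.
model = build_model(scan_brain([("A", "B"), ("B", "C")]))
print(simulate(model, {"A"}, 2))  # activity ends up at C
```

The point of the local update rule is exactly the Oxford argument above: the simulator never needs a global theory of what the network "means," only the rules that take each state to the next.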

WBE may be the right approach, but it's not going to be easy. Nor is it going to be quick. This will be a multi-disciplinary endeavor that will require decades of data collection and the use of technologies that don't yet exist. And importantly, success won't come about all at once. This will be an incremental process in which individual developments will provide the foundation for overcoming the next conceptual hurdle.

Time-frames

Inevitably the question as to "when" crops up. Unfortunately, we are still quite a ways off. Kurzweil's prediction of emulating the brain by 2030 seems uncomfortably optimistic; that's only 18 years away. Moreover, his analogies to the Human Genome Project are unsatisfying. This is a project of much greater magnitude, not to mention that we're still likely heading down some blind alleys. Similarly, Goertzel's prediction of success via the rules-based approach within the next decade or two seems overly optimistic, though arguably not impossible given his "learning AI" approach.

A more likely scenario would see us code or emulate the human brain in about 50 to 75 years. Possibly 100. That said, this is an exceptionally difficult thing to predict given the crudeness of the neurosciences on the one hand, and the rate of accelerating change on the other. The year 2050 is a kind of black hole when it comes to predictions.

Lastly, it's worth noting that, given the capacity to re-create a human brain in digital substrate, we won't be too far off from creating considerably greater-than-human intelligence. AI theorist Eliezer Yudkowsky has claimed that, because of the brain's particular architecture, we may be able to accelerate its processing speed by a factor of a million relatively easily. Consequently, predictions as to when we may achieve greater-than-human machine intelligence will likely coincide with the advent of a fully emulated human brain.
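The usual back-of-envelope arithmetic behind a million-fold speedup compares neuron firing rates with digital switching speeds. The figures below are ballpark assumptions commonly cited in this argument, not measurements, and not Yudkowsky's own calculation:

```python
# Biological neurons spike at most a few hundred times per second,
# while digital logic switches billions of times per second. Both
# figures are rough, illustrative assumptions.
neuron_rate_hz = 200      # generous upper bound on neuron firing rate
chip_rate_hz = 2e9        # a typical 2 GHz processor clock
speedup = chip_rate_hz / neuron_rate_hz
print(speedup)  # 10000000.0, comfortably past a factor of a million
```

The comparison is crude (neurons are massively parallel, and one clock tick is not one "thought"), but it conveys why an emulated brain running on fast hardware could, in principle, think far faster than a biological one.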