Kaufmann: Cracking the brain code and creating true AI

In 1955, computer scientist John McCarthy coined the term “artificial intelligence.” At the time, it was a vague reference to computers that could manifest human-like behavior and pass the famous Turing Test.

In the decades since, we have repeatedly believed that the age of AI is just a few years, or at most a decade, away.

And we’ve always been wrong. Today, “artificial intelligence” is used to describe many things and is still as confusing as it was in the 1950s—if not more so. Manufacturers use it as a marketing term to sell dumb products that perform repetitive tasks in a predictable fashion. Developers use it to refer to machine learning algorithms, software that analyzes data and defines its own behavior instead of following static rules. Science fiction writers and filmmakers use it to depict a future where super-smart robots will either enslave, destroy, or serve us.

However, true AI remains elusive, and yet we still believe it’s just around the corner. For this month’s interview, I had a chance to catch up with neuroscientist and AI expert Pascal Kaufmann, who shared some insights about what’s wrong with current approaches to developing AI, and what it takes to create human-level artificial intelligence.

Kaufmann, who is the cofounder of Switzerland-based AI startup Starmind, has a background that combines over a decade of artificial intelligence and cyborg research, including work on the first cyborgs for a DARPA project. He believes that even with the AI boom, computers will always pale in comparison to the power of the human brain. Kaufmann has maintained since his early days in the field that we will only achieve “true AI” by cracking the brain code. Only then will we understand how the brain functions and see the limitless potential of AI, from cyborgs to artificial organisms. But how do we get there?

Alan Turing (23 June 1912 – 7 June 1954) is widely considered to be the father of theoretical computer science and artificial intelligence.

True artificial intelligence does not exist yet

“‘Human level AI,’ ‘True AI,’ or ‘Hard AI’ refers to a level of artificial intelligence which is totally autonomous in its ability to think, act, and mimic human-level intelligence and ingenuity—AI that you cannot distinguish from biological intelligence as we know it,” Kaufmann said when I asked him what he means by true AI. “Human Level AI is the type of AI which is often depicted in science fiction movies and books, and which we have not achieved yet.”

Whatever else is being presented as AI today, Kaufmann adds, is “narrow AI.” This is what we see today in chatbots, software that beats humans at chess, pocket calculators, or any system tailored to solve a given narrow task. This is a view that other experts share. Some have gone as far as classifying narrow AI as “augmented intelligence” or “intelligence augmentation” in order to avoid confusing it with the dystopian vision that AI has accrued over the years.

“Most of what we call ‘narrow AI’ today is nothing more than the intelligence of a programmer (or a team of programmers) condensed into source code, trying to foresee what may happen, and applying statistical procedures to optimize outcomes,” Kaufmann says. “A watch, to give an example, features a lot of human intelligence that ultimately was assembled to give rise to a device able to measure time. As such it is more a product of intelligence, not actual intelligence.”

Programmers are very intelligent people, Kaufmann explains, but the intelligence of an individual programmer or the limited knowledge base of a single person cannot become the foundation for AI development. “I think that AI may not be built or further developed by an individual human brain,” he says. “Ultimately, we need to unite and bundle our human intelligence for a real breakthrough in AI.”

We should stop comparing the human brain to computers


We humans always like to compare the functionalities of our brain and body to the latest technology, whether it’s pulleys and gears or the steam engine. This leads us to try to understand and replicate the human brain in ways that aren’t necessarily correct. An interesting example is Frankenstein, the story of the young scientist who tried to create an artificial human being using methods that are absolutely absurd.

Today, the most advanced technology is computer software, or more specifically artificial neural networks, software that has been named after the inner-workings of the human brain. However, Kaufmann stresses, the real mechanisms that govern the human mind are fundamentally different from any software that has so far been created.

“While in a classical artificial neural network, brain cell A is connected with brain cell B through one thick or thinner connection, often several hundreds of connections at differing lengths and strengths exist between the two biological brain cells,” Kaufmann says.

These kinds of relations work perfectly well in the biological world, but don’t make sense in the world of engineering, Kaufmann points out.
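Kaufmann’s point can be sketched in a few lines of code. The snippet below is an illustrative toy, not a model of real neurons: it contrasts a classical artificial connection (one scalar weight between two cells) with a hypothetical biological pair of cells joined by hundreds of parallel connections, each with its own strength and conduction delay. All names and numbers here are assumptions chosen for illustration.

```python
import random

# Classical artificial neural network: one connection, one scalar weight.
def ann_signal(activation, weight):
    return activation * weight

# Hypothetical sketch of two biological brain cells: hundreds of parallel
# connections, each with its own strength and conduction delay (ms).
connections = [
    {"strength": random.uniform(0.1, 1.0), "delay_ms": random.uniform(1, 20)}
    for _ in range(300)
]

def biological_signal(activation):
    # The receiving cell sees many weighted copies of the signal arriving
    # at different times; here we simply sum the strengths for comparison.
    return sum(activation * c["strength"] for c in connections)

print(ann_signal(1.0, 0.5))        # a single number fully describes the link
print(biological_signal(1.0))      # the sum over hundreds of distinct links
```

The timing information (`delay_ms`) is carried by each biological connection but is simply discarded in the single-weight abstraction, which is one way to see how much structure the classical model throws away.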

Other differences exist as well. An average human brain cell fires in the range of 20 hertz, while even the most mediocre CPUs today operate at several gigahertz, roughly a hundred million times faster.

“The brain, however, outnumbers our fast CPUs by the vast number of brain cells and synapses (connections between brain cells),” Kaufmann says, which makes it capable of feats that still confound computer scientists.
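A quick back-of-the-envelope calculation makes the speed gap concrete. The figures below are illustrative round numbers, not measurements: a typical cortical neuron firing rate of about 20 Hz against a commodity 3 GHz CPU.

```python
# Illustrative round numbers for the neuron-vs-CPU speed comparison.
neuron_rate_hz = 20      # typical cortical neuron firing rate (~20 Hz)
cpu_rate_hz = 3e9        # a commodity 3 GHz CPU clock

ratio = cpu_rate_hz / neuron_rate_hz
print(f"A 3 GHz CPU cycles about {ratio:,.0f} times per neuron spike")
# On the order of one hundred million times faster per unit.
```

The brain makes up the difference in parallelism: tens of billions of neurons and on the order of a hundred trillion synapses all operating at once, versus a handful of very fast serial cores.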

It is also unclear how and where memories are stored, Kaufmann points out. “Is it in the firing patterns of brain cells, are there certain memory proteins, or do we even need to dive deep into sub-atomic space to take quantum effects into account?”

Human intelligence is not about big data

Current AI systems rely heavily on huge data sets and computing power. In contrast, the human brain is able to learn and start making decisions based on very small amounts of information, Kaufmann says.

“The ability to generalize a finding and to apply it to new and as-of-yet novel situations is an unresolved challenge in AI today,” he says. “Human-like AI is not about processing large amounts of data but a programmatic ability to process, understand, and learn from the small data around it, as humans are able to.”

An example is Google’s AlphaGo, the AI that bested the world’s human champion at the ancient game of Go. The algorithm behind the software taught itself to master the game by reviewing data from thousands of matches played against humans. A later version of the program took the concept a step further and taught itself by playing against itself hundreds of millions of times in rapid succession, a feat that is impossible for the human brain.

“Humans can willingly cheat in a game of Go, they can apply principles from the game to solve totally unrelated challenges, they can write a poem about the game, they can decide they are bored,” Kaufmann says. “Humans can take the small data of the game and instantly apply it to other things entirely.”

AlphaGo can only play Go, by the rules.

In EX MACHINA (2015), a breathtaking AI called Ava manages to fatally deceive humans, playing on their emotional weaknesses.

“When you need 300 million pictures of cats in order to say it is a cat, a horse or a cow, I do not consider this very intelligent nor interesting,” Kaufmann says. Instead, it would be much more interesting to learn what a cat is based on one encounter, by interacting with the cat and by extracting principles, he believes. “We don’t really know today how artificial learning works best and which ingredients we are missing. Some say it is the richness of the sensory modalities, some say it is just computational power, others say that we are lacking the basic understanding of how intelligence can emerge. It is a big advantage to be able to learn from less data and even be able to make predictions without any data at all.”

Neuroscience will crack the code of the human brain

So how do we achieve human-level AI? “We may find interesting principles of intelligence while studying the brain and the underlying biological processes, a field we call neuroscience,” Kaufmann says. “I refer to the understanding of the brain in a way that we may build an artificial device showing similar, or even superior capabilities, as ‘cracking the brain code.’”

So far, the brains of animals and humans are the best-known candidates for being the substrate of intelligence or intelligent behavior, Kaufmann says. There is a high chance, however, that intelligent behavior is also possible without brain cells; single-cell organisms, for example, can show a variety of behavioral traits.

“I would not exclude the possibility that studying genes and the underlying regulatory networks may lead us to very similar principles that we can also find in the brains of higher animals, but so far this is just an interesting and unproven hypothesis,” Kaufmann says. “Ultimately, nature is a great and inspiring teacher we can and should learn from.”

Not everyone is convinced that we can—or should—build an AI that is on par with human intelligence. And it’s still too early to say whether neuroscience will be the key to unleashing the power of general AI. But understanding how the human mind works is a crucial first step.

“While a computer is fairly well understood, the brain harbors a number of secrets, a fact that turns neuroscience and AI into some of the most exciting research fields of our time,” Kaufmann says. “We do not need to understand the role and purpose of every cell in the brain, but to understand the fundamental principle of how our minds work and what that essence of intelligence is. I like to compare this to Newton and the apple—the first step to understanding the very complex sciences around the cosmos started when we first began to understand the principles of gravity. Once we understand the patterns and principles of the brain we can use that understanding and apply it to develop human-like AI.”
