There are currently two ambitious projects straddling artificial intelligence and neuroscience, each with the aim of building big brains that work. One is The Blue Brain Project, and it describes its aim in the following one-liner:

“The Blue Brain Project is the first comprehensive attempt to reverse-engineer the mammalian brain, in order to understand brain function and dysfunction through detailed simulations.”

The second is a multi-institution, IBM-centered project called SyNAPSE; a press release describes it as follows:

“In an unprecedented undertaking, IBM Research and five leading universities are partnering to create computing systems that are expected to simulate and emulate the brain’s abilities for sensation, perception, action, interaction and cognition while rivaling its low power consumption and compact size.”

Oh, is that all!

The difficulties ahead of these groups are staggering, as they (surely) realize. But rather than discussing the many roadblocks likely to derail them, I want to focus on one way in which they are perhaps making things too difficult for themselves.

In particular, each aims to build a BIG brain, and I want to suggest here that perhaps they can get the intelligence they’re looking for without the BIG.

Why not go big? Because bigger brains are a pain in the neck, and not just for the necks that hold them up. As brains enlarge across species, they must modify their organization in radical ways in order to maintain their required interconnectedness. Convolutedness increases, number of cortical areas increases, number of synapses per neuron increases, white-to-gray matter ratio rises, and many other changes occur in order to accommodate the larger size. Building a bigger brain is an engineering nightmare, a nightmare you can see in the ridiculously complicated appearance of the dolphin brain relative to that of the shrew brain below – the complexity you see in that dolphin brain is due almost entirely to the “scaling backbends” it must do to connect itself up in an efficient manner despite its large size. (See http://www.changizi.com/changizi_lab.html#neocortex )

If the only way to get smarter brains was to build bigger brains, then these AI projects would have no choice but to embark upon a pain-in-the-neck mission. But bigger brains are not the only way to get smarter brains. Although for any fixed technology, bigger computers are typically smarter, this is not the case for brains. The best predictor of a mammal’s intelligence tends not to be its brain size, but its relative brain size. In particular, the best predictor of intelligence tends to be something called the encephalization quotient (a variant of a brain-body ratio), which quantifies how big the brain is once one has corrected for the size of the body in which it sits. The reason brain size is not a good predictor of intelligence is that the principal driver of brain size is body size, not intelligence at all. And we don’t know why. (See my ScientificBlogging piece on brain size, Why Doesn’t Size Matter…for The Brain?)

This opens up an alternative route to making an animal smarter. If it is brain-body ratio that best correlates with intelligence, then there are two polar opposite ways to increase this ratio. The first is to raise the numerator, i.e., to increase brain size while holding body size fixed, as the vertical arrow indicates in the figure below. That’s essentially what the Blue Brain and SyNAPSE projects are implicitly trying to do.

But there is a second way to increase intelligence: one can raise the brain-body ratio by lowering the denominator, i.e., by decreasing the size of the body, as shown by the horizontal arrow in the figure below. (In each case, the arrow shifts to a point that is at a greater vertical distance from the best-fit line below it, indicating its raised brain-body ratio.)

Rather than making a bigger brain, we can give the animal a smaller body! Either way, brain-body ratio rises, as potentially does the intelligence that the brain-body combo can support.
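As a back-of-the-envelope illustration, the two routes can be sketched with Jerison's classic encephalization quotient, EQ = brain mass / (0.12 × body mass^(2/3)), with masses in grams. (The species figures below are rough textbook values I've supplied for illustration, not data from this post.)

```python
def eq(brain_g, body_g, k=0.12, exponent=2/3):
    """Jerison's encephalization quotient: actual brain mass divided
    by the brain mass expected for a body of this size."""
    return brain_g / (k * body_g ** exponent)

human = eq(1_350, 65_000)        # roughly 7: far above the mammal trend line
elephant = eq(4_700, 5_000_000)  # roughly 1.3: big brain, but bigger body

# Route 1 (the vertical arrow): raise the numerator.
doubled_brain = eq(2 * 1_350, 65_000)   # EQ doubles

# Route 2 (the horizontal arrow): lower the denominator.
halved_body = eq(1_350, 65_000 / 2)     # EQ rises by 2**(2/3), about 1.59
```

Note the asymmetry the exponent creates: doubling the brain doubles EQ, while halving the body raises it by only about 59% — but shrinking the body sidesteps the wiring nightmare entirely.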

We’re not in a position today to understand the specific mechanisms that differ in brains of varying size when that variation is driven by body size, so we cannot simply shrink the body and get a smarter beast. But, then again, we don’t understand the specific mechanisms that differ in brains of varying size, period! Building smarter via building larger brains is just as much a mystery as the prescription I am suggesting: building smarter via building smaller bodies. And mine has the advantage that it avoids the engineering scaling nightmare of large brains.

For the AI projects to take this advice, though, they must first answer the following question: what counts as a body for these AI brains in the first place? Only after one becomes clear on what their bodies actually are (i.e., what size body the brains are being designed to support) can one begin to ask how to get by with less of it, and hope to eke out greater intelligence with less brain.

Perhaps this is the ace in the AI hole: perhaps AI researchers have greater freedom to shrink body size in ways nature could not, and thereby grow greater intelligence. Perhaps the AI devices that someday take over and enslave us will have mouse brains with fly bodies. I sure hope I’m there to see that.

Comments

Do you have any idea (proposed mechanism) of how a large body would cause a brain to be "stupider"? I suppose that this connects with the issue of what "body" even is for an AI. Perhaps large bodies require more brainpower simply to control basic functions (breathing, muscle coordination, etc), which detracts from higher cognitive functions.

Also, the comparison of dolphin and shrew brains suggested that they have comparable intelligence. Is that true?

Finally, I'll toss out another speculation on the connection between encephalization quotient and intelligence. Perhaps, to begin with, the mammalian body plan dictates a standard proportion of brain to body, and deviations from this standard ratio reflect more or less selection for intelligence. If increases in brain size contribute a little to increased intelligence, but most of the improvements come from improvements in brain structure or neuron functioning, then a slightly larger than expected brain could be an indicator that the other properties of the brain have been altered in response to selection for more intelligence.

On your first question... No. I do not. That was the topic of this post here.

Dolphins are actually more intelligent. I didn't try very hard to match their encephalization quotient in my quick grab of brains. But it makes no difference to the look, because a smarter shrew-sized creature would still have a similarly smooth brain.

the mammalian body plan dictates a standard proportion of brain to body

But that's the rub. We'd like to understand why the mammalian body plan would dictate that.

I'm still stuck on the lack of a working definition for intelligence. Despite everything I've ever heard, I can't find a single thing that would allow anyone to know when they've achieved it (forget the Turing test).

There are a number of measures that have tried to get at facets of animal intelligence over the years, although even "how smart do they seem to me" is not without merit. On the latter, ordering animals by EQ tends to produce a ranking that seems "roughly right" to one's intuitions, whereas ordering by brain size gets nothing intuitively right. My suspicion is that "how smart they seem to me" is much richer and more dead-on than any of the specific measures that have been used, which also tend to show that EQ is the best correlate.

I would certainly agree with that, however it becomes more difficult when one is assessing intelligence within a particular species and more especially when one is contemplating building an intelligent machine.

Specifically the question becomes, how do you distinguish between real intelligence versus an emulated intelligence?

A few quick points on brains. Beyond the stage of a few neurons controlling a simple response to a stimulus, brains are heavily compartmentalised.

Broadly, if an animal has more muscle spindles it needs more neurons to control them. If it has different muscle types or functions - fast, slow, heart, lung etc. - it needs specialised neurons. It then needs neurons to coordinate the neurons. This relates nervous mass directly to muscle mass.

Of course, muscles need anchor points and levers, so that brings in bone / cartilage mass.

The muscles and neurons need energy, so that brings in the mass of a digestive system and fats.

All in all, neural mass is always going to track the rest of bodily mass within close limits.

The upper bounds of body mass are set less by gravity, more by heat disposal. Beyond a certain size, liquid-cooled brains become quite fashionable.

As to intelligence, it is merely the ability to compare A and B and to perform some action based on the currently non-trivial outcome of that comparison. Apples and cannon balls are both spherical if we ignore trivial differences, but apples fired at wooden ships are not terribly effective.

Alan Turing supposed that a computer with eyes, ears, hands etc would be some undefined 'more' intelligent. Certainly, a blind computer will never get a job as an interior designer.

Intelligence, I would say, always boils down to having a means of comparing A with B, having a mechanism to act on the result and having enough brain power to get bored with doing it.

Comparing A with B is at the root of all measures of intelligence - that is why we commonly classify people as unintelligent if they don't seem able to tell which is shinola.

All in all, neural mass is always going to track the rest of bodily mass within close limits.

Something along those lines is probably right. But the problem with hypotheses of that kind is that it would appear to predict that larger animals should have brains with disproportionately large somatosensory and motor areas. But they don't. E.g., the primary somatosensory and motor areas scale up in size as predicted for the "typical" area in mammals. (Strangely, it is primary visual areas that I have data showing scale up disproportionately quickly.)

And, indeed, larger brains are more compartmentalized, and I've predicted and shown that the number of compartments (at the level of "cortical areas") scales as about the square root of the total number of neurons, which extrapolates to on the order of 150 for humans. But most of the variation in compartmentalization tracks body-size-driven variation in brain size.
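The square-root scaling can be sketched numerically. (The human neuron count here is my own order-of-magnitude assumption used only to set the constant; the ~150-area figure is the extrapolation cited above.)

```python
import math

# Assume: number of cortical areas A scales as A = c * sqrt(N),
# where N is the total number of cortical neurons.
N_HUMAN = 2e10   # assumed human cortical neuron count (order of magnitude)
A_HUMAN = 150    # extrapolated human cortical-area count

c = A_HUMAN / math.sqrt(N_HUMAN)   # calibrate the proportionality constant

def predicted_areas(n_neurons):
    """Predicted number of cortical areas under sqrt scaling."""
    return c * math.sqrt(n_neurons)

# The useful property is the ratio: quadrupling the neuron count
# only doubles the predicted number of areas.
print(predicted_areas(4 * N_HUMAN) / predicted_areas(N_HUMAN))  # 2.0
```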

On "intelligence", that sounds fairly reasonable. It probably wouldn't be sufficiently general to cover a lot of the notions of "more intelligent" that a computer scientist or logician might want -- and I'd suggest there is no perfect and completely general such notion -- but it could turn out to be sufficient for most animals.

Alan Turing showed that all computable problems can be reduced to a simple algorithm. Writers about his 'Turing machines' gloss over the need for such a machine to actually detect the presence of a 1 or a 0. Such ability is always assumed.

It turns out that much more circuitry is needed to discriminate between two states than to compare any two such discriminations.

The logic, using arbitrary voltages, is:

Is x < 0.3? If yes, treat as '0'.

Is x > 3.0? If yes, treat as '1'.

Repeat for y.

After the first 8 steps - which are hidden even from computer logic designers - we can do the logic:

If x = y, then step a.
If x > y, then step b, else step c.

It may well be that any computer program can be modelled as a bunch of Turing machines, but neglect of the underlying physical mechanisms leads to oversimplification, to say nothing of thermal problems.
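As an illustrative sketch of the discriminate-then-compare logic above (the thresholds and the step names follow the comment's scheme; the return conventions are my own, not from any hardware spec):

```python
def discriminate(voltage, low=0.3, high=3.0):
    """Map an arbitrary analog voltage onto a logical bit.

    Voltages below `low` read as 0, above `high` as 1; anything in
    between is indeterminate (None). This is the step that idealized
    Turing-machine accounts assume away.
    """
    if voltage < low:
        return 0
    if voltage > high:
        return 1
    return None  # forbidden zone: neither a clean 0 nor a clean 1

def compare(x_volts, y_volts):
    """Discriminate two voltages, then compare the resulting bits."""
    x, y = discriminate(x_volts), discriminate(y_volts)
    if x is None or y is None:
        return "indeterminate"
    if x == y:
        return "step a"
    return "step b" if x > y else "step c"

print(compare(0.1, 4.2))  # bits 0 and 1 -> "step c"
print(compare(4.0, 4.5))  # both read as 1 -> "step a"
```

Note that `discriminate` does the heavy lifting: once clean bits exist, the comparison itself is trivial, which is the commenter's point.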

I am convinced that the notion that only brains can do brainy things is wrong. Based on the fundamentals of logic, you could make a concrete computer answer questions rationally.

It's just a matter of time, scale and efficiency that makes me prefer to experiment with silicon.

One problem I have with those approaches to AI is that they seem to assume that the brain is just a highly complicated circuit. However, fMRI scans show that brain regions function more at the level of emergent behaviour. The assumption that neurons interact with each other only at their points of contact seems to me false. Every neuron also creates a small electromagnetic field and can therefore interact with neighbouring neurons with which it may not have a neurochemical connection. Therefore any simulation would have to include such non-contact interactions. Maybe they do, but that would mean an enormous number of calculations, and projects to build artificial biological brains might be on a more fruitful track.

Also, the issue of inputs-outputs has already been mentioned. Without the rest of the sensory nervous system what would such a "brain" be doing? I'm starting to think of the nervous system as one whole gigantic web, not just a brain. Would a brain with no peripherals start to grow sensory neurons? Computer simulations of a system we still understand poorly strike me as doomed to failure. They may, of course, discover other interesting things along the way.

It seems to me that what isn't 'diagnosed' is what the brain is good at. It has 2 essential ingredients - pattern making and pattern matching. The origins of computers were difference engines. OK, so that's differentiation, i.e. the process by which different things are recognised. Where was the integration - the process which builds an inner/outer parallel image? Then, the similarity function - not necessarily the same as the 'And' function of logic, merely a 'set' function - groupings of more than 2 similar things, giving rise to number. I also became aware of another function: the 'I-don't-know-what-to-do-with-that-right-now-so-I'll-throw-it-back-there' function. Not exactly a discard function, but perhaps the origin of a layering of priorities, like building stacks of plates, each plate being a plane of inner-space existence.

I've built many an image of jumping from plate to plate exploring the contents and been surprised sometimes at seeing things assumed forgotten

It's a bit like the game of talking to 'yourself sitting in a chair', whilst standing in front of an empty chair, then changing role, and being 'the person sat in the chair' and responding, to a person in front of you...It's very easy to see both sides of an argument! LOL

You mean there are no papers on the EM field around a current-conducting neuron?

I am ready to be corrected on this, but it is my understanding that neurons conduct through membrane potentials and ion channels. Neurons are not wires - they don't conduct by a flow of electrons - hence no EM field. However, non-myelinated and demyelinated nerve fibers can be subject to cross-talk.

A passing thought: AI might really take off, if only Intel would make a chip that reacts to being dunked in testosterone. Anyone ready for a nice game of skynet?

It's even worse, Mark. Neurons don't only fire as a response to received stimuli. It costs a lot of energy to maintain the -60mV rest potential, so sometimes it just slips, and causes an accidental, spontaneous, involuntary, meaningless spike. This slip-spike might be induced or enhanced by all kinds of environmental circumstances: EM fields from neighbouring neurons indeed, but also blood sugar fluctuations, ion concentrations, pH, temperature, even radioactive decay or the occasional interaction with neutrinos (to please the fans of QM consciousness), you name it.

Although in themselves meaningless, these spikes might interfere with or modulate meaningful patterns, or even combine with each other to generate really original, creative, wild notions. These will be discarded if not in concordance with existing, previously (... ad infinitum) constructed knowledge and experience, but might be selected for if they please the brain's fancy or imagination.

You might call this free will, it's all causal, but not very, and rather unpredictable, especially for another brain, observing the behaviour generated by the former, ruminating, brain.

Afterthought, Mark
It has always puzzled me why people are looking for artificial intelligence in a machine, by programming rules of growth into its capabilities - why is that artificial? Surely it's just human intelligence without sleep?
There have been some interesting studies of EM fields, particularly on an alkylating agent, and the effect of, for example, a zapper, by Dr Hulda Clark in the field of cancer research
http://curezone.com/upload/PDF/Cure_for_all_Diseases_Hulda_Clark_.pdf
It seems that one possible area of cancer research needing further examination, is the very changes of electrical potential on the molecules, both in the body and those ingested
We appear to have far more electromagnetic effects than we are made aware of, and apparent benefits from EM zapping
One only has to look at the sodium-potassium balance to see EM effects at a basic level
Aitch

Do you have the original data and/or original reference for the brain size v body size scatter plot? I've looked through your publications on this topic and cannot find it, either because I've missed it or I'm looking in the wrong place.

That's not my plot. Plots like that are old, mostly from Jerison. That one I just ripped off the internet. I do have loads of raw data, though, from a wide variety of brain and body mass measures over the 20th century. If you're interested, I can email you several raw Excel files with loads of data; some from other labs, and some my own compilations of many papers. changizi@yahoo.com

Rather than making a bigger brain, we can give the animal a smaller body! Either way, brain-body ratio rises, as potentially does the intelligence that the brain-body combo can support.

Physicists have a name for such a minimalistic form of intelligence: a Boltzmann brain.

If such a minimalistic intelligence is possible, the question arises why we (overly complicated beings with huge bodies that parasitize the intelligence of our brains) are here, and not some miniature Boltzmann brains with the same or better intelligence.

That leaves us with the question of how to define the intelligence of a Boltzmann brain. How could one describe a life form as intelligent if it cannot manifest itself via some amount of 'body' (read: I/O devices)?

I remember this from years ago! Yes, the overall entropy appears to be increasing but this is highly heterogeneous with pockets (large pockets) where entropy is decreasing. The paradox is, I think, resolved in that stochastic processes are not wholly random insofar as the universe has many feedback loops. I recall simulations some years back that are still relevant to complexity theory now where random universes with random feedback rules produced a significant number of 'ordered' universes. If you've ever played around with cellular automata you'll see what I mean! :-)

Mark -- the Wikipedia article is probably not the clearest explanation on this subject. You might want to have a look at this blogpost by Sean Carroll.

In summary, the 'Boltzmann brain paradox' reasoning roughly goes as follows:

1) we at SB.com think about the universe, life and everything
2) there is an 'arrow of time': entropy increases
3) the laws of physics are time reversible
4) both 2) and 3) can hold only if we (in fact the whole universe) are evolving from a state with surprisingly low entropy
5) this state must represent a fluctuation away from equilibrium (equilibrium = state of maximal entropy)
6) this fluctuation must have been big enough to create an entropy low enough to lead to life forms intelligent enough to worry about the origin of the universe (see 1)
7) large fluctuations away from equilibrium are exponentially less likely than smaller fluctuations
8) combining 6) and 7) we conclude that it is highly likely that the fluctuation was minimal: just large enough to lead to a universe with intelligent creatures
9) intelligent creatures formed in a universe resulting from a minimal departure from equilibrium represent a minimalistic form of intelligence: just some kind of brain without further additions and complications, in the simplest possible universe allowing such a life form - the Boltzmann brain
10) SB.com is a bunch of Boltzmann brains living in a minimalistic universe, or:
10) we have a paradox
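Step 7 is the quantitative heart of the argument: by Boltzmann's relation, the probability of a spontaneous fluctuation falls off exponentially with the size of the entropy dip it requires, P ∝ e^(ΔS/k). A toy comparison (the entropy values here are arbitrary illustrative numbers, in units of Boltzmann's constant):

```python
import math

def relative_fluctuation_probability(entropy_dip):
    """Relative probability of a fluctuation that lowers entropy by
    entropy_dip (in units of Boltzmann's constant): P ~ exp(-dip)."""
    return math.exp(-entropy_dip)

small = relative_fluctuation_probability(5.0)   # modest dip below equilibrium
large = relative_fluctuation_probability(15.0)  # dip 10 units deeper

# The deeper dip is e^10 (about 22,000x) less likely, which is why
# the argument concludes the fluctuation was probably minimal.
print(small / large)
```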