Posted
by
Zonk
on Saturday April 28, 2007 @12:28PM
from the i-have-some-business-in-nevada-see-you-on-the-net dept.

Mordok-DestroyerOfWo writes "Researchers from the IBM Almaden research lab and the University of Nevada have created a simulation of half a mouse brain on the BlueGene L supercomputer. 'Half a real mouse brain is thought to have about eight million neurons each one of which can have up to 8,000 synapses, or connections, with other nerve fibres. Modelling such a system, the trio wrote, puts "tremendous constraints on computation, communication and memory capacity of any computing platform."' Although there's more to creating a mind than setting up the infrastructure, does this mean that we may see a system for human mental storage within our lifetimes?"

Uh oh... no semicolon... if you can even get that to compile, you'd better hope that mouse never has to deal with trapped cheese :-p Also, are you sure it's a good idea to have the mouse (if the cheese is not trapped) eat it, squeak, then immediately squeak again? Is that really necessary? I think you should GPL this and let the genetic algorithm of thousands of developers with thousands of ideas tweak it for optimum behavior.

Half a real mouse brain is thought to have about eight million neurons

and

the researchers created half a virtual mouse brain that had 8,000 neurons

How can it be half a mouse brain if it has 1/1000th the number of neurons of a real half mouse brain? Their simulated neurons also had fewer synapses than the real thing. So is the 8,000 a typo, or am I missing something?

Just to follow up, according to this article [businessweek.com], Blue Brain, utilizing a 22.8-teraflop supercomputer, manages to simulate around 10,000 human neurons. I have no idea whether human neurons are significantly more complex than mouse neurons, or whether we just have more of them, but if it's the latter then maybe the 8,000 isn't a typo after all?

It depends on how you simulate the neuron. If you model it as a black box that sums up inputs and fires if you're over a threshold, you can simulate a whole whack of them. If you model it in excruciating detail, you might need a supercomputer for each one. If you believe Penrose that quantum mechanical effects are important in neurons, then you can't even properly model one with a current supercomputer.
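The "black box that sums up inputs and fires over a threshold" model the parent describes is essentially the classic leaky integrate-and-fire neuron. A toy sketch (hypothetical Python with made-up parameters; nothing to do with IBM's actual simulator):

```python
# Toy leaky integrate-and-fire neuron: sum weighted inputs,
# leak a little each step, fire when over threshold, then reset.
class LIFNeuron:
    def __init__(self, threshold=1.0, leak=0.9):
        self.v = 0.0              # membrane potential
        self.threshold = threshold
        self.leak = leak          # fraction of potential kept per step

    def step(self, inputs, weights):
        # integrate: leaky sum of weighted inputs
        self.v = self.v * self.leak + sum(w * x for w, x in zip(weights, inputs))
        if self.v >= self.threshold:
            self.v = 0.0          # reset after firing
            return 1              # spike
        return 0

n = LIFNeuron()
spikes = [n.step([0.3, 0.4], [1.0, 1.0]) for _ in range(5)]
print(spikes)  # → [0, 1, 0, 1, 0]
```

A few multiplies and an add per neuron per step is why you can simulate "a whole whack of them"; a compartmental model with detailed ion-channel dynamics costs orders of magnitude more per neuron.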

And then there are the connections. Different types of neurons have different numbers of connections. And

How can it be half a mouse brain if it has 1/1000th the number of neurons of a real half mouse brain? Their simulated neurons also had fewer synapses than the real thing. So is the 8,000 a typo, or am I missing something?

Near the end they say "Imposing such structures and getting the simulation to do useful work might be a much more difficult task than simply setting up the plumbing".

What did the author mean by that? If they are not simulating any of the actual neural structures in the mouse brain, does it mean they are just simulating a more or less random neural network with eight million neurons? I have seen reports of simulations of actual brain structures in more primitive animals years ago.

Until they can, as they say, "add structures seen in real mouse brains" there's nothing to see here, move along...

I always thought it was fascinating how nature has been able to "grow" supercomputers (our closest analog to brains) and we have been unable to build anything even close to emulating their capabilities. Perhaps there is a limitation to a mind's ability to understand how it itself works. I think that if a person were to have absolute knowledge of how his or her own mind worked, it might just drive that person to madness when he or she realizes the mechanics of it reduce his or her thoughts and actions to mean

I always thought it was fascinating how nature has been able to "grow" supercomputers (our closest analog to brains) and we have been unable to build anything even close to emulating their capabilities. Perhaps there is a limitation to a mind's ability to understand how it itself works.

Or perhaps it's 'cause Nature has had 4 BILLION years... and we've had about 50... Just perhaps...

For the last ten years, you've really been electing a bunch of PET 3032s, Apple Is and ZX-80s. The speech synthesis was by Superior Software and the suits by US Gold. Sometime in the next few months, we are due to be attacked by a large number of mutant camels, the road system already having degenerated into a maze of twisty passages, all alike.

If you like the fancy terms, here's the PDF of the research report [modha.org] (only one page plus a cover sheet) or, better yet, here's Modha's blog [hostingprod.com] with about the same info.

For more information on the Blue Brain Project [bluebrain.epfl.ch], which appears to be the same, or at least a strikingly similar, project but from Switzerland, click... err, that link I just placed! Here also [spiegel.de] is a good article to learn more about Blue Brain. It seems much more detailed than the BBC's snippet.

Groups of neurons started becoming attuned to one another until they were firing in rhythm. "It happened entirely on its own," says Markram. "Spontaneously."

Insights like these are absolutely amazing. It's all such fascinating research, but I can't help feeling a twinge of sorrow for the poor thing.

the main purpose of the artificial brain, say its creators, is to make new types of experiments possible. For example, what happens when damage is inflicted on certain types of cells whose function still isn't determined? How many cells can be switched off until the behavior of the surviving cells around them becomes erratic, or the entire circuit breaks down?

The poor thing is just circuits and reactions, I know, but I feel sorry that it's literally being torn apart and rebuilt all the time. It's odd, I don't feel this way in similar experiments with real mice; I guess I have a soft spot for computers...

I just found and read the actual paper, too; now I don't have to post the link. (It ought to be a Slashdot requirement that when you post a story about something, you have to link to the real source, not just some news site or blog link.)

This isn't really about simulating a mouse brain. This is more like running a synthetic benchmark to demonstrate that if they had the wiring diagram for a mouse brain, IBM Almaden has enough CPU power on hand to simulate it. But they don't have a mouse brain wiring diagram; they're just exercising the simulator with some random set of connections.

This isn't really anything dramatic. It appears to differ from what they were already doing with Blue Gene a year or so ago only in that they've now made some optimizations to their firing/communication algorithms to be less resource-intensive (and correspond less directly to what occurs physically), allowing for simulation of more neurons and firings. They don't seem to be simulating any neuroanatomy beyond interconnected neurons, and the initial interconnection pattern is just artificially generated.

It's cool that we can create the basic scale of the infrastructure of a (half) mouse brain - but if we're really going to simulate a brain, we need the ability to read the contents of a real one in order to verify our simulation. Otherwise, we have little basis for saying that input X gives the sensation of movement, and would have effect/output Y in terms of changed state/response.

I wonder what the current state of neuron state reading is. Would we ever theoretically be able to read the state of a brain beyond the external outputs? Could we ever get a single state that would be the 'ROM' of a person's memories and mental state, that you could place in a simulation and have that person's memories 'wake up' in a simulation? I wonder how close we could get.

>> You're jumping ahead of us. You'd have to emulate sensory organs in order to sense "movement".

Actually, you just need to be able to read the outputs from a sensory organ. There's no rule against testing a simulated brain with a real eye's outputs. You can either record the outputs and send them through to the simulation later, or have realtime I/O to a real eye. Same with equilibrium and other data sources. Oddly enough, it's likely many, many, many orders of magnitude simpler for us to provide

Developing simulations involves using abstractions and simplifications to deal with the fact that we can't handle the computational complexity of quantum-level simulation of an entire mouse brain.

I've seen far too many papers where people make a "simulator" for a system, without demonstrating that the simulator has any real connection to reality, and then make grandiose claims about the real system that they're simulating, based on simulation results.

Call me a cranky old computer scientist, but someone simulating a brain isn't particularly noteworthy. Show that the simulator is accurate enough to shed light on the ways that brains work, or that the simulated mouse brain can achieve things we have difficulty achieving with traditional computer software, and I'll be excited.

Fair point. Although my main intention was to show that the (necessary) use of abstractions when modeling potentially introduces modeling errors. I wasn't really trying to say that a quantum-level simulation would be the gold standard of accuracy.

Imposing such structures and getting the simulation to do useful work might be a much more difficult task than simply setting up the plumbing.

For future tests the team aims to speed up the simulation, make it more neurobiologically faithful, add structures seen in real mouse brains and make the responses of neurons and synapses more detailed.

It's not that this isn't noteworthy, it's that mammalian brains are incredibly complex. I would be curious to see if they could faithfully reproduce a fish or reptile brain at this point.

To paraphrase Jack Nicholson in "The Departed": "You all are [on your way out]. Act accordingly." Advances in nanotech will obsolete the human brain and body probably within fifty years. So if you're younger than forty, you'll probably see it. If you're between forty and sixty, you might or might not, depending on how close you are to the upper end of the range and whether you can take advantage of life extension technologies over the next twenty years or so. If you're over sixty - arrange for a suspension co

No one is forcing you to read the textbooks that explain how your brain works. In any case, a bound on complexity was already achieved when we figured out we were made out of atoms, and how many of them.

In any case, a bound on complexity was already achieved when we figured out we were made out of atoms, and how many of them.

Not necessarily - without meaning to get too metaphysical. Cells replenish themselves using atoms from external sources (ultimately). The human body replenishes its cells regularly, such that every seven or so years you are a completely different being - in a sense.

This is of course very simplified, and the whole process is much more elaborate and not entirely understood. That's a

In the simulation of the mouse brain, IBM is making a big assumption: the brain operates only in the domain of Newtonian (a.k.a. classical) physics. So, the IBM programmers just encode the simple physical laws (governing the flow of electrical energy) in the C language.

However, there is an alternate theory of consciousness, based on quantum physics [quantumconsciousness.org]. It is inherently non-deterministic and cannot be modeled in a computer.

Hence, IBM's big assumption may be wrong. At the least, though, the IBM experiment will tell us whether the operation of the brain is strictly Newtonian. If this artificial brain behaves differently from a mouse brain, then we would know that non-Newtonian physics is crucial to the operation of a flesh-and-blood brain.

At the least, though, the IBM experiment will tell us whether the operation of the brain is strictly Newtonian. If this artificial brain behaves differently from a mouse brain, then we would know that non-Newtonian physics is crucial to the operation of a flesh-and-blood brain.

Very good point, but I think you have it half-wrong. Because we can't exhaustively compare their model vs. reality, we can't consider the Newtonian assumption fully validated by experiment. But a disagreement between the model and re

However, there is an alternate theory of consciousness, based on quantum physics [quantumconsciousness.org]. It is inherently non-deterministic and cannot be modeled in a computer.

I think the biggest argument against this is that synapses do not work on the atomic level. They are made of atoms, but quantum states do not seem to overtly affect organic matter at the cellular level.

Of course I could be wrong about this, but since decisions are usually the next best move [wikipedia.org], it could simply be a matter of weighting what the "intelligence" applies to its rules as the next best move.

The problem with General Artificial Intelligence is that "the next best move" is often open-ended, and too many possible choices often give our current computation a run for its money unless it's put into some form of predefined rules.

The reason humans do so well is because we have certain criteria encouraging us to do things (hunger, pain, altruism, fear, etc.).

Hence, our general intelligence goals aren't that complex (usually... to feel good about oneself and one's life), and our true intelligence is being able to recognize things that improve on that, given the set of rules we know.

Which makes us very deterministic.

Even rebelling against the crowd can often be very predictable in humans.

You made a number of spurious statements to support your thesis that IBM made a big assumption:
It is possible that brain activity occurs via the microtubules, but this has not been well shown.
Quantum physics is not *efficient* to simulate on modern computers, as the non-deterministic aspects tend to drive the model exponential. This does not prevent extremely large deterministic computers from modelling inefficiently, nor does it prevent quantum computers from modelli

Yep, exactly, and I totally consider it a truly bizarre assumption that Penrose holds there. I am forced to assume that it is important for his notion of identity to have a free will that is capable at least of thinking whatever it is possible to think. He likely refines this formally as the ability to 'prove what is provable', since if we *couldn't* prove certain things that are actually provable, then we clearly wouldn't have the ability to think whatever was thinkable, or possibly to th

Hence, IBM's big assumption may be wrong. At the least, though, the IBM experiment will tell us whether the operation of the brain is strictly Newtonian. If this artificial brain behaves differently from a mouse brain, then we would know that non-Newtonian physics is crucial to the operation of a flesh-and-blood brain.

I'm pretty sure all we'd know (if it behaves differently) is that there is some sort of difference between the operation of the simulated and real versions. We wouldn't necessarily know that, out o

IBM is making a big assumption: the brain operates only in the domain of Newtonian (a.k.a. classical) physics... there is an alternate theory of consciousness, based on quantum physics. It is inherently non-deterministic and cannot be modeled in a computer.

Well, talk about big assumptions... I did two semesters in quantum physics as part of my electronics engineering degree. There I learned a bit about this "quantum" stuff that so many people throw around so easily.

The first thing that must be understood is that quantum effects appear in *very* small dimensions only. Quantum computing experiments must be performed under extreme conditions, a tiny fraction of a degree above absolute zero, just to get a quantum entanglement of a few bits for a perceptible amount of time. There's no way one could obtain quantum effects beyond normal chemical reactions in a human cell.

Roger Penrose, who started this "quantum consciousness" theory, is a mathematician, not a physicist. He probably did it as a response to the evolving research on neural networks, such as the one mentioned in this article, based on a philosophical uneasiness about the idea of us having a deterministic brain. He has been debunked by quantum physicists many times since he published his book.

Yet he need not worry. We can have a brain that's fully deterministic at a microscopic level without doing away with free will, if we assume that our brains operate in non-linear conditions [wikipedia.org].

Besides, it's not as if we had to reproduce exactly the working of living beings to emulate them. Airplanes are able to fly higher and faster than any bird without flapping their wings. At this time, we are like aircraft engineers were in the 1890s. Perhaps we will be able to find better mechanisms than used in natural brains for processing thoughts.

We can have a brain that's fully deterministic at a microscopic level without doing away with free will, if we assume that our brains operate in non-linear conditions [wikipedia.org].

No, we can't. Chaos doesn't allow for a causal or non-deterministic effect of consciousness. It simply means that the final state of the system cannot be predicted from the initial conditions, usually because those initial conditions can't be measured precisely enough. However, all the steps in the process are still completely deterministic. There is no more need or room for free will in a deterministic and chaotic brain than there is in a complex meteorological system.
Or said another way, in which step in

With the continual, exponential increases in computing power that we are getting, in about 25-30 years we should have the capacity to simulate human brains. And yes, this does have a lot of consequences for how a lot of people view themselves... but already we know that we don't have free will (we make decisions before we are aware of them, for example), and we already have lots of support for reductionist viewpoints. Simulations are just an extension of that.

Given enough late-night TV and phone-in game shows, in 25-30 years the average human should have become sufficiently simple that the contemporaneous human brain could be simulated by some shiny pebbles and lines drawn in the sand.

If you want more solid arguments for this, read The Singularity is Near, by Ray Kurzweil. He makes a convincing argument.

Having read "The Age of Spiritual Machines", I'd be surprised to see Kurzweil make a convincing argument. He's a smart guy who has invented some cool things (not the least of which was the Kurzweil K250 -- basically the first keyboard with a realistic piano sound), but having started out knowing about his inventions and then going on to read that book, I was very disappointed. His argument tha

You, sir, have hit the nail mostly on the head. Lately we humans are discovering something new about ourselves almost daily. The genetic link to why some of us have body clocks that are slower than others is one; there are genetic links to everything from sexuality to diseases. We are learning slowly that we really aren't that complex. We just didn't know that yet. The short answer to the original question is no. The reason is that the methods used to implement the models are incapable of truly mimicking the human brain

We are learning slowly that we really aren't that complex. We just didn't know that yet.

This is kind of like how we used to think living things spontaneously came into being, and how life was driven by a mysterious essence. Now we know it's simply trillions upon trillions of interacting cells reading from a database of genetic code and transcribing it into proteins, reacting oxygen to produce energy using intricate membranes and switching genes on and off during growth using hormones travelling down blood vessels, protected by an immune system that learns about different bacteria and viruses throughout life, all protected by a skin that constantly grows, sheds and repairs itself.

We used to think that the liver was responsible for anger, and the heart was responsible for love, because those are the things that seemed to react when we felt those emotions. But boy did those bafflingly complex notions fly out of the door when we discovered emotion is due to having a mass of billions of interconnected...

I could go on and on, and I have a very simplified layman's view of how the whole thing works. I don't know how you can say we're starting to realize how simple we are; we're realizing how complex we are.

GM foods, by the way, haven't had their actual genomes modified; they have new genes added that create new proteins that can do things like attack insects. It's nothing as complicated as actually changing an existing gene in a useful way, which would be much more difficult because genes interact in so many ways.

Unlikely, given that we are really nowhere close to completely understanding everything about our complex brains.

Do we even want to? Wouldn't that take away some of the mystery behind humans? After all, if we can figure ourselves out, then doesn't that mean that we aren't really all that complex?

Wouldn't that also give us perfect explanations of people's actions, making situations predictable and violating free will?

After all, if society is ultimately chaotic in terms of our understanding, then wouldn't this be the ultimate control?

Don't be afraid to know more. It's coming whether you want it or not. It doesn't mean a thing about free will: did you ever believe that your free will belonged to your "ghost" or something? You are the sum of your parts and the interactions between them. Nothing scary about this.

As for the "mental storage" - simulating a brain doesn't mean much about mental storage. Knowing and simulating an Intel chip in a program doesn't mean you can crack open an already produced Intel chip and hack a few more cores into it.

All these act as peripheral devices to our brain, and we should expect tighter integration between the brain and those (for example a wire projecting video directly in your cortex), but nothing that "expands" the brain structure at such a low level as is hinted in the summary.

did you ever believe that your free will belonged to your "ghost" or something? You are the sum of your parts and the interactions between them. Nothing scary about this.

I don't know who you are and how you operate, but most people who speak this way are materialists who came up with this idea while sitting in front of their wide-screen TV eating pizza. The idea of you being the sum of your parts and actually experiencing the process directly are two entirely different things. Have you lain on your back in the gr

That's all very poetic and nice, but it doesn't speak to the question of free will at all. We can observe our cells and see that they behave in a deterministic way. We can observe the chemicals they are made of and see that they behave in a deterministic way. We can observe the signals sent between our neurons and see that they behave in a deterministic way. Face it, we behave in a deterministic way. There is nothing wrong with that fact. It takes nothing away from the beauty and the complexity of what we a

I don't know who you are and how you operate, but most people who speak this way are materialists who came up with this idea while sitting in front of their wide-screen TV eating pizza. The idea of you being the sum of your parts and actually experiencing the process directly are two entirely different things. Have you lain on your back in the grass and felt the blood course through your veins, and the palpitations of the heart, recognizing how fragile the system is? Have you been sick with a disease that actually af

As for the "mental storage" - simulating a brain doesn't mean much about mental storage. Knowing and simulating an Intel chip in a program doesn't mean you can crack open an already produced Intel chip and hack a few more cores into it. Plus, we already make very good use of tools to expand our mental storage: starting with notes, diaries, databases, computer knowledge systems, customer relationship programs, photo albums, etc.

So was I the only one who read "system for mental storage" as meaning the transference of a human consciousness into a computer?

So was I the only one who read "system for mental storage" as meaning the transference of a human consciousness into a computer?

That's just as unlikely. People used to computer technology know that the hardware structure and the software state are two completely different things. This is why you can build a model of the hardware, feed it the state, and bang, you have a Gameboy emulator (or whatever).

But with biology, those two are intermixed. The brain saves information by changing its connections and structure. This means that you can build a model of a generic human brain, run it, and have full-blown AI.

But you can't feed it the state of any particular human being, as every human being has different "wiring" and hence won't "play" in your model.

Someone mentioned Smalltalk. Smalltalk kinda works like a brain in that regard. State is structure is state.

Yeah, I understand all that, but the ability to simulate a generic brain would give you the ability to simulate any arbitrary brain, assuming you could find a way to scan its structure. Though the process would likely involve causing brain death to prevent changes to the brain while you were scanning...

No, he was referring to Gödel's theorem, whereby any sufficiently complex system is unable to describe itself. Thus, being able to understand/describe ourselves completely would mean that we are not very complex. I hold the opposite view, i.e. we will not be able to describe ourselves fully precisely because we are too complex, but Gödel's theorem might be proven wrong in the future. That'd be great news for transhumanists.

This is completely wrong. Gödel's theorem does not state that "any sufficiently complex system is unable to describe itself." Very roughly, it (specifically the first incompleteness theorem) states that any consistent mathematical system that is able to describe itself is necessarily incomplete. And there is no chance that "Goedel's theorem might be proven wrong in the future." It is a theorem, a mathematical truth. Not a "theory", if that's what you are confusing it with. For more info see Gödel's incompleteness theorems [wikipedia.org].

The idea that a system which can describe arithmetic on the natural numbers can't "describe itself" has a very precise mathematical meaning. In essence, such a system can't prove every true statement about itself, without being inconsistent. This is very distant from the human concept of understanding something. Every true statement about even the smallest fleck of dust would take lifetimes to say. So in some sense, we may not be able to know every true statement about the human brain. But that does no

Huh? You really need to take a math class. Gödel's first incompleteness theorem (I suppose that's what you're referring to, since there are also others that bear his name, like the completeness theorem) has been proven true (in ZFC, at least, I assume) and thus cannot be "proven wrong in the future". On the other hand, of course, to what extent it's actually applicable is another question, and certainly a very valid one; but the theorem, as it stands, cannot be "proven wrong", although you might question

Do we even want to, wouldn't that take away some of the mystery behind humans.

We have a fairly good understanding of the way a rainbow is made, but I can still appreciate its beauty. Same goes for a wide variety of phenomena. We understand the physiological make-up of boobs, but they're still pretty interesting and appreciated by a large % of the population. Just because we understand something doesn't make it less wonderful and amazing. Besides, most people in the near future won't bother/be able to l

Unlikely, given that we are really nowhere close to completely understanding everything about our complex brains.

When you have a crap ton of computing power available, you don't necessarily need to understand what you are modeling. You can just punch in the variables and let the computations "figure it out". I still think we are a ways away from understanding the human brain, because it is much more complex than the mouse brain. Not only are there many, many more neurons, but there are also many, many

Also there's the difference between "This is the choice you would have made" and "You had no choice". If you know someone intimately, you'd be able to predict a lot of the things they would say or do in a given situation. Same with a simulation, given your reaction patterns that can be reasonably predicted. Even if we got the brain's wiring so pinned down that we could predict that tomorrow you want to try a cafe latte instead of black with sugar and that you're planning to stop to buy lawn furniture on you

Unlikely, given that we are really nowhere close to completely understanding everything about our complex brains.

Do we even want to? Wouldn't that take away some of the mystery behind humans? After all, if we can figure ourselves out, then doesn't that mean that we aren't really all that complex?

Wouldn't that also give us perfect explanations of people's actions, making situations predictable and violating free will?

After all, if society is ultimately chaotic in terms of our understanding, then wouldn't this be the ultimate control?

No, even a "perfect" simulation of a human brain wouldn't be very useful in predicting actions. The brain is a chaotic system. If someone scanned your brain into a computer, even the tiniest imperfection in the scan would cause the thoughts to diverge quickly. And our current understanding of physics is that a "perfect" scan would be impossible. So your "free will" is safe in practice, and may even be protected by theory. Even if it is just an illusory concept with no ability to explain any experiment
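The "tiniest imperfection diverges quickly" point is easy to demonstrate with any chaotic system. A minimal sketch using the logistic map as a stand-in (hypothetical Python; a brain is obviously vastly more complicated, but the sensitivity-to-initial-conditions behavior is the same kind of thing):

```python
# Two runs of the same deterministic chaotic map, with initial
# conditions differing by one part in a billion (an "imperfect scan").
def logistic(x, r=4.0):
    return r * x * (1.0 - x)

a, b = 0.4, 0.4 + 1e-9
max_gap = 0.0
for _ in range(50):
    a, b = logistic(a), logistic(b)
    max_gap = max(max_gap, abs(a - b))

# The gap grows roughly exponentially until it saturates at the
# size of the state space: the trajectories end up decorrelated.
print(max_gap)
```

Everything here is still fully deterministic; the point is only that prediction from an imperfect measurement fails fast.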

It makes them uncomfortable - that is the illogical reason. The logical/informed reason is that it (determinism) is not true according to contemporary physics. When Heisenberg discovered uncertainty in quantum mechanics, there was a huge rush by the Soviets to oppose him, because Marxist ideology concerning the human mind was founded on determinism. The new relations showed that the universe is (sans gravitation) a conglomerate of superimposed states, all of which are probabilistic in nature, and which provabl

This is a very difficult, unintuitive concept, and it completely abolishes the idea that you can predict human behavior, even though you may be able to reach better and better approximations as you reach larger scales.

How does it remove the possibility of predicting human behavior? Many macroscopic processes (e.g., motions of the celestial bodies) can be predicted very well, despite quantum uncertainty. You would have to argue that human behavior is determined at the quantum level, as Penrose does, not ver

Penrose is an excellent mathematician, but he's a crackpot when it comes to biology and the brain.

As for brain simulations, they almost always use randomness in the form of pseudo-random number generators. Physical random number generators are actually available and could be used, but nobody bothers because there is no conceivable way in which that could make a difference.
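In Python, for example, the swap the parent mentions is a one-liner, because `random.Random` (a Mersenne Twister PRNG) and `random.SystemRandom` (backed by the OS entropy pool, i.e. a "physical" source on most systems) expose the same interface. The simulator helper below is a made-up illustration, not anyone's real code:

```python
import random

def noisy_threshold(rng, base=1.0, jitter=0.05):
    # Hypothetical helper: jitter a neuron's firing threshold using
    # whatever randomness source the caller passes in.
    return base + rng.uniform(-jitter, jitter)

prng = random.Random(42)       # pseudo-random, reproducible runs
trng = random.SystemRandom()   # OS entropy; not reproducible

t1 = noisy_threshold(prng)
t2 = noisy_threshold(trng)
# Both values land in [0.95, 1.05]; downstream code can't tell them apart.
```

Swapping `trng` in for `prng` changes nothing else in the simulator, which is the parent's point: nobody bothers, because the outputs are statistically indistinguishable.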

Penrose said unique thought and intelligence require cosmic rays firing random neurons. Without this you have a deterministic machine, and not a brain.

Penrose is apparently a pretty smart guy, but I find this to be a weird argument. If I step inside a very well-shielded environment that protects me from exposure to particles coming from cosmic rays, should I expect it to become more difficult to think? Conversely, when astronauts travel into space where they (presumably) have somewhat less shielding a

Hmm, couldn't you just give the simulator a source of entropy, such as a hardware random number generator? Or perhaps implement the simulator in an FPGA, and then overclock it to the point where it's just a little finicky? Given the difficulty of distinguishing between pseudo-random and truly random numbers, I don't think that would even be necessary. I would be very surprised if we made a brain simulator with a real entropy source, which was creative, and then replaced that with a pseudo-random number gen