Posted
by
Unknown Lamer
on Wednesday January 30, 2013 @12:30PM
from the turns-out-you're-a-robot dept.

New submitter TheRedWheelbarrow writes "The singularity looms as the Human Brain Project gets up to $1.34 billion in funding. 'The challenge in AI is to design algorithms that can produce intelligent behavior and to use them to build intelligent machines. It doesn't matter whether the algorithms are biologically realistic — what matters is that they work — the behavior they produce. In the HBP, we're doing something completely different...we will base the technology on what we actually know about the brain and its circuitry.'"

The more ways we attack a given problem, the better our chances of success. We have different communities working on different approaches to AI: statistical, symbolic, and biologically inspired. All three have already produced interesting results, meaning they have solved some practical problems.

Also, most human brains can show "intelligent behavior" in certain ways that our latest algorithms can't, e.g. navigating an arbitrary kitchen and finding a beer in the fridge. :-)

I'm not sure that's a universal rule. If anything, I imagine the curve has a dip in it: the more angles of attack you add, the better the success rate, up to a point; past that point the extra angles become a liability, until you're trying almost all possible avenues, at which point you're brute-forcing and success rates go up again (but speed goes down and cost goes up).

FFS, intelligence != sentience (the sci-fi book I'm writing notwithstanding; "fi" is fiction). My slide rule back in 1965 was intelligent, but it wasn't sentient. The Britannica I read at age 12 was more intelligent than I was (or not; info != intelligence), but your dog knows he's alive, he knows pain and pleasure. No computer can, or will, understand pain or pleasure (although they can fake them) until we invent chemistry-based replicants.

Technically, the person/people who created your slide rule were intelligent *and* sentient.
The slide rule itself is just a stick with lines and numbers on it. The credit goes to its creator, not the object.

That was exactly my point -- the intelligence comes from the designer, whether a slide rule, an abacus, or Watson. It's real intelligence, but it isn't the machine's intelligence. Watson is no more intelligent than the Britannica hooked to a giant abacus with trillions of wires and beads -- which is exactly what Watson is.

No computer can, or will, understand pain or pleasure (although they can fake them) until we invent chemistry-based replicants.

What's special about chemistry that electricity cannot reproduce? I'll even let you ignore that chemistry is actually just electromagnetic phenomena.

Imagine a computer of power sufficient to model every single atom in a human brain in real time. All the chemical reactions in the brain are modeled down to the quark level at the Planck scale. Why can that simulation not be intelligent, but the pile of real chemicals can?

The appearance of a thing does not equal that thing. Just ask the amazing Randi.

Ah, but as Randi knows, that "appearance" disappears as soon as you step behind the curtain, you see.

Imagine a computer of power sufficient to model every single atom in a human brain in real time. All the chemical reactions in the brain are modeled down to the quark level at the Planck scale. Why can that simulation not be intelligent, but the pile of real chemicals can?

Imagine a computer of power sufficient to model every quark and gluon in all the materials and machinery that constitute a hydrogen bomb in real time. Is there any radiation released? It's the same thing, only a model.

You're confusing two very different concepts. You could have a chess grandmaster try to implement his logic (it's pretty hard for humans to actually express how they're thinking), which would indeed be very complex, but the computer is just crunching numbers; if there's a flaw in that logic it'll lose the same way every time. The other extreme is to create an application that doesn't have any hard-coded rules, but rewrites itself, finding its own metrics and algorithms to play by; that could find ways to evaluate positions no human ever considered.
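To make the contrast concrete, here's a toy sketch in Python (entirely my own illustration, nothing to do with HBP or any real chess engine): a hand-coded evaluator whose weights a human chose, next to a self-tuning one that fits its own weights to whatever target scores it is shown. The piece values and the `tune` routine are made up for the example.

```python
import random

# Hand-coded evaluator: fixed piece values chosen by a human expert.
# A flaw here (say, overvaluing bishops) is repeated in every game.
FIXED_WEIGHTS = {"pawn": 1, "knight": 3, "bishop": 3, "rook": 5, "queen": 9}

def fixed_eval(counts):
    """Score a material count with the hand-picked weights."""
    return sum(FIXED_WEIGHTS[p] * n for p, n in counts.items())

# Self-tuning evaluator: starts with random weights and nudges them
# toward the target scores it is shown (a stand-in for learning from
# game outcomes).
def tune(positions, targets, epochs=2000, lr=0.01):
    weights = {p: random.random() for p in FIXED_WEIGHTS}
    for _ in range(epochs):
        for counts, target in zip(positions, targets):
            pred = sum(weights[p] * n for p, n in counts.items())
            err = target - pred
            for p, n in counts.items():
                weights[p] += lr * err * n  # simple least-squares step
    return weights
```

If the human picked a bad weight, `fixed_eval` repeats the mistake forever; `tune` just absorbs the correction from the data it sees.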

That's the theory, anyway, although whether it's true or not remains to be seen.

This is kinda the whole point of "Gödel, Escher, Bach", which BTW is a fantastic book and discusses at length (among other things) brain structure & operation: visual processing, memory storage, symbolic representation, etc. It should be required reading for all nerds. Hofstadter's basic point is that a sufficiently complex system capable of self-representation may be enough to explain consciousness and the appearance of a self.

If sentience and intelligence are emergent properties of a physical system (our brain), it must be possible to create that system again from scratch. It is highly probable that a similar system could be artificially constructed out of other materials, or simulated, and yield the same results.

The conscious brain seems a little underpowered in some of us, but our subconscious abilities are incredible and near-perfect. We can all judge speed and distance with enough practice, recognise people, navigate based on landmarks, remember and recite music, and dream.

What do you think is obstructing this subconscious mind?

What more do you think we would know if we were more in touch with it?

"I before e except after c and when sounding like a as in neighbor and weigh, and on weekends and holidays and all throughout May, and you'll always be wrong no matter what you say!"
"That's a hard rule. That's a— that's a rough rule."
- Comedian Brian Regan

It has the resources. Too bad Bill Gates has no imagination at all. Instead, he's using his foundation to pick random problems and apply piecemeal solutions, instead of acquiring a significantly large domain of practical, solvable problems and addressing them systematically.

Don't fall for it. Mr. Gates has imagination. Sure, his Microsoft sold a disk operating system called MS-DOS, a windowing system called Windows, a word processor called Word, but he screwed customers and partners in more ways than the Kama Sutra depicts.

This project aims to make humans obsolete, so that intelligent machines can rule the world, and their fourth directive will be "Do not harm Microsoft's quest for world domination."

Those singularists have a tiny but non-zero chance of success. Compare that to religion, which can offer so little in the way of real arguments that it had to turn willful suspension of disbelief into a virtue and call it 'faith.'

How would something like this make money for Microsoft? I'm serious. It's a cool research project, but it has few concrete applications, at least in the near future, and a very high chance of failure.

There are an enormous number of applications for this type of functionality, especially once they stop running NEURON (http://www.neuron.yale.edu/neuron/) on supercomputing clusters and start developing smaller, more computationally efficient hardware-based solutions.

In machine vision and learning, for example, there would be enormous potential for a simulated brain that could accurately mimic most, if not all, of the same visual and low-level thought capabilities as humans.

Of how life imitates sci-fi. I distinctly remember a research project in the computer game Sid Meier's Alpha Centauri called the Human Brain Project. If I'm remembering right it turned normal citizens into super smart "Talents". It will be interesting to see the effect of the real world version.

I believe that what they receive is actually up to 0.5 B€ in matching funds, meaning that for every 1 € they get from other sources (private persons, foundations, national funding bodies, etc.), they will get another 1 € from the EU, up to 0.5 B€, for a total of about 1 B€. Also, this is granted under EU Framework Programme 7, which ends soon. So really, what they have gotten so far is 54 M€ for 30 months, and the rest will come after that under the new EU programme (Horizon 2020), which is currently being negotiated. Given the financial health of EU countries right now, there is a chance that the overall envelope will be cut down, and it is not clear how much funding they will get from national bodies in the first place.
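The matching-fund arithmetic above, as a back-of-envelope calculation. The figures are the ones from this comment, not official numbers, and `hbp_total` is a made-up helper name:

```python
# Matching-fund model: the EU adds 1 EUR for every 1 EUR raised from
# other sources, capped at eu_cap. Figures are the commenter's.
def hbp_total(raised_elsewhere, eu_cap=0.5e9):
    """Total budget = own funds + EU match (up to the cap)."""
    return raised_elsewhere + min(raised_elsewhere, eu_cap)

fully_matched = hbp_total(0.5e9)  # about 1.0e9 EUR overall
granted_so_far = 54e6             # 54 M EUR for the first 30 months
```

So "up to $1.34 billion" only materializes if the other sources actually deliver their half and the cap is reached.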

The EU is also funding, under the same initiative, another 1 B€ project, on graphene.

The Human Brain Project promises a lot (AI, curing neurodegenerative diseases, understanding the brain and consciousness, limiting animal experimentation, etc...) and it is the opinion of most neuroscientists in the US and in Europe that it won't deliver. If you google it, you will find many interviews from neuroscientists who are very critical of it. It is difficult to evaluate what really will come out of it.

The Human Brain Project promises a lot (AI, curing neurodegenerative diseases, understanding the brain and consciousness, limiting animal experimentation, etc...) and it is the opinion of most neuroscientists in the US and in Europe that it won't deliver.

I can't understand most of the critics here. Not that I think they're wrong, I just don't see why they're making it. They know how funding works. That money is not going to be spent on hookers and blow. It's going to advance the science, likely in ways that will be exciting to them and that will directly help them out.

They're idiots if they think "Hey, tearing down my colleague will help me!" The program was set up years ago. Trashing this program isn't going to make the agency stop funding the program.

I think that the critics believe: [1] that such a large amount of money given to "neuroscience" (in quotes as it is more of a computer science than a fundamental neuroscience project) will hurt their chances to get funding in other EU and national calls (like: "hey neuroscience has its billion already, let's fund cardiology and oncology instead") and [2] that the project over-promises and won't deliver, ultimately hurting the credibility of the field as a whole.

The humanbrainproject URL clearly states it seeks to discover the brain's "design secrets"????
Are these scientists or intelligent-design types???
And no, religion and science are not compatible.

Surely they are both, and their religion and science are compatible as well.
<fictional_example>
It can be argued that the zealous Dr. Frankenstein was both a scientist and an intelligent designer</fictional_example>

Massive projects like this almost always end in utter failure. Even the IBM cat brain project failed to accomplish much. Intelligence is much more complicated than a mere randomly connected neural network. I just hope something good comes of this and it is not a total waste.

Yes, I agree. Cats behave roughly the same regardless of their surroundings and culture (if there is such a thing for cats). They attack, defend and groom without being taught. It's built in their ROM. We're not born empty. We are born programmed and very few are ever able to change this internal programming. The question is, where does this programming come from and how is it stored in our DNA?

What they should do instead is create a translator from DNA to C, Python, or Haskell. If we succeed in doing that, we will finally be able to read the programming directly.

Flammon wrote: "The question is, where does this programming come from and how is it stored in our DNA?"

Yes, that's my overriding question. I have to think it's stored in what we think is "junk" DNA (although there is plenty of inserted genetic material that has accumulated, I understand that.)

But still, where is the programming stored for all the innate behavior of organisms? The only thing that can hold it while being passed on is DNA, and I can't believe that enabled genes in specific kinds of cells can account for all of it.

I can think of quite a few successful ones between the Manhattan Project and the LHC.

Even the IBM cat brain project failed to accomplish much.

This is a continuation of IBM's "cat brain" (the Blue Brain Project); it's got a new name to reflect the fact that it's no longer just IBM paying the bills. The reason it has been given taxpayer bucks is that the "cat brain" was very successful from a scientific POV. The main goal of the project has always been medical research; AI is a sub-goal.

Intelligence is much more complicated than a mere randomly connected neural network.

IBM's Watson convincingly disproves your hypothesis. Besides, this project is based on measured brain circuitry, not random connections.

There is a difference between a couple of postdocs or even grad students and 1.34 billion. I have a master's in CS with an emphasis in AI. AI will never be "solved" in one giant project like this. Think about trying to create an OS by building huge, massively connected models which link various code snippets together. Given enough time you are guaranteed to solve it, but it might take more than the age of the universe.

Good luck with growing simulated neurons and their connections. The brain is more complicated than the known universe. The problem with this approach, and with all decision problems like it, is the massive number of levels of probabilities. Suppose a probabilistic choice was made near the beginning when a different one should have been made. How will they know that?

The universe isn't a container. A cardboard box AND the computer inside is more complex than just the computer inside.

The universe is all existing matter and space considered as a whole; the cosmos. At least, that is what Google tells me. That is a pretty standard definition. There are other terms like visible universe and such, but they all include all the human brains in existence in their scope.

More and more people suspect that the human brain actively uses quantum mechanics within its own 'circuitry'.
The human brain is not a deterministic computer, so you can't duplicate its actual mechanisms.

I absolutely am in favor of basic science research, but looking through their documents, I can't find the answer to this problem.

What is the success metric? They have a system, which is basically a supercomputer, and they will have it solving some equations. The equations represent some parts of neurons, but not all. How will they know that they've succeeded? The computer isn't going to simulate any real human brain; we don't know what that looks like. We barely know what C. elegans's looks like. Are they going to use this computer to answer some question? What question?

What are they going to use to know if they've succeeded? Overly-optimistic promises are what killed a lot of AI research around the 1970s.

Hopefully some more insight into those large areas of the unknown you talk about. We may not be able to simulate a human brain, but we can simulate lots of ideas and see what works best. Even if it doesn't revolutionize neuroscience, it might still churn out a few practical designs for things like voice recognition or visual navigation. Once the supercomputer has found the neural networks that work really well, cheaper hardware can execute them.
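That "search once on big iron, run forever on cheap hardware" idea can be sketched with a toy example (my own illustration, not anything the project does): train a perceptron, then export its weights into a dependency-free inference function.

```python
# The "expensive" step: search for weights that implement a function
# (here, logical AND) via the classic perceptron learning rule.
def train_perceptron(samples, epochs=100, lr=0.1):
    w = [0.0, 0.0]
    b = 0.0
    for _ in range(epochs):
        for (x1, x2), target in samples:
            out = 1 if w[0] * x1 + w[1] * x2 + b > 0 else 0
            err = target - out
            w[0] += lr * err * x1
            w[1] += lr * err * x2
            b += lr * err
    return w, b

AND = [((0, 0), 0), ((0, 1), 0), ((1, 0), 0), ((1, 1), 1)]
weights, bias = train_perceptron(AND)

# The "deployed" network: a dot product and a compare, no training
# machinery needed. This part could run on the cheapest hardware.
def infer(x1, x2, w=weights, b=bias):
    return 1 if w[0] * x1 + w[1] * x2 + b > 0 else 0
```

The search is the costly part; once the weights exist, executing them is trivial, which is the whole economic argument above.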

Absolutely, especially if they want to simulate a brain disease (how does the mechanism change if one area is diseased?) or chemical gradients (if I'm tired and can think even worse than usual, some of my neurons may be experiencing a lack of glucose).
You'd imagine they'd make some kind of combined model: detailed single-neuron models, massively parallel, and then, laid on top of that, a very coarse, location-based "chemical gradient field" that tweaks the single-neuron parameters a bit. Can any neuroscientist here comment?
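For what it's worth, that layered model might look something like this sketch (pure speculation on my part; `Neuron` and `glucose_field` are invented names and the dynamics are cartoonishly simplified):

```python
import math

# Detailed per-neuron dynamics, with a coarse spatial "chemical field"
# scaling each neuron's excitability. Nothing here is taken from the
# actual HBP model.

class Neuron:
    def __init__(self, x, y, threshold=1.0):
        self.x, self.y = x, y
        self.potential = 0.0
        self.threshold = threshold

    def step(self, stimulus, modulation):
        """Integrate one input; modulation in (0, 1], 1.0 = fully fueled."""
        self.potential += stimulus * modulation
        if self.potential >= self.threshold:
            self.potential = 0.0
            return True   # spike
        return False

def glucose_field(x, y, depleted_at=(0.0, 0.0), radius=1.0):
    """Coarse field: neurons near the depleted spot respond sluggishly."""
    d = math.hypot(x - depleted_at[0], y - depleted_at[1])
    return min(1.0, 0.2 + d / radius)
```

The point of the split is cost: the field is evaluated per region, not per synapse, so the "tiredness" layer adds almost nothing to the simulation budget.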

If science required knowledge of the outcomes before it was performed, ask yourselves: how many of the technologies around us would we enjoy today?

Taking the space program as an example, putting a man on the moon was symbolic, but the payback for the research and development went far beyond that. Even if we didn't reach the moon, we got memory foam, orange drink, and satellites out of the deal.

But too many people are unwilling to pay for R&D if they don't have a 100% guaranteed outcome. Well, science has never worked that way.

Even if we didn't reach the moon, we got... orange drink... out of the deal.

While NASA's use of it on spacecraft popularized it, Tang [wikipedia.org] predates the American space program. Like Velcro, it is a product erroneously attributed to spaceflight research but in fact invented earlier.

So they say they need 1000x the power of the current largest supercomputers to simulate a brain at the neuron level. So they should be able to simulate a mouse brain, which has 1/1000 the mass of a human brain, right now. Can they do that?
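The scaling argument, as arithmetic. The 1000x figure is the project's claim as quoted above; the brain masses are rough textbook values, and a mouse brain is actually closer to 1/3000 of a human brain by mass than 1/1000:

```python
# Rough masses; the exact values don't change the conclusion.
human_brain_g = 1350.0
mouse_brain_g = 0.4               # ~1/3000 of a human brain
needed_vs_today = 1000.0          # claimed compute shortfall for a human brain

# If simulation cost scales with mass, a mouse brain costs this
# fraction of a full human-brain run:
mouse_fraction = mouse_brain_g / human_brain_g

# By the project's own numbers (mouse_fraction < 1/1000), a mouse
# brain should already be within reach of today's machines.
```

So the question stands: if the scaling claim is right, where is the mouse?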

There's a hubris problem in this area. Some years ago, I went to a talk where Rod Brooks was touting Cog [wikipedia.org] as strong AI Real Soon Now. He'd done good artificial-insect work. I asked him, "Why aren't you going for a robot mouse? That might be within reach." He answered, "B..."

This is the problem with science today. Projects don't get funding unless they are wildly out there in terms of concepts. Most people fail to realize, though, that science actually moves in small increments, not wild jumps.

They are actually working with rats at this time. The first couple of years were spent compiling a detailed database of rat neurons: form and function. They do test the simulation extensively, connecting electrodes to the synapses to check which combinations of input signals cause which output signals. Afterwards they look at one of the brain's building blocks, the neuronal column: you assemble 10,000 neurons and do the same again, feed it input and verify the output. If the simulation and the real thing give the same responses, the model holds at that level.
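That validation loop could be sketched like this (my reading of the process, not the project's actual protocol; `validate` and the stand-in response functions are invented for illustration):

```python
# Same inputs into the model and the recording, then check agreement.
def validate(model, recorded_responses, inputs, tol=0.1):
    """Return the fraction of trials where model and recording disagree."""
    mismatches = 0
    for x, expected in zip(inputs, recorded_responses):
        if abs(model(x) - expected) > tol:
            mismatches += 1
    return mismatches / len(inputs)

def real_neuron(x):      # stand-in for an electrode recording
    return 2.0 * x + 0.1

def model_neuron(x):     # an imperfect model: misses the offset
    return 2.0 * x
```

The same harness works at every scale: swap a single neuron for a 10,000-neuron column and only the response functions change.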

Up To? Sounds like something they put on a sign in front of a retail store to lure customers. As in "Up To 70% Off All Items", where there's only 1 or 2 items that nobody wants at 70% off, a bunch of items at 20% off, and most of the store is at regular, or above regular price.

This is what Brain said when it secured the funding: "Ya know what, Pinky? My project got funding finally! It is trivial, less than a buck a neuron. But still, it is something. Ya know what we gonna do?"

In technical terms, this is known as throwing money down a rat hole. And it is not the first time this has been done...
I love how engineers tell us they are going to mimic the brain, but don't ask them how the brain works, 'cause no one knows.

Agreed. The number of important new discoveries continues to increase daily, and while they did mention the need to wade through all of that research, I don't know how they can keep up with it and model something useful at this point; there is just too much new stuff being learned. Some random tidbits that are complicating the picture more and more:
1 - Glial cells - part of computation and possibly the key item for higher thought (Einstein had a normal number of neurons but many more glial cells than average)

They don't even understand everything that is going on with the various cells and chemicals and electrical activity in the brain. They really are not even close to being able to model even a small group of neurons, glia, chemical and electrical signals.

Everything was going well, the human-like computer completing math and English challenges like a champ, but then something inside changed and suddenly it decided to spend all of its free time watching reality television, voting for the next American Idol, and ordering products featured on infomercials. The death knell came when the machine, already feeling a bit self-conscious after eating Big Macs and Snickers bars, noticed that its penis length was inadequate and wondered why no one had responded to its personals ad.

I hope I'm wrong... and I didn't see the data the EU committee has seen... but I really don't think we are even near the point where a mere $1.34 billion can get us to a point where we can get real use out of this thing. Still, I am glad a science project got funding.

Still, I'd rather they put it into MagLIF, regenerative medicine, immunology, cancer, or battery research (though I hope the graphene project, which also got $1.34 billion, is able to make a contribution in this regard).

If you could perfectly replicate a human brain on a computer... Would it be "alive?" "Sentient?"

Or is sentience calculated by the incalculable soul?

Or, if there is truly no soul, what makes one sentient in the first place? If our brain is just a machine running on electrical impulses, are we not sentient? And if one perfectly replicated a brain in simulation, would it be sentient?

I think the debate over souls reaches a new level (outside of religion) when it comes to simulation.

The project, described here [humanbrainproject.eu], is not to build a simulation of a human brain capable of reasoning and thought, certainly not at first.

It is aimed at better understanding the way the real human brain works, from the neurological and physiological points of view. It is anticipated that some level of simulation will indeed be needed to understand this. However, current computers are incapable of dealing with the complexity of the complete human brain, even if we knew its structure.

Why not focus on making AI BETTER than humans? Perhaps we aren't the best model to imitate.

How would we know if we've made something better than ourselves? Wouldn't recognising it as "better" require the ability in us to understand what it was trying to achieve?

From a certain point of view, vacuum has pretty much taken out the top evolutionary niche in our universe. There's more of it than anything else anywhere, ever. Do we consider it "better" than us? And if not, why not?