
schliz writes "Salk Institute director Terrence Sejnowski has called for more power-efficient, parallel computing architecture to support future robots that could keep up with the human brain. While human brains had 100 billion neurons and required only 20 Watts of energy, today's most powerful supercomputer, the 2.57 PFlop Chinese Tianhe-1A, requires four megawatts, and still has trouble with vision, motion, and 'common sense,' he said."

I think one mistake (besides the power requirements) that people make is to assume that if you build it, it will work from the start. The human brain needs over ten years to develop even mediocre common sense and awareness of its surroundings. We should not expect to just build the hardware, install the software, flip a switch, and have the machine fully functional even in its first year. A learning period for the machine is to be expected (though it might be accelerated to some degree) if it is going to work the way a human thinks.

It's interesting that you think epistemology actually plays a part for the flipping computer.

I could only agree if we are speaking of a computer that is intended - by and within its design - to learn like us as well as act like us in a mature state. I agree this may be the purest way of getting AI to resemble the human condition (for lack of a better way to put it), but executing on this path is entirely a red herring.

I would say that trying to understand and emulate the learning process is 10 to 100

The only real saving grace is that this effort could actually be such a mirror for mankind, and accelerate our understanding of ourselves, if only slightly.

Maybe all we will discover is that if you have a *really* big network of interconnected nodes functioning in parallel and a good handling of metastable states you get a reasonable facsimile of intelligence. Maybe that's all intelligence is, after all, humans provide a pretty good facsimile of intelligence - but they aren't very logical.

You may only need to start out with one. It trains for however long it takes and uploads all experience and collected data to a separate online repository. The next one you build will start out by downloading the necessary data uploaded by the first bot. As you start to build a hive-mind colony of bots, their collective adds to a library of knowledge and experiences that best complement the very hardware processing them. They could even start assembling existing knowledge from human entries off the Internet.

The OP is far more correct than you are. If you knew anything about ANNs, the first thing you'd know is that they are modeled (by their very design methodologies, if not through direct observation and intent) on the human brain.

Neural nets work the way we thought, 30 years ago, that human brains might work. But we still don't know much about how the human brain really works. NNs are at best a very rough approximation that falls far short of modeling the real thing. And we can't model the real thing accurately because we still don't know how it works. We know how individual cells sort-of work, and we know what parts of the brain are involved in what sort of activity, but there's an enormous gap in between that we know nothing about.

This is a valid point. There is indeed a learning factor for the brain... at least for some aspects of the brain.

Our brains are extremely inaccurate. Our perceptions are always relative and demonstrably imaginative. There is a lot more to what we think we see and know versus what we actually see and know.

The thing with computers as we currently use and design them is that they are dependent on accuracy. (I recall when DRAM was coming into existence... people were flipping out over the idea that this type of

There was an article in Discover last year sometime describing the different techniques computer scientists were using to try to emulate/simulate a human brain. One of the more interesting is one that actually used simple software to create several thousand neurons, each able to communicate with thirty or so other neurons, and they made the pathways changeable.

Obviously I'm simplifying and paraphrasing a year old article here, but one of the most intriguing things about this one setup is not only that it

Interview with Henry Markram [discovermagazine.com] - this is the guy the article was about, but for the life of me I can't find the actual article where they describe the brain 'lighting up like a Christmas tree', though I remember that exact phrase. Still, this describes his work pretty well, so it might be worth a read.

Those neurons were probably implemented as perceptrons, and were probably distributed in multiple layers with feedback between them, so that the output of a later layer was an input for the perceptrons on an earlier layer, and those perceptrons themselves output info into the later layers - and so you get those waves.
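A minimal sketch of that kind of layered setup, in plain Python with made-up weights (purely illustrative, not the configuration from the article): the final output is fed back as an extra input to the first layer on the next time step, and repeatedly stimulating the loop produces a trace of activity that settles over time.

```python
import math

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

# Layer 1: two neurons, each seeing the two inputs plus one feedback value.
W1 = [[0.5, -0.3, 0.8],
      [0.2,  0.7, -0.5]]
# Layer 2: one neuron reading the two layer-1 outputs.
W2 = [[1.0, -1.0]]

def step(x, feedback):
    """One time step: feed inputs plus the previous output through both layers."""
    h = [sigmoid(sum(w * v for w, v in zip(row, x + [feedback]))) for row in W1]
    out = [sigmoid(sum(w * v for w, v in zip(row, h))) for row in W2]
    return out[0]

# Drive the loop with a constant input; the feedback settles toward a
# fixed point, giving a decaying wave of activity.
fb = 0.0
trace = []
for _ in range(10):
    fb = step([1.0, 0.0], fb)
    trace.append(fb)
```

With these weights the feedback converges quickly; richer dynamics (sustained waves) need more neurons and less tame wiring.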

Sounds like neural network programming to me, which has been around for quite a long time. Many people use it today; Google has their finger in the pie, and I seem to recall the US Army getting it to recognize different models of tanks. The trick is you have inputs and outputs, then a network of connections and nodes in between. When the computer gets an input it finds any pathway to the best answer, then continually refines the path by using a survival-of-the-fittest type tactic. Eventually when enough good p
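The survival-of-the-fittest refinement described above can be sketched as random-mutation hill climbing: mutate the weights, keep the mutant only if it scores better. This toy version (a hypothetical setup, nothing to do with the Army's tank recognizer) fits a single weighted sum to an OR-like truth table.

```python
import random

random.seed(42)

# Tiny training set: an OR-like mapping from two inputs to one output.
data = [([0, 0], 0), ([0, 1], 1), ([1, 0], 1), ([1, 1], 1)]

def error(w):
    """Squared error of the 'network' (a single weighted sum plus bias)."""
    return sum((w[0]*a + w[1]*b + w[2] - y) ** 2 for (a, b), y in data)

best = [random.uniform(-1, 1) for _ in range(3)]
for _ in range(2000):
    # Mutate the weights; keep the mutant only if it is fitter.
    cand = [w + random.gauss(0, 0.1) for w in best]
    if error(cand) < error(best):
        best = cand
```

Real neuroevolution mutates whole populations and network topologies, but the keep-the-fitter-variant loop is the same idea.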

This actually raises an interesting point that I've been thinking about recently. People imagine that an AI will also be a mathematical genius compared to us, because computers can calculate numbers quickly. Not necessarily so. One of the reasons we are slow with numbers is we keep vast amounts of related information along with the number. If I ask you to think of a number and tell me what it is, you might say "seven", but in your mind you might also be imagining the colours of the rainbow, the sides of a f

You have your ANN, which is the seat of the AI's consciousness, but you attach an ordinary sequential computer (running ordinary software) to some of its motor and sensory neurons.

The idea here is that the ANN can control the "dumb" sequential processing computer for such answers. It can consciously input data via the motor neurons, then receive sensory stimulation back from it. This *WOULD* make the AI into a mathematical prodigy, at least compared to pure
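A toy illustration of that arrangement, with entirely made-up interfaces: the ANN side emits a textual "motor" command, a conventional sequential program evaluates it exactly, and the result comes back as "sensory" input.

```python
import operator

# The "dumb" sequential peripheral: exact arithmetic on demand.
OPS = {"+": operator.add, "-": operator.sub,
       "*": operator.mul, "/": operator.truediv}

def calculator_peripheral(motor_output: str) -> str:
    """Parse a 'motor' command like '12 * 34' and return the exact
    answer as a 'sensory' string for the ANN to read back."""
    a, op, b = motor_output.split()
    return str(OPS[op](float(a), float(b)))

sensed = calculator_peripheral("12 * 34")  # → "408.0"
```

The interesting (and unsolved) part is of course the other half: training the ANN to formulate queries and interpret the answers.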

That most powerful supercomputer, I'd assume, has not been tuned to actually work like a brain would.

This is like an emulator. A lot of computational power is probably wasted on trying to translate biological functions into binary procedures. I think if they truly want to compare, they'll need to create an environment that is optimized for the tasks we want it to process.

Nobody expects the human brain to compute integer and floating point stuff at the same efficiency either, right?

As an undergrad philosophy student, I worked on the "reductionism" of physics theories (a subset of simple Newtonian mechanics) to sentential logical statements - presumably for an effort to map them to computer programming.

The task was daunting for an undergrad... and what we ended up with was not so intuitive. I can only imagine mapping the depth and breadth of the brain - and in fact I would postulate that it cannot be done with any adherence to soundness and validity using today's digital hierarchy.

I know a way, but it takes about 18 years plus 9 months and a male and a female participant...

Also, what you end up with is usually an unemployed intelligence looking for something to do. And they don't always succeed. It's not obvious to me that we need more human intelligences. Maybe we need more and faster idiot savant machines, ones that excel at mundane things like driving road vehicles, doing laundry, loading dishwashers, sorting bills in chronological order. The boring stuff.

Yeah, we already have billions of intelligent nonhuman entities. They're mostly on farms. We don't treat them well - we eat and exploit most of them. Why should we create more? So that we can exploit them too?

If that's the reason we'd just be causing more evil in the world than good.

Whereas if we instead used the tech to augment humans, we'd have about the same amount of evil and good. Or at least not increase the evil so rapidly.

For similar reasons we should not create animal-human hybrids. We're not ready.

Every animal, every organism, on the planet exploits other organisms. Does that make all life evil? Why are we so different, that the way we treat other life as a resource makes us evil? Perhaps the most effective evolutionary adaptation that life has ever stumbled across is to be domesticatable, tasty, and/or useful to humans. It's a guaranteed win.


Humans are actually quite good at floating point math as embodied by ballistic trajectories --- watch outfielders run straight to where a ball will be when it comes down rather than following a curve, or a marksman who can consistently shoot coins or aspirin out of the air (for the former always positioning the bullet hole so that the coin will be useful as a watch fob).

Integer math as expressed in the real world can be quite good too --- I knew one teller who could take a fresh stack of $100 bills and zip down to the exact number needed to pay one's travel authorization (usually in the range of $2,000 -- $3,500, but usually different for each person in line) w/ a single motion, or there was John Scarne who could take a new deck of cards, shuffle it an arbitrary number of times, then cut to the Ace of Spades _every_ time.

Both of these examples actually show that human brains are extraordinarily good at processing hundreds of things at the same time; brains aren't all that fast actually, but they are literally massively parallel and exceedingly good at organizing data. The people in your examples wouldn't for example be able to do what they do without sensory input: the feeling of wind on their skin, humidity, the weight of the materials they are holding and their texture, sound of wind blowing past or money rattling in thei

This is like an emulator. A lot of computational power is probably wasted on trying to translate biological functions into binary procedures.

Isn't that kind of the point of the article? To get around this need for all the computational power, we need hardware that's better at probabilistic analog computations, and to run it all in parallel.

A lot of computational power is probably wasted on trying to translate biological functions into binary procedures.

Tried and failed (which was to be expected). If you try to build code that follows the same type of principles that biological functions do, most of your computing power goes into finding stuff that can react with other stuff. That was a kick to write tho.

Instead of trying to emulate the human brain, which at the moment is unattainable, we should concentrate on efficiency paradigms of smaller neural ensembles. Once we achieve efficiency we can scale. Why haven't we learned anything from the CPU industry? They didn't start from 19nm manufacture. Why should we?

We shouldn't hurry. AI comparable to a human person can be achieved, but it is still a long way until we reach it.

Swarm Intelligence [wikipedia.org] would be a good place to start. Path-finding/graph search is only one part of AI though. It's very useful, but it's not necessarily the best method to solve all types of problem.

The honeybee is interesting because its complexity is at about the limit of what personal computers can simulate today.

In rough order of magnitude terms, a honeybee brain has a million neurons with a thousand synapses each. Assume a neuron fires a hundred times per second. In the standard model of a neuron, each synapse can be simulated by a floating point multiplication and one addition.

Doing the math, a computer simulation of a honeybee brain in real time would need 100 gigaflops, which is in the range o
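The arithmetic behind that estimate, spelled out under the same rough assumptions:

```python
neurons = 10**6              # ~a million neurons in a honeybee brain
synapses_per_neuron = 10**3  # ~a thousand synapses each
firing_rate_hz = 100         # assumed average firing rate
flops_per_event = 2          # one multiplication plus one addition

flops = neurons * synapses_per_neuron * firing_rate_hz * flops_per_event
print(flops / 1e9, "GFLOPS")  # → 200.0 GFLOPS
```

Counting the multiply and the add separately gives 200 GFLOPS; treating each multiply-add as a single fused operation gives the ~100 gigaflops figure quoted above. Either way it is the same order of magnitude.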

The reason this is the case is that current AI simulates a neural network as a program; you would have to produce chips that were actual neural networks. The problem, however, is the interconnects, which are an order of magnitude more complicated than anything we can currently create. In fact the brain is quite slow, but its organization is what makes it powerful.

requires four megawatts, and still has trouble with vision, motion, and 'common sense'

I have known many people who have ~100 billion or so neurons that consume 20 watts of power, but they also have plenty of trouble with "common sense". Actually they might be less sensible in some areas than 100 KB of C code running on a puny little Pentium 4.

The significant number is interconnect. In that area electronics is several orders of magnitude behind. Far enough that it seems doubtful something even remotely like the interconnect of a human brain can be reached artificially.

Side note: Comparing neurons and transistors, as is often done in the popular (but not very knowledgeable) press, is completely invalid as well. You need to compare neurons more to a micro-controller each.

The significant number is interconnect. In that area electronics is several orders of magnitude behind. Far enough that it seems doubtful something even remotely like the interconnect of a human brain can be reached artificially.

Hint: simulating is not the same as duplicating. A digital computer trades high-speed communication for interconnections. Think of serial vs. parallel. If you simulate neurons as objects located in memory, every neuron is effectively interconnected with every other - only they cannot all communicate at the same time.

Considering the relatively slow rate at which neurons fire, that problem isn't so insurmountable as it seems at first.
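To see why, a back-of-the-envelope simulation in Python (all the numbers here are made up): a serial loop sweeps a thousand leaky threshold units once per simulated millisecond, which is far finer than the ~100 Hz firing rates involved.

```python
import random

random.seed(0)

N = 1000          # number of simulated neurons
THRESHOLD = 1.0   # membrane potential at which a unit "fires"

# Sparse random wiring: each unit excites ~10 others.
targets = [[random.randrange(N) for _ in range(10)] for _ in range(N)]
potential = [random.random() for _ in range(N)]

def tick():
    """One serial sweep over all units (one simulated millisecond):
    units over threshold fire, reset, and push charge to their targets;
    everything else leaks slightly and receives a constant drive."""
    fired = [i for i in range(N) if potential[i] >= THRESHOLD]
    for i in range(N):
        potential[i] = potential[i] * 0.98 + 0.03  # leak plus drive
    for i in fired:
        potential[i] = 0.0
        for t in targets[i]:
            potential[t] += 0.12  # synaptic kick
    return len(fired)

spikes = sum(tick() for _ in range(100))  # 100 ms of simulated activity
```

Serial sweeps like this scale linearly in neurons and synapses; the hard part, as the parent says, is doing it with brain-scale interconnect counts.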

You can't just focus on neurons and their connections. There are 10x more glial cells in the brain, and more and more research is discovering that they not only perform their basic role of supporting metabolism and structure; they also communicate among themselves, communicate with neurons, and are an integral part of cognition.

In addition, they are finding that chemical communication between cells is not point to point contained within the synapse only. Cells are swimming in chemical and electrical communic

Ok, you can do this with an FPGA, but this requires something external to the gate array to reset the logic gates - the array can't rewire itself. Biological neural systems can rewire themselves, and not only that - they can do it *while they're running*. Obviously you could have this on-the-fly rewiring in a software simulation, but that's orders of magnitude slower than using hardware, so I don't think we'll see computers simulating human brains in real time anytime soon.

"It is called auto-reconfigurable FPGA. Look at Xilinx ones, for example"

My mistake, I need to get back up to date!

"some slightly more biologically plausible than others"

I'm not convinced the brain has to be simulated exactly to produce the same result. After all, robots can now walk like a human, but they don't use exact facsimiles of human muscles - they use hydraulics or electric motors to achieve the same effect. No doubt there are parts of neurons' operation and the brain's overall architecture that are s

We're still not entirely sure of how a brain works. Oh sure, it's a neural network of some kind, but how do the neurons in a brain form meaningful connections with each other? How do they get their weightings of activation? etc.

Chances are each neuron in the brain might be representable by a simple mathematical function with only a few terms. The way the neurons connect to each other might also be representable in a simplistic way. (BTW, look up dynamic Markov coding if you want to see a neat way a state can reproduce in a way that gives the newly created state meaningful input/output connections to other states.)
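For reference, the "simple mathematical function with only a few terms" usually meant here is the textbook artificial neuron: a weighted sum squashed through a sigmoid.

```python
import math

def neuron(inputs, weights, bias):
    """The classic few-term simplification: a weighted sum of inputs
    squashed through a sigmoid nonlinearity."""
    activation = sum(w * x for w, x in zip(weights, inputs)) + bias
    return 1.0 / (1.0 + math.exp(-activation))

neuron([1.0, 0.0], [2.0, -1.0], 0.0)  # ≈ 0.8808 (sigmoid of 2)
```

Whether real neurons are anywhere near this simple is exactly what the reply below disputes.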

So the problem isn't necessarily that our computers aren't powerful enough. The problem is that we still don't know how a brain works.

As a PhD student working in sensory neuroscience, I'm going to go ahead and throw my opinion in the ring. I don't think neurons can be represented with a simple mathematical function. I think it is more likely, with what I've learned about active properties of ion channels, dendritic morphology, and dynamic genetic expression, that a single neuron may be more appropriately modeled as what we currently think of as a "neural network."
Granted, there are some neurons that have stabilized in their mapping fro

The dose of realism injected by a real live neuroscientist ought to be paid attention to. Most CS types know too little about neuroscience and psychology to have a worthwhile opinion about the viability of human-level machine intelligence and what it takes to get there. I used to believe we'd have a strong AI by oh, 2040 or so until I started really looking into the fields I mentioned, and every informed post like the one I'm replying to reaffirms my belief that we have a very, very, very long way to go--

Ok, I admit this sounds completely absurd at first, but there's an awful lot of similarities between the neural pathways of the brain and the countless number of ways websites link to each other, both directly and indirectly through their contacts, and their contacts' contacts, and all the contacts that eventually show up in an endless cycle of recursion, etc...

Now, Google has to wade through all this, and constantly correct and update itself, to ensure it can get a user to the correct web page that best ma

The architecture on which you run the software also determines quite a lot of what you can do and how the software is executed. You need a certain topology of the hardware, otherwise it is impossible to do certain tasks efficiently. There is a huge difference between a slow but massively interconnected network like the brain, and a sequential microprocessor running instructions one by one at high speed.

As awesome as everyone talks up these 'brains' and how incredibly superior they are with only 20 watts, the fastest brain on earth can't even keep up with a 10 dollar pocket calculator that uses a fraction of a watt when it comes to remotely complex arithmetic.

Obviously, we have two very different things here. We created computers to be good at the stuff we are *not* good at, not to match our capabilities (we wouldn't spend so much money to make machines that are good at just the same things we are). That

As awesome as everyone talks up these 'brains' and how incredibly superior they are with only 20 watts, the fastest brain on earth can't even keep up with a 10 dollar pocket calculator that uses a fraction of a watt when it comes to remotely complex arithmetic.

Exactly!

My $50,000 BMW can't keep up with my $10 pocket calculator when it comes to math. And my $10 calculator can't drive me to the mall.

Not totally off-topic, I still am amazed at the power of my index finger, which can do things that Kings couldn't do 100 years ago. I can move it in a certain way (associated with my keyboard) such that it causes a total stranger to bring food to me - a pizza in 30 minutes or less.

A computer CPU and its software process things in a flat, one-dimensional stream: neural structures are emulated by taking time to read each one's state, one after another, and simulating the actions of the interconnects to get the result.

A "hardware/softcore" FPGA-based neural net would form a flat, even, two-dimensional "grid" array,

but a DNA-based brain is both a 3D structure and also has sub-"fractal" patterned interconnected structures within it.

To form even a bee-style neural structure in an FPGA would still need the logic cells to

It seems like we already have this in FPGAs. We don't really have good clusters of them, though - at least not that I know of.

I'm a software developer who has dabbled in VHDL and created some basic programs that ran directly on a chip.

It was a major pain for someone just trying to write something. A higher-level language designed for parallel computation on a large FPGA array might be more in line with what he wants, without trying to design hardware specific to the problem. Although maybe after a while comm

They have their benefits and their drawbacks but at some point you'd think the benefits and drawbacks of silicon would even out. At the astounding rate technology's been progressing since I got into the industry, I'd have guessed that silicon would have passed us up by now, but that appears not to be the case. I believe a lot of AI researchers made similar predictions though, so I don't feel too bad.

I suspect there's some trickery going on in the meatputer though. The whole system feels kludgy. They seem

The human brain is obviously a combination of a logic system (computer) and a probability system that are able to communicate meaningfully with each other, seamlessly. Our ability to use logic AND guess at probable outcomes is what defines our intelligence. We want computers, and hence AI, to arrive at perfect answers for problems and different situations. Simply not possible. Our greatest mental asset is also our greatest mental liability. The ability to guess and use "intuition" and arrive at answers with

It just seems like a massive waste of computational resources... I would rather have a well-programmed, predictable computer program controlling my spacecraft vs. a brain modeled after humans, which may decide to go on strike or otherwise act unreliably.

Why not just use GAs and NNs in specific context where they make sense... rather than trying to copy brains?

If you want to solve hard math problems who is to say intelligent solvers can't be designed to provide real results for a fraction of the computer time?

I really doubt that anyone in the next thousand years will be able to build a machine equal in all respects to the human brain.

You can build a machine that will perform a single task or a variety of tasks but I have yet to see anything from anyone about building a machine that will recognize that a new task is required to solve a new problem and then formulate and perform that new task.

The problem with a machine is that it does not think, it does not ponder, it does nothing intuitively. It can resolve any

Better in specific dedicated tasks. But the efficiency argument still holds true, especially when you consider the brain is both the computational and memory unit.
Furthermore there is the issue of what we are still to learn about the brain, especially those processes that occur below immediate perception.

Human brains working together (including passing information from one generation to the next) can build tools that can multiply billions of numbers.
Human brains get credit for any supercomputer's computation.