
Csiko writes "Researchers at Boston University's department of cognitive and neural systems are working on an artificial brain implemented with memristors. 'A memristor is a two-terminal device whose resistance changes depending on the amount, direction, and duration of voltage that's applied to it. But here's the really interesting thing about a memristor: Whatever its past state, or resistance, it freezes that state until another voltage is applied to change it. Maintaining that state requires no power.' Though long described in theory, solid-state memristors were not implemented until recently. Now researchers in Boston claim that memristors are the key technology for implementing highly integrated, powerful artificial brains on cheap and widely available hardware within five years."
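The behavior the summary describes (resistance set by the history of applied voltage, state frozen at zero bias) can be sketched numerically with the linear ion-drift picture HP Labs used to describe its devices. All parameter values below are illustrative, not taken from the article:

```python
# Minimal simulation of a memristor using the linear ion-drift picture:
# a state variable w (normalized doped-region width, 0..1) drifts with the
# current, and the resistance interpolates between R_ON and R_OFF. With
# zero applied voltage the state -- and hence the resistance -- is frozen,
# which is the zero-power retention the summary describes.

R_ON, R_OFF = 100.0, 16_000.0   # ohms: fully doped / fully undoped
MU = 100.0                       # lumped drift constant (illustrative)
DT = 1e-3                        # time step, seconds

def resistance(w):
    """Memristance for normalized state w in [0, 1]."""
    return R_ON * w + R_OFF * (1.0 - w)

def step(w, v):
    """Advance the state one time step under applied voltage v."""
    i = v / resistance(w)           # Ohm's law for the instantaneous current
    w += MU * R_ON * i * DT         # linear drift: dw/dt proportional to i
    return min(1.0, max(0.0, w))    # the state is physically bounded

w = 0.1                             # start mostly undoped: high resistance
r_start = resistance(w)
for _ in range(1000):               # apply +1 V for 1 s: resistance drops
    w = step(w, 1.0)
r_written = resistance(w)
for _ in range(1000):               # apply 0 V for 1 s: nothing changes
    w = step(w, 0.0)
r_retained = resistance(w)

# Writing lowered the resistance; holding at zero bias preserved it exactly.
print(r_start > r_written, r_retained == r_written)
```

The key line is the zero-voltage loop: with no current, the state variable does not move, so "maintaining that state requires no power."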

This is nothing like the cognitive human brain. This is only a variable memory device.

My hope is that these artificial brains come with an easy way to back up their memory. Since no one does computer backups, I can imagine it would be the same with their brains...

I can imagine it now...in 2025...

Kid: What's wrong with dad?
Mom: He crashed last night.
Kid: Did you do a full restore?
Mom: Yes, but we haven't done a backup since 2012. So he thinks he's 25, doesn't remember you, and he keeps talking about President Palin.

Here is why this is just yet another pipe dream: any hardware we can build can be emulated identically in software. It will perhaps run slower, but it will do the same thing. In 50 years of modeling, no software agent has come anywhere close to 'real AI', so why would shifting the problem to hardware do anything to advance the underlying problem? Speed and transistor counts don't make up for a lack of understanding.

Ever notice that anytime an interesting piece of science or technology is talked about, someone complains about how people say "we see this having applications in about five years", even when it's not really relevant?

Ever notice that every time someone complains that people complain about how people say "we see this having applications in about five years", people are making exactly the same complaint five years later?

Even if the rest of the things described in the article are many years away, the last couple of paragraphs explain the trend:

Neuromorphic chips won't just power niche AI applications. The architectural lessons we learn here will revolutionize all future CPUs. The fact is, conventional computers will just not get significantly more powerful unless they move to a more parallel and locality-driven architecture. While neuromorphic chips will first supplement today's CPUs, soon their sheer power will overwhelm that of today's computer architectures.

The semiconductor industry's relentless push to focus on smaller and smaller transistors will soon mean transistors have higher failure rates. This year, the state of the art is 22-nanometer feature sizes. By 2018, that number will have shrunk to 12 nm, at which point atomic processes will interfere with transistor function; in other words, they will become increasingly unreliable. Companies like Intel, Hynix, and of course HP are putting a lot of resources into finding ways to rely on these unreliable future devices. Neuromorphic computation will allow that to happen on both memristors and transistors.

It won't be long until all multicore chips integrate a dense, low-power memory with their CMOS cores. It's just common sense.

Our prediction? Neuromorphic chips will eventually come in as many flavors as there are brain designs in nature: fruit fly, earthworm, rat, and human. All our chips will have brains.

Hopefully, this is the solution to 2018's problem of reaching atomic levels of miniaturization: a breakthrough that lets Moore's law continue beyond current technology.

I think Moore's law is becoming increasingly pointless to most of the world. It talks about speed, yet at this point few manufacturers are trying to win speed competitions. It's all about form factor and efficiency. To use a car analogy, the past several years were the horsepower wars of the late '60s and early '70s. Now we have seen a switch to fuel (energy) economy as the main driver of development.

That being said, I think it's cool this is a possible future - it's not that be need more power, we need a

I think Moore's law is becoming increasingly pointless...It talks about speed...

Actually, I think it talks about transistor density, not CPU frequency (speed). And transistor density keeps going up, year after year. By 2007 a single chip matched the computer that beat Kasparov in 1997 and weighed 1.5 tons. This info is in the article, btw.

Actually, it talks about transistor density per unit cost - as long as manufacturing continues to improve and drive down costs, Moore's law will continue beyond the physical limitations of transistor density (stuff will continue to get cheaper even if it doesn't get 'faster').

I don't understand why most people focus on the maximizing transistor density part when 99% of applications call for minimizing cost.

While processing speeds are certainly linked to Moore's law, it is really only about the transistor count doubling roughly every two years while prices stay roughly the same. Increasing the number of cores and adding more on-die memory are easy ways to keep Moore's law going.

...well, easier than decreasing the half-pitch below 12nm.

By the way, Moore's law applies to memory density and CCD properties as well, neither of which appear to be close to their limits.

I think Moore's law is becoming increasingly pointless to most of the world. It talks about speed

It doesn't actually talk about speed at all; it talks about the number of transistors that can be put on a chip doubling every 18-24 months while cost remains constant. It is, in fact, exactly what you go on to say is relevant: what you're describing IS Moore's Law exactly.
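The distinction being argued in these comments (transistor count per unit cost, not speed) is easy to make concrete with some toy arithmetic; all the figures here are hypothetical:

```python
# Moore's law as stated in this thread: the number of transistors on a chip
# doubles every ~2 years while the chip's cost stays roughly constant -- so
# the cost per transistor halves. All figures are hypothetical.

def transistors(year, base_year=2010, base_count=1_000_000_000, period=2.0):
    """Transistor count per chip, doubling every `period` years."""
    return base_count * 2 ** ((year - base_year) / period)

CHIP_COST = 100.0   # dollars per chip, held constant across generations

for year in (2010, 2012, 2014, 2016):
    n = transistors(year)
    print(year, int(n), CHIP_COST / n)   # cost per transistor keeps halving
```

Note that nothing in this formulation mentions clock speed: the "stuff gets cheaper even if it doesn't get faster" reading falls straight out of the constant-cost assumption.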

By the middle of next year, our researchers will be working with thousands of candidate animats at once, all with slight variations in their brain architectures. Playing intelligent designers, we'll cull the best ones from the bunch and keep tweaking them until they unquestionably master tasks like the water maze and other, progressively harder experiments. We'll watch each of these simulated animats interacting with its environment and evolving like a natural organism. We expect to eventually find the "cocktail" of brain areas and connections that achieves autonomous intelligent behavior. We will then incorporate those elements into a memristor-based neural-processing chip. Once that chip is manufactured, we will build it into robotic platforms that venture into the real world.
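The cull-and-tweak loop described in that quote is, at heart, an evolutionary algorithm. Here is a minimal sketch of such a selection loop; the fitness function, target vector, and mutation scale are all invented stand-ins (a real system would score animats on tasks like the water maze), not anything from the actual BU work:

```python
import random

# Toy version of "cull the best ones from the bunch and keep tweaking them".
# Each candidate is just a parameter vector; fitness is a stand-in score.
# All numbers here are invented for illustration.

random.seed(0)

TARGET = [0.2, -0.5, 0.9, 0.1]   # hypothetical ideal "wiring" to discover

def fitness(candidate):
    """Higher is better: negative squared distance to the stand-in target."""
    return -sum((c - t) ** 2 for c, t in zip(candidate, TARGET))

def mutate(candidate, scale=0.1):
    """Tweak a candidate by adding small Gaussian noise to each parameter."""
    return [c + random.gauss(0.0, scale) for c in candidate]

# Start from 50 random candidates, then cull and tweak for 100 generations.
population = [[random.uniform(-1, 1) for _ in TARGET] for _ in range(50)]
for generation in range(100):
    population.sort(key=fitness, reverse=True)
    survivors = population[:10]                       # cull all but the best
    population = survivors + [mutate(random.choice(survivors))
                              for _ in range(40)]     # tweak the survivors

best = max(population, key=fitness)
print(fitness(best))   # selection pressure drives this toward 0
```

Because the survivors are carried over unchanged each generation, the best score never regresses; the open question the researchers face is what fitness function captures "autonomous intelligent behavior."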

Then, once they become self-aware, we can turn Arnold Schwarzenegger loose on them.

There is no reason to suppose that people would not ally themselves with an artificial brain. People have already aligned themselves with Pol Pot, Idi Amin, Adolf Hitler, and Josef Stalin--allegiances with undisputedly bad people who ultimately served them very poorly. There is every reason to expect that people will form an allegiance to an artificial brain if that artificial brain causes those people to receive adequate food, shelter, and medical care.

It can be, and is being, designed for that use, but I believe there have been problems with the reliability of individual memristor units. However, in a neuromorphic design (a non-Von Neumann architecture) you only need a certain percentage of the units to be reliable, as the information is highly distributed and fault tolerant. Think of the massive cell death that occurs in Alzheimer's disease, yet patients remain fairly normal well into that process.
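The fault tolerance of a distributed representation that the parent describes can be illustrated with a tiny Hopfield-style associative memory: even after "killing" a fraction of the units, the survivors still recall the stored pattern. This is a generic textbook illustration, not the BU architecture:

```python
import random

# A one-pattern Hopfield-style associative memory, showing why a highly
# distributed representation tolerates the failure of individual units.

random.seed(1)
N = 100
pattern = [random.choice((-1, 1)) for _ in range(N)]

# Hebbian weights for a single stored pattern; no self-connections.
W = [[pattern[i] * pattern[j] if i != j else 0 for j in range(N)]
     for i in range(N)]

dead = set(random.sample(range(N), 20))    # 20% of the units "die"
state = pattern[:]
for i in random.sample(range(N), 30):      # corrupt 30% of the input
    state[i] = -state[i]
for i in dead:                             # dead units contribute nothing
    state[i] = 0

# One synchronous update of the surviving units: each takes the sign of
# its weighted input from the rest of the network.
state = [0 if i in dead else
         (1 if sum(W[i][j] * state[j] for j in range(N)) >= 0 else -1)
         for i in range(N)]

recalled = all(state[i] == pattern[i] for i in range(N) if i not in dead)
print(recalled)   # the surviving 80% recover the stored pattern
```

With one stored pattern, every surviving unit's weighted input still points the right way even with 30% of the input corrupted and 20% of the units gone, which is the Alzheimer's intuition in the comment above: no single unit is load-bearing.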

This technology fundamentally mistakes what is the hard part about building brains as adaptable as biological ones. The physical instantiation is not important, if the Church-Turing thesis is true. (And if you're saying Church-Turing is false, that's an enormous claim and you'd better have very compelling evidence to back you up.)

The hard part about building a brain is figuring out the patterns of connectivity between neurons. Biology solves this in some brilliant way, starting from a seed with almost no information (the genome) and implementing some process to incorporate environmental data, self-organizing into a very robust and complex structure with orders of magnitude more information. The great unknown is the process whereby this growth and self-organization occurs. Figure that out, and you'll be able to make any kind of computer you like function as a brain.

The process of figuring this out isn't going to occur magically. You need to test your models at the systems level, with all the components working together. The more powerful the hardware we have to do this, the more we can test and refine our models of how the brain achieves the same thing. This is true both if you're modeling existing neural architectures (like BU is) and if you're modeling evolutionary approaches like the one you describe above.

These memristive neuromorphic architectures hold the promise to get us orders of magnitude more processing speed while also keeping power levels low.

The summary really only promises enhanced speed and efficiency, but after reading the article, I agree with your complaint: "Researchers have suspected for decades that real artificial intelligence can't be done on traditional hardware, with its rigid adherence to Boolean logic and vast separation between memory and processing." Huh?

Now, I have some sympathy for the pragmatic argument that getting good tools into enough hands is the best way to raise the odds of cracking hard problems. Some people will point out (for example) that a modern 3d game like Crysis might have been emulated (at a fraction of real-time speed) 20 years ago, but nobody figured out how, or bothered to do so, (and no, Castle Wolfenstein doesn't count) because hardware limitations made it too cumbersome and only a few parties had the resources to even try.

Even so, claiming it "can't be done" is going too far. People are building conventional computers that simulate neurons on the order of a cat brain [forbes.com], but programming them is the problem.

Yeah, that was the same passage that made me double-take. A surprising thing to read on the IEEE site.

I agree that performance can matter. Especially so for brains, which interact with the physical world and have to respond on physical timescales (e.g., within hundreds of milliseconds in order to coordinate walking). If the technology were a lot faster than conventional machines for simulating neurons then that would be a meaningful advance, but this was not demonstrated in the article. The central argu

Let us assume they map out the brain, create an FPGA of memristor devices like this that can mimic the brain's exact structure.

First round it doesn't work.

Because robots can't have a soul. You need a spirit to have that kind of consciousness. You'll hear this argument immediately; I'm not going to argue directly against the spirituality thing, but the question to me is more complex than that, of course. Still, that'll be the first argument.

Then someone will make it work.

Now the interesting shit happens.

A lot of people have told me they're never going to die because, by the time they're old, technology will exist to copy their minds into machines. Think about that. Immortality through perpetuated consciousness.

Stop for a moment.

Realize you are alive, aware, and conscious.

Now, why do you experience consciousness?

You want to say, well, all that "soul" bullshit is weird and freaky. Scientifically unsound. I experience consciousness due to a series of electrochemical reactions in my brain. End of story.

Now suppose I move your brain's data into another organic brain, electronic brain, or anything else of the sort. Would you continue to "live"? Would YOU continue to live?

To make the point clearer, what if I made an identical copy and booted both at the same time? Do you suddenly develop a psychic link with your other self, experiencing both existences at once, living in two different places? ...Ridiculous.

So you're bound to your brain. You cannot live forever unless your particular, specific, physical brain stays intact. If I copy your brain to another cloned brain, yank yours out, and replace it with the clone, everyone else will interact with you as if you were you, no difference; but YOU would vanish into the blackness, you'd stop living, you'd die.

Why are you conscious?

Hmm that would be convenient for suicide cases. So much easier. Copy my brain into a biological clone brain, swap, and destroy mine. I get to die and nobody else has to worry about it because I don't die. The ultimate escape: you make your life someone else's problem!

Well, the whole problem with destroying the original could probably be solved by slowly replacing the original organic brain with the electronic one. Instead of copying everything at once and then deleting the original, you basically "graft" the electronic brain onto the original (obviously it would be a lot trickier in practice, but so would "just copying" it) and slowly let the electronic hardware do more and more while the organic does less. Eventually you'll have an all-electronic brain.

Yes, and this would work why? (in practice, would it?) My point is we don't understand consciousness (and have no way to verify things like this actually work) and that the question is very complex. Even if you don't accept the concept of a "soul," you have a very difficult problem in front of you. If you DO accept the concept of a "soul," you have something confusing and complex in front of you.

Mod this guy up. He's quite right. In my view consciousness is non-physical; that is to say, it is not a measurable physical property. If that is the case, simply replicating the cognitive structure of a conscious organism does not necessarily instantiate a conscious state. Don't forget that only 10 years ago photosynthesis was thought to be well understood in physical and biological terms, but now we discover that leaves take advantage of quantum effects to increase efficiency. There's a whole lot more going on in the brain than simple classical state change.

Of course it's physical -- what else would it be? What else *could* it be? The problem is that brains don't have JTAGs, and so it's quite difficult to tap all the inputs and outputs to run reproducible experiments, particularly while keeping the brain alive and functional. Add to that the fact that no two brains are identical, and you've really got your work cut out for you. The reason it's so easy to reverse engineer a chip (relatively speaking) is that one is a

Brain cells communicate via synapses (of which there are approximately 10,000 per neuron); synapses can be long-lived or very transient, but at any given time you are constantly forming and pruning them. At the end of the day you have a new heap of synapses holding the day's impressions, and by the time you wake up the next morning the body has pruned away the vast majority of them, giving you a fresh start.

At what point of artificial intereference with this synapse forming game would we lose

The reason slow replacement would work is that the opposite already happens. Every day, hundreds of neurons in your brain die. Some are replaced by new neurons while others are not. And yet the continuity of you still exists.

My best guess (as someone who believes in a 'soul', but also realizes this can lead to some potentially absurd conclusions) is that with each small electronic replacement, your consciousness would proportionally fade away. A bit like being half asleep, say, if half your brain were replaced.

I tend to think particles in the brain are mapped to an immaterial 'location' of the immaterial soul for want of a better analogy.

There is a concept called the Silver Cord. It's basically a metaphysical link between your physical body and the astral plane. Some call the astral plane a singularity of consciousness, God, and/or the spirit world. In short, your "spirit" doesn't live in your brain. Rather, your brain is controlled from the astral plane via the Silver Cord. In short, we are all puppet masters.

Now suppose I move your brain's data into another organic brain, electronic brain, or anything else of the sort. Would you continue to "live"? Would YOU continue to live?

I don't want to foster a false dichotomy, but it seems to me that there IS something of a binary choice here. Either there is some undefinable quality we can call a "soul", or what you think of as "life" or "consciousness" is an illusion. The data of whatever was just going through your head is still there to some degree, so you perceive your continued existence; but life is actually only a series of moments, and one only has to do with the other because they're somehow connected, not because there's something special about being alive.

If there is a soul and you "die" tomorrow, then destruction of the flesh is not the ending of your life. But if there is not a soul and you "die" tomorrow, you won't care. And if there is not a soul and you copy yourself to a mechanical brain, then you're both "alive" in that you are both functioning. You're both "you", and yet neither of you is really you. Change your memories and you're someone else.

Or in other words, when we actually have the technology to mimic the behavior of the human brain, we may actually be able to answer fundamental questions about the "soul" that cannot be answered today.

Personally I don't see any need for a soul to explain the behavior of a human; it's the same physical processes at work all the way down through bacteria and viruses. But we could argue about that all day and achieve nothing but a big fat waste of time, both user and CPU.

So you're bound to your brain. You cannot live forever unless your particular, specific, physical brain stays intact. If I copy your brain to another cloned brain, yank yours out, and replace it with the clone, everyone else will interact with you as if you were you, no difference; but YOU would vanish into the blackness, you'd stop living, you'd die.

First of all, I'm not convinced I want to live forever. Immortality sounds cool... But I suspect it would get dull after a while.

Second, I suspect that your little conundrum here could be solved by a slow migration to the clone/artificial/constructed brain. Rather than yanking it out suddenly you just replace bit by bit. You'd remain conscious the entire time. You'd never "die".

But...

A large part of the whole afterlife/immortality/soul debate essentially revolves around fear. Folks are terrified by th

First of all, I'm not convinced I want to live forever. Immortality sounds cool... But I suspect it would get dull after a while.

Eventually the world overpopulates.

People you care about die, or you have no friends FOREVER.

THINGS you care about die. Imagine living in a warrior society like recent (pre-WW2) Japan, where everyone around you is preoccupied with personal philosophy, the nature of beauty, honor, and the like. Now think about all the bullshit you complain about in modern society, and think about Japan now, with its cultural influence from America and how Tokyo looks (giant screen TVs on buildings, lights everywhere, it's a

They always say our forefathers would vomit up a lung if they saw what we did to their country. Times were better back then, even with the government handing bibles out in schools and children getting their asses beat by teachers.

Japan had a much different society, and instead of naturally degrading they've had our 20th-century garbage FORCED on them. It'd be like if we found something identical to post-revolution America and forced modern American democracy on it, with the sleazy politicians and disney

OK, we're way off topic here, but I just had to post. The Japanese modernized all on their own; they didn't have anything forced upon them. They went out and embraced reform; it's how they were able to dominate East Asia in the first half of the 20th century. After the Second World War, they were fairly eager to embrace the changes foisted upon them by MacArthur.

You seem to overstate the role of personal philosophy prior to the Meiji Restoration and underestimate the role of merchants and economic trade. The

Even with all the bad things in the world, even without anybody else to share it with, I'd be ok continuing on forever.

In all reality, there'd be no way for you to remember EVERYTHING. You'd likely have to run on a cycle of 150 years of memory (tops!), forgetting older things as the neurons retaining them die without their information being moved to another area.
This would present an interesting opportunity: you don't HAVE to remember thin

I hate to break it to you, but Japan wasn't all honor, philosophy, and appreciation of beauty like popular culture loves to depict it. There was plenty of turmoil and poverty for centuries in Japan, if not millennia. A lot of people struggled merely to survive. Ask the Ainu if their lives consisted of honor and philosophy and being at peace with nature. Certainly, the Japanese aren't going to argue against how their history has been romanticized; they engage in plenty of that themselves.

Which leads me to think that our natural lives will take on entirely new meanings. The fear is what makes things like kissing your wife precious. I'm not sure so many things would continue to be precious in the absence of fear. How would that affect culture? If there was no clock to race, what then becomes the point in doing, well, anything at all?

I guess the fears of a machine dominated world might not be entirely accurate. It's not that we build machines that become sentient... it's that we become the ma

Well, the first problem that I have with this, is I have no proof that anyone else has a soul. This occurs even before I ask myself exactly *what* the definition of soul is, so I can determine whether I, myself, have one.

So you are asking me to believe a lot of vaguely defined things that it's my first approximation choice to disbelieve in. And they're vaguely defined, so I can neither verify nor refute them.

As a result, my only remaining choice is to consider it a silly argument. If you'd like to try aga

I recall reading a short story about a man who had his brain removed and a computerized copy of his brain implanted so he could climb down into a hole and dismantle a nuclear weapon that didn't work as expected. For the life of me I cannot remember who wrote it or what it was called. bluefoxlucid's post goes over a lot of what the short story was about, and I'd highly recommend it... if I could only recall what it was named.

Now suppose I move your brain's data into another organic brain, electronic brain, or anything else of the sort. Would you continue to "live"? Would YOU continue to live?

Yes, and yes.

To make the point clearer, what if I made an identical copy and booted both at the same time? Do you suddenly develop a psychic link with your other self, experiencing both existences at once, living in two different places? ...Ridiculous.

If you absolutely cannot dispense with the existence of a soul, just pretend i

The major problem here is that, now that you've dispensed with the existence of a "soul," you're left with the part where you copied your mind to "continue living", and yet all logic says your personal experience of consciousness ends (you die) and another life form now believes it is you (due to memories and the like).

The major problem here is that, now that you've dispensed with the existence of a "soul," you're left with the part where you copied your mind to "continue living", and yet all logic says your personal experience of consciousness ends (you die) and another life form now believes it is you (due to memories and the like).

How do you know this doesn't happen every time you go to sleep? Your stream of consciousness is interrupted, for all you know aliens are swapping out your meat processor

No secret sauce here. You're dancing around an argument, but let me try to add some clarity:

If a machine (of any construction, even bio-ware) were grafted into your existing brain to replace or extend its normal functions, you may or may not notice at all. Nobody really "feels" when a specific set of cells in their brain is different, the way they "feel" a cold or hot spot on their hand. It's much more about the perceived functionality. A headache or an itch is more perception than a true "p

> Because robots can't have a soul. You need a spirit to have that kind of consciousness.

Sorry, but that is an invalid conclusion based on incorrect assumptions.

Not to sound like a dick, but your understanding of consciousness is woefully incomplete / archaic. There are 7 layers of consciousness: mineral, plant, and animal occupy the bottom 3, with humans occupying the interesting position in the middle. (There seems little point in discussing the upper 3 when you are still struggling to understand the bottom

Well, what's going on in the laboratory - and I have some fantastic graduate students and we work together as a team - and what we have found, for example, is that if you place two different brains, two different people

Many people have made this argument now. I recall the story of the Ship of Theseus: in restoring the giant ship, workers removed one plank at a time, replacing each with a fresh piece of wood. Eventually every single board had been replaced. The issue raised was that we had basically built a new ship, and could in fact assemble the old pieces back into a copy. Which is the copy?

When you go to sleep (or are knocked out, or drugged, or in a coma, etc) your consciousness ends. When you wake up, your consciousness resumes. You do not freak out about that. You remember your consciousness from before, that it was in the body from before. We believe we have a soul that is immutable from our consciousness because we have had no other experience and cannot comprehend what it would be like.

Look at the experiments that have "reprogrammed" people to believe they like something they didn't before, by creating memories of experiences where they liked it. They cannot remember not liking it. Or schizophrenics, or people with split/multiple personalities. Our brains are not the infallible machines we like to think they are; they are squishy, malleable things. Consciousness is not a black-and-white state; it only appears to be because that is the typical way of experiencing our mind.

What makes us conscious? The belief that we're conscious. If you cloned your mind and put it in another body you would have two minds that both believed they were you. But why should we have trouble with that? We don't believe twins are one person. Their actions distinguish them. The two entities that shared one mind at one time would diverge and quickly become two distinguishable entities.

If I copy your brain to another cloned brain, yank yours out, and replace it with the clone, everyone else will interact with you as if you were you, no difference; but YOU would vanish into the blackness, you'd stop living, you'd die.

Which is the exact reason I'd never take a ride on a Star Trek teleporter. I don't want to die and leave my entire physical, mental, and emotional estate to my identical twin who hasn't been born yet.

Amnesia and Alzheimer's are enough proof we don't have souls; no doubt what we call "consciousness" is really just a network of developed cells and the memories attached to it. After all, no one claims to remember what it was like to be an embryo, and when one has amnesia, one's "soul" doesn't float away. The concept of a "soul" is just our irrational psychic defense against the fact that we all die someday. That so many peoples and cultures have come up with an afterlife speaks volumes: it is a reaction against our powerlessness to heal and fix ourselves, lacking the expense, energy, intelligence, and tools to do so.

We experience the self as a unified thing but it isn't. This is proven by people who've had brain damage in accidents and strokes where their "self" functions but they lose specific functions and aspects of 'who they are'.

You can find out more by reading the following book by a Neurologist.

This is Damasio's refutation of the Cartesian idea of the human mind as separate from bodily processes; it draws on neurochemistry to support his claim that emotions play a central role in human decision making.

Now suppose I move your brain's data into another organic brain, electronic brain, or anything else of the sort. Would you continue to "live"? Would YOU continue to live?

This is a non-existent (i.e. ill-posed) question, because you haven't defined life and, more importantly, you haven't defined individuality. There exist many philosophies/schools of thought where the whole idea of existence is rejected, let alone perpetuity of an individual. In fact, most systems of belief/philosophies would say that "you" are just as different from a perfect copy of yourself as you're different from yourself a day, second, or nanosecond ago.

Why must that "link" be there? If you make an exact copy of yourself (forget chips, memristors, electronics; just focus on the copy part), there will be two "you"s around, with experiences that start to diverge at that point. There is no universal "you"; there will be two entities, each thinking it is the real one and the other the copy. They could even argue over which one got the soul, and with a bit of luck decide that either there are free souls around for anyone wanting one, or that there never was one to start w

The 'I', the 'YOU', is an illusion. This proposition is not falsifiable. If I make a digital copy of ME, it will pretend, and be convinced, that it is ME, and any test you put it to cannot disprove this.

The idea that ME can branch is uncanny, but we will have to get used to it. Once you are used to it, ask yourself: "Why is ME afraid to die if ME survives?" Are you afraid of going to sleep every evening? The you that wakes up is different from the you that fell asleep. It can be considered like a dis

This is where standpoint theory comes into play. The copy over there has a different vantage point than you do. It begins with little differences: it wakes up next to the window and you wake up next to the door. It expands outward from there as you make different decisions.

Take identical twins, for example. They can have remarkably similar lives, perhaps due to living through somewhat similar circumstances while also tending to make the same sorts of decisions about those things (perhaps they're