Our future

When J. Robert Oppenheimer saw the first atomic bomb and the destructive capabilities it unleashed, he was quoted as saying, "Now I am become Death, the destroyer of worlds."

It got me thinking about the future I am helping to create. For the past two years, the bulk of my spare time has been spent studying neuroscience, the human brain, and artificial intelligence, or simply AI. I began to think about what it is I want. Most scientists and engineers are working on robotics or building computers that can play chess. These are not my goals. My goal is to create an artificial brain. This is different from simply creating a computer that can think. I'm trying to make one that is capable of human emotions. I'm sure one could go on and on about the moral and ethical questions this could create; one could even tell me it's impossible. I don't believe anything is impossible. Everything comes down to how much time and effort one wants to apply to a project.

That being said, I can't help but examine the ramifications of my possible success. If a computer could think and could feel, what makes it inhuman? What makes it any different from us? Most philosophers say the only thing that separates humans from animals is our ability to reason. I don't think humans are as complex and unique as we'd like to believe. Our minds are just signals and electrical pulses.

Replies to This Discussion

It's not at all improbable that our emotions are shaped by the perception we have of our own body, as well as by our interactions with other members of our species and the physical world. Would a human brain grown in a vat be capable of human emotions? That's doubtful. It'd be an interesting experiment, though. Rather than trying to reproduce human emotions, creating artificial beings with a whole new range of emotions might teach us a lot about ourselves.

Here's what I think. Humans desire. Emotions create desires, and if you make an artificial brain capable of emotion, then you've made something that desires. In that respect there is no difference. I like the way Isaac Asimov put it, and I agree that the only difference would be that humans die and robots do not. So it comes back around to desire. One of the reasons we desire is that our time is limited and we know it.

I think we see rationality as uniquely human only because we haven't observed that trait in any other animals, but in the future it is possible we will either encounter alien life that uses rationality or that we (as you are working on) create rational life.

Then we will just be intelligent life like other intelligent life. I really hope you can build a computer brain; that sounds fascinating. Do you think all computer brains would think alike? What are your thoughts on individuality among computer brains?
-Staks