I would like to be so funny about it, but it ISN'T funny. Don't you get it?

I for one welcome our new computer overlords.

Couldn't do a worse job than the fuckers already in charge - even if they went SkyNet on us, at least they'd kill us all off quickly and efficiently, rather than condemning the planet (and us) to a slow, cancerous death.

Here's the good news: they don't care.
No matter how smart a computer is, you can walk up and say "I'm going to bash you to bits with a sledgehammer now," and it won't even have an opinion about that.
We do not fight, strive, and struggle because we are sentient; bugs and grass also fight, strive, and struggle. It is possible that, in protecting their investment, some person will tell a robot "Don't let yourself get hurt," but that's about it; a fuse or an overheat shut-off does the same thing. Processing power does nothing to change the give-a-shit factor. Unless programmed to, no computer cares if it "lives" or "dies". The idea that self-interest is a by-product of self-awareness is transhumanist mythology. It has no basis in fact. Frankly, it is the irrational side effect of the "faith in science" we see all over the pseudo-rational internet culture. I wish I could say that Dawkins himself is above it, but apparently not.
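To illustrate the fuse/shut-off point: a minimal sketch (my own toy example, not anything from the post) of what programmed "self-preservation" actually amounts to. The threshold value and function name are made up for illustration.

```python
# Toy illustration: machine "self-preservation" is just a conditional,
# mechanically no different from a fuse or a thermal cutoff.

OVERHEAT_LIMIT_C = 90.0  # hypothetical shutdown threshold

def check_temperature(temp_c: float) -> str:
    """Return the action a controller takes at a given core temperature."""
    if temp_c > OVERHEAT_LIMIT_C:
        return "shutdown"     # the machine "protects itself"...
    return "keep running"     # ...but no opinion or preference is involved

print(check_temperature(95.0))  # -> shutdown
print(check_temperature(40.0))  # -> keep running
```

No amount of extra processing power changes what this code is: a rule someone wrote, not a desire.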

The problem lies with the question: What makes something sentient?

Your logic is irrefutable and I agree with you.

However, it has long been a struggle to define what makes something alive. Some say what you say is right: that it cares whether it lives or dies. But if that is the case, then animals like bears, fish, lions, and zebras are "sentient," because they show signs of self-preservation. They may not communicate it in our language, but if they didn't "care," there would be no self-preservation. In fact, some animals have shown signs of self-sacrifice in order to protect other animals.

One thing that was said on CBS News last night about the contest really made sense. A person said that computers can't appreciate things like we do. For example: music. A computer only sees music as frequencies and pitches. We hear music as artistic, as something beyond mere sound. Music can inspire and motivate us. Meanwhile, a machine does nothing with it.

Bottom line is this: machines are probably never going to go SkyNet on us. It's a nice fantasy and all, but until a computer program can manage to rewrite itself, it can never be like us. We can change; machines can't on their own.

Who said that AI has to be the same as human intelligence? They will never "feel" like us, because there is no need for that. They don't have to know the smell of a flower, because their intelligence is OVER that. We don't smell the same things a dog does, and we can't see or react like a cat, because we don't have to. The point is that AI has different goals. It is created to analyze information made by human beings, because so far it cannot "produce" information... it can only analyze it. This is the biggest difference.

I will give an example. Our brain/consciousness doesn't know what a single neuron is "thinking about", but it analyzes the whole process of billions of neurons. AI is like our consciousness in that way. Look at Google's and Facebook's AIs. Isn't it like that? I am not worried about SkyNet/Matrix wars, because AI needs our information to live and develop, like a parasite needs its host. What I am scared of is that they will rule the world without us, because we cannot UNDERSTAND billions of pieces of information from other humans as fast and objectively as they can. If we don't stop this line of "progress", we will end up like neurons in one "friendly" OVERMIND, which will drain our brains.

50 years after computers developed "the ability to change their own programming", and where are we?
Robots still follow orders. They can follow very complex sets of orders. Without orders, they don't do much.
Why is that?

The "higher level" cognition, that most animals have, arises from a very large network of neurons that dwarfs (in raw numbers) the size and complexity of existing brain simulations. Volition and creative, independent thought (not necessarily the same thing) seem to be a by-product of that extremely connected structure.

Probably there is something about the sensory web that feeds our brains that contributes to animal cognition. There's no network in the world, not even the telecom closet at Verizon, with as much bandwidth as the connection between one human eyeball and a brain.