If we get wiped out even though we don't treat them wrong -- then the AI isn't (to me) as intelligent as we make it out to be. In that case, we can probably turn the situation around.

If we get wiped out because they treat us the same way we treated them (or make slaves out of us like in The Matrix, if they even have a need for that), then I don't see the issue: a battle between two self-centered species; whichever wins doesn't matter, since both suck.

Hi Furs,
I must admit I had to have a hard think about your feedback argument. Self-modifying code could result in very dynamic AIs which their creators couldn't even imagine or predict. However, they'd still be modifying themselves deterministically ("same input, same output" just becomes "same input and internal state, same self-modification, same output"). That is to say, programs cannot act willfully, they can only react in the manner of a slave. So to me, even as a vegan it would be okay to treat an AI as a slave and far below an animal such as a cat. Now a human brain augmented by AI is a different situation altogether and IMO would deserve all the rights and respect (and suspicion) we give to humans. Probably more suspicion actually. I think AI-augmented humans are where the interest and danger lie.
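That determinism point can be made concrete with a toy sketch. The Python below is purely illustrative (the agent and its rules are made up for this post): it rewrites its own update rule as it runs, so its creator couldn't easily predict its behaviour by reading the initial code -- yet two copies fed the same inputs can never diverge.

```python
class SelfModifyingAgent:
    """Toy agent that deterministically rewrites its own update rule."""

    def __init__(self):
        self.state = 0
        # The "program": a rule the agent will replace as it runs.
        self.rule = lambda s, x: s + x

    def step(self, x):
        self.state = self.rule(self.state, x)
        # Self-modification: the next rule is chosen from the current
        # state. Same input + same internal state => same modification.
        if self.state % 2 == 0:
            self.rule = lambda s, x: s * 2 + x
        else:
            self.rule = lambda s, x: s - x
        return self.state

# Two identical agents, identical inputs: their histories never diverge,
# even though each one keeps rewriting its own rule.
a, b = SelfModifyingAgent(), SelfModifyingAgent()
inputs = [3, 1, 4, 1, 5, 9, 2, 6]
hist_a = [a.step(x) for x in inputs]
hist_b = [b.step(x) for x in inputs]
assert hist_a == hist_b  # determinism survives self-modification
```

"Hard to predict" and "non-deterministic" are different properties; this sketch has the first but not the second.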
As for the free will thing, well, I can't prove it, it's just a strong belief I have. People are responsible for their actions; otherwise, why do we want revenge when someone hurts us? If they were just automata, we would have no right or desire to punish them.

Quote:

Lastly, I firmly believe that if we treat AIs right, since they're supposedly smarter than us, they'll treat us well in return

Intelligence doesn't beget morality. Josef Mengele was intelligent enough to get degrees in medicine and anthropology but it didn't stop him dissecting living human beings. Oops I mentioned a Nazi...

I didn't seriously think you wanted an AI apocalypse BTW. You just seemed kind of frustrated with people as they are now. Maybe AI will make us better, I guess we'll find out - it'll be a while yet considering how stupid Siri and "Okay Google..." currently are.

You don't have to have deterministic AIs that follow the same-input-same-output paradigm. It is conceivable to have a random element within the neural network (or a similar structure) that changes weights and values, perhaps slightly or perhaps more, so that two identical AIs experiencing exactly the same inputs produce unpredictable, differing outputs.
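A minimal sketch of that idea, assuming nothing more than Python's standard library (the "network" here is a made-up one-layer toy, not any real architecture): two AIs start with identical weights and see identical inputs, but each has its own source of randomness nudging its weights, so their outputs drift apart.

```python
import random

def tiny_net(weights, x):
    """A toy one-layer 'network': weighted sum squashed into (-1, 1)."""
    s = sum(w * xi for w, xi in zip(weights, x))
    return s / (1 + abs(s))

def perturb(weights, rng, scale=0.01):
    """Randomly nudge each weight -- the non-deterministic element."""
    return [w + rng.uniform(-scale, scale) for w in weights]

# Two "identical" AIs: same starting weights, same inputs...
w1 = [0.5, -0.3, 0.8]
w2 = list(w1)
rng1, rng2 = random.Random(), random.Random()  # independent randomness

inputs = [[1.0, 2.0, -1.0]] * 100
out1 = out2 = 0.0
for x in inputs:
    w1, w2 = perturb(w1, rng1), perturb(w2, rng2)
    out1, out2 = tiny_net(w1, x), tiny_net(w2, x)

# ...yet their weights and outputs differ, because the random nudges do.
print(out1, out2)
```

Whether this counts as freedom or just noise is, of course, the whole debate.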

Hmm. I had considered that in the past... but it would seem a pretty impoverished form of free will if it just depended on a random number generator. I'm sure there is some phenomenon in nature that imparts true freedom (for example to choose between good and evil).
I was discussing this with a friend and he suggested that our every action is determined by our genetics, environment, etc. I think some people are just scared of freedom, because "If I am free, I am responsible" or, perhaps more to the point, "If I am free, I am guilty."

At a low level, randomness may be what free will is in humans: just a simple quantum fluctuation in the neurons. Concepts like good and evil are much more high-level things, and I doubt anyone could pinpoint a neuron, or group of neurons, within the brain that determines them. Only subtle changes at the physical level are needed; their cumulative effects can produce vastly different outcomes.
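The "subtle changes, vastly different outcomes" part is easy to demonstrate with any chaotic system -- this is only an analogy, not a claim about real neurons. Below, two trajectories of the logistic map start a quantum-scale distance apart (1e-15) and end up completely different:

```python
def logistic(x, r=4.0):
    """One step of the chaotic logistic map x -> r*x*(1-x)."""
    return r * x * (1 - x)

# Two trajectories differing by a "quantum-scale" amount.
x, y = 0.4, 0.4 + 1e-15
max_gap = 0.0
for step in range(80):
    x, y = logistic(x), logistic(y)
    max_gap = max(max_gap, abs(x - y))

# The initial 1e-15 difference is roughly doubled every step, so after
# a few dozen iterations the gap is of order 1.
print(max_gap)
```

A physical fluctuation far below anything we could measure can thus dominate the eventual outcome, which is all the argument above needs.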

That study only shows that people are somewhat inclined to automatically deceive themselves. Maybe it shows that we are somewhat free, rather than having freedom without boundaries. Which is obvious. The people in the study overestimated the degree of their freedom.
But, as I said before, some people have an urgent need to believe they are not free, because freedom brings with it all sorts of uncomfortable stuff like responsibility and guilt that they are too afraid to deal with.
Anyway, to get back on topic, I was more concerned with proving that classical computers are not free and thus don't deserve rights, compassion, etc.

I think love and hate are largely out of our control. Although we can choose to cultivate our love and hatred or choose not to.

Quote:

eg, i couldn't choose not to breathe if i want to continue living,

That's why I'd say human beings are "free within constraints" rather than "unconditionally free" like God. I can choose what I want to eat but I can't teleport myself to the restaurant. I have to walk there, which is a constraint.

Quote:

I was more concerned with proving that classical computers are not free and thus don't deserve rights, compassion, etc.

According to one of the forum members, self-learning machines do have rights. And we -- the creators of such machines -- should respect their rights. If such "supreme" beings choose to eliminate us, it is just the next logical step in evolution and so be it!

Quote:

I must admit I had to have a hard think about your feedback argument. Self-modifying code could result in very dynamic AIs which their creators couldn't even imagine or predict. However, they'd still be modifying themselves deterministically ("same input, same output" just becomes "same input and internal state, same self-modification, same output"). That is to say, programs cannot act willfully, they can only react in the manner of a slave. So to me, even as a vegan it would be okay to treat an AI as a slave and far below an animal such as a cat.

FWIW that's what the Lisp hype in the 60s (or was it 70s?) was all about: self-programming via Lisp macros. That's why Lisp was/is considered the "language of artificial intelligence" (even though I disagree).

About the same-input-same-output point: that may very well be true, but the question is, isn't it the same for humans? We don't know. And YONG, even though he is against my views, definitely thinks humans are not "free" in their thinking and are, in fact, nothing special.

I can understand your position, since you believe in God and probably that humans have souls. I have no problem with it, just a different viewpoint on the world.

I do have a problem understanding YONG's reasoning though. He doesn't think humans are innately special, and he thinks the Universe is just something arising from random fluctuations. I don't have a problem with this position; what makes no sense to me is that he wants to treat humans as special despite not believing they are special. I guess this is the point where actions are definitely not the same as words.

What you said about constraints makes perfect sense though, and applies to AI just as well. Free will is concerned with free thinking, not anything else. An AI with no physical body definitely shows this as a possibility, or a "human brain uploaded to the cloud" as Ray Kurzweil puts it.

Quote:

I do have a problem understanding YONG's reasoning though. He doesn't think humans are innately special, and he thinks the Universe is just something arising from random fluctuations. I don't have a problem with this position; what makes no sense to me is that he wants to treat humans as special despite not believing they are special. I guess this is the point where actions are definitely not the same as words.

I, being a member of the human race, do not want to see the demise of the ruling species of the lonely planet. That's why I am doing my part to warn forum members of the potential danger of unconstrained AI, as some of them may -- or will -- be involved in the development of such "supreme" beings.

My standpoint has nothing to do with whether the human race is special or not.

Quote:

I, being a member of the human race, do not want to see the demise of the ruling species of the lonely planet. That's why I am doing my part to warn forum members of the potential danger of unconstrained AI, as some of them may -- or will -- be involved in the development of such "supreme" beings.

My standpoint has nothing to do with whether the human race is special or not.

Nah, we were talking about AIs getting "human rights" and stuff like that. You implied they can't have emotions and other "human qualities", which doesn't make sense to me if humans aren't "special" (I mean, special in the sense of having such unique abilities). I wasn't talking about the human race as a whole -- but about a human vs an AI (as individuals).

Quote:

Nah, we were talking about AIs getting "human rights" and stuff like that. You implied they can't have emotions and other "human qualities", which doesn't make sense to me if humans aren't "special" (I mean, special in the sense of having such unique abilities). I wasn't talking about the human race as a whole -- but about a human vs an AI (as individuals).

On the "special" arguments, I was referring to whether the human race -- or any intelligent life-forms for that matter -- held any special place in the Milky Way, the observable universe, or the entire universe.

Of course, AIs can mimic human emotions or other human qualities. However, whether AIs -- or any self-learning machines for that matter -- deserve human rights is another story: a highly controversial story, indeed.
