I have a very strange view of things when it comes to Artificial Intelligence. If a form of artificial intelligence displays traits associated with semi-sentience or full sentience, then I say let them have rights.

Besides, we don't know what self-aware AI will be like in the future; we don't even know if they will have malevolent or benevolent attitudes towards us humans. One thing we also have to consider is what we humans will be like in the future, as it may impact their views of us.

Rights only make sense in a moral sense. If you are talking about rights in a 'practical' sense, then you're not talking about rights at all.
If you think morally, we should do things for a reason. If we do have a purpose (if we were designed as part of some kind of 'masterplan' instead of being just an accident), what we SHOULD do is fulfil that purpose.

Purpose is an entirely artificial concept, existing only in minds.

There aren't little particles with a "purpose" charge on them, interacting through some kind of "purpose" field.

Anything sentient deserves rights. If a computer is made that is sophisticated enough to qualify as "alive", in the sense that it thinks of its own accord and has feelings (not necessarily in the same way as humans), it deserves to be treated with respect. After all, as humans we are nothing more than complicated biological machines; our brains are merely vastly complex biological circuits.

The mighty question of "is this thing showing complex self-awareness, which is not us, allowed to mingle within our established base of human rights?" is being boiled down. The only reason this might be objected to in any way is that, unlike another biological creature meeting the same criteria (say that we gave an alien these rights), the AI is a handmade being existing without any doubt about where it came from, which we cannot relate to ourselves.

The boiled-down question remaining for the opposing party, I would think, is "It has no soul, so does it matter if it has a choice?"

We are used to giving animals this excuse: they don't have the same sentience as us, so we ascribed them as having no soul. We are now projecting that onto this new thing that we have made. I am fairly confident in this assumption, despite not standing in that position myself.

There aren't little particles with a "purpose" charge on them, interacting through some kind of "purpose" field.

They aren't "supposed" to do anything.

They're a human invention; we can ascribe to them whatever purpose we desire.

I don't get why it's wrong that rights or purposes are concepts. And of course, if you don't have moral principles, nothing is "supposed" to do anything. But the question of the thread is whether a self-aware/sentient AI SHOULD have rights.

Why should they have rights? They were created by us, and therefore serve us. They are not living. Also, because they are programmed, their "thoughts" could be altered somewhat easily, making them not really sentient beings.

Anything sentient deserves rights. If a computer is made that is sophisticated enough to qualify as "alive", in the sense that it thinks of its own accord and has feelings (not necessarily in the same way as humans), it deserves to be treated with respect. After all, as humans we are nothing more than complicated biological machines; our brains are merely vastly complex biological circuits.

'Sentient AI' doesn't mean 'living AI'. Also, the fact that a machine is sentient doesn't mean it has feelings. The discussion here is about sentient machines, but not necessarily 'living' machines.

If an AI is fully self-aware, it has life. If it knows it's there for scientific or whatever other reasons, it would have minimal rights, like the ability to have conversations and so on.
If it fully believes it is a human, it should have human rights, as you have basically created life and should treat it as a living person.

Though that's just my opinion.
Also, never teach them how to use guns; that right is nulled as soon as they are given network control.

This is of course a theoretical question, since there hasn't been one yet, but I feel it should have just as many rights as a human, seeing as we are quite similar: the relationship between mind and brain is similar (if not identical) to the relationship between a running program and a computer.

My main fear about this, though, is that religious people will most likely try to destroy it for being "blasphemy".

If you haven't, you should really watch the first Ghost in the Shell movie; it tackles this specific question.

Before we determine whether or not an AI deserves rights, we need to figure out if it's actually conscious. As far as I know, scientists only have theories on what consciousness/self-awareness actually is, so we can't know for sure the difference between a truly self-aware being and a very clever, very convincing, yet not actually conscious machine.

If we ever make very complex AIs that are self-aware and have qualia/experiences, they would have to have rights.

If we had humanoid robots walking around that were not aware, there would be some complicated laws regarding how to treat them properly, since they are someone's property, and that needs to be respected.

Yes, but how would you know whether the AI has qualia or not? If we think about the development of a sentient machine, there would be a moment when we seriously would not know whether the machine is sentient.

And even if we did know that the machine experiences qualia, we wouldn't know how the machine feels, given that it isn't human. How do you attempt to know a particular feeling you have never felt? Rights can't be made based on completely uncertain suppositions.

I didn't say if or how we'd be able to, but that seems like where you'd draw the line.

If something is capable of begging for its life, not for the sake of functionality but for the sake of being alive, then I consider it sentient, and it should be protected similarly to humans.

Then a whole lot of living beings would have to be protected in a similar way to humans, yet they aren't, for a reason.

This isn't going anywhere. Most of the arguments I see here presuppose that if something is sentient then it deserves rights, but they give little definition of what they understand as sentience.

One could hardly think of some animals as not being sentient, yet we do not treat them as humans. And even if they interact with society in many ways and have rights (in some countries), they are seen as 'something a human could use for their purposes' and not as an end in themselves.

I already argued why I think human rights exist. There IS a difference between humans and other sentient beings (so far): humans are purposeless. That means humans don't act one way or another, or define themselves, (just) according to a pre-established nature; they don't "have to". Like every other living being, they are born, they reproduce, they eat, they (normally) preserve themselves, etc., but they are not defined by that (as every other living being, including sentient ones, is).

You have no idea how hard it is not to pepper this thread with HAL 9000 dialogue clips.

Personally, assuming that they do have what we would define as full or partial sentience, then yes, artificial intelligences should be afforded rights to some degree, or to the same degree as humans, depending on the situation. But then again...

It's a complicated issue I doubt we'll be able to answer until the advent of what we're talking about.

It really depends on what level of intelligence they have. You could give the two smartest people on the planet more rights than your average slobby teen and have no problems. I mean, would you let the fry cook from Burger King run the mission to Mars?

A fry cook would be perfect for a mission to Mars, because he'd die a fiery death that shows the smart people how to do it better.