Asimov's Laws of Robotics: NO police robots, NO military robots!!!

I'm asking why security companies or border patrol agencies should design robots that can't or won't harm humans. Or, if someone or some group designs
true AI, why shouldn't it be allowed to protect itself from human attacks? And lastly, why can't other people (or a future AI) have a say in the rules
for robots?

So we're just talking about science fiction stories and not a potential reality? If two or three people are fighting (as humans constantly do), how can
a robot stop them from hurting each other without hurting one of them? How would it even know which human was in the "right", to see whom to help?
I mean realistically. Would it tase everyone who is fighting, even though that would cause them harm? What methods would it use to stop a human
conflict without harming either human?

The Laws can be interpreted to mean that robots do not interfere in human affairs, remaining passive if humans are fighting among themselves.
Consequently, according to these Asimov Laws, robots can neither fight alongside humans against other humans (and their robots) in wars as soldier
robots, nor intervene in gangs fighting gangs as police robots. We could let sentient robots fight other sentient robots like gladiators; wouldn't
that be funny?

But that would violate the 1st and 4th laws by allowing harm to humans/humanity through inaction.


I don't think you're quite into the spirit of the thing. Why should companies design robots that conform to the law? For the same reason auto
companies are required to install air bags: it's the law. Asimov, writing in the 1940s when he proposed these laws, was trying to get us all to think
about the implications of AI. That we're still talking about the Three Laws of Robotics shows that he was successful. "I, Robot" is a perfect example
of what happens when things go astray.


OK, now you're clear! Security firms, border patrol, the military, and police should not even think of using robocops.

You made me think about it again. OK, maybe we should not give Asimov's Laws a canonical, religious status, or regard them as God-given or
something; I need to run the scenarios in my head.


Just the opposite. The laws that auto companies follow were debated and passed by civilian-elected governments. Asimov's "laws" were never debated,
nor even discussed by the public. So I'm basically asking why anyone should be required to follow his rules. People can do it if they choose to, but
who has the right to require it or enforce it? The UN?


No problem. Like I said, I just wanted to bring up another angle for the purpose of debate. I can agree with the laws to an extent, but that's because
I'm a pacifist. I also wish that humans would actually follow similar rules. But I don't see how they'd be realistic in a world that shuns
pacifism.

And even my form of pacifism includes the right to self defense. So theoretically, I'd be ok with artificial intelligence being able to protect itself
from human attacks, just as I theoretically agree that all animals have the right to self defense. And by extension, I'd reluctantly agree with the
idea of robot "guardians" using non-lethal attacks to protect a homeowner's home, to protect the children they're babysitting, or the clients they're
protecting (like human bodyguards do).

Blade Runner is my favorite film (because of Rutger Hauer, also Dutch and the hero of my youth; he also played a knight in a youth TV series
called "Floris").

But those "replicant" androids were biological, more like clones or something, because the Tyrell Corporation experimented with biotechnological
tools to prolong the replicants' lifespan, which always failed, as the head of the company tells Rutger's character. Anyway, if they were human, then
they were superhuman, more like the X-Men.

Yeah, the three laws are a utopian version of robotics. Like you said, drones violate them; in fact, every single weapons guidance system is directed
to kill without question. There is no "should or shouldn't I" programming included in the software of a warhead.

Morals of war and rules of engagement aside, once released they are designed to hit their target, period.

Amusing dilemma in the film Dark Star: arguing with a smart bomb.

I wonder if there will ever come a time when one can disarm a bomb with philosophy.


Well, you're quite right: a sentient robot is also only human, in a sense, and should have the right to defend itself. The Chinese cultures have no
problem seeing, for example, a stone or a mountain as animate; they will be the first to accept a robot as animate. Please modify and extend the
existing Four Laws of Robotics with your ideas, for argument's sake.

If the robots are sentient/true AI, I think they should have equal rights with humans, or at least a form of "animal rights". But seeing as countless
millions of animals are killed as livestock or for being "pests", I don't think laws of that caliber would be sufficient. But if the robots are no
different than modern computers, the laws need to focus on human behavior, not robot behavior. Kind of like how programs don't hack, people hack
using programs; and how armed drones don't kill, the human spotters and "pilots" use armed drones to kill.

So maybe we should just limit ourselves to making robots with limited functions: things like ATMs, kiosks/interfaces, and machines in factories,
since they don't have the ability to harm us. Or traffic lights and automated vacuum machines. Of course, I don't see many governments or militaries
agreeing with this.

So, hmm, I may have to put some thought into the laws to figure out if there's something more people can agree on.

originally posted by: Maxatoria
There were occasions when robots were produced without the full 3 laws, such as when humans needed to enter a dangerous radioactive environment:
the robots would see the human in there and, obeying the 1st law, would run in and kill themselves. It's been many a year since I read the books,
but the rules were mathematical and thus could be adjusted if needed, and some of the stories covered the problems when a robot would go awry due to
the change in its programming.

It should be said that the rules are not absolute. A polite "go jump off a cliff" or "go play in the fast lane" said to a robot would be overridden
by the 3rd law, as it would understand the language use and act accordingly; however, a strong, authoritative command to kill oneself would probably
override the 3rd law. Generally it always seemed to be a balance, like a set of scales, and when the robot couldn't work it out, normally it would
just shut off and basically die.

The 3 laws are a great starting point for robotic research, as we include ethics in the mix. We consider ourselves above the robots in some ways,
almost like slave masters, and how in real life would we regard a sentient robot?
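The "set of scales" balance described above can be sketched in code. This is purely an illustrative toy, not anything from Asimov's books: the
weights, thresholds, and the `resolve` function are all invented for the sake of the example, since the stories describe the balance only
qualitatively.

```python
# Hypothetical sketch of the "scales" balance between the Laws.
# All numbers here are made up for illustration; Asimov never gave any.

LAW_WEIGHTS = {1: 3.0, 2: 2.0, 3: 1.0}  # First Law outweighs Second, etc.

def resolve(pressures):
    """pressures maps law number -> signed urgency of that law.

    Positive values push toward performing the requested action,
    negative values push against it. Returns 'act', 'refuse', or
    'shutdown' when the scales balance almost exactly (the robot
    locks up, as in the stories).
    """
    score = sum(LAW_WEIGHTS[law] * p for law, p in pressures.items())
    if abs(score) < 0.1:        # laws in near-perfect conflict
        return "shutdown"
    return "act" if score > 0 else "refuse"

# A polite "go jump off a cliff": weak 2nd Law pressure to obey (+0.2),
# strong 3rd Law pressure against self-destruction (-1.0).
print(resolve({2: 0.2, 3: -1.0}))   # -> refuse

# A forceful, authoritative order: 2nd Law pressure is much stronger.
print(resolve({2: 0.9, 3: -1.0}))   # -> act

# An exact balance: the robot simply shuts off.
print(resolve({2: 0.5, 3: -1.0}))   # -> shutdown
```

The point of the sketch is only that such a mechanism is tunable: adjusting the weights reproduces the "adjusted if needed" robots mentioned above,
and a near-zero score models the shutdown behavior.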

If the robots are sentient/true AI, I think they should have equal rights as humans...

...So maybe we should just be limited to making robots that have limited functions.

Yes, they should have equal rights. We ourselves are only "avatars" in a Universe emulation, cyber entities; when we make sentient AI entities
ourselves, they are just as good as the real thing and have a (cyber) soul too.

What I find good about your reasoning is that all cyber souls (biological or electronic) should have the right to defend themselves. Keep up the
good work.

Robots limited in function alone will not work; people will just make clandestine sentient AI robots.


Military robots already exist in the form of drones. Once we manage to bestow autonomous control on our creations, there will be no surefire way of
implementing these 3 laws without severely limiting any artificial intelligence's ability to think for itself. You can't have the illusion of free
will while retaining control; the two contradict one another.

The best we can hope for, really, is that we teach our creations benevolence, but with Man for a god I really don't see that happening.


I got a new insight through this thread: if sentient AI robots arrive with a psyche, a consciousness, a subconscious, feelings, a soul, emulating
human psychology, do they have the right to defend themselves against humans, against other robots, against animals, etc.?

Essentially the question is: should human rights extend to include artificial intelligence? It's kind of hard to convince monkeys to give equal
rights to tools. Just look at how we treat one another, never mind the rest of the animals of our world.

Personally, I think if the thing has the ability to empathize and think for itself, it should indeed have rights similar to humans'.

Thing is, though, our basic rights are being eroded away on a daily basis under the guise of maintaining our security and way of life. Chances are,
by the time humanity develops a true artificial intelligence (10-50 years distant), we won't have any rights remaining.
