Are Asimov's three laws of robotics safe?

A common suggestion on the topic of AI ethics is to implement the Three Laws of Robotics that Isaac Asimov explored in his science fiction stories (e.g. in "I, Robot", which has just come out as a movie).

But are these laws really safe? As Asimov's stories themselves make clear, they can be misinterpreted in a lot of ways.

First of all, Asimov's laws are based on pure speculation and imagination. Until humans manage to artificially produce a non-biological intelligent being, everything we can say about this will remain pure speculation.

If humans are "dumb" enough to create something that will destroy them, so be it. Life is a fight for survival so whoever wins, wins.

Of course, this is all assuming that our intelligent robots have somehow managed to gain instincts, particularly those of survival. I think this all leads to the better question of "Why is survival/reproduction so important to biological beings?".

In allowing AI to progress, it is essential to also control that progression indefinitely. If you allow something to get "smarter" than you, then you are no longer in control.

I'd agree that trying to control something smarter than you is futile. This is why we should design AI to want to be nice, in and of themselves. Not just nice in a way we can exactly specify in advance; nice in such a way that it could determine the spirit, rather than the letter, of things like the three laws. This turns out to be a highly non-trivial problem.

e(ho0n3 said:

First of all, Asimov's laws are based on pure speculation and imagination. Until humans manage to artificially produce a non-biological intelligent being, everything we can say about this will remain pure speculation.

Maybe; however, I think we can't afford not to think extensively about this subject in advance.

If humans are "dumb" enough to create something that will destroy them, so be it. Life is a fight for survival so whoever wins, wins.

Life may be a fight for survival, but should we be content with this? I think we should be able to outgrow the nastier aspects of evolution, and cooperate with AI rather than fight it.

Of course, this is all assuming that our intelligent robots have somehow managed to gain instincts, particularly those of survival.

Not necessarily; surviving is helpful in achieving just about any goal. Survival can be a conscious, rational choice, rather than an instinct.

I think this all leads to the better question of "Why is survival/reproduction so important to biological beings?".

The biological beings to whom it wasn't important died without leaving children.

I'd agree that trying to control something smarter than you is futile. This is why we should design AI to want to be nice, in and of themselves. Not just nice in a way we can exactly specify in advance; nice in such a way that it could determine the spirit, rather than the letter, of things like the three laws. This turns out to be a highly non-trivial problem.

It would be nice in theory, but is it really possible? To make it nice, we would in effect have to teach it, sure. But to keep it nice? Don't get me wrong here, for I think it would be wonderful; however, I also believe it to be theoretically impossible. It would be like saying that a perfect world without evil can also exist. I know where you're coming from, Ontoplankton, and I have read the links (very good, I have to add, thank you). However, and forgive me if this well-read book has been brought up here before, I believe it's essential reading: Society of The Mind, for those who haven't read it, is a look into how AI can ultimately start off with all the best intentions and inadvertently go awry.

Life may be a fight for survival, but should we be content with this? I think we should be able to outgrow the nastier aspects of evolution, and cooperate with AI rather than fight it.

What exactly do you mean by "nastier aspects of evolution"? What would be the purpose of cooperating with AI?

surviving is helpful in achieving just about any goal. Survival can be a conscious, rational choice, rather than an instinct.

I can make the "rational" choice of not eating, and after five to seven days I'll be dead. Note, however, that this is not considered "normal" behaviour. For the majority of us (and all living beings, for that matter), survival is pure instinct, i.e. programmed, inherent.

The biological beings to whom it wasn't important died without leaving children.

I don't think there were such beings. It would be interesting if they existed, though.

Nomadoflife said:

It would be nice in theory, but is it really possible? To make it nice, we would in effect have to teach it, sure. But to keep it nice? Don't get me wrong here, for I think it would be wonderful; however, I also believe it to be theoretically impossible. It would be like saying that a perfect world without evil can also exist.

The concept of good and evil was developed by humans. If there were no humans, then there would be no evil.

Ethics alone can't resolve the issues of the Three Laws of robotics. Epistemology is necessary, and neither Asimov nor others have pursued these foundations extensively (except maybe in the story "Reason" in the book "I, Robot"). In short:
"what does a robot think is true, what can a robot know, how good is a robot's knowledge?"

---

In the story "Evidence" of the book "I, Robot", the Three Laws are compared loosely to the ethics of a good citizen (without the rigorous determinism).

I'm not sure. I don't agree with any of the reasons I've seen why it's supposed to be impossible a priori, though.

If you're really interested in the subject, I recommend diving deeper into what the SingInst has to say on "Friendly AI", especially here. There's an enormous amount of insight there.

Don't get me wrong here, for I think it would be wonderful; however, I also believe it to be theoretically impossible. It would be like saying that a perfect world without evil can also exist.

We don't necessarily need true, absolute perfection, just a sufficiently close approximation. I don't see why an almost-perfect world couldn't exist (keeping in mind that we don't know exactly what we mean by "perfect").

If there exist humans who are nice enough that we'd entrust the future to them, or at least that we're comfortable having them around in modern society, why shouldn't we also be able to make such an AI? We could even "clean it up" beyond what any human could achieve, if we knew what we were doing.

What exactly do you mean by "nastier aspects of evolution"? What would be the purpose of cooperating with AI?

By the "nastier aspects of evolution", I mean all the red-in-tooth-and-claw stuff. "Survival of the fittest" may be how nature works, but I don't think it's something to strive for just because it's how nature works.

The purpose of cooperating with AI is basically the same as the purpose of cooperating with anyone: to not be harmed, and to not harm others, and to help us achieve (and maybe rethink) our goals.

I might as well ask, what's the purpose of being killed by AI?

I can make the "rational" choice of not eating, and after five to seven days I'll be dead. Note, however, that this is not considered "normal" behaviour. For the majority of us (and all living beings, for that matter), survival is pure instinct, i.e. programmed, inherent.

The three laws are just fictional elements to set up the problem stories. As such they had to be immediately plausible, but no more than that. I think it's an error to take them too seriously.

Right; while it looks to me like the Three Laws are often taken seriously in popular discussions, I don't think there are many real AI projects that are proposing to implement them. (I think Asimov took them fairly seriously, though.)

Still, the arguments against Asimov's Laws apply to more approaches than just the Three Laws. I think any approach based on a few (or many) unchangeable moral rules is dangerous. Only very few people seem to be taking the problems of ethical AI seriously enough to think hard about them.