A bunch of people - including Elon Musk and Stephen Hawking - have signed an online petition that asks for automated weapon systems to be banned.
Personally, I think that any further research and development of systems that can autonomously acquire and destroy human beings should be banned altogether, as we have already reached destructive capabilities that are unheard of in the natural realm; as it is, drones, nukes and biological / chemical agents are enough to creep me out...

Ciao

Paul

The following 3 users would like to thank Uncle GroOve for this useful post:

I agree very much with the noble intentions behind this effort and recognize the real danger that the authors highlight. However, let us be realistic here: this petition, and the whole idea that autonomous weapons can somehow be banned or that a ban would be effective in any shape or form, is extremely naive and borderline dangerous.

With the advent of affordable and accessible robotics, and widely available, ready-made software and resources on image processing, optical recognition and machine learning, the genie is out of the bottle now and it is impossible to put it back again. Anyone with an ounce of intelligence - and here I am not exaggerating in the slightest - can put together a self-propelled vehicle (be it a drone, a wheeled robot or even something with actual limbs) capable of mounting weapons and actually using them. Behold exhibit A:

This is an autonomous paintball sentry capable of recognizing targets and dispatching them. It was built by an amateur, at home, for peanuts.
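To give a sense of just how low the bar is: the "target recognition" in a hobby sentry like this typically amounts to simple frame differencing. The sketch below is a hypothetical illustration of that idea in plain numpy (the function name `detect_target` and all parameters are my own invention, not the builder's actual code) - it flags whatever moved between two greyscale frames and returns an aim point.

```python
import numpy as np

def detect_target(prev_frame, frame, threshold=30, min_pixels=50):
    """Return the centroid (x, y) of the changed region, or None.

    Plain frame differencing: anything that moved between two
    greyscale frames lights up in the difference image.
    """
    diff = np.abs(frame.astype(np.int16) - prev_frame.astype(np.int16))
    mask = diff > threshold
    if mask.sum() < min_pixels:
        return None  # nothing big enough moved
    ys, xs = np.nonzero(mask)
    return int(xs.mean()), int(ys.mean())  # pixel coordinates to aim at

# Synthetic demo: a 10x10 bright "intruder" appears in an empty scene.
background = np.zeros((120, 160), dtype=np.uint8)
scene = background.copy()
scene[40:50, 70:80] = 255
print(detect_target(background, scene))  # -> (74, 44)
```

That is the whole trick: a dozen lines of textbook image processing, which is exactly why "for peanuts, at home" is plausible.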

So now that we have established that autonomous weapon systems can be built under pretty much the most resource-constrained conditions, it is clear that bad guys (from unscrupulous individuals to naughty regimes) can and will build these things. The technology behind them is so readily accessible and cheap that banning it is as misguided as trying to ban knives. They are trivial to make, even in fairly advanced forms.

A ban on nuclear weapons is somewhat effective because the process of building one is difficult. From acquiring the materials to the actual construction details and testing, it is all very involved, expensive, and requires specialists who can be controlled to at least some degree. Even then, North Korea managed to arm itself. So if a ban on nuclear weapons was not entirely effective, what chance does this ban have? None.

So now that we have established that the bad guys can build these things, and that they don't give a rat's arse about bans, we can also conclude that there will be a point in time when these things get deployed by them.

I'd hope that at that point, our society is better prepared than a useless declaration that we can perhaps wave in front of a robot as it mows us down.

The following 10 users would like to thank xynth for this useful post:

So what is the pro-AI argument then?

Do you suggest a free-for-all tech race to see who builds the most advanced, unhackable, cognitive-computing-based system, maybe with a smidgen of compassion built in, just so it can recognize that useless declaration waved in front of its optical sensors?

Incorrect. There are processes even here on Earth that, while rare, make our efforts look feeble - e.g. a VEI 7 or 8 eruption, or a snowball Earth episode. Once you consider astronomical events - large chunks of rock, gamma ray bursts, nearby supernovae - what we're capable of is nothing.

Nature can sterilise the planet. We can't.

In the near future, all we can do is hope that the AIs like us.

I apologize - I really meant that amongst all living organisms (eukaryota, to be precise) we have reached immense destructive powers. Of course, global warming may still expose some super-bacterium which in turn will destroy all hominids and render their drone-based warfare totally useless...

I was a bit taken aback by the title of this thread...

I'm no expert, but I would suggest that saying YES followed by a quick SORRY is the best response when talking to AWSs.

Jaja...
But that isn't what we're talking about - some bloody durak staring at a screen and shooting away with a drone. We're talking about some drunken durak who programs a buggy algo that cannot distinguish friend from foe and takes out civilians, etc. The ban isn't on "automatic weapons"; it is on the development of autonomous systems that would no longer require a sentient human being to kill another one, but would make the decision automatically.
Not.the.same.thing.
P.

It would certainly revolutionise the urgency to apply critical patches!

Really, buggy software like you get in the brain of some psychopath, or buggy software in an autonomous drone - where is the difference?
Pandora's box is open, and we have all seen The Terminator, haven't we?

Do you suggest a free-for-all tech race to see who builds the most advanced, unhackable, cognitive-computing-based system, maybe with a smidgen of compassion built in, just so it can recognize that useless declaration waved in front of its optical sensors?

Is that really our best alternative?

P.

I don't think this is a for-AI or against-AI argument. AI as most people imagine it - what is technically called Strong AI, or sentient machines in other words - is very, very far away. Yes, dreaming academics and semi-layman visionaries make bold claims of it being 10-20-N years away. In reality it is completely unknown, but very far away in any case; I'm quite sure we will not see Strong AI in our lifetime.

Unfortunately, the media has done its best to make sure that every last person thinks that what is happening right now is the birth of AI. The reality is significantly less romantic. All the brilliant "AI" algorithms that actually work now or in the foreseeable future are nothing more than simple statistical methods that are not even close to anything resembling consciousness or "cognitive computing". Machine learning (as it is being used today) is nothing more than a veil pulled over the eyes of the masses as a shockingly dishonest marketing gimmick. These days, any time some muppet does linear regression over some data, they call it machine learning. I call it a simple statistical modeling technique invented more than 100 years ago.

In any case, as I wrote, banning it is not a solution. It is again a dangerously simplistic attempt at legislating a problem away. A ban will make people feel as if they were safe, making the situation worse by evoking a false sense of security. As I wrote, there is nothing technically challenging about making killer machines; they do not need an AI (in the cognitive sense). The only thing that is certain is that they will be constructed, and by banning research in this area, all the West will achieve is that these machines will be built by regimes who do not give a toss about our sensibilities, and we will have less know-how on how to protect ourselves.

I am a man of ideals by all means, but we have to be practical in these matters. If humans were perfect, we could all just agree to live happily like high elves in our magnificent forest groves. In peace, harmony and prosperity.

Unfortunately, evolution honed human beings into selfish, greedy and untrustworthy bastards. We are capable of working together in small groups, but the groups themselves do not get along that well. These are traits that helped us survive for tens of thousands of years; a few decades or even centuries of enlightenment won't erase that. As long as we have the selfish gene and are fundamentally driven by the reptilian parts of our brain, we have to assume that the bastards are out there, and they will not hesitate to use weapons we find too fiendish ourselves.

This is going to happen no matter what. How do you propose to enforce the ban? There is nothing stopping NK, Iran or any regime with access to at least one relatively modern computer from constructing completely autonomous weapon platforms. You can't control the technology; it is so widely accessible that it is impossible to recall it anymore. You can't control the know-how because, yet again, it is too widely spread. You might as well ban "bad intentions" and "criminal thoughts", because this ban will be just about as effective as those bans would be.

The choice here is not between "autonomous weapon platforms not being invented" and "autonomous weapon platforms being invented". The choice is between "only shady regimes having these weapons" and "everyone having these weapons". In which case, I'd rather we have them too - if for nothing else, so that we can protect ourselves.