A Brave New World!

Future Soldiers

Are Killer Robots with Artificial Intelligence a Good Idea?

Although we are living in an age of routine miracles, not all of them are good, and it remains to be seen whether the moral development of mankind can keep pace with technological progress. There is abundant evidence that it has not; the best examples are the development and proliferation of nuclear weapons and human-induced global warming, both of which threaten the existence of all life on this planet.

One need only listen to the attitudes of the Republicans who presently control the US Congress to recognize how ill-equipped humans are to deal with the enormous destructive capabilities of science when it is mismanaged. For instance, these short-sighted, ignorant moral Neanderthals deny human agency in global warming and want to build more advanced nuclear weapons rather than lead a movement for nuclear disarmament that would remove these doomsday weapons from the arsenals of nation states.

The USA is the most technologically advanced country in the world, and the government that sets the rules governing how this powerful technology is used is elected by the American people, in what is widely considered to be among the most enlightened and humane societies in the world – indeed, most Americans believe they are “exceptional” in this regard.

Yet the US is the only country that has attacked another nation with an atomic bomb; it has greatly expanded its arsenal since that horrific event, and it refuses to pledge not to be the first to launch another nuclear attack. The atomic arms race between advanced nations in the twentieth century was a direct result of this American policy. In view of the perilous human condition at this moment in history, the last thing this troubled world needs is a new class of deadly weapons – especially weapons that do not require human agency to activate.

It is this alarming possibility that motivated more than a thousand of the world’s leading scientists specializing in Artificial Intelligence and robotics to send an open letter to the International Joint Conference on Artificial Intelligence, meeting in Buenos Aires, Argentina, calling for an international ban on weapons equipped with AI.

The letter was signed by renowned physicist Stephen Hawking; Tesla’s billionaire technology entrepreneur Elon Musk; Apple co-founder Steve Wozniak; and the chief executive of Google DeepMind, Demis Hassabis, along with more than 1,000 leading researchers in the field of Artificial Intelligence, including Stuart Russell, UC Berkeley Professor of Computer Science, director of the Center for Intelligent Systems and co-author of the standard textbook “Artificial Intelligence: A Modern Approach.”

In this terse but profoundly troubling letter, the scientists warn:

“Autonomous weapons select and engage targets without human intervention. They might include, for example, armed quadcopters that can search for and eliminate people meeting certain pre-defined criteria, but do not include cruise missiles or remotely piloted drones for which humans make all targeting decisions. Artificial Intelligence (AI) technology has reached a point where the deployment of such systems is — practically if not legally — feasible within years, not decades, and the stakes are high: autonomous weapons have been described as the third revolution in warfare, after gunpowder and nuclear arms.”

After considering the arguments of those who claim that the use of robots guided by artificial intelligence would reduce the loss of human life in warfare, the scientists present the other side of the story with a cost/benefit analysis for humanity:

“The key question for humanity today is whether to start a global AI arms race or to prevent it from starting. If any major military power pushes ahead with AI weapon development, a global arms race is virtually inevitable, and the endpoint of this technological trajectory is obvious: autonomous weapons will become the Kalashnikovs of tomorrow. Unlike nuclear weapons, they require no costly or hard-to-obtain raw materials, so they will become ubiquitous and cheap for all significant military powers to mass-produce. It will only be a matter of time until they appear on the black market and in the hands of terrorists, dictators wishing to better control their populace, warlords wishing to perpetrate ethnic cleansing, etc. Autonomous weapons are ideal for tasks such as assassinations, destabilizing nations, subduing populations and selectively killing a particular ethnic group. We therefore believe that a military AI arms race would not be beneficial for humanity. There are many ways in which AI can make battlefields safer for humans, especially civilians, without creating new tools for killing people.”

Although it is not well known, most of the great physicists who built the world’s first atomic bomb at Los Alamos wanted to detonate it off the coast of Japan, so the Japanese military could witness its awesome power and have a chance to surrender, rather than drop the bomb on civilian populations in cities – a great crime against humanity. They then wanted to place the technology under the control of an international agency that would make it available to the world’s scientists, in an effort to prevent a nuclear arms race that could end in the destruction of humanity. The cutting-edge computer scientists in the field of Artificial Intelligence who signed this letter are similarly candid in their moral concerns about the uses to which they do not want their discoveries put.

They want no part in weaponizing robots equipped with AI, and they cite scientists in other fields who have faced similar moral choices as a guide. Hence they are warning us that weaponizing robots with Artificial Intelligence is a road to untold horrors – an unintended consequence of their work that would decimate human society as we know it:

“Just as most chemists and biologists have no interest in building chemical or biological weapons, most AI researchers have no interest in building AI weapons — and do not want others to tarnish their field by doing so, potentially creating a major public backlash against AI that curtails its future societal benefits. Indeed, chemists and biologists have broadly supported international agreements that have successfully prohibited chemical and biological weapons, just as most physicists supported the treaties banning space-based nuclear weapons and blinding laser weapons.”

I am well aware of the role that activist scientists, led by distinguished physicists such as Dr. Michio Kaku, played in blocking the development of Reagan’s “Star Wars” anti-ballistic missile shield by persuading leading physicists in the US not to work on the project, because they believed it would destabilize the nuclear order and increase the chances of accidental nuclear war. The computer scientists who signed this letter have also made known their feelings about what government policy ought to be regarding the development of weaponized robots:

“In summary, we believe that AI has great potential to benefit humanity in many ways, and that the goal of the field should be to do so. Starting a military AI arms race is a bad idea, and should be prevented by a ban on offensive autonomous weapons beyond meaningful human control.”

The question for American citizens is: What is our government’s policy on weaponizing robots with AI? As near as I have been able to discern, the US government has no clearly defined policy on this question. Thus far it seems that this technology is being pioneered by scientists working for private companies and by professors in leading research universities. Of greatest concern, however, is the US military, which seizes upon every new technology to exploit its military value and already has a secret research and development program experimenting with killer robots. Hence it is incumbent upon the American people to demand that our government lead an international effort to ban such weapons before they get out of hand.

Hollywood has given us glimpses of what a future with such robots could be like: HAL in 2001: A Space Odyssey, Gort in The Day the Earth Stood Still, and the infamous Terminator series, et al. Alas, this is no longer confined to the realm of science fiction. One scientist who is particularly concerned with the direction of the development of AI is Dr. Selmer Bringsjord, professor of cognitive science, computer science, and logic and philosophy at Rensselaer Polytechnic Institute. By virtue of his background in science and philosophy, Professor Bringsjord is especially concerned with the ethical implications of his research and the danger it poses to Homo sapiens:

“I’m worried about both whether it’s people making machines do evil things or the machines doing evil things on their own. The more powerful the robot is, the higher the stakes are. If robots in the future have autonomy…, that’s a recipe for disaster. If we were to totally ignore this, we would cease to exist.”

Arnold Schwarzenegger as the Terminator

A vision of the future?

Now that’s a wakeup call we dare not ignore, for considering that these robots could even operate in outer space…they would be literally out of this world! That’s why we must spare no effort to stop them from coming into our world.