Artificial intelligence ethical questions

FLI - Future of Life Institute. Artificial Intelligence: The Danger of Good Intentions. Why well-intentioned AI could pose a greater threat to humanity than malevolent cyborgs. By Nathan Collins, March 13, 2015. The Terminator had Skynet, an intelligent computer system that turned against humanity, while the astronauts in 2001: A Space Odyssey were tormented by their spaceship’s sentient computer HAL 9000, which had gone rogue.

The core concern is that getting an entity with artificial intelligence (AI) to do what you want isn’t as simple as giving it a specific goal. MIRI grew from the Singularity Institute for Artificial Intelligence (SIAI), which was founded in 2000 by Eliezer Yudkowsky and initially funded by Internet entrepreneurs Brian and Sabine Atkins. Back in 2000, Yudkowsky had somewhat different aims.
Marc Andreessen on Twitter: "The smartest people I know who do personally work on AI think the scaremongering coming from people who don't work on AI is lunacy."
Outing A.I.: Beyond the Turing Test. Ray Kurzweil on Artificial Intelligence: Don't ...

Mostly through a centralized AI that is prone to regulation. One possible path of natural progression is for AI to transition towards decentralized blockchain technology. The great power of Bitcoin and blockchain technology is that it prompts a rethinking of nearly every existing problem. The issue of Friendly AI, that is, how we can effectively transition to a society with human-friendly artificial intelligence (AI), is exactly such a problem. There are at least five key features of blockchain technology that suggest a path to Friendly AI is not only possible, but could be likely. 1: Reputation. Reputation has proved an important mechanism in our current physical-world and digital-world transactions and is likely to persist in the future. 2: Resource Argument. 3: Consensus Models.

Humanity can defeat SkyNet with BOOKS, says IT think tank. A group of researchers working for National ICT Australia reckons computer science courses need to look at artificial intelligence from an ethical point of view – and the popularity of sci-fi among comp.sci students makes that a good place to start.
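The "reputation" feature listed above is left abstract in the piece; a minimal sketch of what a reputation mechanism could look like is an append-only ledger of rated interactions. All names and the scoring scheme here are illustrative assumptions, not part of any real blockchain API:

```python
from dataclasses import dataclass, field

@dataclass
class ReputationLedger:
    """Illustrative append-only record of rated interactions between agents."""
    ratings: dict = field(default_factory=dict)  # agent_id -> list of scores

    def record(self, agent_id: str, score: float) -> None:
        # Clamp each rating to [0, 1] so no single entry can dominate.
        clamped = min(max(score, 0.0), 1.0)
        self.ratings.setdefault(agent_id, []).append(clamped)

    def reputation(self, agent_id: str) -> float:
        # Simple mean; a real system would also weight by the rater's own
        # reputation and by recency, and anchor entries on a shared ledger.
        history = self.ratings.get(agent_id, [])
        return sum(history) / len(history) if history else 0.0

ledger = ReputationLedger()
ledger.record("agent-a", 0.9)
ledger.record("agent-a", 0.7)
print(ledger.reputation("agent-a"))  # mean of the two ratings
```

The point of the sketch is only that reputation is a persistent, publicly auditable aggregate of past behavior, which is what makes it a candidate constraint on AI agents transacting in a decentralized system.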

As the research team, which included NICTA's Nicholas Mattei, the University of Kentucky's Judy Goldsmith and Centre College's Emanuelle Burton, explains in its paper, ethical questions arise in a variety of AI environments: the “mechanics of the modern military” and the “slow creep of a mechanized workforce”, for example. “We have real, present ethics violations and challenges arising from current AI techniques and implementations, in the form of systematic decreases in privacy; increasing reliance on AI for our safety, and the ongoing job losses due to mechanization and automatic control of work processes,” the paper states.
AI Has Arrived, and That Really Worries the World's Brightest Minds.

On the first Sunday afternoon of 2015, Elon Musk took to the stage at a closed-door conference at a Puerto Rican resort to discuss an intelligence explosion.

This slightly scary theoretical term refers to an uncontrolled hyper-leap in the cognitive ability of AI that Musk and physicist Stephen Hawking worry could one day spell doom for the human race. That someone of Musk’s considerable public stature was addressing an AI ethics conference—long the domain of obscure academics—was remarkable. But the conference, with the optimistic title “The Future of AI: Opportunities and Challenges,” was an unprecedented meeting of the minds that brought academics like Oxford AI ethicist Nick Bostrom together with industry bigwigs like Skype founder Jaan Tallinn and Google AI expert Shane Legg.

Musk and Hawking fret over an AI apocalypse, but there are more immediate threats. Robots, AI deserve First Amendment protection.
The First Amendment protects the reporter who examines the campaign donations to each U.S. representative and then calculates the open-market value of their votes in Congress.

The Future Will be Boring -
There was once a story where the Devil argued that Heaven was far from a true paradise.

Having all your needs met, he argued, was a living death far worse than any torment he offered in Hell.
Prof. Hawking, the AIs will BE US -
Perhaps, as Prof. Stephen Hawking thinks, it may be difficult to “control” Artificial Intelligence (AI) in the long term. But perhaps we shouldn’t “control” the long-term development of AI, because that would be like preventing a child from becoming an adult, and that child is you. “Success in creating [Artificial Intelligence] AI would be the biggest event in human history,” say Stephen Hawking, Stuart Russell, Max Tegmark, and Frank Wilczek in an article published in The Independent.

“Unfortunately, it might also be the last, unless we learn how to avoid the risks.” “Looking further ahead, there are no fundamental limits to what can be achieved: there is no physical law precluding particles from being organised in ways that perform even more advanced computations than the arrangements of particles in human brains,” continue the scientists. So far, I totally agree. A real AI, one who thinks and feels like a person, perhaps much smarter than you and I, is a person.
Hugo De Garis on the Future of the Home Robot Industry -
US Navy funds morality lessons for robots. As we all learned from the 1986 film War Games, machines have the upperhand in warfare when it comes to making logical decisions (such as, the only winning move in nuclear war is not to play).

But now it seems the US Navy is not content with that party trick, as it is working on teaching artificial intelligence how to make moral and ethical decisions, too. A multidisciplinary team at Tufts and Brown Universities, along with Rensselaer Polytechnic Institute, has been funded by the Office of Naval Research to explore the challenges of providing autonomous robots with a sense of right and wrong -- and the consequences of their actions.
Stephen Hawking Says A.I. Could Be Our 'Worst Mistake In History'.

You have to understand Stephen Hawking's mind is literally trapped in a body that has betrayed him.

Sadly, the only thing he can do is think. The things he's been able to imagine and calculate using the power of his mind alone are mind-boggling. However, and this is a very important point: he is still human. He is as much influenced by human bias as the next person. We can easily fear those things which we do not understand, and fear makes us take stances or actions that often fall outside the bounds of rationality.
Can we build an artificial superintelligence that won't kill us?
At some point in our future, an artificial intelligence will emerge that's smarter, faster, and vastly more powerful than us. Once this happens, we'll no longer be in charge. But what will happen to humanity? And how can we prepare for this transition?
Artificial Intelligence Poses 'Extinction Risk' To Humanity Says Oxford University's Stuart Armstrong.

Artificial intelligence poses an "extinction risk" to human civilisation, an Oxford University professor has said. Almost everything about the development of genuine AI is uncertain, Stuart Armstrong at the Future of Humanity Institute said in an interview with The Next Web. That includes when we might develop it, how such a thing could come about and what it means for human society.