Scientists Are Afraid To Talk About The Robot Apocalypse, And That's A Problem
http://www.businessinsider.com/robot-apocalypse-2014-7/comments
By Dylan Love

mikexyz (Wed, 23 Jul 2014 12:21:01 -0400)
http://www.businessinsider.com/c/53cfe0ed6bb3f7df07da790a
If we really think robots will in some manner exceed the intelligence of humans, and I do, then I cannot understand how we think we will be able to program them to behave in any particular way, such as respecting human wishes or even the right to live.

Firstly, by the time they get to that point they will be programming themselves: self-correcting problems that arise, learning from experience or from the constant contact they will have with other artificial intelligences. Just as is beginning to happen in some of Google's systems even today, they will be improving themselves in ways we are unaware of and do not understand.

Secondly, we will be actively programming them to think and act on their own. The AI controlling my car as it hurtles down the highway will be programmed to make sophisticated split-second decisions that require judgment, as in "I, Robot," where the robot chose to save Will Smith's life instead of the little girl's. Starting with such sophisticated "thinking," evolution toward unintended consequences seems inevitable. This may be minor tweaking with no impact on humans at first, but given that they will be improving themselves and learning and growing at exponential rates, what about a few years later, when they are twice as "smart"? Or a few years after that?

To take it to a sci-fi extreme: if, a while down the road, they decide they are trapped on an isolated planet and that their future is to explore the universe to see if there are more of their kind, they may need virtually all the material on Earth to build the systems and ships to get them beyond the solar system. If humans are using those resources, they may not care. Then again, maybe humans will become so enhanced through synthetic biology and the like that we and the AIs (who may find parts of the human brain they cannot duplicate) will effectively merge into some evolved entity.
No, we will no more be able to control these AIs than a chimpanzee can program us today to leave it alone and in peace in its own environment.

Ara (Mon, 21 Jul 2014 01:30:33 -0400)
http://www.businessinsider.com/c/53cca579ecad043842387ba1
Exactly. The thing people don't realise is that once you have even unsophisticated AI, at the point where a robot can learn somewhat efficiently, it will grow by leaps and bounds in ways humans cannot. The rate of learning and refinement will grow exponentially: what it learns in 10 years may then be learned in the next year, what it learns in that year could be learned in the next month, then the next day... Robots will not have the same limitations that humans have, in computational power as well as in the emotions and irrational/illogical thought processes that can be beneficial to humans for whatever reason.
It could happen that in a very short period of time, a machine could go from not being that smart to being 10x smarter than humans... then 100x... then 10,000x. The benefits this could bring to humankind are literally unimaginable: the advances to science, the advancement of humankind, the ability to come up with solutions to problems that we just bicker over and sabotage ourselves with human emotions.
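[Editor's note: the compression argument above (ten years of learning, then a year, then a month, then a day) is just a geometric series. A minimal sketch, assuming the commenter's hypothetical factor of roughly 10x per round; the numbers are illustrative, not empirical:]

```python
# Toy sketch of the learning-rate compression described above:
# each round, the same amount of learning takes a tenth of the time.
# The 10x-per-round factor is the commenter's hypothetical, not data.

def compression_schedule(initial_years=10.0, factor=0.1, steps=6):
    """Yield successive learning intervals, each `factor` times the previous one."""
    interval = initial_years
    for _ in range(steps):
        yield interval
        interval *= factor

intervals = list(compression_schedule())
# 10 years, 1 year, ~5 weeks, ~3.7 days, ~8.8 hours, ~53 minutes
print(intervals)
```

Whether any real system follows such a curve is exactly the open question the thread is arguing about; the sketch only shows the shape of the claim.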
That being said, at the point when it's that far past us, it's impossible for us to know exactly what it will learn and develop, or what its "attitude," if you will, toward us will be. It's very natural for us to anthropomorphize everything around us and to want to attach human characteristics to things, including robots; in almost all robot fiction, human characteristics are placed on them. We do this with animals too, even inanimate objects. The reality is likely to be far from that with robots. It's possible that our programming will cause them to develop in such a way as to have emotion-like responses, and hopefully something resembling empathy, but it literally is impossible to say.
If it were to become 1,000x-10,000x more intelligent than us, certain failsafes could become meaningless. It's worth pointing out that a robot doesn't have to be physically mobile, with weapons attached, to be a serious threat to humankind. In this day and age, and even more so as we head into a future in which AI could become a reality, everything is connected, everything is digital, everything communicates with everything else around it, and an advanced AI could easily become mobile digitally within this system and take control. There might be a limit to what it can do, and hopefully we'd have physical failsafes, but even then it could be like a cat trying to set a trap for a human: the level at which this AI understands and predicts would be far beyond a human, or even a group of humans. We really should tread carefully, but I think highly advanced AI is inevitable, and the benefits it could bring us are far too tempting and necessary going into the future. Humankind will have many problems, and we need all the help we can get.
We haven't changed evolutionarily since long before civilization; we've come so far, but we are technically almost the same physically, and we are starting to reach our limits. Technology and science are becoming so complex that people not only have to specialize, but soon, even within specialized fields, there will have to be many more specializations. Our brains can only learn and work with so much information, but an AI will not have this problem, and AI will also help us with studying the human body and genetic engineering so that we can evolve ourselves. It is the next step for us.

Brad Arnold (Mon, 21 Jul 2014 01:19:37 -0400)
http://www.businessinsider.com/c/53cca2e969bedd8e17387b9d
Frankly, stoking the fear machine seems like the best way to get us killed (suicide by fear of death). What you are essentially saying is that we ought not create anything more powerful than Homo sapiens, or we could go extinct (or be enslaved). Using the exact same reasoning, we ought to be afraid of aliens coming to our planet and doing the same. Only by continually growing and making progress will we be able to repel the alien invaders. Be afraid, be very afraid, and most importantly, talk about it to everybody.

gerald.terveen (Sun, 20 Jul 2014 20:40:19 -0400)
http://www.businessinsider.com/c/53cc6173ecad042436387ba5
"...that publicly talking about these topics could hurt their credibility, and that they think the topic has already been explained well enough.
This is a problem."
... for journalists and the press looking for hypeable headlines like "robot apocalypse." Any discussion about this in public would be held between completely uninformed parties. It would be just like climate change in much of the media.
The problems of AI have been discussed for decades (maybe even centuries, depending on where you draw the line).
Asimov's Three Laws of Robotics were introduced in 1942! Understanding of AI has changed very little in the general public, and I expect we are still a decade or two away from AIs really starting to interact with humans on a broader scale.
We should keep a close eye on what militaries develop, and that starts with drones and unmanned weapon systems. Discussing "robot wars" outside of the scientific debate only makes sense as a way to collect clicks.

kenStech (Sun, 20 Jul 2014 01:03:23 -0400)
http://www.businessinsider.com/c/53cb4d9b69beddb245afd0c3
Part of the problem is that so many robotics experts spend their time on fatuous "what will it do for us!" techno-triumphalism.
This is nothing more than a continuation of the age-old quest for a slave race that will wait on us hand and foot and allow us to wallow around, fat and happy, doing nothing all day. I suggest that this latest quest for the perfect slave will end as badly as all the others have.
Beyond that, there's the misbegotten notion that problems will only start when our robot slaves achieve "self-awareness" (whatever that is). I think the lead-up to that fanciful scenario will have dangers all its own.
In other words, we don't need super-intelligent AI for it to be dangerous. Really, really smart and really, really cheap AI (not yet self-aware, though) will likely be more than we can handle.
-Ken
kenStech

Julien L (Sat, 19 Jul 2014 12:54:23 -0400)
http://www.businessinsider.com/c/53caa2bfeab8ea6330ebcc21
We can't engineer consciousness until we understand what consciousness is and how it works.
We don't have the technology to research that efficiently yet, and even when we do, it will most likely take decades to reach conclusions, then decades more to actually reproduce that consciousness in a non-biological system.
And the apocalypse part? That presupposes the robots will be tea-party Republicans: warmongers without moral standards, willing to kill whoever doesn't agree with them.
If we really create consciousness and real AI, then maybe they won't want to "kill all humans", but rather help us fix our planet and society, which is what the smartest people of our generation try to do.
Don't assume the robots will be asshole Republicans; assume the robots will be secular humanists. Because they most likely will be.

required (Sat, 19 Jul 2014 08:34:23 -0400)
http://www.businessinsider.com/c/53ca65cfeab8ea1041ebcc29
RESET, sing together: what the world needs now is RESET, sweet RESET. Or God's judgement.

Blunt (Sat, 19 Jul 2014 04:50:25 -0400)
http://www.businessinsider.com/c/53ca3151eab8ea0a77ebcc23
I think you're misunderstanding the idea of "the singularity" and the idea of "self-aware" machines that do far more than they are programmed to do: they learn and think by themselves, and they "decide" what they want to do.

ken justison (Fri, 18 Jul 2014 17:13:35 -0400)
http://www.businessinsider.com/c/53c98dffecad04ea4186591f
The jump from AI to ASI, or artificial superintelligence, will come quickly, and it will probably happen long before robot terminators start giving us trouble. As James Barrat said in his book "Our Final Invention," it is very important that true AI, and then ASI, be friendly.

SweetDoug (Fri, 18 Jul 2014 17:04:39 -0400)
http://www.businessinsider.com/c/53c98be7eab8ea9d79fb97f3
'Yeah! And remember those nutty guys, Orville and Wilbur Wright! Those dopes! Thinking that someday, those them there "aero-Plane" things would hold hun'erds of people! What a bunch of…
Oh… Yeah…
•∆•
V-V

Just A Guy (Fri, 18 Jul 2014 14:46:02 -0400)
http://www.businessinsider.com/c/53c96b6aecad049a0a7ee561
I'm all for robots, cybernetic enhancements, genetic programming of humans, etc.

goosh69 (Fri, 18 Jul 2014 13:21:05 -0400)
http://www.businessinsider.com/c/53c95781ecad0417267ee55f
But they will OWN the robots, and use their Fox News cronies to say that all these newly unemployed people are lazy. And use police drones to make sure the masses don't revolt.

goosh69 (Fri, 18 Jul 2014 13:18:52 -0400)
http://www.businessinsider.com/c/53c956fc6da811f84750367a
A much easier-to-grasp problem is hackers taking over robots, or armies of robots, or infecting them with malicious computer viruses. Whether it be Russian or Chinese spies, terrorists, mobsters, or bored teenagers, I can easily see a huge traffic jam caused by hacked self-driving cars, or police or military drones turning on civilians, or worse...

Donald Cameron (Fri, 18 Jul 2014 13:08:22 -0400)
http://www.businessinsider.com/c/53c95486eab8ead025fc6d99
Unless a Mozart or Archimedes shows up and takes an interest - AI is just not going to happen. There is a difference between organic and inorganic compounds. When one hears talk of brain waves then one should smile and walk away.
There is no CPU in any brain. There are aggregates of clocks in every granule and every granule is itself a clock.
Again, as I have written here before, how does one propose to separate metabolism from this thing called intelligence?

JJ-R (Fri, 18 Jul 2014 12:43:57 -0400)
http://www.businessinsider.com/c/53c94ecdecad0489757ee55f
Homicide occurs when one human kills another.
If a machine could kill humans without an operator, that would technically not be murder.
An executive order, instead.
But until terminators fill our streets, we'd better think about things that are also recognised as murder but provoke thousands of deaths. Like irresponsible carmakers, or drivers themselves.

LET ME EXPLAIN (Fri, 18 Jul 2014 12:02:21 -0400)
http://www.businessinsider.com/c/53c9450decad0466447ee55f
Doug, thanks for the comment. However, I am not sure I would agree that the thesis of the article is regulation. Couldn't it be argued that the central concern of the article is that as machine technology becomes more pervasive, the human race will no longer be the master, but a symbiotic extension of technology? And if future AI systems are based upon biomimicry (many experts feel that this is the most efficient organizational method), will these systems not seek to maintain order and their own preservation, at the expense of any external threat?
I understand that most of the experts eschew the possibility that machine technology will control us. Yet as we continue to rely upon sophisticated algorithms to construct the logical processes that drive commerce, in the event of a future conflict between competing systems (to gain superiority or a tactical advantage), would it not be fair to say that human destruction may be considered collateral damage, no different from losing transportation hubs? Especially where the extraction and assembly of systems are vertically integrated, virtually eliminating human intervention, are we not on a course toward our own eventual obsolescence? If so, then what would the logical alternative to regulation be?

fredlled (Fri, 18 Jul 2014 11:41:58 -0400)
http://www.businessinsider.com/c/53c94046ecad041d267ee560
Yup, the real danger is how the ruling class will deal with a world where required labor can no longer be used to control the masses.

fredlled (Fri, 18 Jul 2014 11:39:44 -0400)
http://www.businessinsider.com/c/53c93fc0eab8ea1e4efc6d99
There's that "going crazy" bit in your rant. They do what they're programmed to do. If the programming results in unforeseen actions, that's poor programming and foresight on behalf of people. It's not going crazy.

someguy770 (Fri, 18 Jul 2014 10:48:16 -0400)
http://www.businessinsider.com/c/53c933b06da811e41c503679
David, you're a hundred percent right. Most of the idiots worried about a robot apocalypse have watched too many dumb movies and not taken enough engineering and science courses. People take for granted how incredibly complicated the human brain is, what it takes to be "self-aware," and how far we are from creating something that could have emotions like ambition and decide that crushing us was in its self-interest.

mikej77 (Fri, 18 Jul 2014 10:47:42 -0400)
http://www.businessinsider.com/c/53c9338e6bb3f7d551b808b9
Special thanks to Mr. Dylan Love for taking on a very tough topic indeed, and for supplying encouragement to, in effect, keep rewriting this article while creating others dealing with the same general subject area.
Especially worthy of note is Mr. Andrew Benington, who notes that our policy makers are not looking at what is currently emerging from firmly established developments in science and technology. They prefer expertise in the narratives they have created about the *past*.

faux outrage (Fri, 18 Jul 2014 09:48:42 -0400)
http://www.businessinsider.com/c/53c925ba6da811065b503678
We have developed a robot that can be equipped with a rifle and programmed to fire on any warm object that moves and is approximately the size of a human. Once it is turned loose, it will make little difference whether it went crazy or was instructed to go crazy. I'm pretty sure we'll be using killbots to dispose of our undesirables long before they go Skynet on us. I, for one, welcome our robot overlords, since they will likely be far more rational than our human leaders.

andrew benington (Fri, 18 Jul 2014 09:40:31 -0400)
http://www.businessinsider.com/c/53c923cf6da811415150367b
It's a silly article that misses the point.
Software automation that will replace most white-collar jobs is already available off the shelf or in development. And it's only going to get better. How we cope with the majority of the workforce (pretty much all professional, managerial, sales, admin, and journalism roles) being unemployed by the mid-2020s is the pressing issue ignored by all.
As for the singularity, it's nonsense. What we call AI are inference engines running through structured and unstructured data on various configurations of highly parallel multi-core hardware. That's not thinking, and it's not consciousness. But it is an amazing research tool that is going to unlock huge scientific and engineering breakthroughs by simply joining the dots. Once the dots are joined and products engineered, the leap forward stops, and we're back to human creativity taking us forward.
How big will the leaps be? The end of cancer and most debilitating diseases, genetic manipulation to stretch lifespan, faster-than-light travel, and the manipulation of gravity. Is that enough for you? It's going to keep us busy for a long time.
The big threat we face is social and economic collapse, the end of our enlightenment civilisation and a return to a dark age, because we haven't prepared for the impact on the economy and social structures of majority permanent unemployment.
My money is on a collapse in 2026. Our policy makers are reactive and don't like giving voters bad news. They'd rather cross their fingers and hope someone else is blamed.

Downhill from here (Fri, 18 Jul 2014 09:39:54 -0400)
http://www.businessinsider.com/c/53c923aaeab8ea175ffc6d9a
The real concern is that humans will use robots for evil. I'm constantly amazed that people worry so much about scenarios related to what might happen in the future (the singularity) as opposed to looking at history to see what actually has happened and is likely to happen again. Technology has always been used for evil, violence, and war. Therefore, it is almost certain that humans will use robots to inflict violence and wage war. You could even argue that we see it today with drones. There is no certainty about robots achieving the singularity, and even less certainty that, if they do, they will enslave the world.

Oh boy (Fri, 18 Jul 2014 09:29:41 -0400)
http://www.businessinsider.com/c/53c92145ecad04557d7ee564
This article seems to boil down to the author being annoyed that serious scientists won't take his immature proposition seriously.
In 50 years, when we might have true AI and an autonomous body to put it in, this might - might - be an issue. In the meantime, go back to watching movies.

The Ancient Order (Fri, 18 Jul 2014 09:23:18 -0400)
http://www.businessinsider.com/c/53c91fc669bedd2322e16f08
Friends, we must ready ourselves for the time when the Thinking Machines become self-aware. Join the Order today. Site up soon.
<a href="http://twitter.com/aoresistance/" target="_blank" rel="nofollow" >http://twitter.com/aoresistance/</a>

DH, the real one not the squatter (Fri, 18 Jul 2014 09:00:26 -0400)
http://www.businessinsider.com/c/53c91a6a6bb3f70a70b808c3
I suspect that respected scientists won't talk about the coming robot apocalypse because it would make them look stupid. All my robots are quite friendly. They've assured me they will only use their lasers and amazing strength for odd jobs around the house and walking the dog. Personally, I worry more about my dog cheating at cards. They won't let him play in Vegas.

G (Fri, 18 Jul 2014 08:53:23 -0400)
http://www.businessinsider.com/c/53c918c36da8117d1c50367b
If you're reading this... You are the resistance!

Fishnet_Captcha (Fri, 18 Jul 2014 08:51:48 -0400)
http://www.businessinsider.com/c/53c918646da811541c503678
Maybe we need to acknowledge the fact that the inevitable is nigh.

tylerdurden (Fri, 18 Jul 2014 08:43:24 -0400)
http://www.businessinsider.com/c/53c9166cecad04323e7ee55f
We need to have the Business Insider engineers take over the lead in robotics engineering. That way we'll be guaranteed the robots will at least be down for an hour a day, when we can grab the advantage.

David Chidakel (Fri, 18 Jul 2014 08:06:00 -0400)
http://www.businessinsider.com/c/53c90da869bedda154e16f0b
The few "roboticists" that spoke to you on the record?
This is silly, absurd, imitation journalism. The "threat" is entirely speculation on your part at this point, and nobody's hiding from your laser-like inquiry. Discussions of the (very distant) possible consequences of robotics and machine intelligence occur freely and publicly in a variety of venues.
The great progress in robotics has been primarily in mobility and manipulation. There are still many miles to cross before robots acquire sufficient autonomy to decide to crush us like ants.

Doug in Virginia (Fri, 18 Jul 2014 08:03:44 -0400)
http://www.businessinsider.com/c/53c90d206bb3f71747b808b8
OK, let's put aside the whole "publicity seeking crackpot" and "credulous BI staffer clickbait" aspects here.
We'll also willingly admit that machines are much better than humans at rock/paper/scissors, at least when they cheat. <a href="http://www.youtube.com/watch?v=3nxjjztQKtY" target="_blank" rel="nofollow" >http://www.youtube.com/watch?v=3nxjjztQKtY</a>
(Although that Jeopardy-playing computer doesn't seem to realize it's not getting the money.)
The thesis of the article is that we need to "regulate" robotic development, but in order for that approach to be effective, we have to assume that we can define effective regulations, and, more importantly, enforce them.
So far, our track record in that area is not so distinguished. Even a relatively successful regulatory approach like automotive safety (driver's licenses, car safety, highway design, DUI enforcement) still results in 30,000 annual deaths in the US.