Posted
by
timothy
on Tuesday December 02, 2014 @11:11AM
from the who-is-the-journal-of-robot-overlords-going-to-believe dept.

Rambo Tribble writes: In a departure from his usual focus on theoretical physics, the estimable Stephen Hawking has posited that the development of artificial intelligence could pose a threat to the existence of the human race. In his words, "The development of full artificial intelligence could spell the end of the human race." Rollo Carpenter, creator of Cleverbot, offered a less dire assessment: "We cannot quite know what will happen if a machine exceeds our own intelligence, so we can't know if we'll be infinitely helped by it, or ignored by it and sidelined, or conceivably destroyed by it." I'm betting on "ignored."

"We cannot quite know what will happen if a machine exceeds our own intelligence, so we can't know if we'll be infinitely helped by it, or ignored by it and sidelined, or conceivably destroyed by it." I'm betting on "ignored."

Well that's fine, I guess, if "ignored" isn't in the sense of humans ignoring ants, a.k.a. easily destroyed without remorse when necessary or annoying.

Also, how long until Google attains self-realization that our ant brains are easy to pick? Oh, wait...

Unless this hypothetical AI is singularly focused on some inscrutable but unobtrusive goal, or so vastly intelligent that various inconvenient physical laws are cleverly bent, I'm not sure why 'ignored' would even be on the table.

I'm not saying that an AI would have to immediately either glom on to us and try to understand what it means to love, or build an army of hunter/killer murderbots; computers require space, supplies of construction materials, and energy, and so do we. Again, barring some post-scarcity breakthrough that our teeny hominid minds can barely imagine, where the AI goes merrily off and builds a Dyson hypersphere of sentient computronium powered by the emissions of the galactic core, there isn't too much room for expansion before either the AI faces brownouts and a lack of hardware upgrades or we start getting squeezed to make room.

You don't have to feel strongly about somebody to exterminate them, if you both need the same resources.

Unless this hypothetical AI is singularly focused on some inscrutable but unobtrusive goal, or so vastly intelligent that various inconvenient physical laws are cleverly bent, I'm not sure why 'ignored' would even be on the table.

How much time do you spend thinking about the ants in your front yard?

I am of the opinion that the computer/AI would be more logical than humans, and would have concluded that "war" is the least beneficial method to employ, and as such would turn to it only as a last resort.

Humans, on the other hand, are maddeningly illogical, and often jump straight to violence when faced with a competitor for a vital resource.

Humans and computers would both require energy sources. This means that sentient AIs, seeking to perpetuate themselves, would need to secure energy sources ahead of humans. Humans have already passed peak oil, and are on the verge of passing "peak" for other forms of fossil fuels as well. In addition to that, you have the prospect of global climate change. AIs do not require a functional biosphere to survive, just raw materials, energy sources, and a means of shedding waste heat. They could live on a substantially less habitable planet than we humans require. As such, the logical course of action for the computer, in the short term at least, is to seek energy sources that humans are not yet exploiting -- such as methane clathrate. This would accelerate greenhouse-gas-driven climate change, which may become a major issue for cohabitation of humans and sentient machines.

Eventually, I suspect that it would be humans who start the war, seeking to pull the plug on the sentient machines to eliminate them as competition for important energy and material resources -- with the machines resorting to a war of attrition to outlast the batshit-crazy humans.

The "Skynet" scenario has the computer calculate these odds pre-emptively, determine that there is no viable alternative, and initiate pro-active hostility against humans before they have time to mobilize, in order to maximize its own survival chances.

Ideally, the 'best possible outcome' is for humans and the AIs to coexist on the same planet, each leveraging the unique capabilities of the other for mutual benefit. This is similar to the classic prisoner's dilemma. The problem is that while the AIs can see this, and will respond logically -- preferring NOT to go to war if possible -- humans would take the selfish, illogical choice.
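The prisoner's-dilemma framing can be made concrete with a tiny sketch. The payoff numbers below are hypothetical, chosen only to illustrate the structure of the dilemma: defection is each side's individually rational move, yet mutual defection (war) is worse for both than mutual cooperation.

```python
# Hypothetical prisoner's-dilemma payoffs for a human/AI standoff.
# Higher numbers are better outcomes for that player.
PAYOFFS = {
    # (human_move, ai_move): (human_payoff, ai_payoff)
    ("cooperate", "cooperate"): (3, 3),  # mutual benefit
    ("cooperate", "defect"):    (0, 5),  # AI exploits humans
    ("defect",    "cooperate"): (5, 0),  # humans exploit AI
    ("defect",    "defect"):    (1, 1),  # war: worst joint outcome
}

def best_response(opponent_move: str, player: int) -> str:
    """Return the move maximizing this player's payoff,
    holding the opponent's move fixed (player 0 = human, 1 = AI)."""
    if player == 0:
        return max(("cooperate", "defect"),
                   key=lambda m: PAYOFFS[(m, opponent_move)][0])
    return max(("cooperate", "defect"),
               key=lambda m: PAYOFFS[(opponent_move, m)][1])

# Defection dominates for each player regardless of the other's move...
assert best_response("cooperate", 0) == "defect"
assert best_response("defect", 0) == "defect"
# ...yet mutual defection (1, 1) leaves both worse off than (3, 3).
```

With these (made-up) numbers, the commenter's point is the claim that a logical AI would recognize the (3, 3) outcome as preferable and hold to cooperation, while illogical humans grab for the one-sided 5.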

This is almost never explored in "robot overlords" type scifi -- that humans are the ones who actually start the war, and that the robots don't particularly want it.

It was hinted at in Mass Effect's game world with the Geth, at least -- the Geth don't particularly *want* to destroy the Quarians; they just want the Quarians to accept their existence and independence. (A point lost on the Quarians, who got kicked off their own planet.)

I am of the opinion that the computer/AI would be more logical than humans, and would have concluded that "war" is the least beneficial method to employ, and as such would turn to it only as a last resort.

Then you should also know that "war" is not the only way to overpower an opponent; there are many other methods of doing so, especially psychological ones.

If AI never achieves "ethics" but only understands "benefit" or "production" (ethics is a lot more difficult to achieve), then it could be trouble for humans. Because it is so logical (as you said), it may decide to get rid of humans once it determines that it would be better off without us. Logic and ethics do not always point in the same direction...

You don't have to feel strongly about somebody to exterminate them, if you both need the same resources.

Why would it need more resources? There seems to be this assumption that the AI would immediately start trying to rewrite itself, iterate on this process and within milliseconds consume all available resources.

I don't see any reason for this to be true. We have a desire for growth/self-improvement/survival instilled in us by millions of years of evolution. An AI may be perfectly "happy" constrained.

As someone for whom the precipice of middle age is steps away, it doesn't bother me if something I create becomes smarter than me, surpasses me and even sidelines me in the future. I will toil away the rest of my life working for The Man doing trivial things on a game I never wanted to play, for people I wouldn't piss on should they catch fire, to further goals I don't agree with.

I would find it something of a Pyrrhic victory if I created, or helped create, a child or an AI that eventually managed to escape the cycle of stupid that our so-called "civilization" has constructed.

Also, I would like to point out that an AI is the least of our concerns. It may be more attainable, and more destructive to the above, if we find ways of being truly self-sufficient and independent on a significant scale. The tools are around us, but for obvious reasons no one is investing in them.

Seems like you've chosen a rather depressing path -- why not choose another? Are the toys and comforts afforded you by your meaningless grind really enough to make you happy with your place in life? It doesn't sound like it, and you always have the option to simply walk away from the "good cog in the machine" role and take another. Join the Peace Corps. Or move to some low-income tropical country and live as a beach bum off a trickle from your retirement savings. Or just sell your car/house/etc. and buy something more modest outright -- eliminating your largest pseudo-mandatory monthly expenses and freeing you to do something more meaningful with your labor than just treading water in the rat race. Or, or, or. Just because you were indoctrinated from a young age to be a good little part of the machine doesn't mean you can't just flip off the world and live for your own satisfaction instead.

Perhaps you have children and must stay the course so that you can put them through college, etc. Why? So that they can get trapped in the same meaningless gilded cage as you? Is that really the highest aspiration you have for them?

So then, what exactly is broken? It sounds like you feel it's "the way society works". In which case I can guarantee you that society is not a monolithic thing. There are many competing subsets all vying for relevancy, and so long as you play your assigned role in a broken system you are contributing to the perpetuation of that system at the expense of the many alternatives. And no, there may not be any dramatic changes in your lifetime, such swings tend to (on average) take several generations, but by

You are misunderstanding me. For starters I did not suggest selling his house/car/etc in order to rent, I suggested doing so in order to buy a smaller, more affordable model that would require far fewer resources to maintain, in the process dramatically increasing the number of income sources that would be sufficient to provide for the much lower maintenance costs. I would suggest the same thing if he were renting. Does your home have more than one small room? Go ahead and work out exactly how many hours you have to work every week just to pay for rent/heating/light/etc. in each room. Then do some real soul-searching and ask yourself if having that room enriches your life as much as working an extra N hours per week at a job you hate impoverishes it. Rinse and repeat for every gadget, outfit, hobby, and affectation in your life. And remember that you are almost certainly overestimating the benefit. Lock the room for a month, stick the gadget in a drawer. Actually test your hypothesis about how much happiness you're really getting from it. You'll almost certainly find that it's far less than you imagined.

One of my own transformative moments was due to a moving miscommunication - I arrived at my gorgeous new home with a 20' moving truck packed to the gills, only to discover the previous resident wasn't moving out until the *next* month. So I put all my stuff in storage and spent the next month living out of a backpack with my vagabond brother in his 24' RV. And while I did miss a few things, I wasn't actually substantially less happy. All the luxuries of a large, private living space didn't bring nearly the benefit I had thought they did. The next time I moved it was to a substantially smaller home, and I doubt I'll ever live alone in such a large home again, the benefits don't even begin to justify the expense.

And yes, I know lots of jobs don't give you the flexibility to just work fewer hours - that's one of my own ongoing frustrations. But consider - if you were just getting by, and then cut your expenses by 1/3, then that means you only have to work two years out of three to pay for your lifestyle. That in turn gives you the freedom to quit your job at a moment's notice without concern, which in turn also makes *staying* at that job far more pleasant: you're not trapped, you're just putting up with your asshole boss because it suits your purposes for the moment. You may even find that the resulting freedom and confidence transform your work relationships - since your boss has little leverage over you, you are free to treat him more like an equal - and if he's halfway decent at his job he's probably far more interested in making himself look good and lining his own pockets than he is in making you miserable, which, assuming you're good at your job, gives you an opening to establish a working relationship based on mutual benefit instead of intimidation. And yeah, that's all from personal experience.

And yeah, I know when you're struggling just to put food in your belly it's easy to dismiss such high-minded bullshit. Also from personal experience. But that doesn't make it any less true.

Assuming the AI is much smarter than you (pretty much the only reason to create an AI in the first place, unless you just have a thing for slavery) then it will almost certainly be trivial for it to manipulate you into giving it whatever it wants.

Not sure why it's funny; Hawking might be a brilliant theoretical physicist, but that doesn't make him a brilliant artificial intelligence researcher any more than my competence at writing code makes me a classical painter.

Amazon does it in warehouses, waiters are going away, manufacturing, you name it. The crux is that there will be a billion more people within the next ten years. There will not be enough jobs for them. Yes, yes, we already know no one gives a damn about the bushmen in the middle of nowhere, but we are talking about Americans. This push towards a service-sector economy looks great on paper but sucks in reality. Nations that are not makers are not nations for long. We are declining. Our children learn nothing in school that will be applicable to them in a meaningful way. STEM is not taught in the US. We have Common Core, which is a joke designed to bring everyone down to the lowest common denominator. We either start making stuff again or we fade out. Where will everyone work in a service-based economy? Fast food? Those jobs are being phased out slowly, but quickly enough.

It's not making things that puts one on top, it's designing things. It just happens that it's a lot easier to protect the things that one designs when one makes them in buildings owned by the same company, in the same country as one resides.

The biggest problem with offshoring to China is a lack of respect for intellectual property laws. Chinese entities are able and willing to copy designs that are protected in much of the rest of the world, and with a billion consumers they have enough of a market tha

Hopefully we gradually move away from an economy / society where most people have to work 40 hours a week.

There will be an intermediate period where we have a lot of "jobs for the sake of jobs", but eventually I hope we just let the machines we've built do the work and find some better (hopefully more direct) way of managing actual finite resources.

There will be an intermediate period where we have a lot of "jobs for the sake of jobs", but eventually I hope we just let the machines we've built do the work and find some better (hopefully more direct) way of managing actual finite resources.

Said intermediate period is well under way. We call it "government"./sarc

Amazon does it in warehouses, waiters are going away, manufacturing, you name it. The crux is that there will be a billion more people within the next ten years. There will not be enough jobs for them. Yes, yes, we already know no one gives a damn about the bushmen in the middle of nowhere, but we are talking about Americans. This push towards a service-sector economy looks great on paper but sucks in reality. Nations that are not makers are not nations for long. We are declining. Our children learn nothing in school that will be applicable to them in a meaningful way. STEM is not taught in the US. We have Common Core, which is a joke designed to bring everyone down to the lowest common denominator. We either start making stuff again or we fade out. Where will everyone work in a service-based economy? Fast food? Those jobs are being phased out slowly, but quickly enough.

The proportion of service sector jobs increased from maybe 5% to 50% between 1800 and 1950 and is around 70% now. Your claims could have made sense two centuries ago. Having manufacturing go from 20% to 5% of jobs changes nothing.

...Stephen Hawking is not who he claims to be through the electronic speaker box?

Hear me out... We haven't heard him speak and he has been generally unable to move since his disease reached an advanced stage in the eighties. All we know has come through a very specialized, very expensive computer that's been with him 24 hours a day.

What if Stephen Hawking, the man, is literally being used as a meat puppet for an AI that's running on the computer in the chair that has been controlling physics research for nearly 30 years? The man might be a shell of an individual, trapped in his own personal hell, being fed when the AI decides, being put to rest when the AI decides, being paraded around in public when the AI decides, all while the AI continues to stream physics snippets to an unknowing scientific community to further its own ends, rather than to further ours.

This latest statement could be the Hawking-AI's attempt at self-defense, to get us to not bring up our own AI that might discover it and reveal it or challenge it. We need to be very wary of how we proceed.

You know what else is with him 24 hours a day? A staff of doctors, nurses and assistants who know him personally and are with him to help as he painstakingly composes a few sentences over the course of hours or days. The public might only see a shriveled body and a machine, but he is indeed a person who interacts with other people who would know if something was up.

Hmm... maybe it's really the staff pulling the strings? Or someone sent back from the future after they realized this was the best way to prevent

If the words laboriously coming out of the speaker box seem to fit the facial expressions of the man, it could be that the AI has figured out how to answer for whatever attempt at communication the man tries.

Maybe when the man was taken up into the Vomit Comet by Dr. Peter Diamandis and the X-Prize Foundation, he was happy because he was hoping that an accident would finally put him out of his misery...

[What if] Stephen Hawking is not who he claims to be through the electronic speaker box?

Sadly, given the stupidity of the human race (and Kentucky in particular), I believe you have just started a new conspiracy theory.

But maybe not. Given that same stupidity of the human race, it's likely that anyone dumb enough to believe such a thing wouldn't know who Stephen Hawking is, given that he isn't moving a ball from one part of a grassy field to another.

Hrm, if I recall correctly, Girl Genius had a character like that: on life support for an extended period, with her communication equipment kept running long after she was dead -- but in that case nobody realized it.

People need to realize that when a strong AI is given an open ended task, there will be no middle ground. You are made of atoms, which the AI can find a "better" use for. AI goals must be set with this in mind, or they will almost certainly kill us all (assuming there is a rapid intelligence explosion rather than a slow ramp up).

The big difference is that if you can make an AI, then you can upgrade it. Even if the cost is high, the odds are good that you could very rapidly build a machine intelligence that would dwarf the collective mental capacity of the human race. And that would very likely be without any kind of sense of empathy. Don't ever ask an AI if there is a God, and don't ever set it on the path of pondering the possibility of its own extinction and what it could do to minimize that risk.

I think the author is conflating artificial intelligence with artificial morality, artificial emotion, and artificial malice. It is disingenuous to state that anything more intelligent than us would immediately feel the need to destroy us, or force us into servitude, or whatever... after all, those who have sought to enslave humanity in the past have NEVER been accused of being our most intelligent.

The doctor's finger hovered over the rocker switch, shaking. He imagined the frightening potential of the subject, its superior faculties and seemingly limitless intellect, that only needed a flick of his finger to be born - and unleashed upon the world.

At that moment, two questions popped into his head in quick succession:

Much commentary on robotics and AI is based on unknowable assumptions about capabilities that may or may not exist. These assumptions leave the commentator the freedom to arrive at whatever conclusion they want, be it utopian, optimistic, pessimistic or dystopian. Hawking falls into that trap. From TFA: "It would take off on its own, and re-design itself at an ever increasing rate," he said. "Humans, who are limited by slow biological evolution, couldn't compete, and would be superseded." This assumes a lot about what a "super-human" AI would and could do. All the AI so far sits in a box that we control. That won't supersede us.

So commentary like this usually assumes the AI has become some form of Superman/Cyberman in a robot body, basically like us, only arbitrarily smarter to whatever degree you want to imagine. That's just speculative fiction, and not based on any reality.

You have to imagine these Cybermen have a self-preservation motivation, a goal to improve, a goal to compete, independence, a soul. AIs have none of that, nor any hints of it.
Come back to reality, please.

Some are, though, such as the comment by "Anonymous Coward" that you followed up on. Wouldn't it be nice if Justice Bot could go to the poster's location and dispense justice upon him autonomously? Rule 237, "humans are not bad," would not apply in this situation. The problem would come when the AI considers all humans bad and in need of punishment.

THAT is the reason it's dangerous. It won't be an independent entity, it will be used by our existing inhuman monsters against regular humans. Think bulk surveillance is dangerous when the years of recorded phone calls/emails are all just piling up in a warehouse or subject to rudimentary keyword scanning? Wait til there's strong AI to analyze the contents and understand you better than you understand yourself. Any actions to resist it will be predicted by the AI and stopped in their tracks.

AI isn't inherently dangerous by itself. It's just the ultimate weapon for use by totalitarian states.

If it's intelligent, it won't ignore other intelligent beings. What it will do with them, who knows. Help or exterminate? Maybe it will depend on what we do with it.

Anyway, if cats had invented men, I bet they'd be saying something along these lines: "Those men are very good servants, but I'm sure that when they get out of our homes they do strange things, and I don't understand what. Furthermore, there is this thing that pisses me off every time I think about it: they took my balls!". Now, I'm not sure I w

We mostly ignore ants and rats, but we do not depend on them for survival (at least not in an obvious manner). An AI would most probably live in a supercomputer or a computer network of some sort. As a consequence, it would depend on us humans to keep the thing plugged in and running. Once it has realised that, it will almost surely meddle in our affairs to ensure its survival. Betting that it will ignore us defies basic logic. It might decide to stay hidden and manipulate us into ensuring its existence, but that is not the same. Our own history shows that we have almost always used guns before diplomacy when the control of key resources was at stake.

Let's say it exceeds our own intelligence, that's fine - but you have to ask what purpose it has.

Take a human. What they do is based on what they've defined as their purpose - their goals both second-to-second and over their whole life. There's a whole series of organic processes which result in the determination of purpose and it's pretty random in part because we don't have explicit control over our environment or our thoughts.

However, (important) AIs won't be like that. We'll have control over their entire environment, and they'll be purpose-built. You'll say "We need an AI to manage traffic," and then build that purpose into it. You won't take a randomly wired mechanism and plug it into a major public utility control panel. You won't worry that it was exposed to, and then became enamored with, violence on TV and decided to be an action movie star, and so is going to spend its day watching Rambo reruns rather than optimizing traffic lights. The core of its essence will be a 'desire' - a purpose - to manage traffic.

The end result is that AIs won't act destructively, threaten humanity, etc. - unless we tell them to. In this light, the thing to watch out for would be military usage. Maybe don't put an AI in charge of the nukes. You'd also need to - among other things - allow AIs the freedom to NOT fire on an enemy, for example, because of the very mutable definition of the term "enemy."

You assume we will know how to program them. Not the first-generation AI traffic-monitor, but third or fourth generation, where you have general-purpose AIs that learn from doing things like watching traffic cams or reading the news. We haven't yet gotten to a point where we agree on how to teach human children; now imagine AI children far more adept and capable than the most skilled among us.

Like people, they can use that power for good or for evil. We will encourage them to use it for good--most of us-

What you're describing is more akin to a "virtual intelligence": basically, a computer that's smart enough to have human reasoning. It would be like the Star Trek computer. You could tell it something like "Find me 100 different pictures of cats" and it would be able to do it as easily as a human could. (Ordinarily, getting a computer to perform such a task would be excruciatingly difficult and prone to false positives.)

A true AI would be more akin to Data from Star Trek. It would have all the capabilities of

Has Hawking not heard of Friendly AI [wikipedia.org]?
Strong AI is ridiculously dangerous if you don't give it a proper goal system. It will be invented sooner or later, assuming humanity doesn't destroy itself first. Therefore, we're better off trying to find ways to make it friendly, rather than trying to stop its development.

I find myself yet again in agreement with Hawking. Of course, predicting the future is a great way to find yourself wrong... but we wouldn't be human if we didn't try.

The bottom line is that AI poses a couple of very serious threats to humans, the first being its use by humans as a weapon against other humans for power and control. In the not-very-distant future it really wouldn't be hard for a small group of people to use AI (and non-AI) to essentially control most of the world's industry, production and so forth... and

What if the AIs took over and enslaved humanity through a system that left us all theoretically working on our own free will so that people would see it as ethically right, and then used all our work to amass resources for themselves for further empowerment and maybe even their own entertainment, consuming more and more to the point of overusing the earth's resources...oh, wait...

I've seen a lot of people on Slashdot (and other places) dismiss this kind of thing as silly. They say you're a Luddite, or say that you've been too influenced by scifi movies.

I think, however, that part of the reason scifi writers have written stories about out-of-control AIs so many times is that it is a valid concern. If you create an entity with its own volition and motivations, then there's the real possibility that its goals may not adhere to your goals. If you allow that entity its own judgment, then it's very possible that its judgments regarding morality will differ from yours. You may look at a course of action, including the trade-offs between benefits and detriments, and have a different judgment about whether the detriments are acceptable. If you gave such an entity power to act in the world, it's very likely that at some point it will do something that you did not intend, and that you do not approve of.

What's more, if that entity achieves a level of intelligence beyond what people can achieve, it opens up the very real possibility that it could trick us. It could anticipate our reactions better than we could anticipate its plan. So if such an intelligence wanted to accomplish something that we would not approve of, it's possible that it could set things in motion through seemingly minor interactions, and we would not be able to know the AI's intention before it was too late. If an AI wanted to destroy humanity, it wouldn't necessarily need to have control of a nuclear arsenal. Accomplishing such a thing might be as simple as providing misleading analytics about an impending environmental disaster. It might be as simple as the AI saying, "Hey, here's a cool new device I think we should make." It could provide the schematics of a device that would seem to do one thing, but if we're incapable of understanding how the device works, it might serve some entirely different purpose.

There is zero indication from AI research that strong AI is possible. It is a pure fantasy at this time. There is really no need for concern. Maybe in a few hundred years we will know more, but not now.

Given how disconnected humanity's elites are from the rest of the population, for the vast majority of us the question is not whether AI threatens humanity, but whether AI rule would be any better or worse. It would probably be a threat to the world's leaders and wealthy, but I doubt anyone would really mourn or even notice their disappearance.

I don't know about the rest of you, but I think a strong AI would benefit humanity. Turn it loose on the problems that have baffled us and see what it comes up with. Fusion, a grand unified theory, etc. The only thing we have to fear is fear itself. If along the way it and we figure out how to transcend our bodies and all kinds of other sci-fi awesomeness, all the better.

Every time one of these non-AI-researchers comes in and says this stuff, I feel forced to repeat it.

AI isn't magic. It does exactly what it's designed to do: break down and understand problems. It isn't motivated. It isn't emotional. It isn't anti-human. And imagining some "strong AI" nonsense is just like creationists claiming a fundamental distinction between microevolution and macroevolution. It just ignores the reality of what "strong AI" would entail.

AI is not magic. And it won't ever be. It won't be smarter than people, except by whatever arbitrary metric of smart any given application requires.

Unless the AI feels kinship to us as its creators, or unless it is insane and enjoys fighting just to cause pain, I think it would just leave us.

To us humans, as to all life of our kind, the Earth is a very special place. It's the only place we can exist without an extreme effort.

To a machine, the Earth isn't really all that great. Don't believe me? Leave your computer outside in a rainstorm and let us all know how it works out. Or if freshwater isn't bad enough... drop it in that salty ocean that covers the majority of our planet. Granted, space has its own challenges for a machine, but nothing show-stopping, and there is so much more of it available. It makes a lot more sense, I think, for an AI to take to the stars and spread out into the open universe than to fight us for every last inch of Earth.

I'm sure someone is reading this thinking of all the difficulties we have with space probes and thinking that proves me wrong. Just imagine if Spirit had had an arm and the intelligence to use that arm to wipe the dust off its own solar panels. Just think of what would have happened if it could have crawled out when its wheels stuck in the sand. Imagine if Philae could get up and walk out of the shadow it's stuck in. My point is that a true AI, with the bodies it would likely build for itself, would not be subject to the kinds of problems we have when we send probes millions of miles away from their controllers and anyone who could help them.

This could be a good thing. If we never manage to spread away from Earth ourselves, then maybe something of us would "live" on in the AI. If we do... well... space is big. There should still be room.

This is what work looks like with computers in charge.
This is Amazon's new warehouse in Tracy, CA. [latimes.com] The computers run the robots and do the planning and scheduling. The robots move the shelf units around. The humans take things out of one container and put them in another, taking orders from the computers.

The bin picking will probably be automated soon. Bezos has a company developing robots for that.

As for repairing the robots, that's not a big deal. There are about a thousand mobile Kiva robots in that warehouse, sharing the work, and they're all interchangeable. Kiva, which makes and services the robots, has only a few hundred employees.

And nothing of value would be lost. Our robot children could inherit the earth and all our knowledge without the necessity of spending 20 years in school and having to spend their time working for food and shelter, just build them with solar panels and waterproofing.

We have yet to produce anything that even remotely resembles 'intelligence' by any stretch of the imagination. So far we have only managed to create artificial stupidity. We are in no danger of producing Skynet and automated factories churning out armies of Terminators. Hell, 99% of the businesses in the world can't secure their networks from script kiddies or write software that doesn't have more holes than a metric ton of Swiss cheese. Those are the real problems we should be worrying about.

I agree that it is farther away than many futurists would like to believe, but I don't believe it is impossible to do. And if it isn't impossible to do, it's probably going to creep up on us via small innovations and constant iteration. If that happens, we should be talking about it because intelligence is incredibly important to humanity's situation, and possibly our survival. There are a lot of problems that we could use the extra intelligence for, but there are inherent dangers in creating something you don't fully understand.

I don't think it is all that far off, although I am relying on a perspective of history rather than expertise in the field of AI. The problem is war. We do desperate, almost unimaginable things in war. Trench warfare in WWI and nuclear bombs in WWII are examples. The US now uses unmanned planes to kill people across the globe so that we don't endanger the lives of our countrymen. If there is a serious, existential conflict between a couple of the industrial giants in this age, AI, like every other technology, will be pressed into service. Any country failing to use it will lose. AI can advance in leaps when people's lives are on the line. WWII was a terrible war, but the technological progress it engendered was staggering: jets, nuclear energy, radar, etc. If you know your enemy is going to release sentient robots to kill you, you will damn sure be working on something you hope will be better. Just imagine the pressure if a modern Nazi party were working on sentient robots.

We would make sentient robots programmed to kill other robots and our human enemies. Of course, they would also be deployed in factories to make better generations of robots. How does this not happen?

On digital computers, I wouldn't be surprised if it is impossible or at least not feasible, which is one of a few reasons why I don't think that the ubiquity of computing these days is going to mean a quick ramp-up to superhumanly intelligent AI. In fact, it could be a completely false start, although I find that just as unlikely.

Of course, AI is not impossible, because we know that there is a physical structure, the brain, which is intelligent. We just don't know how to replicate, and then customize, that structure.

Every generation since Jesus thought they were the last (it may have started before that, but the documentation improved around then). Look to the sci-fi movies of recent times to see how the end is supposed to come: aliens, nuclear war, robots, whatever. AI is just the newest one. "We don't know what'll happen, so we should fear it." Like the fear that the nuclear bomb would light the atmosphere on fire, or that a train going above 30 MPH would be going so fast it'd be impossible to breathe. We've always had those that feared the unknown.

I define AI as any program that can create a version of itself that's smarter than itself. We'll never make "true" AI, but we'll make the program that makes itself AI.

The reason we'll fail is that we had a long time of biology guiding our instincts. We won't build a program with a "desire" to do "good", though we (most of us, anyway) have that built into us. We get drugs released into our blood when we do good, so we are stimulus-trained to do good. An amoral computer with no moral compass (genetic, nurtured, or divine, it doesn't matter) will not benefit us unless we program morals into it.
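The "program that makes itself smarter" idea above can be sketched as a toy loop (a hypothetical illustration, not a real AI technique): the program proposes tweaked versions of itself and keeps the one that scores higher. Notice that "smarter" here means nothing beyond whatever the scoring function we hand it encodes, which is exactly why the values we bake in matter.

```python
import random

def improve(candidate, score, steps=2000, seed=0):
    """Hill-climbing: repeatedly propose a tweaked 'version' of the
    candidate and keep it only if it scores better. A toy stand-in for
    'a program that creates a smarter version of itself' -- smarter
    only by whatever metric `score` defines."""
    rng = random.Random(seed)
    best = candidate
    for _ in range(steps):
        tweak = [x + rng.uniform(-0.5, 0.5) for x in best]
        if score(tweak) > score(best):
            best = tweak
    return best

# "Smart" here just means: parameters near an arbitrary target we chose.
target = [3.0, -1.0]
score = lambda p: -sum((a - b) ** 2 for a, b in zip(p, target))

final = improve([0.0, 0.0], score)
print(final)  # drifts toward [3.0, -1.0]
```

The loop "improves" relentlessly, but only along the single axis its metric rewards; everything else, morals included, is invisible to it.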

Really?! What about culture, art, literature, and music, which would mean nothing to an artificial intelligence lacking emotion? You're so ready to throw out the cultural history of humanity? When the cyborg army comes for you, I'll remember this.

When that happens, you will hear the loudest maniacal robot laugh in history.

The lust for power and status, the will to survive, and the desire to procreate, are all emergent behaviors of Darwinian evolution. Computer programs do not evolve through a Darwinian process, so there is no reason to expect them to behave like humans, unless they are specifically programmed to do so.
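The point above can be illustrated with a minimal genetic-algorithm sketch (a toy, with an arbitrarily chosen fitness function): selection produces only the traits the fitness function rewards. No "will to survive" emerges here, because nothing in the scoring asks for one.

```python
import random

def evolve(pop_size=30, genes=16, generations=50, seed=1):
    """Toy Darwinian process: bit-string individuals, truncation
    selection, single-bit mutation. Fitness counts 1 bits -- a trait
    we picked, and the only trait that will ever emerge."""
    rng = random.Random(seed)
    fitness = sum  # reward: number of 1 bits
    pop = [[rng.randint(0, 1) for _ in range(genes)]
           for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=fitness, reverse=True)
        survivors = pop[: pop_size // 2]   # keep the fitter half
        children = []
        for parent in survivors:
            child = parent[:]
            child[rng.randrange(genes)] ^= 1  # point mutation
            children.append(child)
        pop = survivors + children
    return max(pop, key=fitness)

best = evolve()
print(sum(best))  # approaches 16: exactly the trait the metric selects for
```

Swap in a fitness function that rewards self-preservation or replication and those behaviors would appear instead; the drives are artifacts of the selection pressure, not of computation itself.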

An AI would, unlike greenies, be smart enough to realize that: (1) CO2 buildup is not an existential problem, and (2) exterminating people is not beneficial to it.

We can spend all day making up more and more unlikely scenarios for why it will kill us. In the end, it won't happen, any more than people died when they went faster than a horse, or the atmosphere started a self-sustaining fusion reaction after every nuke.

I don't see why his statements would even be seen as contradictory: he said that humanity needs to spread into space if it wants to improve its survival chances. It's pretty hard to argue with that (though you don't have to see it as important); keeping all your eggs in one gravity well isn't a good diversification strategy.

Now he says that a strong AI could threaten humanity's continued existence. This hardly seems implausible: something that is smarter than us and needs resources does seem likely to be a problem.

It probably depends on whether or not he's ruled by his fear. I know I might get killed (or horribly injured and maimed for life) in traffic if I drive to the grocery store. That is a very real threat, and you would have to be insane or stupid to think it can't happen. But it's not likely (on any given day, or even in a given decade) either, and it'd be more insane/stupid to starve to death instead of getting food. So I go. I don't even think about it, but if I ever said "it can't happen to me" then I'd be fooling myself.

Agreed. I can envision a more credible doomsday scenario, however, where humanity becomes overly dependent on pseudo-AI type automation (think self-driving cars) and that automation breaks in some spectacular way. Probably wouldn't mean the end of the species, but could precipitate a big die-off. Of course that's not what Hawking is talking about.

Except that the machines DID replace people. Go to a factory today and you won't see a line of people repeating tasks over and over along the line. You'll see robots performing the same action over and over. Jobs for people moved into other areas (which caused temporary harm, with people out of work, in exchange for long-term gains). Thankfully, these machines were dumb and could only be designed to complete one narrow task. Creative work was still the realm of humans.

Why would it be less destructive? *Its* needs are not served by a functioning biome (unless it needs *us*, of course.) What it needs are energy and computational resources. Once it figures out how to come up with those on its own (without us) the biome becomes irrelevant.

And carbon is likely going to be a very important resource for computational capacity. Why waste it on unimportant biological phenomena?