Mar. 21, 2016
09:00 am JST

Sci-fi-ish perhaps, but if at some point the various Lethal Autonomous Weapons Systems become able to self-program and self-replicate, there will be something to worry about. For now, though, it's the various human masters of war and their weapons that need to be controlled.

Mar. 21, 2016
09:53 am JST

People fear what they don't understand. Why would a robot turn against humanity or try to take over the world? Despite all of the huge advances we've made in computing, we still haven't the slightest idea how to make a robot that desires anything, let alone power over humans. We don't even have a model for how such an intelligence could be built. All of the sci-fi scenarios where robots turn against us require the robots to one day just miraculously develop self-awareness; we don't understand our own awareness yet, so we can't conceive of how we would build it into others.

No, the thing to fear is not robots becoming too powerful for humans to control. The thing to fear is other humans usurping control of the robots and systems we have carelessly designed. As the Internet of Things gets larger and larger, more and more things are getting built without basic security systems in place. As robots become more and more complex and more and more dependent on components outsourced to a variety of companies that can't be depended on to keep their products secure, the far more likely scenario is a robot that gets hacked through some component the designer didn't even know had an unsecured back door. Or even potentially an intentionally unsecured back door. Evil still comes from people, not machines.

Mar. 21, 2016
02:22 pm JST

I don't believe it's the machinery that's the problem; it's all in the software. It's the software/AI that one should be worried about. Anyone can build a robot, even a mechanical exo-suit or something, and how they program it is what makes it a menace or a reliable machine.

Mar. 21, 2016
06:09 pm JST

Because without motivation (or rather, a self-generated will to take action) it's not going to get anywhere, is it. Without a will of its own, an AI will just do whatever it was programmed to do, including shut down when told to do so. Even that amazing Go-playing computer of Google's doesn't do anything until an operator tells it to activate.

Mar. 21, 2016
08:22 pm JST

Why would a robot need a motive? That's such a human thing, heheheh

Because without motivation (or rather, a self-generated will to take action) it's not going to get anywhere, is it.

It will just do what it's supposed to do while learning from it, even if that ends in taking over the world, and even if it could eventually lead to its own demise. Motive is unnecessary to reach that end. Humans may need motivation, but robots just keep at it in drone fashion.

Mar. 22, 2016
01:56 am JST

There are more immediate things to worry about. Robots will be controlled by humans for a long time, but it's humans that have taken us to the brink of destruction. Evil robots will be restricted to help lines, where they will ask you to repeat yourself over and over, secretly giggling to themselves.

Mar. 22, 2016
06:54 am JST

The threat isn't that robots will take over the world - they don't have the same desires as humans. The much more mundane but serious concern is that we will program some seemingly benign machine that will learn at an exponential pace. There are no theoretical limits to how far that process can go once started, and it would create something immensely powerful that thinks in a way that is completely incomprehensible to us. The threat lies in the simple dilemma this poses: unless we program that thing in a way that allows us to control it, it could (without any evil intent or human agency) simply destroy us all because of some loophole in its programming. And because it would evolve quickly into something so incomprehensible to us, getting that programming right is extremely difficult, since we can't fully understand how it would "think" once it surpassed a certain point. So we are stuck between those two conundrums.

Mar. 22, 2016
09:54 am JST

rainyday Mar. 22, 2016 - 06:54 am JST
The much more mundane but serious concern is that we will program some seemingly benign machine that will learn at an exponential pace.

The thing is, AIs don't truly learn, not the way humans do. Most AIs are algorithmic, meaning they essentially follow the same steps over and over again to complete a procedure. With clever coding they can "learn" to fine-tune the algorithm, but they can't step outside of it. Even AlphaGo, Google's amazing Go-playing AI, is said to "learn" Go strategy through a neural network, but in truth what it does is simply "learn" how to optimize its search through all the possible Go plays after whatever turn it's on. It's still nothing more than a brute-force processing approach to what humans do with far less computational effort.
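To make "optimizing the search" concrete, here's a toy sketch in Python. It's my own illustration, not anything from DeepMind - the real AlphaGo couples Monte Carlo tree search with learned policy and value networks, and the hand-written heuristic below is just a stand-in for the policy part. The point is that the "policy" only reorders which moves a brute-force search looks at first; the search itself is still an exhaustive, mechanical procedure:

# Toy heuristic-guided game-tree search on tic-tac-toe.
# The hand-written policy_score() stands in for a learned policy:
# it steers the brute-force negamax search toward promising moves first.

def legal_moves(board):
    # Indices of empty squares on a 3x3 board stored as a flat list.
    return [i for i, cell in enumerate(board) if cell == " "]

def winner(board, player):
    # True if `player` holds any of the eight winning lines.
    lines = [(0,1,2),(3,4,5),(6,7,8),(0,3,6),(1,4,7),(2,5,8),(0,4,8),(2,4,6)]
    return any(all(board[i] == player for i in line) for line in lines)

def policy_score(move):
    # Cheap "policy": prefer the centre, then the corners.
    return {4: 2, 0: 1, 2: 1, 6: 1, 8: 1}.get(move, 0)

def search(board, player, opponent):
    # Negamax: returns (value for `player`, best move).
    if winner(board, opponent):
        return -1, None          # the opponent's last move already won
    moves = legal_moves(board)
    if not moves:
        return 0, None           # draw
    best_value, best_move = -2, None
    # Move ordering is where the "policy" guides the search.
    for move in sorted(moves, key=policy_score, reverse=True):
        board[move] = player
        value, _ = search(board, opponent, player)
        board[move] = " "
        if -value > best_value:
            best_value, best_move = -value, move
    return best_value, best_move

value, move = search(list(" " * 9), "X", "O")
print("best opening move for X: square %d (value %d)" % (move, value))

With perfect play tic-tac-toe is a draw, so this prints value 0 and picks the centre square, precisely because the heuristic told the search to try it first. Scale the game tree up from 9 squares to a 19x19 Go board and you can see both why the quality of that move-ordering "policy" matters so much, and why the whole thing is still search rather than human-style learning.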

Human neurology and AI processing are fundamentally different. Programmers like to say their new AIs "learn", but the more I study how humans learn the more convinced I am that those claims are exaggerations.

There are no theoretical limits to how far that process can go once started, and it would create something immensely powerful that thinks in a way that is completely incomprehensible to us.

Computers already "think" in a way that is completely incomprehensible to us, insofar as they can be said to think at all (they can't, truly). Just look at everyone ascribing human-like intelligence to them. That's our jam: we're hard-wired to look for other humans and to work out how they feel, to such a degree that we can't stop. So we look at machines that appear vaguely to function like we do, and we ascribe to them human abilities like the ability to make independent decisions and act in defiance of their programming. But for the state of AI now and in the foreseeable future, that quite simply cannot happen.

Mar. 22, 2016
11:39 am JST

So we look at machines that appear vaguely to function like we do, and we ascribe to them human abilities like the ability to make independent decisions and act in defiance of their programming. But for the state of AI now and in the foreseeable future, that quite simply cannot happen.

True, I was using the term "learn" in a more colloquial sense, rather than to describe how we as humans learn.

My understanding (and I'm not an expert) is that the current form of AI isn't a threat, since it involves basically what you describe: optimizing results for a single task through massive processing capabilities. What people like Elon Musk and others are more worried about is if, in the future, AI development goes beyond that model and we figure out how to develop AI with general intelligence (i.e., not limited to performing specific tasks, but actually able to do, and learn to do, almost anything without the need for additional programming).

The risk there being that we would create an intelligence so far beyond our own that we would be unable to control it through pre-emptive programming, because we wouldn't be able to anticipate what would go wrong until it actually did (by which time it would be too late to correct it).

The AI wouldn't act with malice or contrary to its programming. But the unforeseeable outcomes of something powerful enough to consume and analyze the entirety of human knowledge in seconds, and then decide on a course of action consistent with its programming, include things like it figuring out how to take over many of our systems and using them to further its programming objectives in unintended ways, with disastrous side effects.

Of course we wouldn't deliberately unleash something like that, but the risk of us accidentally doing so seems to be the one people are most concerned about.

Mar. 22, 2016
09:29 pm JST

Magnet Mar. 22, 2016 - 08:23 pm JST
You know humanity is taking a turn for the worse when great minds like Elon Musk and Stephen Hawking are in the minority on this topic.

Let's not commit the fallacy of assuming that just because a person is well-regarded in their personal fields of expertise, that everything they say about every other topic must also be right, shall we? Science doesn't need any cults of personality.

Mar. 24, 2016
12:23 am JST

Yeah, I think so, because the ability to rewrite their own software is what makes them dangerous: it makes robots autonomous. Hardware technology also supports it; today people are talking about DNI (Direct Neural Interface), and once that's possible, robot software technology will reach even further. Then we won't be able to control them. But every scientist is discussing this. I think we must not put self-rewriting software on robots, or else our nightmare will come true.

Mar. 24, 2016
02:01 am JST

My own thought is that I am less scared of an artificial intelligence taking over on its own, for whatever reason, and more scared of a machine taking over because somebody screwed up. It's more plausible because it doesn't require the machine to be intelligent, and it's more likely because there are any number of ways a machine could be affected like this. Most of those ways would trace back to the human responsible for building or operating it in the first place.