
In Avengers: Age of Ultron, the villain (Ultron) starts out as an artificial intelligence experiment gone wrong. Is this just Hollywood storytelling, or should we be worried about a future dictated by robot overlords? To help answer this question, we’ve teamed up with the awesome Rusty Ward over at Science Friction!

Professors and researchers often speak about artificial intelligence as a type of computing that will present many positive opportunities for smarter and more efficient machines and technologies. Many in the general public, however, often associate AI with the apocalyptic fictional images they see on TV and in film.

While some are overly fearful of AI and others overly excited about it, many in between understand both the potential opportunities it offers and the need for its deliberate and careful implementation.

You know what Skynet is, right? The insane global digital defense system that inadvertently started the apocalypse in the Terminator series? No, the NSA uses the name for something else. Which raises the question: why, NSA?

In case movie franchises like Terminator and The Matrix haven't already made it totally clear, we'll soon be calling robots "master". Here's what we've figured out so far. Welcome to WatchMojo's Top 5 Facts; the series where we reveal – you guessed it – five random facts about a fascinating topic. In today's instalment we’re counting down five things you probably didn't know about the impending Robopocalypse.

Elon Musk and Stephen Hawking fear a robot apocalypse. But a major physicist disagrees.

Published on Jun 24, 2015

All new technology is frightening, says physicist Lawrence Krauss. But there are many more reasons to welcome machine consciousness than to fear it.

Transcript - I see no obstacle to computers eventually becoming conscious in some sense. That’ll be a fascinating experience and as a physicist I’ll want to know if those computers do physics the same way humans do physics. And there’s no doubt that those machines will be able to evolve computationally potentially at a faster rate than humans. And in the long term the ultimate highest forms of consciousness on the planet may not be purely biological. But that’s not necessarily a bad thing. We always present computers as if they don’t have capabilities of empathy or emotion. But I would think that any intelligent machine would ultimately have experience. It’s a learning machine and ultimately it would learn from its experience like a biological conscious being. And therefore it’s hard for me to believe that it would not be able to have many of the characteristics that we now associate with being human.

Elon Musk, Stephen Hawking, and others who have expressed concern are friends of mine, and I understand their potential concerns, but I'm frankly not as concerned about AI, in the near term at the very least, as many of my friends and colleagues are. It's far less powerful than people imagine. I mean, you try to get a robot to fold laundry, and I've just been told you can't even get robots to fold laundry. Someone just wrote me that they were surprised when I cited an elevator as an old example of the fact that when you get in an elevator, it's a primitive form of a computer, and you're giving up control in trusting that it will take you where you want to go. Cars are the same thing. Machines are useful because they're tools that help us do what we want to do, and I think computational machines are good examples of that.

One has to be very careful in creating machines not to assume they're more capable than they are. That's true of cars. That's true of the vehicles we make. That's true of the weapons we create. That's true of the defensive mechanisms we create. And so to me the dangers of AI are mostly due to the fact that people may assume the devices they create are more capable than they are and don't need more control and monitoring.

I guess I find the opportunities to be far more exciting than the dangers. The unknown is always dangerous, but ultimately machines and computational machines are improving our lives in many ways. We of course have to realize that the rate at which machines are evolving in capability may far exceed the rate at which society is able to deal with them. The fact that teenagers aren't talking to each other but are always looking at their phones – not just teenagers – I was just in a restaurant here in New York this afternoon, and half the people were not talking to the people they were with but were staring at their phones. Well, that may not be a good thing for societal interaction, and people may have to come to terms with that. But I don't think people view their phones as a danger. They view their phones as a tool that in many ways allows them to do more effectively what they would otherwise do.