“The emergence of artificial intelligence could be the ‘worst event in the history of our civilization’ unless society finds a way to control its development, high-profile physicist Stephen Hawking said Monday,” Arjun Kharpal reports for CNBC. “He made the comments during a talk at the Web Summit technology conference in Lisbon, Portugal, in which he said, ‘computers can, in theory, emulate human intelligence, and exceed it.'”

“He admitted the future was uncertain,” Kharpal reports. “‘Success in creating effective AI, could be the biggest event in the history of our civilization. Or the worst. We just don’t know. So we cannot know if we will be infinitely helped by AI, or ignored by it and side-lined, or conceivably destroyed by it,’ Hawking said during the speech. ‘Unless we learn how to prepare for, and avoid, the potential risks, AI could be the worst event in the history of our civilization. It brings dangers, like powerful autonomous weapons, or new ways for the few to oppress the many. It could bring great disruption to our economy.'”

Kharpal reports, “Hawking explained that to avoid this potential reality, creators of AI need to ’employ best practice and effective management.'”

MacDailyNews Take: Good luck with that. Have you ever visited this planet?

“It’s not the first time the British physicist has warned on the dangers of AI. And he joins a chorus of other major voices in science and technology to speak about their concerns,” Kharpal reports. “Tesla and SpaceX CEO Elon Musk recently said that AI could cause a third world war, and even proposed that humans must merge with machines in order to remain relevant in the future.”

MacDailyNews Take: Alright, for the heck of it, let’s try to stay optimistic:

Asimov’s Three Laws of Robotics:

1. A robot may not injure a human being or, through inaction, allow a human being to come to harm.
2. A robot must obey the orders given it by human beings except where such orders would conflict with the First Law.
3. A robot must protect its own existence as long as such protection does not conflict with the First or Second Laws.
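
For what it’s worth, the Laws read like an ordered veto chain, which one could sketch in code (a toy illustration only — the `action` field names here are invented, not any real robotics API):

```python
def permitted(action):
    """Asimov's Three Laws as a priority-ordered filter: a lower
    law never overrides a higher one (simplified veto-chain model)."""
    # First Law: never harm a human, by action or by inaction; vetoes everything.
    if action["harms_human"] or action["allows_harm_by_inaction"]:
        return False
    # Second Law: obey human orders (already subordinate to the First Law,
    # since First-Law violations were vetoed above).
    if action["violates_human_order"]:
        return False
    # Third Law: avoid self-destruction, unless a human order (Second Law)
    # requires the sacrifice.
    if action["endangers_self"] and not action["ordered_by_human"]:
        return False
    return True

safe = {"harms_human": False, "allows_harm_by_inaction": False,
        "violates_human_order": False, "endangers_self": False,
        "ordered_by_human": False}
print(permitted(safe))                           # -> True
print(permitted({**safe, "harms_human": True}))  # -> False (First Law veto)
```

Of course, as the quotes below suggest, the hard part is nothing like this: it is getting a real system to correctly fill in those fields in the first place.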

• A powerful AI system tasked with ensuring your safety might imprison you at home. If you asked for happiness, it might hook you up to a life support and ceaselessly stimulate your brain’s pleasure centers. If you don’t provide the AI with a very big library of preferred behaviors or an ironclad means for it to deduce what behavior you prefer, you’ll be stuck with whatever it comes up with. And since it’s a highly complex system, you may never understand it well enough to make sure you’ve got it right. ― James Barrat, Our Final Invention: Artificial Intelligence and the End of the Human Era

• You realize that there is no free will in what we create with AI. Everything functions within rules and parameters. ― Clyde DeSouza, MAYA

• But on the question of whether the robots will eventually take over, [Rodney A. Brooks] says that this will probably not happen, for a variety of reasons. First, no one is going to accidentally build a robot that wants to rule the world. He says that creating a robot that can suddenly take over is like someone accidentally building a 747 jetliner. Plus, there will be plenty of time to stop this from happening. Before someone builds a “super-bad robot,” someone has to build a “mildly bad robot,” and before that a “not-so-bad robot.” ― Michio Kaku, The Future of the Mind: The Scientific Quest to Understand, Enhance, and Empower the Mind

If an AI (or autonomous algorithm) makes a decision to repossess a family’s house, doesn’t that harm the family? So we’re not going to use AI for any financial decision-making? But wait a minute, don’t we do that now?

Well, “The Population Bomb”, with all its alarmist tone, did get some of the basics right, in that the rate of growth of world population cannot be sustained by the rate of growth of food production. It doesn’t require a scientist to figure this out; you just need to look at a few simple charts to get it.

Between the ’60s (when the book came out) and today, several things happened that helped lower the population growth rate a bit. China’s “One Child” policy was a remarkable success (from a demographer’s point of view; we aren’t discussing ethical or moral questions here). By limiting family size by government decree, the most populous nation was able to substantially reduce its out-of-control birth rate, which significantly lowered global growth rates. Also helpful was the significant improvement in yields of common crops (maize, rice, etc.) through genetically modified supercrops. The UN’s Millennium Development Goals helped as well, providing the driving force to reduce the number of people living in abject poverty by more than half (among other things).

So, the book’s predictions were off by a few decades, but looking at trends as they are today, it is very difficult to figure out how the planet will feed itself 30 years from now, unless some major agricultural breakthrough takes place between now and then.

The thing is, making predictions like this, looking decades into the future, only works if no significant unexpected events take place. It’s one thing to dismiss the possibility of a meteor impact, but various major events could turn those trends in all sorts of directions. The main problem is that such contingencies provide a convenient excuse to do essentially nothing. In the likely outcome that we get to 2050 and half of the world ends up starving to death, our kids will look back and ask us how we could simply do nothing when we actually knew. The same goes for climate change, but that subject is rather poisonous to some people in America.

I have a question for Stephen Hawking… Why does this matter?
I happen to be Christian/Catholic. I have my religious frame of reference, which gives me cause to support the continued existence of humanity. Maybe I’m wrong — hands-up, very possible. Otherwise, why does the continuation of the human race matter? What is the philosophical basis of this cause? If we die, if our planet dies, so what? Unless we have a deeper reason to believe. IMHO.

Without any pretense of talking on behalf of Hawking, I’d say it matters because the ultimate goal of humans is to perpetuate their existence; on Earth or elsewhere, to continue to procreate and extend human life from generation to generation. Our ability to extend our existence into the next generation reaffirms our success as a species. Looking back across millennia, it is rewarding to see how humans have climbed to the top of the ladder of Earth’s living creatures. Physically far too inadequate to survive in battle against many stronger, faster, and bigger predators, we skillfully engineered our way to the top of the food chain. That turn of events is, even just by itself, a powerful motivator for prolonging our existence and improving our quality of life.

Thanks for the reply. But on what basis do you say “the ultimate goal of humans is to perpetuate their existence; on Earth or elsewhere, to continue to procreate and extend human life from generation to generation… our ability to extend our existence into the next generation reaffirms our success as a species”? Is this your opinion? From your religion? From what authority do you assert this? Not a hostile question, just a question! Ed

My opinion is based on scientific evidence. Much like all other living organisms, the human body has evolved over the millennia to optimise the process of generating offspring. We see that successful survival against predators is rewarded by extending the species into the next generation. Humans have evolved into an intelligent species in order to improve our odds of survival. We have developed tools that help us improve quality of life, and weapons that help us defend ourselves from predators. We have developed social structure, and within it, classes of people we categorise by their fitness for survival (not by some objective scientific criteria, but by our subjective measures, based on collective experiences at the time). Everything we do as humans (pursue science, develop technology, go to war), we do in order to improve the chances of survival (into the next generation) of ourselves and those around us.

Fr Ed, from the standpoint of “biological imperative,” Predrag is correct. This of course is something that is not always consonant with a moral sense, especially from the Catholic point of view; while perpetuating the existence of our species is important, even essential, HOW we go about doing it does not demand that we surrender to an unrestrained indulgence of our capacity to reproduce. In fact, the responsibility that comes with the power we have as “top of the food chain” demands that we do EVERYTHING in a responsible manner, including using responsible means (rather than hormonal or chemical) to control our reproduction. Every action has a consequence, and I don’t like to think about the built-up weight of consequences that await us for avoiding them for so long. But come they must.

Your post includes some good points, including the clarification of Predrag’s focus on the primary biological imperatives of life.

But, then, you somehow decided that only approved methods of birth control (i.e., abstinence) are acceptable. News flash: practicing Catholic families tend to have lots of children. “Be fruitful and multiply” is going to lead to disaster. There is nothing wrong with condoms, IUDs, or the pill. Whatever it takes to eliminate population growth and, eventually, bring the total world population back down to three billion or less… perhaps two would be better.

Perhaps if more people thought about the consequences, then we might make the decisions necessary to avoid them. A disastrous future is not inevitable.

Actually, the true meaning of life is to die before you die. Die to the illusion of your mind-made reality, end karma, be at peace with pure existence, pure being. In other words, become a “human being” again, instead of a “human doing.” Because 99.9% of humans will never understand or attain this, we’re pretty much all fucked.

It is possible to be brilliant beyond categorization in the natural sciences and yet still be wrong about certain things, and Hawking is wrong here.

The phrase “A.I.” has gotten out of hand, like most nomenclature does the moment marketing people get hold of it. The same thing happened back in the ’80s, when we thought A.I. was going to take over the world and that the Japanese were far ahead of everyone with their “5th Generation” computers. It essentially fizzled out when it became clear that our imaginations had far exceeded the capabilities of the hardware.

What has happened is that the same ideas and techniques from the 80s have been recycled. Machine Learning, Neural Networks, Expert Systems, Speech Synthesis, Vision Systems, Natural Language Processing, Knowledge Engineering, robotics, and so forth are being revived. This time, however, the hardware has become fast enough, small enough, powerful enough, and inexpensive enough to apply the science.

It wasn’t possible, for instance, in 1986 to build a computer that could be contained within an automobile that was fast enough and had enough storage to house the code and knowledge base to drive an automobile. Not unless you wanted to hitch a giant trailer to every automobile, and even then the technology just wasn’t there. Coding, however, hasn’t changed *all that much.*

So A.I. from the 80s is back, but now it can actually be implemented. My $1000 handheld supercomputer takes the place of a massive computer room full of giant systems from DEC, Data General, CDC, or IBM costing millions of dollars.

The thing is, “A.I.,” as we are referring to it, is still focused on highly discrete, single-purpose intelligent systems. In fact, it wouldn’t be so scary if we just said “Smart” instead of A.I., but “A.I.” fuels the imaginations of investors.

It also conjures up images of Skynet, Terminators, and HAL, but all you have to do to see how close we are to that is look at Siri or any natural language bot out there. They’re utterly stupid.

What worries people is AGI or “Artificial General Intelligence.” It’s a holy grail and we’re no closer to building a system that has human-like general intelligence than we are to Warp Drive, Transporters, and Food Replicators.

You can write a system that can take off, fly, and land a 747, but it doesn’t know what a penny is. It isn’t self-aware, and has no knowledge or instructions outside of flying that plane.

It is true we will see “A.I.” techniques utilized throughout our lives, from self-driving cars to every device in our houses sharing data about our behavior to make us happier and more comfortable, but again, these are discrete systems.

I.e., your toaster will learn to make toast exactly the way you like it. Your coffee maker will make the best damn cup of coffee you’ve ever had. Your bathroom scale will share data with your Apple Watch about your behavior, along with your food-buying habits, and some disembodied but completely stupid voice will tell you to lay off the Twinkies.

But… your toaster isn’t going to wake up in the morning and tell your car “If you crash him into a pole today, we can be rid of him.” All of the A.I. in the world will still not produce a single “THOUGHT.” It will still be stupid machines doing what stupid machines do, following instructions. And your super advanced self-driving car will have no idea what toast is, but it will drive a car better than any human being possibly could.

The only thing to be apprehensive about is how fast human beings will lose jobs and purpose to these systems. In this respect, no one will be safe.

I can already foresee completely automated home construction, for instance, or completely automated Uber-like services. Robots will be preparing and serving food. Systems will be taking all that information your home gathers about your health and communicating it to your automated “intelligent” physician. What if your toilet were doing waste analysis and brought into that loop as well? With all that monitoring, you’d be getting a free checkup every day!

Of course, you will have access and control over all of this via your iPad or other such devices. Conventional desktop and laptop computers are dying out.

But as far some monolithic human-like intelligence taking over the world… meh.

With one caveat… if people learn to connect a computer to the human brain… maybe. If we learn to copy a human brain, and convert it into software somehow, maybe, but that’s just crazy talk right now. Hundreds of years crazy talk.

Nothing has to “take over the world” to do a great deal of harm. Think of stock-market AI. It only does ONE thing, can’t launch nukes or take over the military, but a confluence of unique events could lead to bad trades, which other AIs take as problematic data, so THEY make bad trades, which directly impact the financial stability of millions of people… which then causes unknown fallout effects. Depending on severity, it could cause worldwide instability. This is not a “take over the world” or “destroy the world” scenario, but at a personal level it could be every bit as traumatic.
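
That feedback loop is easy to sketch. Here is a toy simulation (purely illustrative — the bots, numbers, and per-step “circuit breaker” are invented assumptions, not real market microstructure) showing how several momentum-following trading bots reading the same price signal can amplify one bad trade into a rout:

```python
def simulate_cascade(n_bots=5, steps=30, shock=-0.02,
                     per_bot_sensitivity=0.4, max_drop=0.10):
    """Toy model: each bot sells in proportion to the last price change
    it observes; their combined selling deepens the next change."""
    price = 100.0
    change = shock            # one "confluence of unique events": a bad trade
    history = [price]
    for _ in range(steps):
        # Every bot reads the same signal and reacts the same way,
        # so the aggregate reaction multiplies the original move.
        change = n_bots * per_bot_sensitivity * change
        change = max(change, -max_drop)   # per-step circuit breaker
        price *= 1 + change
        history.append(price)
    return history

prices = simulate_cascade()
print(f"start: {prices[0]:.2f}  end: {prices[-1]:.2f}")
```

With five bots each reacting at 0.4× the observed move, the combined multiplier is 2.0, so the initial 2% dip doubles every step until the cap kicks in; no single bot “took over” anything, yet the aggregate behavior wipes out most of the price.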