Industry News

Hawking's Speech (Part 1) | 2017 Global Mobile Internet Conference, Beijing

Over my lifetime, I have seen very significant societal changes. Probably one of the most significant, and one that is increasingly concerning people today, is the rise of artificial intelligence.

In short, I believe that the rise of powerful AI will be either the best thing, or the worst, ever to happen to humanity.

I have to say now, that we do not yet know which. But we should do all we can to ensure that its future development benefits us, and our environment. We have no other option. I see the development of AI as a trend with its own problems that we know must be dealt with, now and into the future.

The progress in AI research and development is swift. And perhaps we should all stop for a moment, and focus our research, not only on making AI more capable, but on maximizing its societal benefit.

Such considerations motivated the American Association for Artificial Intelligence's 2008 to 2009 Presidential Panel on Long-Term AI Futures, which until then had focused largely on techniques that are neutral with respect to purpose.

But our AI systems must do what we want them to do. Inter-disciplinary research can be a way forward: ranging from economics, law, and philosophy, to computer security, formal methods, and of course various branches of AI itself.

Everything that civilization has to offer is a product of human intelligence, and I believe there is no real difference between what can be achieved by a biological brain, and what can be achieved by a computer.

It therefore follows that computers can, in theory, emulate human intelligence, and exceed it. But we don’t know. So we cannot know if we will be infinitely helped by AI, or ignored by it and side-lined, or conceivably destroyed by it.

Indeed, we have concerns that clever machines will be capable of undertaking work currently done by humans, swiftly destroying millions of jobs.

While primitive forms of artificial intelligence developed so far, have proved very useful, I fear the consequences of creating something that can match or surpass humans. AI would take off on its own, and re-design itself at an ever increasing rate. Humans, who are limited by slow biological evolution, couldn't compete, and would be superseded. It will bring great disruption to our economy.

And in the future, AI could develop a will of its own, a will that is in conflict with ours. Although I am well-known as an optimist regarding the human race, others believe that humans can command the rate of technology for a decently long time, and that the potential of AI to solve many of the world's problems will be realised. I am not so sure.

2.

In January 2015, I, along with the technological entrepreneur, Elon Musk, and many other AI experts, signed an open letter on artificial intelligence, calling for serious research on its impact on society.

In the past, Elon Musk has warned that superhuman artificial intelligence is capable of providing incalculable benefits, but if deployed incautiously, will have an adverse effect on the human race.

He and I sit on the scientific advisory board for the Future of Life Institute, an organization working to mitigate existential risks facing humanity, and which drafted the open letter. This called for concrete research on how we could prevent potential problems while also reaping the potential benefits AI offers us, and is designed to get AI researchers and developers to pay more attention to AI safety.

In addition, for policymakers and the general public, the letter is meant to be informative, but not alarmist. We think it is very important that everybody knows that AI researchers are seriously thinking about these concerns and ethical issues.

For example, AI has the potential to eradicate disease and poverty, but researchers must work to create AI that can be controlled. The four-paragraph letter, titled "Research Priorities for Robust and Beneficial Artificial Intelligence: An Open Letter", lays out detailed research priorities in an accompanying twelve-page document.

For the last 20 years or so, AI has been focused on the problems surrounding the construction of intelligent agents: systems that perceive and act in some environment. In this context, intelligence is related to statistical and economic notions of rationality, colloquially, the ability to make good decisions, plans, or inferences.

As a result of this recent work, there has been a large degree of integration and cross-fertilisation among AI, machine learning, statistics, control theory, neuroscience, and other fields. The establishment of shared theoretical frameworks, combined with the availability of data and processing power, has yielded remarkable successes in various component tasks, such as speech recognition, image classification, autonomous vehicles, machine translation, legged locomotion, and question-answering systems.

As development in these areas and others moves from laboratory research to economically valuable technologies, a virtuous cycle evolves, whereby even small improvements in performance are worth large sums of money, prompting further and greater investment in research.

There is now a broad consensus that AI research is progressing steadily, and that its impact on society is likely to increase. The potential benefits are huge, since everything that civilization has to offer is a product of human intelligence; we cannot predict what we might achieve when this intelligence is magnified by the tools AI may provide.

But, and as I have said, the eradication of disease and poverty is not unfathomable. Because of the great potential of AI, it is important to research how to reap its benefits, while avoiding potential pitfalls.