Human-level AI: Is it Looming or Illusory?

The Centre for the Study of Existential Risk’s June 2015 Lecture, with Professor Margaret Boden.

Human-level (“general”) AI is more difficult to achieve than most people think. One key obstacle is relevance, a conceptual version of the frame problem. Another is the lack of a semantic web. Yet another is the difficulty of computer vision. So artificial general intelligence (AGI) isn’t on the horizon; it may never be achieved. No AGI means no Singularity. Even so, there’s already plenty to worry about—and future AI advances will add more. Areas of concern include unemployment, computer companions, and autonomous robots (some of them military). Worries about the (illusory) Singularity have had the good effect of waking up the AI community (and others) to these dangers. At last, they are being taken seriously.

Professor Margaret Boden is a world-leading academic in the study of intelligence, both artificial and otherwise. She is Research Professor of Cognitive Science in the Department of Informatics at the University of Sussex, where her work embraces the fields of artificial intelligence, psychology, philosophy, and cognitive and computer science. She was the founding Dean of Sussex University’s School of Cognitive and Computing Sciences, a pioneering centre for research into intelligence and the mechanisms underlying it — in humans, other animals, and machines. The School’s teaching and research involve an unusual combination of the humanities, science, and technology.

The Centre for the Study of Existential Risk is an interdisciplinary research centre at CRASSH within the University of Cambridge dedicated to the study and mitigation of risks that could lead to human extinction or civilisational collapse.