Sunday, October 21, 2012

The seventh Singularity Summit was held in San Francisco, California on October 13-14, 2012. As in other years, there were about 600 attendees, although this year's conference program included both general-interest science and singularity-related topics. Singularity in this sense denotes a technological singularity: a potential future moment when smarter-than-human intelligence may arise. The conference was organized by the Singularity Institute, which focuses on researching safe artificial intelligence architectures. The key themes of the conference are summarized below. Overall, the conference material could be characterized as incremental progress within traditional singularity-related work, alongside faster-moving advances in other fields such as image recognition, big health data, synthetic biology, crowdsourcing, and biosensors.

Key Themes:

Singularity Thought Leadership

Big Data Artificial Intelligence: Image Recognition

Era of Big Health Data

Improving Cognition: Bias Reduction and Analogies

Singularity Predictions

Singularity Thought Leadership
Singularity thought leader Vernor Vinge, who coined the term technological singularity, provided an interesting perspective. Since at least 2000, he has referred to the idea of computing-enabled matter and the wireless Internet-of-things as Digital Gaia. He noted that 5% of objects worldwide are already embedded with microprocessors, and that it could be scary as reality 'wakes up' further, especially since we are unable to control other phenomena we have created, such as financial markets. He was pessimistic regarding privacy, suggesting that David Brin's traditional counterproposal to surveillance, sousveillance, is not necessarily better. More positively, he discussed the framing of computers as a neo-neocortex for the brain, extreme UIs that provide convenient and unobtrusive cognitive support, other intelligence-amplification techniques, and how we have been unconsciously preparing many of our environments for robotic operations. Crowdsourcing has also risen as an important resource, as the network (the Internet plus potentially seven billion Turing-test-passing agents) matches optimal resources to specific cognitive tasks (such as protein-folding analysis).

Big Data Artificial Intelligence: Image Recognition
Peter Norvig continued in his usual vein of discussing what has been important in resolving contemporary problems in artificial intelligence. In machine translation (interestingly, a Searlean Chinese room), the keys were large online data corpora and straightforward machine learning algorithms (The Unreasonable Effectiveness of Data). In more recent work, his lab at Google has been able to recognize pictures of cats. In this digital vision processing advance, announced in June 2012 (article, paper), the keys were neural networks that learn hierarchical representations, together with, again, large online data corpora (10 million images scanned by 16,000 computers) and straightforward learning algorithms.
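The core idea, learning layered feature representations from examples rather than hand-coding rules, can be illustrated at toy scale. The sketch below is emphatically not Google's system (which used unsupervised learning across 16,000 machines); it is a minimal two-layer neural network trained on invented synthetic data, where the hidden layer plays the role of learned intermediate features.

```python
import numpy as np

# Toy sketch (assumed/invented data, not Google's architecture): a tiny
# two-layer network whose hidden units act as learned features.
rng = np.random.default_rng(0)

# Synthetic stand-ins for "cat" / "not cat" examples: two Gaussian blobs.
X = np.vstack([rng.normal(-1, 0.5, (100, 2)), rng.normal(1, 0.5, (100, 2))])
y = np.array([0] * 100 + [1] * 100)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# Small random initial weights: 2 inputs -> 8 hidden units -> 1 output.
W1 = rng.normal(0, 0.1, (2, 8)); b1 = np.zeros(8)
W2 = rng.normal(0, 0.1, (8, 1)); b2 = np.zeros(1)

lr = 0.5
for _ in range(500):
    H = np.tanh(X @ W1 + b1)            # hidden-layer representation
    p = sigmoid(H @ W2 + b2).ravel()    # predicted probability of class 1
    grad_out = (p - y)[:, None] / len(y)        # cross-entropy gradient
    grad_h = (grad_out @ W2.T) * (1 - H ** 2)   # backpropagate through tanh
    W2 -= lr * H.T @ grad_out; b2 -= lr * grad_out.sum(0)
    W1 -= lr * X.T @ grad_h;   b1 -= lr * grad_h.sum(0)

accuracy = ((p > 0.5) == y).mean()
print(f"training accuracy: {accuracy:.2f}")
```

The point of the toy is the shape of the recipe, not the scale: simple gradient-based learning plus enough data, stacked into layers, yields features nobody programmed in by hand.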

Improving Cognition: Bias Reduction and Analogies (QS’ing Your Thinking)
A perennial theme in the singularity community is improving thinking and cognition, for example through bias reduction. Nobel laureate Daniel Kahneman spoke remotely on his work regarding fast and slow thinking. We have two thinking modes: fast (blink intuitions) and slow (more deliberative, logical) thinking, both of which are indispensable and potentially problematic. Across all thinking is a strong inherent loss aversion, which helps to generate a bias towards optimism. Steven Pinker also spoke, indirectly, to the theme of bias. In recent work, he found a persistent decline in violence over centuries of human history, possibly due mostly to increases in affluence and literacy/knowledge. This may seem counter to popular media accounts, which, guided by short-term interests, help to create an area of societal cognitive bias. In other research on cognitive enhancement and the processes of intelligence, Melanie Mitchell claimed that analogy-making is a key attribute of intelligence. Using analogies in new and appropriate ways could be a means of identifying intelligence, perhaps superior to the traditional proxies such as general-purpose problem solving, question-answering, or Turing-test passing.

Singularity Predictions
Another persistent theme in the singularity community is sharpening the analysis, predictions, and context around the moment when there might be greater-than-human intelligence. Singularity movement leader Ray Kurzweil made his usual optimistic remarks, accompanied by slides with exponentiating curves of technology cost/functionality improvements, but did not confirm or update his long-standing prediction of a technological singularity circa 2045 [1]. Stuart Armstrong pointed out that predictions are usually 15-25 years out, and that this is true every year. In an analysis of the Singularity Institute's database of 257 singularity predictions made from 1950 forward, there is no convergence in timing: estimates range from 2020 to 2080. Vernor Vinge encouraged the consideration of a wide range of scenarios and methods, including 'What if the Singularity Doesn't Happen.' The singularity prediction problem might be improved by widening the possibility space; for example, it may be less useful to focus on intelligence as the exclusive element of the moment of innovation, speciation, or progress beyond the human level, and other dimensions such as emotional intelligence, empathy, creativity, or a composite thereof could be considered.
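Armstrong's observation, that the singularity stays a roughly constant distance in the future no matter when the prediction is made, is easy to make concrete. The snippet below uses an invented handful of (year made, year predicted) pairs, not the actual Singularity Institute database, to show how a flat 15-25 year horizon coexists with predicted dates that never converge.

```python
# Hypothetical illustration of Armstrong's point (invented data, not the
# Singularity Institute's 257-prediction database): each forecaster places
# the singularity roughly 20 years from their own present.
predictions = [
    (1965, 1985), (1975, 1993), (1988, 2010),
    (1999, 2023), (2005, 2027), (2012, 2035),
]

# Horizon = distance between when the prediction was made and its target.
horizons = [predicted - made for made, predicted in predictions]
avg_horizon = sum(horizons) / len(horizons)

earliest = min(p for _, p in predictions)
latest = max(p for _, p in predictions)

print(f"average horizon: {avg_horizon:.1f} years")
print(f"predicted dates span: {earliest}-{latest}")
```

With data like this, the average horizon sits in the 15-25 year band even as the predicted dates drift across half a century, which is exactly the non-convergence pattern described above.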
