“It seems to be accepted that intelligence – artificial or otherwise – and ‘the singularity’ are inseparable concepts: ‘The singularity’ will apparently arise from AI reaching a, supposedly particular – but actually poorly-defined, level of sophistication; and an empowered combination of hardware and software will take it from there (and take over from us). But such wisdom and debate are simplistic in a number of ways: firstly, this is a poor definition of the singularity; secondly, it muddles various notions of intelligence; thirdly, competing arguments are rarely based on shared axioms so are frequently pointless; fourthly, our models for trying to discuss these concepts at all are often inconsistent, and finally, our attempts at describing any ‘post-singularity’ world are almost always limited by anthropomorphism. In all of these respects, professional ‘futurists’ often appear as confused as storytellers who, through freer licence, may conceivably have the clearer view: perhaps then, that becomes a reasonable place to start. There is no attempt in this paper to propose, or evaluate, any research hypothesis; rather simply to challenge conventions. Using examples from science fiction to illustrate various assumptions behind the AI/singularity debate, this essay seeks to encourage discussion on a number of possible futures based on different underlying metaphysical philosophies. Although properly grounded in science, it eventually looks beyond the technology for answers and, ultimately, beyond the Earth itself.”

The full paper can be read here. It’s long and winding and covers a lot of ground. But the essential thrust, as always, is that this stuff is more complicated than we think. Oh, and we’re all going to die before we find out anyway! The final section concludes:

“But, in many ways, the real take home message is that many of us are not on the same page here. We use terms and discuss concepts freely with no standardization as to what they mean; we make assumptions in our self-contained logic based on axioms we do not share. Whether across different academic disciplines, wider fields of interest – or simply as individuals, we have to face up to some uncomfortable facts in a very immediate sense: that many of us are not discussing the same questions. On that basis it is hardly surprising that we seem to be coming to different conclusions. If, as appears to be the case, a majority of (say) neuroscientists think the TS cannot happen and a majority of computer scientists think it can then, assuming an equivalent distribution of intelligence and abilities over those disciplines, clearly they are not visioning the same event. We need to talk.”

The terms "thinking", "consciousness", and "intelligence" are used constantly in these discussions but are seldom well defined.

My view of intelligence is somewhat unconventional. I think it is a physical process involved in maximizing the diversity and/or utility of future outcomes in order to reach optimal solutions. It doesn't require consciousness, and it emerges in networked and integrated systems; this could include the intelligent behavior of slime molds or machines. Consciousness I reserve for biological entities with nervous systems.