Ten Years To the Singularity If We Really, Really Try … and other Essays on AGI and its Implications

December 25, 2014

Ray Kurzweil has projected the date for a Technological Singularity as 2045. AI researcher Ben Goertzel believes it could potentially happen much sooner, if appropriate attention and resources are focused on the right R&D projects.

What current technologies are most likely to lead to the rapid advent of powerful Artificial General Intelligence systems? What impact will the advent of such technologies have upon human life? What philosophical, scientific and spiritual ideas should be deployed to explore such questions? How probable are Terminator type outcomes, versus friendlier scenarios where advanced artificial intelligences play a beneficent role to humanity and other sentiences? What should be our top priorities now, looking forward to a radically different AI-centric future?

This book gathers together essays that Ben Goertzel wrote during the period 2009–2011, for H+ Magazine and other periodicals, which explore these issues from various directions. Each essay is presented along with a brief personal introduction discussing the context in which the essay was written, and reviewing relevant developments from the period 2012–2014.

Comments (6)

I was born on, and reside on, the east coast of Africa. The internet infrastructure on the continent is poor and expensive. My point is that I am outside any mainstream thinking, so my conclusions may be erroneous.

I regard Ray Kurzweil as a techno-optimist (glass half full). I tend towards techno-pessimism (glass half empty). In this instance, I really, really hope he’s right and I am wrong and I can join others in their wishful thinking. It may be that I regard the technological ‘Singularity’ as more profound than is commonly accepted.

IMO AI sentience is required to power a true Singularity, and we are a long way from that. We are not concentrating R&D resources in the right places. We [humanity] are hobbled by religious luddites and scare tactics like "Terminator" scenarios. My Singularity (and AI sentience) will not occur in 10 years. Maybe in a century. I hope I’m wrong.

We have promising directions for research. We are also under serious time pressure, and we have nothing to lose. Humanity has its back to the wall, so the possibility of malign AI should not concern certain luddite segments as much as it used to. We have stuffed up the environment, depleted resources, precipitated climate change, and are busy breeding ourselves to oblivion. Sentient AI is the only possibility of pulling us out of the dwang. But (naturally) we will waste time on petty crap and go extinct (and that’s what we deserve).

Relying on an outside source/force/intelligence sounds like secular religious sentiment: the AI replacing the fictitious god, later usurped by Superman. The Amerikan solution is to be in need of salvation from the endemic screw-up of that country’s applied exceptionalism, converted globally into armageddon. The Scandinavians should be consulted first, I think, for all the above reasons.

Interesting work, but written in almost total denial of the fact that pretty much every powerful technology devised has been intended for, or eventually used in, the prosecution of warfare. I see this as, unfortunately, a very likely scenario with AGI: superpowers will develop their own competing systems and give them more freedom than perhaps they should, to ensure they are able to defend against or even destroy the other AGIs. Fortunately this is not within the capabilities of certain pathological nation states, but it is within reach of states with little regard for human rights or even democracy. Already we are starting to see the first skirmishes in what may become a long-drawn-out cyber version of WWIII.

The arguments he puts forward against the “scary idea” are also framed without any consideration of the chance of AGI being militarised by multiple competing states. Therefore, even though they are robust in context, they rest on dangerous assumptions or oversights regarding human nature and what history has shown nations are prepared to do to each other.

He also seems to overlook the risk of religiously inspired Luddites attacking advanced societies working toward AGI, which threatens their control of information and therefore their belief systems (perhaps this is already happening?), and how this may hold back civilian AGI, which is nurturing, while accelerating militarised AGI, which could be problematic if it gains autonomy.

Actually, Ben touches on a few things relevant to my previous comment in this video: https://www.youtube.com/watch?v=i6ctsWLi_G4 Essentially, he admits that he can’t rationally determine what may happen, but that his intuition indicates a positive outcome.

Perhaps that is a case of a “useful delusion”: otherwise we become paralysed by our fear of the unknown, and then we have no chance of bringing about a positive outcome?