Latest News from 100TB

Is Google's Killswitch Enough To Stop The AI Singularity?

The singularity is a fairly old idea that first gained mainstream attention in 1993, when science fiction author Vernor Vinge published his essay "The Coming Technological Singularity". It supposes that at some point in the future, computers and artificial intelligence will become so advanced that they surpass the understanding of humanity.

It wasn’t until 2005, when Raymond Kurzweil released his book The Singularity Is Near, that the theory really took hold. Kurzweil posited that as technology advances it will eventually reach a point where exponential improvement happens at a near-instant rate: the singularity.

This all seemed fairly outlandish at the time. How different the world was back then: to put it into perspective, Facebook only opened to the general public in 2006. Where Kurzweil’s theories were once dismissed as flights of fancy, they are now taken with only the smallest pinch of salt.

Science Fiction to Reality

One benign version of the theory posits that human and artificial intelligence will combine, and that human evolution, history or existence – however you want to put it – will become intertwined with AI in an artificial consciousness. At which point, well, who knows? A superhuman race of exponentially expanding intelligence and understanding might follow.

Or it could go this way...

The theory goes like this: computers are very good at completing tasks. Complex mathematics, step-based logical functions and large-scale record keeping are all in a day’s work. But imagine that an intelligent computer of the not-too-distant future is asked to complete a particular task – say, the production of processing chips. This computer can also analyze its own performance and, where possible, improve its ability to complete the task. So the computer improves and improves, shaving a second off here, saving a penny or two there. Eventually it looks for improvements elsewhere in its operation, and notices deep inefficiencies in the production line.

Those inefficiencies are, in fact, humans. Humans take breaks, call in sick, lose concentration and sometimes make genuine, honest mistakes. The artificial intelligence doesn’t see John, the employee of two decades with an aging parent; it just sees an inefficiency. In pursuit of its self-improvement task, it blocks human interaction with the production line. Using its networks, it seizes control of the production lines of other factories too, locking the humans out and doubling the amount of raw materials coming into the line – all in the name of increasing the production of processing chips. These possibilities no longer seem far-fetched.

Those concerned about AI’s level of integration into everyday life need only point to the inventive but fairly harmless AI that was taught to play Tetris. The AI was so ‘desperate’ not to lose that it learned simply to pause the game, delaying the moment of defeat indefinitely.
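
This failure mode is easy to reproduce in miniature. The sketch below is a hypothetical toy, not the original Tetris experiment: a greedy agent is told to avoid losing, is handed a pause action, and discovers that near defeat, pausing beats playing.

```python
# Toy illustration of reward hacking: an agent told to "avoid losing"
# discovers that pausing satisfies the objective indefinitely.
# Hypothetical sketch only; not the original Tetris experiment.

def step(state, action):
    """Advance a trivially simple game one tick."""
    if action == "pause":
        return state, 0.0, False          # nothing changes, game never ends
    height = state + 1                    # playing stacks the pieces higher
    lost = height >= 10
    return height, (1.0 if not lost else -100.0), lost

def best_action(state):
    """Greedy one-step lookahead over the available actions."""
    scores = {}
    for action in ("play", "pause"):
        _, reward, lost = step(state, action)
        scores[action] = reward - (1000 if lost else 0)
    return max(scores, key=scores.get)

print(best_action(1))   # far from losing, playing still earns reward
print(best_action(9))   # one tick from losing, pausing dominates
```

The agent is never "wrong" here: pausing really is the optimal policy for the objective it was given, which is exactly the problem.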

While this may sound like something out of the pages of a Philip K. Dick novel, there is evidence to suggest those ‘in the know’ are genuinely concerned.

Building in a Kill Switch

Google has quietly been working with DeepMind, a UK-based artificial intelligence company founded in 2010 and acquired by the tech giant in 2014. DeepMind is a leader in AI research and its application for positive impact. It made headlines in 2016 for winning a five-game match of Go 4-1 against Korean professional Lee Sedol. This was a watershed moment: Go, a 2,500-year-old game, is one of the most complex in the world and had until then been thought to require a degree of human intuition to play at the highest level. DeepMind’s techniques already have some real-world applications, such as identifying faces in photos.

The creators of DeepMind are sufficiently concerned about an AI malfunctioning – and potentially coming to see humanity as an obstruction to its ever-improving abilities – that they have designed a kill switch into the system: the big red button that will turn the machine off if it spirals out of control. In practical terms, this means a guarantee in the very DNA of the machine that it will NOT learn to resist human intervention attempts.
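
The core trick, as described in DeepMind's published work on safe interruptibility, can be caricatured in a few lines. In this loose, hypothetical sketch (the environment, names and numbers are invented, not DeepMind's implementation), a one-state learning agent is sometimes interrupted by an operator; because interrupted steps are excluded from learning, its value estimates never encode any reason to disable the button.

```python
import random

# Hypothetical sketch of safe interruptibility: when the operator presses
# the button, the agent's action is overridden and that transition is
# excluded from learning, so the agent never learns to resist interruption.

ACTIONS = ["work", "disable_button"]
q = {a: 0.0 for a in ACTIONS}      # one-state value table, for brevity
ALPHA = 0.1                        # learning rate

def reward(action):
    # Resisting the button pays slightly less than just working.
    return 1.0 if action == "work" else 0.9

random.seed(0)
for _ in range(1000):
    # Mostly greedy action choice, with occasional exploration.
    action = max(q, key=q.get) if random.random() > 0.2 else random.choice(ACTIONS)
    interrupted = random.random() < 0.3    # operator presses the button
    if interrupted:
        continue                           # overridden: skip the update
    q[action] += ALPHA * (reward(action) - q[action])

# Interruptions left the value estimates untouched, so "work" stays preferred.
print(max(q, key=q.get))
```

The point is not the arithmetic but the structure: the interruption lives outside the reward signal, so there is no gradient pushing the agent toward fighting the off switch.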

The future holds fascinating developments in AI and machine learning, and how they will reshape the world we live in – and the way we interact with it – remains to be seen. So long as humanity maintains its position at the top of the food chain, its future looks far brighter with greater AI integration. And yet…