A top Facebook research scientist tells Sputnik about possible threats, benefits and the future of artificial intelligence (AI) at the DeepHack Q&A hackathon in Moscow.

Humanity has seen significant development in artificial intelligence of late, with international corporations working on self-driving cars, mobile devices that appear to understand their owners’ speech and websites always ready to propose just what they think a user would like. Is the so-called rise of the machines real? It may seem so, and the paperclip maximizer thought experiment shows that a dangerous machine need not be evil at all.

Tomas Mikolov, a research scientist at Facebook AI Research and a former member of the Google Brain team who developed sophisticated neural network algorithms, sheds some light on AI and its future.

“There is this impression that there will suddenly be some super-clever AI having God-like powers that will be turning the whole world into paperclips or whatever. I don’t think that this is the case; it doesn’t make that much sense to me,” Mikolov told Sputnik.

“It is like saying that whenever we will be able to fly to space, in two years we will be able to fly to stars — it just didn’t happen and it wasn’t very likely to happen too”.

Mikolov is in Moscow to speak on the topic “The Roadmap Towards Machine Intelligence” at the international Hackathon DeepHack Q&A, which kicked off on January 31 at the Moscow Institute of Physics and Technology.

“There’s some continuing progress that takes some time, and before we can make a computer that can turn the world into paperclips, there will be a computer that will be able to do just much less than that, and before that there will be an even simpler one, and all progress will take maybe tens of years, so there’s not going to be a situation where we would make some small mistake and the computer is going to become ten times smarter than we are and just do crazy things.”

He noted, however, that in reality anything could happen.

“It’s very difficult to make predictions about intelligence, because we don’t understand the concept that well, and it is not clear how far is the intelligence of humans from optimum intelligence. We are limited in many ways, but before computers get to our speed, it will take very long, because right now we do not have algorithms that will be able to represent intelligence, we do not even have the basic concepts that will be required.”

Mikolov suggested that such fears might be related to pop-culture memes, creating a picture of AI as a compassionless killing machine.

“I was speaking with Asian people and they did say that robots do not really have such a bad name there… because they actually had some other famous movie, but there the robot was supposed to be the good one, and basically the whole culture became very much influenced by the movies that people watch, so now AI is supposed to be evil in the Western world.”

The threat of total annihilation may seem less real, but many are concerned about much more realistic threats of AI: the spread of smart systems and the use of the Internet make privacy more and more important.

“It is definitely likely that there could be issues when it comes to AI and privacy, especially if you’ve got sort of AI, some system that basically is… like your personal assistant in some sense,” Mikolov said.

He noted, however, that this issue is more likely to be a headache for next generations: “Yes, in the long distant future, I think there will be a lot of very big problems that we will have to solve or face, but so far, technologies are so much underdeveloped, the abilities of the systems are so low that I’m not really worried about it for the next quite a few years.”

As for the next generations, there are a considerable number of tasks that modern IT is struggling to solve, and developers have a picture of what a future AI may be like.

“When it comes to intelligence, I think that the future goal for computers is to extend our intelligence. If you want to extend your intelligence in the way that you will be able to solve a task you currently cannot solve,” Mikolov said.

“I am hoping that computers can help us on the intelligent side and that we would be able to solve tasks that otherwise are too time-consuming for us.”

Nevertheless, there are daunting obstacles to the creation of AI, Mikolov said.

“I’m not very optimistic about the progress made there so far. Not just me, but generations before me: people have wanted to do this since the 50s maybe; they were super smart people who had this goal and pretty much failed. It is hard to see how far we are from there. I think that people are sometimes a bit too optimistic and they are just trying to do too many things too quickly and then they fail,” he said.

“The main obstacle right now is our understanding of intelligence. Intelligence seems very natural to us, but that is quite tricky, because we as humans are able to understand a lot of things because evolution shaped us this way and computers are not shaped this way.”


Research Fellow in Artificial Intelligence and Digital Games, University of York

Disclosure statement

The author does not work for, consult for, own shares in or receive funding from any company or organization that would benefit from this article, and has disclosed no relevant affiliations beyond the academic appointment above.


Researchers from Google DeepMind have developed the first computer able to defeat a human champion at the board game Go. But why has the online giant invested millions of dollars and some of the finest minds in Artificial Intelligence (AI) research to create a computer board game player?

Go is not just any board game. It’s more than 2,000 years old and is played by more than 60m people across the world – including a thousand professionals. Creating a superhuman computer Go player able to beat these top pros has been one of the most challenging targets of AI research for decades.

The rules are deceptively simple: two players take turns to place white and black “stones” on an empty 19×19 board, each aiming to encircle the most territory. Yet these basics yield a game of extraordinary beauty and complexity, full of patterns and flow. Go has many more possible positions than even chess – in fact, there are more possibilities in a game of Go than we would get by considering a separate chess game played on every atom in the universe.
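The arithmetic behind that comparison can be sketched with standard rough estimates (these figures are illustrative, not taken from the article): about 10^80 atoms in the observable universe, a chess game-tree complexity of roughly 35^80, and a Go game-tree complexity of roughly 250^150.

```python
# Back-of-envelope check of the atoms-in-the-universe comparison, working
# in log10 to avoid astronomically large integers. All figures are rough,
# commonly cited estimates.
from math import log10

atoms = 80                     # ~10^80 atoms in the observable universe
chess_games = 80 * log10(35)   # ~10^123: ~35 legal moves over ~80 plies
go_games = 150 * log10(250)    # ~10^359: ~250 legal moves over ~150 plies

# A separate chess game on every atom gives 10^80 * 10^123 = 10^203
# possibilities, which is still dwarfed by Go's ~10^359.
assert go_games > atoms + chess_games
```

Even under much more conservative estimates for Go's branching factor, the gap of well over a hundred orders of magnitude survives.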

AI researchers have therefore long regarded Go as a “grand challenge”. Whereas even the best human chess players had fallen to computers by the 1990s, Go remained unbeaten. This is a truly historic breakthrough.

Games are the ‘lab rats’ of AI research

Since the term “artificial intelligence” or “AI” was first coined in the 1950s, the range of problems it can solve has been increasing at an accelerating rate. We take it for granted that Amazon has a pretty good idea of what we might want to buy, for instance, or that Google can complete our partially typed search terms, yet both are the product of recent advances in AI.

Go originated in China over 2,000 years ago and is played by millions. Image: Alan, CC BY

Computer games have been a crucial test bed for developing and testing new AI techniques – the “lab rat” of our research. This has led to superhuman players in checkers, chess, Scrabble, backgammon and more recently, simple forms of poker.

Games provide a fascinating source of tough problems – they have well-defined rules and a clear target: to win. To beat these games the AIs were programmed to search forward into possible futures and choose the move which leads to the best outcome – which is similar to how good human players make decisions.
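That look-ahead idea is classically implemented as minimax search. The toy game below (players alternately take one or two counters from a pile; whoever takes the last counter wins) is invented purely for illustration and is not from any program mentioned here.

```python
# Minimal minimax sketch: exhaustively search every possible future and
# score the position for the player to move. The game is a toy stand-in.

class TakeAwayGame:
    """Players alternately take 1 or 2 counters; taking the last one wins."""
    def moves(self, state):
        return [m for m in (1, 2) if m <= state]

    def apply(self, state, move):
        return state - move

    def is_terminal(self, state):
        return state == 0

    def evaluate(self, state, maximizing):
        # At a terminal state the *previous* player took the last counter
        # and won: +1 if that was the maximizer, -1 otherwise.
        return -1 if maximizing else 1

def minimax(game, state, maximizing=True):
    """+1 if the maximizing player can force a win from here, else -1."""
    if game.is_terminal(state):
        return game.evaluate(state, maximizing)
    scores = [minimax(game, game.apply(state, m), not maximizing)
              for m in game.moves(state)]
    return max(scores) if maximizing else min(scores)
```

In this game, any pile that is a multiple of three is lost for the player to move, which the search rediscovers by brute force; real game AIs cut the search off at a fixed depth and estimate unfinished positions, exactly the step that made Go so hard.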

Yet Go proved hardest to beat because of its enormous search space and the difficulty of working out who is winning from an unfinished game position. Back in 2001, Jonathan Schaeffer, a brilliant researcher who created a perfect AI checkers player, said it would “take many decades of research and development before world-championship-caliber Go programs exist”. Until now, even with recent advances, it still seemed at least ten years out of reach.

The breakthrough

Google’s announcement, in the journal Nature, details how its machine “learned” to play Go by analysing millions of past games by professional human players and simulating thousands of possible future game states per second. Specifically, the researchers at DeepMind trained “convolutional neural networks”, algorithms that mimic the high-level structure of the brain and visual system and which have recently seen an explosion in their effectiveness, to predict expert moves.

This learning was combined with Monte Carlo tree search approaches which use randomness and machine learning to intelligently search the “tree” of possible future board states. These searches have massively increased the strength of computer Go players since their invention less than ten years ago, as well as finding applications in many other domains.
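The core Monte Carlo idea can be illustrated in a stripped-down form: estimate how good a move is by playing many random games ("rollouts") after it and counting wins. Full MCTS additionally grows a search tree and balances exploration against exploitation (for example with the UCB rule); this "flat" version, using the same invented take-away game as a stand-in, shows only the random-simulation core.

```python
import random

def rollout(state, player):
    """Play the toy take-1-or-2 game out with random moves; return winner."""
    while True:
        state -= random.choice([m for m in (1, 2) if m <= state])
        if state == 0:
            return player          # this player took the last counter
        player = 1 - player

def monte_carlo_move(state, player=0, n_sims=2000):
    """Flat Monte Carlo: pick the move whose rollouts win most often."""
    best_move, best_rate = None, -1.0
    for move in (m for m in (1, 2) if m <= state):
        nxt = state - move
        if nxt == 0:
            rate = 1.0             # taking the last counter wins outright
        else:
            wins = sum(rollout(nxt, 1 - player) == player
                       for _ in range(n_sims))
            rate = wins / n_sims
        if rate > best_rate:
            best_move, best_rate = move, rate
    return best_move
```

Even without any tree, the win statistics steer the player toward leaving the opponent a multiple of three, the known losing position; the tree and selection rule in real MCTS concentrate those simulations where they matter most.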

The resulting “player” significantly outperformed all existing state-of-the-art AI players and went on to beat the current European champion, Fan Hui, 5-0 under tournament conditions.

AI passes ‘Go’

Now that Go has seemingly been cracked, AI needs a new grand challenge – a new “lab rat” – and it seems likely that many of these challenges will come from the $100 billion digital games industry. The ability to play alongside or against millions of engaged human players provides unique opportunities for AI research. At York’s centre for Intelligent Games and Game Intelligence, we’re working on projects such as building an AI aimed at player fun (rather than playing strength), for instance, or using games to improve well-being of people with Alzheimer’s. Collaborations between multidisciplinary labs like ours, the games industry and big business are likely to yield the next big AI breakthroughs.


However, the real world is a step up, full of ill-defined questions that are far more complex than even the trickiest of board games. The techniques which conquered Go can certainly be applied in medicine, education, science or any other domain where data is available and outcomes can be evaluated and understood.

The big question is whether Google just helped us towards the next generation of Artificial General Intelligence, where machines learn to truly think like – and beyond – humans. Whether we’ll see AlphaGo as a step towards Hollywood’s dreams (and nightmares) of AI agents with self-awareness, emotion and motivation remains to be seen. However, the latest breakthrough points to a brave new future where AI will continue to improve our lives by helping us to make better-informed decisions in a world of ever-increasing complexity.

I’ll spare you any Arnold impersonations, as The Terminator impersonation is perennially the material of hack comedians. On the contrary, the Terminator series is one of the more profound examples of predictive programming, establishing memes and implanting preparatory ideas comparable to The Matrix. While The Matrix is the classic conspiracy-genre trope for “awakening” to the fraud of the system as a whole, the Terminator series is far more ominous and serious in its foreboding message. Foreboding, because the real shadow government plan is to erect Skynet in reality, and serious because the establishment’s entire paradigm is that of depopulation. Mix the two together, and you get Terminator. Thus, I have been of the opinion for a few years now that the reason for the erection of A.I., while full of esoteric undertones, is pragmatically about erecting a control grid impervious to human error which will then function as a global human deletion grid.

Past regimes and empires collapsed due to corruption, degeneration and human frailty. What, then, is the one way to avoid this imperial atrophy? The answer is robotics, and removing humans from the equation – the rise of the machines. For this analysis, I am not going to do the traditional scene by scene approach to symbolism: The Terminator series is pretty straightforward. Like a gigantic android middle finger, the Terminator films are a full-frontal example of the long-term plan of the establishment to erect a control grid with human agents out of the loop. I will also look at real white papers and plans that detail this plot, as well as prominent voices who have given this very warning.

In the first installment, we are introduced to an apocalyptic future where an amorphous Skynet has decimated the globe with a nuclear strike intent on wiping out the human population. Here enters Arnold, the T-800 model cyborg assassin, sent back in time to halt the birth of John Connor, the future leader of the human resistance. With archetypal 80s blue lightning, the action commences with a naked Arnold ravaging L.A. in pursuit of Sarah, John’s soon-to-be mom, who gets busy with Kyle Reese, another resistance fighter sent back from the future, who fathers John. I’m not positive, but something in this plot timeline doesn’t add up – if your future dad comes back in time to conceive you, presumably you could also come back in time to conceive yourself, if you were incestuous. But this brings up a side theme in the Terminator films in addition to A.I. – the issue of determinism, time and free will. Whether the nuclear apocalypse is predestined or whether the time continuum can be altered was a big movie question in the 80s – just ask Marty McFly and the Doc.

Skynet nukes America (think Matthew Broderick in Wargames!) because its “achieving self-awareness” results in a calculated cost-benefit analysis of the threat and uselessness of billions of hominid meatbags. Human reasoning and emotions and frailty give rise to error, and humans might shut down Skynet, ergo they must be eliminated. The essential revelation is not that robotics will evolve consciousness (which is all based on the outdated mechanistic Enlightenment worldview that all of reality is an atomistic causal determinism), but rather that the radical eugenics program of the global elite has morphed into a technocratic transhumanism. Racial and familial eugenics is really a thing of the past – an older form of eugenics that gave way to bioethics and bioengineering. Combined with technocratic futurism, we now have a new paradigm, spun off from the Darwinian and Malthusian models – transhumanism or post-humanism.

T2 film poster.

In Terminator 2: Judgment Day, this becomes more evident, as an advanced silver silly putty nanotech T-1000 bot is now on the trail of Connor with a new twist introduced – the future humans have also sent a hacked T-800 Arnold bot to protect John. Debates ensue concerning the ability of free will to change the track of history, blah, blah, but what’s more relevant in the sequel is the statement Arnold makes concerning how Skynet came to be. Arnold reveals that the U.S. military decided to go to a fully A.I. robotic and drone force. Aware readers will recognize that this is now quickly becoming our reality, as numerous publications have reported the Air Force plan to move entirely over to unmanned A.I. drones. The Declassified Air Force A.I. plan for 2009-2047 reads as follows:

“Figure 10 – Long Term – Full Autonomy

The final portfolio step leverages a fully autonomous capability, swarming, and Hypersonic technology to put the enemy off-balance by being able to almost instantaneously create effects throughout the battle space. Technologies to perform auto air refueling, automated maintenance, automatic target engagement, hypersonic flight, and swarming would drive changes across the DOTMLPF-P spectrum. The end result would be a revolution in the roles of humans in air warfare.

4.6.4.1 Long Term (FY25-47) Technology Enablers

Assuming legal and policy decisions allow, technological advances in artificial intelligence will enable UAS to make and execute complex decisions required in this phase of autonomy. Today target recognition technology usually relies on matching specific sensor information with predictive templates of the intended target. As the number of types of targets and environmental factors increase the complexity of and time to complete targeting increases. Further, many targeting algorithms are focused on military equipment. Our enemies today and those we face in the future will find ways to counter our systems. Autonomous targeting systems must be capable of learning and exercising a spectrum of missions useful to the Joint Warfighter. However, humans will retain the ability to change the level of autonomy as appropriate for the type or phase of mission.”

The goal is thus to attain full autonomy, with attack systems functioning to spot targets and threats ahead of time using predictive templates, something like a military version of pre-crime. Just like Minority Report, the decision of Skynet as to who will constitute a future threat and must therefore be eliminated (without any trial or due process) is to be determined by predictive algorithms! The justification of the preemptive iStrike doctrine is, of course, a “wartime scenario,” but all this legalese shuffling around means is that all humans are potential threats in a global “wartime scenario.” While the majority of mankind still thinks the battlefield of life is competition between nation states and rival corporations, the globalists have already planned decades ahead in their white papers for scenarios of universal, perpetual war theater engaged against the “insurgent” population of humanity en masse.

In Terminator 3: Rise of the Machines, a sexy silver putty nano bot from the future again traverses back in time to hunt John Connor, while Arnold, as the older Commodore 64 model, returns to be his guardian angel. Both Terminator 2 and 3 include the theme of the A.I. obtaining “self-awareness,” with cheese-ball scenes of Arnold and Connor bonding over Arnold beginning to have “feelings.” The entire ethos of A.I. obtaining “consciousness” is itself a nonsensical myth born of Darwinism meets tech, operating on the reductionist, mechanistic, materialist assumption that “consciousness” is nothing more than a more complex evolution of chemical reactions. The final goal with this idea is, as I’ve written, a mirrored, virtual mimicry of our present reality, with a melded bio-organic one.

T3. Skynet should’ve just sent the hot bot the first time, in the 80s.

Let’s return to our virtual theater for the moment. The most striking aspect of T3 is the activation of Skynet by the Air Force as a software program installed in almost all computers and electronic devices, which then activates a wargame scenario of a cross nuclear strike between Russia and the West. From what I understand, this is absurd, as power plants and nuclear arsenals are not wired into the Internet, which is why attacks on Iranian nuclear facilities, for example, reportedly required direct installation of viruses through a USB drive. Regardless, in T3 Skynet is an omnipotent botnet program that goes global. If the rumors of PROMIS software are correct, the rise of “Smart” everything could make this a possibility. Recall Petraeus, as head of the CIA, stating that your dishwasher would spy on you. Indeed, the public plan from IBM is to erect fully integrated SmartCities in areas like Singapore and Rio. I have posted it many times, but the CEO explicitly states that pre-crime and total information awareness will be the reality in these plastic nightmares. Imagine living inside a giant city sized Best Buy.

In Terminator 4: Salvation, the setting is the future, where the resistance takes on Skynet’s home base directly in San Francisco through infiltration aided by a “resurrected” Cyberdyne cyborg criminal, Marcus Wright (Sam Worthington). Marcus turns out to be a long-term plot of Skynet to kill John Connor by infiltrating the resistance with a hybrid, programmed super soldier assassin. The Christ-like imagery comes to the fore, with Wright siding with the humans to help destroy Skynet. In part 4, Skynet is the immediate antagonist, and we are given a full picture of its gruesome slave-factory and total control system erected around experimenting on and ultimately wiping out humanity. The salient element here is that Skynet is structured like a SmartCity! The full integration of all tech, as well as the Internet of Things, results in a slave city where humans are brutally enslaved by the technology they built. Vice recently reported on the military’s real Skynet program, revealing the holistic integration I’m speaking of:

The future SmartCity – Skynet! Image: Deviantart.

“Their study argues that DoD can leverage “large-scale data collection” for medicine and society, through “monitoring of individuals and populations using sensors, wearable devices, and IoT [the ‘Internet of Things’]” which together “will provide detection and predictive analytics.” The Pentagon can build capacity for this “in partnership with large private sector providers, where the most innovative solutions are currently developing.”


In particular, the Pentagon must improve its capacity to analyze data sets quickly, by investing in “automated analysis techniques, text analytics, and user interface techniques to reduce the cycle time and manpower requirements required for analysis of large data sets.”

Cloud robotics, a term coined by Google’s new robotics chief, James Kuffner, allows individual robots to augment their capabilities by connecting through the internet to share online resources and collaborate with other machines.

By 2030, nearly every aspect of global society could become, in their words, “instrumented, networked, and potentially available for control via the Internet, in a hierarchy of cyber-physical systems.”

These are not science fiction speculations – notice the trend in military, intelligence, economics and banking, etc., all the way down to daily life with your personal BFF, your iPhone. Whispers are that, in the next few years, your iPhone will be a personal assistant, able to converse with you, much like the “OS” in Spike Jonze’s excellent film, Her. However, Her might be considered in this same light – was it merely a conditioning tool to warm us to the idea of falling in love with our own technological femme fatale? You’ll notice that Scarlett has twice now been the face of the new Pistis Sophia techgnosis goddess, when we consider Lucy. On top of that, we will soon see new A.I. films such as Ex Machina, featuring more robo hotties. As one commenter opined on my site, we can see more of the eugenics barrenness plan at work in the coming robo sex bots. We can think again here of A.I., where Jude Law portrays Gigolo Joe, the Fred Astaire-esque male escort bot. The entire plan appears to be deceiving man into thinking the virtual and synthetic can fulfill his desires, and as he gradually accepts this faux overlay, the kill grid will commence to massively depopulate.

“With artificial intelligence, we are summoning the demon,” Musk said last week at the MIT Aeronautics and Astronautics Department’s 2014 Centennial Symposium. “You know all those stories where there’s the guy with the pentagram and the holy water and he’s like… yeah, he’s sure he can control the demon, [but] it doesn’t work out.”

About the Author

Jay Dyer is the author of the excellent site Jay’s Analysis, where this article was originally featured.