As we move ever closer to the inevitable heat-death of the universe, our modern society is plagued by a question of its own making: can machines think like we do? Popular culture has become fixated on how the computers in our everyday lives think and make decisions, while the technological breakthroughs of the past decade have sought to give computer systems the ability to think for us. To do this, machines need a form of Artificial Intelligence to make informed, quick decisions on our behalf.

What is AI?

AI is not a new concept within Computer Science. The first steps into research on computational intelligence were taken by Alan Turing in 1950, nearly seventy years ago, in his famous paper “Computing Machinery and Intelligence”, in which he posed the question that opens this article. Turing was the first person to put forward a serious proposal in the philosophy of Artificial Intelligence: that a “thinking machine” was at least plausible.

The goal of AI is to build intelligent systems[2] in an effort to (1) better understand our own and others’ intelligence; and (2) make computers and machines more useful to us. Defining what an “intelligent system” is, though, is difficult. Dr. Bridge defines them as follows:

“Intelligent systems provide solutions to problems that are difficult to solve. The difficulty stems from the presence in the problem of disorder, uncertainty, lack of precision or inherent intractability.”

What I think the above quote means is that the system neither acts nor thinks like a human but, rather, acts in an ideal way to solve a complicated problem based on what it has been told about its environment and what it can use to solve the problem. Artificial Intelligence doesn’t try to be Human Intelligence. It doesn’t need to.

What is it good for?

If you’re in any way interested in modern technology and services, or if you exist in the buzzword universe of entrepreneurs, then you have heard about how “Machine Learning” is the best thing since sliced bread.

Machine Learning is the engine behind most modern AI: a way for computers to use data to make an informed decision. Just as you might look at the weather over the last couple of days to help decide whether it will rain tomorrow, a computer can do the same [3].
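To make that idea concrete, here is a toy sketch of the rain example using a simple nearest-neighbour rule. The data and the `predict_rain` function are invented for illustration; this is not how any real forecasting system works, just the "decide from similar past days" intuition in a few lines.

```python
def predict_rain(history, today, k=3):
    """history: list of (humidity, pressure, rained_next_day) tuples.
    today: (humidity, pressure). Returns True if rain is predicted."""
    # Rank past days by similarity to today's readings (squared distance).
    ranked = sorted(
        history,
        key=lambda d: (d[0] - today[0]) ** 2 + (d[1] - today[1]) ** 2,
    )
    # Majority vote among the k most similar past days.
    votes = [rained for _, _, rained in ranked[:k]]
    return sum(votes) > k / 2

# Made-up observations: (humidity %, pressure hPa, did it rain next day?)
history = [
    (90, 1002, True),   # humid, low pressure -> rained
    (85, 1005, True),
    (40, 1020, False),  # dry, high pressure -> stayed dry
    (35, 1022, False),
    (60, 1012, False),
]

print(predict_rain(history, (88, 1003)))  # prints True
```

Today’s conditions resemble the two rainy days in the history, so the majority vote among the three nearest neighbours comes out in favour of rain. The “learning” here is nothing mystical: it is just measured data plus a decision rule.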

Machine Learning algorithms are incredibly useful for emulating, replacing and automating human behaviour. A great example of this is the Nest Thermostat, which provides a Wi-Fi-enabled “smart” heating system for the home. The Nest Thermostat took a week to generate its initial schedule, and from then on determined the best temperature at each point of the day based on how, and when, the household had changed the temperature the previous week. Over time, the thermostat would learn what was best for the home and only turn on the heating when it was actually needed.
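A schedule learner in this spirit can be sketched in a few lines. To be clear, this is an illustration of the general idea, not Nest’s actual algorithm: it simply averages the household’s manual adjustments for each hour of the day into next week’s schedule.

```python
from collections import defaultdict

def learn_schedule(adjustments):
    """adjustments: (hour, set_temp) pairs recorded over the past week.
    Returns {hour: average set-point}, used as next week's schedule."""
    by_hour = defaultdict(list)
    for hour, temp in adjustments:
        by_hour[hour].append(temp)
    # Average the observed set-points for each hour of the day.
    return {h: sum(ts) / len(ts) for h, ts in by_hour.items()}

# A week of manual tweaks: warm mornings and evenings, cool at night.
week = [(7, 21), (7, 22), (18, 21), (18, 20), (23, 16), (23, 16)]
schedule = learn_schedule(week)
```

Note that a learner like this repeats whatever it observed, including a one-off adjustment made for an unusual reason; it records *what* was changed, not *why*.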

In theory, this would save the household money on its energy bill and keep the people inside the house comfortable indefinitely, without any effort on their part. Unfortunately, a study by Rayoung Yang and Mark W. Newman, titled “Learning from a Learning Thermostat: Lessons for Intelligent Systems for the Home” [4], found that while the system could easily learn the temperature changes, it couldn’t understand the intent behind them. For example, when one participant’s pregnant daughter was visiting, they turned down the temperature to make her comfortable; the system didn’t know why they wanted it colder, and continued to make it colder at that time every week until it was manually corrected. AI is a powerful tool, but it’s only as intelligent as its creators make it.

Will it kill us all?

Alright, I’m going to level with you: I’m a computer scientist, not a philosopher. There’s a lot of debate in this area over whether AI will doom us, or whether it’s simply a question of ‘when?’, and it’s not one we can answer with certainty right now. Stephen Hawking recently gave a short speech at the newly opened Leverhulme Centre for the Future of Intelligence in Cambridge, where he stated:

“In short, the rise of powerful AI will be either the best, or the worst thing, ever to happen to humanity. We do not yet know which.” [5]

However, critical philosophers such as John Lucas have argued, using Gödel’s incompleteness theorem, that a formal system (such as a computer program) could never see the truth of certain statements that a human being could [6]. There may be some truth to what Lucas argued back in 1961, but given recent advances in processing speech, tone and, to a lesser extent, the intent behind certain human actions, it’s easy to see why some of the best minds of our generation are fearful of the future to come.

Through popular fiction such as I, Robot, Ex Machina, Terminator and Transcendence, we’re familiar with the fears of a robot uprising or an all-knowing artificial intelligence. Although unsettling (I’m looking at you, Her), these works will remain fiction for a while longer. Right now, there’s no reason to fear it, so I, for one, welcome our new AI overlords.