You Should Start Learning About Artificial Intelligence. Here's How.

There are a lot of different levels of artificial intelligence being applied in a lot of different ways. Here's a primer for starting to wrap your head around it all.

Mike Riggs
December 9, 2016

Vicarious, the AI startup we profiled in our recent video, won't affect your life tomorrow. Vicarious' goal of building human-level AI — basically, software that can think as creatively as we humans can — is a long-term project. We don't know how long, and neither does Vicarious.

But that’s OK, because basic AI is already here and it’s pretty exciting, even if it can’t pass for human. Siri is a form of artificial intelligence. So is the feature on some cars that helps you parallel park. We’re going to see — though maybe not notice — this kind of technology in more and more of our hardware and software. As Kevin Kelly wrote in Wired, “[T]he business plans of the next 10,000 startups are easy to forecast: Take X and add AI.” It’s been two years since Kelly made that guess, and Silicon Valley has yet to prove him wrong.

Apple's Siri is a form of AI many of us interact with every day

So what is artificial intelligence? Broadly speaking, it's software that can solve problems with little or no human assistance. Ideally, we want AI to solve those problems as well as we do, or better. And that means we want AI to find the optimal solution more quickly than a human could. Something like what IBM's Watson did earlier this year:

University of Tokyo doctors report that the artificial intelligence diagnosed a 60-year-old woman's rare form of leukemia that had been incorrectly identified months earlier. The analytical machine took just 10 minutes to compare the patient's genetic changes with a database of 20 million cancer research papers, delivering an accurate diagnosis and leading to proper treatment that had proven elusive. Watson has also identified another rare form of leukemia in another patient, the university says.

Watson made the correct diagnosis really quickly because it can do two things we can’t: Consume a vast amount of information in a short amount of time and then make relevant connections within that massive pool of data in an equally short amount of time. Can humans consume lots of knowledge? Of course! But we’re constrained by our slow rate of consumption, our limited ability to retain information, and the high transaction costs of sharing what we know. I can only read so fast, I can only remember so much, and I can’t talk and listen at the same time. By comparison, Watson’s ability to do those things is essentially unlimited.
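To get a feel for what "making relevant connections within a massive pool of data" means, here is a toy sketch, nothing like Watson's actual method: an index of hypothetical papers keyed by the gene mutations they discuss, ranked against a patient's mutations. All paper titles and gene names are made up for illustration.

```python
# Toy sketch (NOT Watson's actual method): cross-reference a patient's
# genetic changes against an indexed body of research papers.
# Every title and gene name here is a made-up placeholder.

from collections import Counter

# A tiny "database" mapping hypothetical papers to the mutations they discuss.
papers = {
    "Paper A on leukemia subtype X": {"GENE1", "GENE2"},
    "Paper B on a common leukemia":  {"GENE3"},
    "Paper C on subtype X therapy":  {"GENE1", "GENE4"},
}

def rank_papers(patient_mutations):
    """Score each paper by how many of the patient's mutations it mentions."""
    scores = Counter()
    for title, genes in papers.items():
        overlap = genes & patient_mutations
        if overlap:
            scores[title] = len(overlap)
    return scores.most_common()   # best-matching papers first

patient = {"GENE1", "GENE4"}
print(rank_papers(patient))
# Paper C mentions both of the patient's mutations; Paper A mentions one.
```

A human oncologist could do this lookup too, just not across 20 million papers in 10 minutes. The speedup comes from the scale of the index, not from any cleverness in the matching.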

An early prototype of IBM's Watson / Image via Wikimedia Commons

Existing AIs are amazing, but they all still have to be trained to solve specific problems. Consider AlphaGo, the Google DeepMind AI developed to play the ancient Chinese game of Go, which is like a complicated version of chess. Go is so complicated, in fact, that you can't enumerate every possible move and have a computer memorize them all. Instead, you teach the computer how the game is played, and then have it play games over and over. That's what Google did:

The key to AlphaGo is reducing the enormous search space to something more manageable. To do this, it combines a state-of-the-art tree search with two deep neural networks, each of which contains many layers with millions of neuron-like connections. One neural network, the “policy network”, predicts the next move, and is used to narrow the search to consider only the moves most likely to lead to a win. The other neural network, the “value network”, is then used to reduce the depth of the search tree -- estimating the winner in each position in place of searching all the way to the end of the game.

AlphaGo had to understand the rules of Go, then play enough games to learn how the rest of a game was likely to unfold; after every move, it recalculated what its opponent was likely to do next. Google accomplished this by giving its AI 30 million moves from actual Go games and then having it play thousands of games against itself. This required "a huge amount of compute power," according to Google, and it ultimately led to AlphaGo defeating South Korean Go grandmaster Lee Sedol earlier this year.
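The search described above can be sketched at toy scale. This is only an illustration of the idea, not AlphaGo's implementation: AlphaGo uses deep neural networks for its policy and value functions, while here they are hand-written heuristics for a simple take-away game (players alternately remove 1-3 stones; whoever takes the last stone wins).

```python
# Minimal sketch of AlphaGo's core idea: a game-tree search where a
# "policy" narrows which moves to examine and a "value" estimates who
# wins instead of searching to the end. The real system uses neural
# networks for both; these are stand-in heuristics on a toy game.

def legal_moves(stones):
    return [m for m in (1, 2, 3) if m <= stones]

def apply_move(stones, m):
    return stones - m

def value(stones):
    # Crude estimate: if no stones remain, the player to move has lost.
    return -1 if stones == 0 else 0

def policy(stones, m):
    # Heuristic preference: leave the opponent a multiple of 4.
    return 1 if (stones - m) % 4 == 0 else 0

def search(stones, depth, top_k=2):
    """Negamax search, narrowed by the policy and truncated by the value."""
    moves = legal_moves(stones)
    if depth == 0 or not moves:
        return value(stones)          # "value network": estimate, don't search on
    # "Policy network": consider only the top_k most promising moves.
    moves = sorted(moves, key=lambda m: policy(stones, m), reverse=True)[:top_k]
    # The opponent's best outcome is our worst, hence the negation.
    return max(-search(apply_move(stones, m), depth - 1, top_k) for m in moves)

print(search(4, depth=8))   # -1: from 4 stones the player to move loses
print(search(5, depth=8))   #  1: from 5 stones, taking 1 stone wins
```

Even in this toy, the two shortcuts matter: pruning to `top_k` moves shrinks the branching factor, and stopping at a depth limit with an estimate avoids playing every game out to the end, which is exactly what makes Go's enormous search space manageable.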

That’s an amazing accomplishment, but it’s not a cure for cancer or HIV; it can’t tell us how to solve the world’s biggest problems, and it’s definitely not Skynet. Basically, AlphaGo and Watson tell us AI still has a long way to go.

In the meantime, you should be learning about AI: what it can and can't do, and how it works. Here are some easily accessible resources for doing just that. You should start with the basics of machine learning, which underpins today's most advanced AI:

About Freethink

At Freethink you'll find inspiring stories from the frontiers of our rapidly changing world. Each week we release powerful, short-form videos profiling an innovator, entrepreneur, or activist who is thinking differently and making a difference.