In this week’s episode of the AI Podcast, Goodfellow explains that a major obstacle in deep learning is the need for “a massive amount of labeled training data,” which demands extensive human effort.

“If you take a deep neural network and you teach it to read, to actually look at photos and recognize letters that it can see in the photo, it can do that about as well as a human being can,” says Goodfellow in conversation with AI Podcast host Michael Copeland. “But the process of learning to do it doesn’t look anything like the process that a human follows to learn to read.”

GANs allow deep neural networks to learn from data faster and with less human effort. The “adversarial” part comes from two networks competing against each other: the generator network creates an image, while the discriminator network judges whether that image is authentic.

“You can think of it as kind of like an art critic. The discriminator network looks at the image and says whether it’s real or fake,” says Goodfellow. “And it’s also able to tell the generator what it should do to make the image look slightly more realistic.”
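The generator-versus-critic dynamic Goodfellow describes can be sketched as the two standard GAN loss terms. The snippet below is a minimal, illustrative NumPy sketch, not his implementation: the one-parameter `generator` and `discriminator`, the toy 1-D “samples,” and all variable names are assumptions made for brevity.

```python
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def generator(z, w):
    # Hypothetical generator: maps noise z to a fake 1-D sample
    # via a single weight, standing in for a deep network.
    return z * w

def discriminator(x, v):
    # Hypothetical discriminator: outputs the probability that
    # a sample x is "real" rather than generated.
    return sigmoid(x * v)

# Toy parameters and data (illustrative only).
w, v = 0.5, 1.0
real = rng.normal(loc=3.0, scale=0.1, size=8)  # batch of "real" samples
z = rng.normal(size=8)                         # noise fed to the generator
fake = generator(z, w)

# Discriminator loss: low when it says "real" on real samples
# and "fake" on generated ones.
d_loss = -np.mean(np.log(discriminator(real, v)) +
                  np.log(1.0 - discriminator(fake, v)))

# Generator loss: low when its fakes fool the discriminator --
# the "make the image look slightly more realistic" signal.
g_loss = -np.mean(np.log(discriminator(fake, v)))

print(f"discriminator loss: {d_loss:.3f}, generator loss: {g_loss:.3f}")
```

In a real GAN, each network would be a deep neural network and the two losses would be minimized alternately by gradient descent, with the discriminator’s gradient telling the generator how to improve, as Goodfellow notes.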

Reflecting on how a brief debate at a Montreal bar, Les Trois Brasseurs, led him to GANs, Goodfellow is adamant that to be a successful researcher, you shouldn’t just dive deep into topics; you should also allow yourself time to be creative.

“I try to make sure that I don’t sign up for too many things, because I need to have some space to be a little bit spontaneous,” he says.