Humans made a huge cognitive leap when they first sketched figures onto rocks—now, computers are learning to do the same.

Imagine someone told you to draw a pig and a truck. Maybe you’d sketch this:

[Sketch: a pig and a truck, drawn separately]

Easy enough. But then, imagine you were asked to draw a pig truck. You, a human, would intuitively understand how to mix the salient features of the two objects, and maybe you’d come up with something like this:

[Sketch: a pig truck]

Note the little squiggly pig tail, the slight rounding of the window in the cab, which recalls an eye. The wheels have turned hoof-like, or alternatively, the pig legs have turned wheel-like. If you’d drawn it, I, a fellow human, would subjectively rate this a creative interpretation of the prompt “pig truck.”

Until recently, only human beings could have pulled off this sort of conceptual twist, but no more. This pig truck is actually the output of a fascinating artificial intelligence system called SketchRNN, part of a new effort at Google to see whether AI can make art. The effort is called Project Magenta, and it’s led by Doug Eck.
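Part of what makes SketchRNN feel so human is that it doesn’t work with pixels at all: it models a drawing as a sequence of pen movements, each step an offset from the last pen position plus a flag for lifting the pen (the stroke format used by its training data). Here is a minimal illustrative sketch of replaying that format into strokes; the helper name `strokes_to_lines` and the toy drawing are my own, not from SketchRNN’s code:

```python
def strokes_to_lines(strokes):
    """Convert (dx, dy, pen_lifted) steps into lists of absolute (x, y) points."""
    x, y = 0.0, 0.0
    lines, current = [], []
    for dx, dy, pen_lifted in strokes:
        x += dx
        y += dy
        current.append((x, y))
        if pen_lifted:          # pen comes off the paper: the stroke ends
            lines.append(current)
            current = []
    if current:                 # flush a trailing, unfinished stroke
        lines.append(current)
    return lines

# A toy two-stroke "drawing": move right, lift pen; move down, lift pen.
toy = [(1, 0, 0), (1, 0, 1),
       (0, 1, 0), (0, 1, 1)]
print(strokes_to_lines(toy))
# → [[(1.0, 0.0), (2.0, 0.0)], [(2.0, 1.0), (2.0, 2.0)]]
```

Generating a drawing one pen stroke at a time, rather than one pixel at a time, is what lets the model blend a pig’s squiggly tail into a truck’s outline as a human sketcher would.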

Last week, I visited Eck at the Google Brain team’s offices in Mountain View, where Magenta is housed. Eck is clever, casual, and self-effacing. He received his Ph.D. in computer science from Indiana University in 2000, and has spent the intervening years working on music and machine learning, first as a professor at the University of Montreal (a hotbed for artificial intelligence) and then at Google, where he worked at Google Music before heading to Google Brain to work on Magenta.

Eck’s drive to create AI tools for making art began as a rant, “but after a few cycles of thinking,” he said, “it became, ‘Of course we need to do this, this is really important.’”