And it reveals how well-designed the original monsters were.

Neural networks and Pokémon Go. It was only a matter of time before some computer scientist realized this was a chocolate-and-peanut-butter combination and trained a convolutional neural network, in the style of Google's Deep Dream, on Pokémon Go's dataset of 151 cute, collectible monsters. Now we have it, courtesy of a Japanese researcher known as Bohemia, highlighted today by Prosthetic Knowledge.
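The Deep Dream trick is simpler than the results suggest: instead of training the network, you run gradient ascent on the *image* so it excites the network's filters more strongly. Here is a minimal toy sketch of that idea in plain NumPy; the 16×16 "sprite" and the single random 3×3 filter are stand-ins of my own invention, not anything from the researcher's actual code.

```python
import numpy as np

# Toy Deep Dream-style gradient ascent (a hypothetical sketch): nudge an
# input image so that it more strongly activates one filter of a tiny
# convolution layer. Real Deep Dream does the same thing against a deep,
# pretrained network.

rng = np.random.default_rng(0)
image = rng.random((16, 16))          # stand-in for a Pokémon sprite
kernel = rng.standard_normal((3, 3))  # one hypothetical "learned" filter

def activation(img):
    """Sum of the filter's responses over all 3x3 patches."""
    h, w = img.shape
    total = 0.0
    for y in range(h - 2):
        for x in range(w - 2):
            total += np.sum(img[y:y+3, x:x+3] * kernel)
    return total

def gradient(img):
    """d(activation)/d(img): each pixel's gradient is the sum of the
    kernel weights of every patch that covers it."""
    grad = np.zeros_like(img)
    h, w = img.shape
    for y in range(h - 2):
        for x in range(w - 2):
            grad[y:y+3, x:x+3] += kernel
    return grad

before = activation(image)
for _ in range(20):                   # gradient ascent on the input image
    image += 0.01 * gradient(image)
after = activation(image)
```

After the loop, `after` exceeds `before`: the image has drifted toward whatever pattern the filter "wants" to see, which is exactly the hallucinatory smearing visible in the Pokémon blobs.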


What you get from the results are abstract impressions of Pokémon. The sense of a Snorlax. The premonition of a Pikachu. The hunch of a Horsea. The notion of a Nidorino. The aura of an Alakazam. These blurry Pokémon blobs end up looking like pocket monsters that Professor Oak has viewed through a gene-slimed porthole after putting them through the telepods together. They're less new Pokémon than chromatic smears of the mashed-up attributes of existing Pokémon.

To me, what's fascinating about the exercise is that it shows just how well designed the original 151 Pokémon were. Even when a neural network is hallucinating them, the core traits of various Pokémon usually come through. That's by necessity. The original 151 Pokémon, the set Pokémon Go uses, were designed in the mid-'90s to appear distinct on the original Nintendo Game Boy's 160×144 screen. So to be successful, a Pokémon needed its own distinct silhouette, and ideally one distinct highlighting feature (like the big spiral on Poliwhirl's stomach), in addition to colors that would never be seen on a Game Boy's monochrome screen but would show up in cartoons and merchandise.

It’s a credit to how successful these designs were that even when a neural network is mashing them up, you can look at the results and say, “That’s, like, 70% a Pikachu, and 25% a Bulbasaur, and 5% a Magikarp.” In fact, that sounds like a pretty fun game variant of Pokémon in its own right.