The concept of the digital pet goes back a ways. Veterans of the 1990s dot-com boom might remember the bizarre toy fad known as Tamagotchi, in which kids were encouraged to care for surprisingly needy little digital devices made of cheap plastic. It was just like having a real pet, minus all the fun.

Electrical engineer Janelle Shane has found a 21st-century variation on the theme by spending her spare time training baby neural networks. Sometimes referred to as machine learning systems, neural nets are computer programs loosely modeled on the human brain and nervous system. They can learn from examples and make lateral leaps in a way that traditional, explicitly programmed software can't.

In that sense, neural nets are a kind of artificial intelligence, although that phrase is famously slippery these days. (If you're in no rush and plan to live forever, ask two computer scientists to debate the term.) The bottom line is that neural networks, given enough input data, can teach themselves to think up all kinds of things: heavy metal band profiles, for instance.

Stormgarden, black metal from Germany; Inhuman Sand, melodic death metal from Russia; and Black Clonic Sky, black metal from Greece were among the profiles generated by Shane's neural network after the system was fed a data set of 100,000 band names, subgenres, and countries of origin.

Shane's neural net produced the new band names by applying pattern recognition algorithms to the raw chunk of input data. In a very real sense, the AI system simply thought up the new names, all by itself.

“With a neural net, you set up some very basic rules, then you give it the data and let it figure out the patterns in that data and how to process them,” Shane said. “It's really just a framework where the computer figures out its own rules. The computer program is writing itself, in a sense.”

The term artificial intelligence might conjure images of sinister supercomputers or space-age crystalline mechanisms, but Shane's neural net program is an unassuming little chunk of open-source computer code called char-rnn. It's just one of several neural net frameworks available online, for free, developed by a growing community of neural network enthusiasts.
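char-rnn learns text one character at a time, then generates new text in the same style. As a rough illustration of that "learn the patterns, then generate" idea, here is a much simpler character-level model, an n-gram sampler rather than the recurrent network char-rnn actually uses. The tiny corpus below reuses the three band names mentioned above as a placeholder, not Shane's 100,000-entry data set.

```python
import random
from collections import defaultdict

def train(names, order=3):
    """Count which character tends to follow each n-character context."""
    counts = defaultdict(list)
    for name in names:
        padded = "^" * order + name + "$"  # markers for start and end
        for i in range(len(padded) - order):
            context = padded[i:i + order]
            counts[context].append(padded[i + order])
    return counts

def sample(counts, order=3, max_len=30):
    """Generate a new name one character at a time from learned contexts."""
    context, out = "^" * order, []
    while len(out) < max_len:
        nxt = random.choice(counts[context])
        if nxt == "$":  # end-of-name marker: stop generating
            break
        out.append(nxt)
        context = context[1:] + nxt  # slide the context window forward
    return "".join(out)

corpus = ["Stormgarden", "Inhuman Sand", "Black Clonic Sky"]
model = train(corpus)
print(sample(model))
```

With a corpus this small the sampler mostly memorizes; the interesting behavior, like the training data itself, only emerges at scale, which is where the neural network earns its keep.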

As for the hardware required, well, that's surprising, too.

“I'm doing the work now on a 2010 MacBook Pro,” Shane said. “It's slow, but it works.”

In fact, pretty much anyone can adopt a neural net of their own and set it up on their laptop or desktop system.

“To get the fastest performance, it's best if you have a graphics card that can help speed up your calculations,” she said. “But you don't need to own a powerful computer to get these calculations going. The limiting factor for me is really just the time I have to go through the data sets. I just get a computation going, then I go do something else and check back in on the progress.”

For a little over a year now, Shane has been posting the results of her experiments on her Tumblr blog, which she started while earning her Ph.D. in electrical engineering from the University of California, San Diego. Her neural net project is a hobby, essentially, and Shane has no financial or professional stake in the work. By day, she's employed as a research engineer at a small company outside Boulder, Colorado.

But in recent weeks, her experiments, along with her articulate enthusiasm for the subject, have won her a growing number of fans and readers. Several of her posts have gone viral, getting picked up by media outlets like Nerdist and The Atlantic and forwarded around by researchers in the AI business.

Click around Shane's blog and you'll find many more lists generated by her pet AI. In one recent experiment, Shane fed her neural net more than 100,000 plot summaries from Wikipedia entries on various books, films, video games, and TV shows. After a few days of ruminating, the neural net suggested a long list of potential story titles, which Shane then sorted by genre: Cannibal Spy 2 (action/adventure), Swords and Batman: Summer Party (sci-fi/fantasy), Zombies of Florence (horror).

Dungeons and Dragons fans will appreciate the AI system's suggestion for new D&D spells, like "Barking Sphere" and "Wrathful Hound," or the unnerving sorcery known as "Gland Growth."

In her most recent experiment, Shane persuaded the neural net to generate new paint colors, using a database of 7,700 Sherwin-Williams products. The AI not only output new colors, generated by mixing various RGB values, but also suggested appropriate names, many of which are oddly evocative: Felthy Blue, Burple Simp, and Stanky Bean.
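Each generated color is just a red-green-blue triple paired with a name. A minimal sketch of that pairing, where the RGB values are illustrative guesses on my part, not the net's actual output:

```python
def rgb_to_hex(r, g, b):
    """Format an RGB triple as the hex code a paint swatch might carry."""
    return "#{:02X}{:02X}{:02X}".format(r, g, b)

# "Stanky Bean" is one of the names Shane's net produced; these RGB
# values are hypothetical, chosen only to illustrate the format.
print("Stanky Bean", rgb_to_hex(197, 162, 171))
```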

Shane recalled a particular moment of pride when she got her neural network to figure out the art of the knock-knock joke.

“I really didn't have high hopes,” Shane said. “Someone online had very kindly provided a list of a few hundred knock-knock jokes that I fed into the algorithm. We kept in touch by email and I would tell her about the progress it was making: ‘Oh, look, it learned how to spell knock!’”

“I remember the moment when it actually spit out a coherent joke,” she continued. “It had made a huge leap from barely getting the formula of a knock-knock joke to writing a new joke.”

The joke seemed to suggest that the neural net might be enjoying its work.

So does Shane feel a sense of pride when her digital pal makes a cognitive breakthrough?

“Oh, yeah,” she said. “I sent a lot of emails that day, with a lot of exclamation marks.”

The really nice thing about the hobby, Shane said, is that pretty much anyone can adopt a pet AI. No experience is necessary to train your own neural net, and the open-source code packages out there are increasingly user-friendly.

“They're pretty accessible,” Shane said. “It helps to have some experience with code, just to kind of know what it's doing and make small tweaks. But the tutorials they have out there now, it's really accessible for anyone who wants to learn how.”