The results are surreal. Barrat posted many of the final pieces of artwork -- which can only be described as blobby, swirly naked women -- on Twitter. It's as if a very intoxicated Salvador Dali and a dizzy Picasso joined forces to make art. Barrat's AI-assisted artwork isn't exactly sensual; in fact, most of the nudes look like they are melting on a very hot day.

"The way that it paints faces makes me uncomfortable. It always paints them as like, purple and yellow globs -- that isn't in the training set, so I'm actually still not sure why it does that."

In her delightful blog AI Weirdness, Janelle Shane fed 18,458 unique bills introduced in Massachusetts into a neural network, which then generated some rather hilarious bills of its own.

The AI paint name generator (previously) has refined its preferences. Though still very bad at naming paint colors, there seems to be (to my mind) an emerging personality, one that has beliefs and, perhaps, opinions about its creators.

Pictured at the top of this post, for reference, is the human-named classic Opaque Couché.

Jacques Mattheij hoped to make some cash buying cheap boxes of used, unsorted Lego that he'd organize into more valuable assortments for resale. After acquiring two metric tons of bricks, he was motivated to build a technological solution for sorting. He outfitted a conveyor belt with a cheap magnifying USB camera and employed air nozzles to blow the bricks into various bins. The bigger challenge, though, was getting the PC to identify the bricks. From IEEE Spectrum:

After a few other failed approaches, and six months in, I decided to try out a neural network. I settled on using TensorFlow, an immense library produced by the Google Brain Team. TensorFlow can run on a CPU, but for a huge speed increase I tapped the parallel computing power of the graphics processing unit in my US $700 GTX 1080 Ti Nvidia video card....

...I managed to label a starter set of about 500 assorted scanned pieces. Using those parts to train the net, the next day the machine sorted 2,000 more parts. About half of those were wrongly labeled, which I corrected. The resulting 2,500 parts were the basis for the next round of training. Another 4,000 parts went through the machine, 90 percent of which were labeled correctly! So, I had to correct only some 400 parts. By the end of two weeks I had a training data set of 20,000 correctly labeled images...

Once the software is able to reliably classify across the entire range of parts in my garage, I’ll be pushing through the remainder of those two tons of bricks.
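The label-correct-retrain loop Mattheij describes can be sketched in a few lines. The nearest-centroid "model," the two-number "features," and the brick names below are illustrative stand-ins, not his actual TensorFlow pipeline:

```python
import numpy as np

def train(images, labels):
    """Toy 'model': the mean feature vector for each class."""
    return {c: np.mean([im for im, l in zip(images, labels) if l == c], axis=0)
            for c in set(labels)}

def predict(model, image):
    """Classify by nearest class centroid."""
    return min(model, key=lambda c: np.linalg.norm(image - model[c]))

def bootstrap(images, labels, batches, correct):
    """Train, machine-label the next batch, have a human correct the
    labels, fold everything back in, and retrain -- the loop above."""
    images, labels = list(images), list(labels)
    for batch_images, batch_truth in batches:
        model = train(images, labels)
        guesses = [predict(model, im) for im in batch_images]
        images += list(batch_images)
        labels += correct(guesses, batch_truth)  # human fixes mistakes
    return train(images, labels)

# Demo with two fake brick classes whose "features" cluster apart.
rng = np.random.default_rng(0)
make = lambda center, n: rng.normal(center, 1.0, size=(n, 2))
seed = np.vstack([make([0, 0], 5), make([5, 5], 5)])
seed_labels = ["2x4 brick"] * 5 + ["1x2 plate"] * 5
batch = (np.vstack([make([0, 0], 20), make([5, 5], 20)]),
         ["2x4 brick"] * 20 + ["1x2 plate"] * 20)
model = bootstrap(seed, seed_labels, [batch],
                  correct=lambda guesses, truth: list(truth))
print(predict(model, np.array([0.2, -0.1])))
```

The point of the loop is that each round of machine labeling leaves fewer mistakes for the human to fix, which is why Mattheij went from hand-labeling 500 parts to correcting only about 400 out of 4,000.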

It’s a really small dataset, actually - so small that in almost no time at all, it learned to reproduce the original input data verbatim, in order. But by setting the “temperature” flag to a really high value (i.e. it has a higher chance of NOT going with its best guess for the next character in the phrase), I can at least induce spelling mistakes. Then the neural network has to try to recover from these, with often entertaining results.
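Here's a minimal sketch of what that temperature flag does during sampling. The three-character vocabulary and scores are made up for illustration, not Shane's actual code:

```python
import numpy as np

def sample_with_temperature(logits, temperature, rng):
    """Sample an index after dividing the scores by the temperature.

    A high temperature flattens the distribution, so the model more
    often picks something other than its best guess -- which is how
    spelling mistakes get induced.
    """
    scaled = np.asarray(logits, dtype=np.float64) / temperature
    scaled -= scaled.max()               # for numerical stability
    probs = np.exp(scaled)
    probs /= probs.sum()
    return rng.choice(len(probs), p=probs)

rng = np.random.default_rng(0)
logits = [2.0, 1.0, 0.1]                 # model strongly favors char 0
low = [sample_with_temperature(logits, 0.1, rng) for _ in range(200)]
high = [sample_with_temperature(logits, 5.0, rng) for _ in range(200)]
print(sum(i == 0 for i in low), sum(i == 0 for i in high))
```

At temperature 0.1 the top-scoring character wins nearly every draw; at 5.0 the other characters are picked often, and the network then has to recover from its own typos.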

@Smutclyde ran sequences of Unicode characters and short pairings, at varying lengths, through Google Translate to see how the neural networks would interpret each. The results are remarkable: Lovecraftian wailings, for example, become homoerotic death metal lyrics.

In her spare time, University of California, San Diego engineer Janelle Shane trained a neural network to generate recipes for new dishes. Informed by its reading of existing recipes, the neural network did improve over time, yet it's clearly not quite ready for Iron Chef. Here are two recipes from her Tumblr, Postcards from the Frontiers of Science:

Brush each with roast and refrigerate. Lay tart in deep baking dish in chipec sweet body; cut oof with crosswise and onions. Remove peas and place in a 4-dgg serving. Cover lightly with plastic wrap. Chill in refrigerator until casseroles are tender and ridges done. Serve immediately in sugar may be added 2 handles overginger or with boiling water until very cracker pudding is hot.

Yield: 4 servings

This is from a network that’s been trained for a relatively long time - starting from a complete unawareness of whether it’s looking at prose or code, English or Spanish, etc., it’s already got a lot of the vocabulary and structure worked out.

This is particularly impressive given that it has the memory of a goldfish - it can only analyze 65 characters at a time, so by the time it begins the instructions, the recipe title has already passed out of its memory, and it has to guess what it’s making.
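A toy illustration of that 65-character window (the recipe text below is invented for the example, not the network's actual output):

```python
# By the time generation reaches the instructions, the (made-up)
# recipe title has already slid out of the context window the model
# conditions on -- so it has to guess what it's making.
WINDOW = 65

def visible_context(text, position, window=WINDOW):
    """The only characters the model can 'see' at this position."""
    return text[max(0, position - window):position]

recipe = ("Chocolate Chicken Cake\n\n"     # hypothetical title
          "1 cup flour; 2 eggs; 1 cup sugar\n"
          + "Stir the mixture until it is done. " * 4)
ctx = visible_context(recipe, len(recipe))
print("Chocolate" in ctx)   # the title is no longer visible
```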

Robbie Barrat is president and founder of their high school computer science club; they created Rapper-Neural-Network, a free software project that uses machine learning trained on a corpus of 6,000 Kanye West lines to autogenerate new rap songs.

It's not bad. In fact, this is a triumph: a Christmas song written entirely by an artificial intelligence at the University of Toronto. Yet it has that uncanny neural network je ne sais quoi in spades.

I swear it’s Christmas Eve
I hope that’s what you say
The best Christmas present in the world is a blessing
I’ve always been there for the rest of our lives.

Reed Morgan Milewicz, a programmer and computer science researcher, may be the first person to teach an AI to do Magic, literally. Milewicz wowed a popular online MTG forum—as well as hacker forums like Y Combinator’s Hacker News and Reddit—when he posted the results of an experiment to “teach” a weak AI to auto-generate Magic cards. He shared a number of the bizarre “cards” his program had come up with, replete with their properly fantastical names (“Shring the Artist,” “Mided Hied Parira's Scepter”) and freshly invented abilities (“fuseback”). Players devoured the results.

Before training my own dreaming network, I'll need to choose a network layout that suits my needs. In order to learn about the strengths and weaknesses of different layouts, I've run the same guided dreaming tour with four different ImageNet-pretrained models: GoogLeNet, VGG CNN-F, VGG CNN-S, and Network-in-Network (all available via the Caffe model zoo).

The interframe processing is the same for all except NIN, which is keen to hallucinate very bright saturated spots, so I decided to couple it with a desaturation filter, which effectively produces a gray background. Most of the artifacts you are likely to see stem from the cumulative nature of the interframe processing (not from compression).
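A rough sketch of that kind of desaturation coupling; the blend factor and luma weights here are assumptions for illustration, not the author's actual filter:

```python
import numpy as np

def desaturate(frame, amount=0.7):
    """Blend an RGB frame (H, W, 3 floats in [0, 1]) toward its
    grayscale luma, damping bright saturated spots before the frame
    is fed back in for the next dreaming step."""
    luma = frame @ np.array([0.299, 0.587, 0.114])   # Rec. 601 weights
    gray = np.repeat(luma[..., None], 3, axis=2)
    return (1 - amount) * frame + amount * gray

frame = np.zeros((2, 2, 3))
frame[0, 0] = [1.0, 0.0, 0.0]         # one fully saturated red pixel
out = desaturate(frame)
print(out[0, 0])                       # pulled toward gray
```

Because the hallucinated output is fed back in frame after frame, even a modest per-frame pull toward gray is enough to keep saturated spots from compounding.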