Music, Art & Machine Intelligence 2016 Conference Proceedings

Conference participants asking questions about the role of technology and machine intelligence in the practice of making art.

On June 1st, Google’s AMI and Magenta groups jointly hosted a conference on MI/ML and creative practice, called Music, Art & Machine Intelligence. The roughly 80 attendees and 29 presenters represented a broad range of perspectives on music, art, and machine intelligence, as well as neuroscience and philosophy. Each presentation lasted only ten minutes, but the variety of disciplines, the art and music demos, and regular breaks in the San Francisco sun and air kept brains and bodies stimulated.

While “islands” of research do exist, interdisciplinary efforts are drawing a shared map of creative research and researched creativity. Generative art (whether genetic, rule-based, or hallucinated) fills the humid equatorial regions. The great northern tundra is home to ever more efficient and ingenious MI techniques, while nomadic neuroscientists and non-denominational wanderers traverse the plains between.

Computer scientists, artists, neuroscientists and psychologists shared their latest research into creativity and the brain.

A cluster of talks on new generative tools took things to the next level of interactivity. Rebecca Fiebrink of Goldsmiths treated us to some Mego-worthy industrial noise, controlled by her highly playable Wekinator software, a combination of Leap Motion tracking and on-the-fly perceptron manipulation that enables access to the subtleties of embodied knowledge acoustic musicians take for granted. Hannah Davis of NYU crossed the already-crossed streams with her TransProse project, which translates literature into music through emotion mapping. By reading the emotional temperature of a text with NLP, Davis created timelines to which music could be composed algorithmically. Her early experiments sounded very mid-20th-century and atonal, while later explorations became impressionistic.
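To make the idea of an emotion timeline concrete, here is a toy sketch of the general approach — score chunks of text against an emotion lexicon, then map the resulting valence curve to pitches. The lexicon, chunking, and pitch mapping below are all invented for illustration; they are not Davis’s actual TransProse pipeline.

```python
# Illustrative sketch only -- not the actual TransProse code.
# A tiny two-emotion lexicon; real systems use far richer resources.
JOY = {"bright", "love", "laugh", "hope"}
SADNESS = {"dark", "grief", "alone", "loss"}

def emotion_timeline(text, chunks=4):
    """Split text into chunks and score each as net valence (joy - sadness)."""
    words = text.lower().split()
    size = max(1, len(words) // chunks)
    timeline = []
    for i in range(0, len(words), size):
        chunk = words[i:i + size]
        joy = sum(w.strip(".,") in JOY for w in chunk)
        sad = sum(w.strip(".,") in SADNESS for w in chunk)
        timeline.append(joy - sad)
    return timeline

def to_notes(timeline, base_pitch=60):
    """Map valence to MIDI-style pitches around middle C: happier -> higher."""
    return [base_pitch + 2 * v for v in timeline]
```

A happier passage thus drifts upward in pitch and a darker one downward, which is the basic intuition behind composing to an emotional arc.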

Mike Tyka and Chris Olah were on hand to situate us art-historically and geometrically, respectively. Tyka observed the accelerated nature of kitsch and the absorption of novelty by art viewers. Olah demystified neural nets so we could understand them as simple high-dimensional manipulations of geometry and not slime-breathing multi-eyed dog-monsters.
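Olah’s geometric framing fits in a few lines of code: a dense layer is just an affine transformation of the input space (a matrix multiply plus a shift) followed by a pointwise nonlinearity that bends it. The weights below are made up purely to illustrate the shape of the computation.

```python
import math

def layer(x, W, b):
    # One dense layer: an affine map of the input space (rotate/stretch
    # via W, shift via b), then a pointwise tanh that folds the space.
    z = [sum(w * xi for w, xi in zip(row, x)) + bi for row, bi in zip(W, b)]
    return [math.tanh(v) for v in z]

# A 2-D point pushed through two tiny layers with illustrative weights.
x = [1.0, -0.5]
h = layer(x, W=[[0.8, -0.3], [0.1, 0.9]], b=[0.0, 0.2])
y = layer(h, W=[[1.2, 0.4]], b=[-0.1])
```

Stacking such layers composes many small geometric distortions, which is all the “deep” in deep networks amounts to in this view.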

Artist Tivon Rice capped the session off with exquisite corpses of another nature, namely drone photogrammetry of buildings under construction in Seattle, a hotbed of urban change like many cities in the US right now. His project with AMI pairs these images with neural-storyteller text trained on corpora of city-planning submissions and public responses. His show is currently on view at Threshold Gallery at Mithun Architecture in Seattle.

In the segment titled Creating with Machines, Gil Weinberg of Georgia Tech showed us his regimen for training robots to listen to and generate music. Musical augmentation is just around the corner; Weinberg is working on a drum-centric prosthetic arm that can follow (and fill) along using MI. We saw examples ranging from a simple swing ride pattern to black metal-ready 20Hz snare blasts, bringing restoration of human ability into super-human territory.

Columbia’s Hod Lipson showed us physical works painted by a robotic brush. The artist in question reproduces existing works and generates new ones using MI. The (human) artist Ian Cheng brought a whole host of entities into the mix, some of which were controlled by voices from the bicameral beyond. His video pieces are real-time simulations of small-scale societies, or, in one case, a spongy animal-vegetable hybrid. These simulations produce unpredictable behavior in beautifully emergent and entrancing ways.

Resident AMI artist Memo Akten presented MI-based lighting control that responds to the motions of dancers. He also showed a gestural interface for music, intended to provide the feeling of performing classical piano without stressful years of training at the hands of a brutal Russian master. Expect to see more from Memo as he completes his residency with the AMI team.

If you missed it at Moogfest, you could have caught it at MAMI — Magenta’s Adam Roberts showed off the group’s latest music sequence generation. Roberts played a musical phrase on an MI-enabled illustration of a Minimoog synth, which improvised on the theme. It may not have been ‘Trane but it was well-trained and reliably melodic.

The field of MI-enhanced creativity is wild, and in many ways, unexplored. It was clear at the MAMI conference that a multidisciplinary approach is not only fruitful and necessary but also entertaining and thought-provoking. Perception and creation are indeed two ends of one kaleidoscope, and the multi-sensory ways of knowing that art and music provide are essential in deepening our investigations of creativity, technology, and humanity.

AMI is a program at Google that brings together artists and engineers to realize projects using Machine Intelligence. Works are developed together alongside artists’ current practices and shown at galleries, biennials, festivals, or online.
