
The Futures of Music: Metaphors, Robots and Questioning AI

A scary baby-like robot sits in front of a television screen, alongside two older, well-known robots, as the song's words flash across their shiny helmets. The year is 2005, and Daft Punk has just launched "Technologic" with an iconic video that reflects on our complicated relationship with technology. Meanwhile, at the Swiss Institute for Artificial Intelligence, researchers were struggling to create the ultimate IQ test for AI. As New Scientist reported back then, "too often intelligence is identified with human intelligence but, given the wide range of systems in AI, this anthropomorphic approach is not always appropriate".

Fast forward to 2017. While Daft Punk's music is now used as a soft-power weapon and the robots are getting tired of their helmets, the question about the "I" in AI remains. The term is constantly repeated in conversations about the future of everything, including the future of what is arguably the most universal of human expressions: music.

Just as in the Daft Punk video, the repetition of tech-related terms that we don't fully understand ends up feeding hype built on misleading metaphors. Back then, "Plug it, play it, burn it, rip it, drag and drop it, zip – unzip it" served as useful metaphors that challenged the music industry; today, terms such as "artificial intelligence" (AI), "cognitive computing" and "deep learning" make us believe that computers can think and learn the same way humans do. The truth, however, is that computers don't think. They are just "harder, better, faster, stronger".

As Nicholas Carr points out in his book, The Glass Cage, “much of the power of artificial intelligence stems from its very mindlessness. Immune to the vagaries and biases that attend conscious thought, computers can perform their lightning-quick calculations without distraction or fatigue, doubt or emotion. The coldness of their thinking complements the heat of our own.”

It is time to acknowledge that the words intelligence, cognitive and learning are just metaphors when used in the context of technology. Machines are mindless. And when it comes to the narratives shaping the futures of music and other creative fields, we need to pick better metaphors and, as Public Enemy suggested in the '80s, "don't believe the hype".

The deep learning algorithms behind AI have existed for decades, but advances in GPUs (graphics processing units), driven largely by the growth of the video game industry, now allow them to run on colossal data sets, gaining massive utility value for leading corporations such as Amazon, Google and Apple. With the emergence of voice-controlled assistants, among many other novelties available in our pockets every day, the idea of AI is now a big part of what Professor Benjamin H. Bratton labels "The New Normal".

“The collective intuition of humanity has cultivated for centuries a mechanism that helps us deal with this kind of broad and complex cultural change: the arts”

The collective intuition of humanity has cultivated for centuries a mechanism that helps us deal with this kind of broad and complex cultural change: the arts. As researchers Roelof Pieters and Samim Winiger point out in their essay CreativeAI: On the Democratisation & Escalation of Creativity, as early as the 1970s Michael Noll, a researcher at Bell Labs and a pioneer in the use of computers for creativity, called for "a new breed of artist-computer scientist". One of the artists who answered the call was Brian Eno, who in 1975 began using algorithmic and generative principles to compose music on his album Discreet Music.

More than 40 years later, Eno keeps finding alternative applications for AI on albums such as The Ship, doing what artists do best: interpreting technologies differently from how they were originally conceived, to push boundaries, reveal new possibilities and, most importantly, ignite critical thinking. In his words, he is "using the technology that was invented to make replicas to make originals".
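One generative principle often associated with Eno's tape-loop era is strikingly simple: layer phrases of coprime lengths, and the combined texture drifts out of phase, not repeating until the loops realign. The sketch below (with hypothetical note names, not any actual score) illustrates the idea:

```python
# A minimal sketch of a generative layering principle: two looping
# phrases of coprime lengths drift in and out of phase, so the
# combined texture only repeats after their least common multiple.
from math import lcm

loop_a = ["C", "E", "G"]          # 3-step phrase (hypothetical notes)
loop_b = ["D", "F", "A", "B"]     # 4-step phrase

def layered(step):
    """Return the pair of notes sounding at a given step."""
    return (loop_a[step % len(loop_a)], loop_b[step % len(loop_b)])

period = lcm(len(loop_a), len(loop_b))  # texture repeats every 12 steps
texture = [layered(s) for s in range(period)]
```

With lengths 3 and 4, all twelve pairings are distinct before the cycle restarts; longer, coprime loops push the repeat point far beyond what a listener can track, which is what gives such pieces their "endless" quality.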

While companies such as Jukedeck are using AI to compose music, selling tracks to anyone who needs background music for videos, games or commercials, big players like Sony, Google and IBM, with their Flow Machines, Magenta and Watson Beat initiatives, are partnering with artists to explore new ways to create music, where the AI becomes a collaborator, an assistant or a sophisticated inspiration manager.
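To make the "AI as collaborator" idea concrete, here is a toy illustration (not any of these companies' actual systems): a first-order Markov chain learns which note tends to follow which in a seed melody, then proposes a continuation for a human composer to accept, reject or rework:

```python
# Toy "composition assistant": learn note-to-note transitions from a
# seed melody, then sample them to suggest a continuation. The seed
# melody and note names here are illustrative assumptions.
import random
from collections import defaultdict

seed_melody = ["C", "D", "E", "C", "D", "G", "E", "C"]

# Count which note tends to follow which one in the seed.
transitions = defaultdict(list)
for cur, nxt in zip(seed_melody, seed_melody[1:]):
    transitions[cur].append(nxt)

def suggest(start, length, rng=random.Random(0)):
    """Propose a melody of `length` notes by sampling learned transitions."""
    note, out = start, [start]
    for _ in range(length - 1):
        note = rng.choice(transitions[note]) if transitions[note] else start
        out.append(note)
    return out
```

The suggestion machinery is mindless pattern-matching over counts, yet it can still surprise; the intelligence in the loop stays with the human deciding which suggestions are worth keeping.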

These are early examples of how a shift in the conversation about the role of AI in the futures of music, one centred on human potential, can help us escape the fear narrative around automation, robots taking our jobs and the man-vs-machine paradigm. Taking this idea further, as researchers Kailash Awati and Simon Buckingham Shum suggest in their essay Big Data Metaphors We Live By, perhaps we should use a different metaphor to better understand AI: one inspired by the notion of "augmenting the human body with exoskeletons that add strength, reach and stamina to perform at a completely new physical level — but with human intelligence still firmly at the centre."

In other words, they suggest a simple shift with huge potential: from AI to IA, from "artificial intelligence" to "intelligence augmentation". These fascinating and complex tools can help composers become more creative and also enable more people to become composers, because, after all, music is an expression of humanity, or, in Daft Punk's words, we are "human after all".