Then last week, James Lynden shared his research into how Spotify affects mood and found out that people are mood-aware when they make choices on the service (emphasis mine):

Overall, mood is a vital aspect of participants’ behaviour on Spotify, and it seems that participants listen to music through the platform to manage or at least react to their moods. Yet the role of mood is normally implicit and unconscious in the participants’ listening.

Having developed music streaming products myself, like Fonoteka when I was at Zvooq, I’m obviously very interested in this topic and what it means for the way we structure music experiences.

Another topic I love to think about is artificial intelligence and generative music, as well as adaptive and interactive music experiences. In particular, I’m interested in how non-static music experiences can be brought to a mass market. So when I saw the following finding (emphasis mine), things instantly clicked:

In the same way as we outsource some of our cognitive load to the computer (e.g. notes and reminders, calculators etc.) perhaps some of our emotional state could also be seen as being outsourced to the machine.

For the music industry, I think explicitly mood-based listening is an interesting, emerging consumption dynamic.

Mood augmentation is the best way for non-static music to reach a mass market

James is spot-on when he says mood-based listening is an emerging consumption dynamic. Taking a wider view: the way services construct music experiences also changes the way music is made.

The playlist economy is leading to longer albums, but also to tracks optimized for lower skip rates in the first 30 seconds. Yet this is nothing compared to the change music went through in the 20th century:

The proliferation of the record as the default way to listen to music meant that music became a consumer product: something you could collect, like comic books, and something that could be manufactured at a steady rate. This reality gave music new characteristics:

Music became static by default: a song sounding exactly the same every time you hear it is a relatively new quality.

Music became a receiving experience: music lost its default participative quality. Before recordings, if you wanted to hear your favourite song, you had better be able to play it yourself, or a friend or family member had better have a nice voice.

Music became increasingly individual: while communal experiences like concerts, raves and festivals flourished, music also went through individualization. People listen to music from their own devices, often through headphones.

Personalized music is the next step

I like my favourite artist for different reasons than my friend does. I connect to their music differently. I listen to it at different moments. Our experiences are already different, so why shouldn’t the music be more personalized, too?

The gaming industry has figured out a different model: give people access to the base game for free, and then charge them to unlock certain features. Examples of music apps that do this are Björk’s Biophilia and the mixing app Pacemaker.