12 February, 2008, 06:42:01 AM

My foobar2000 just went from hardcore "Scooter" to gentle "Enya"... and I thought: we live in 2008, it MUST be possible to analyze audio data for its energy and assign it a simple category, something like "sad/romantic/happy/energetic".

I know this has been discussed several times already, and I'm not looking for a bunch of posts about this being impossible, but positive thoughts about how this could be done! This will never replace the need for genres and maybe BPM etc., but it could be a great addition to existing meta-data.

In a separate thread we can discuss the user interface that can handle all this.

Almost everything is possible, but there are many things that determine whether it's going to happen or not: how useful is it? How easy is it to implement? How well is it going to work? And the answers to these questions don't look favorable.

There are programs like Moody, but those only let you choose colors corresponding to moods and write them to tags. This would be an excellent idea, if someone could find a way to reliably implement it.

Almost everything is possible, but there are many things that determine whether it's going to happen or not: how useful is it? How easy is it to implement? How well is it going to work? And the answers to these questions don't look favorable.

Why not? It needs a user interface of course, but it should be simple, just like my proposal above. (Maybe it could already work with "facets" for foobar2000 if this could be stored in a tag.)

There are programs like Moody, but those only let you choose colors corresponding to moods and write them to tags. This would be an excellent idea, if someone could find a way to reliably implement it.

Moody is a nice concept, and it might work if we had a shared database to submit info to. Manually doing this for my 22,000-track collection is just tedious. Actually I have several great ideas for a database like this, but I need to convince a developer of this first.

So the first thing is to identify objectively measurable features of the signal that correlate to moods. Hmmm. I guess high BPM plus loud would be one set of clues.
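To make the "loud" half of that clue concrete, here is a minimal Python sketch (standard library only, on synthetic samples; the block size and sample rate are arbitrary assumptions) of how per-block RMS loudness could be measured:

```python
import math

def rms_loudness(samples, block_size=1024):
    """Return per-block RMS loudness values for a list of PCM samples."""
    blocks = [samples[i:i + block_size] for i in range(0, len(samples), block_size)]
    return [math.sqrt(sum(s * s for s in b) / len(b)) for b in blocks if b]

# A loud 440 Hz tone vs. a quiet one (one second of synthetic 8 kHz samples).
loud = [0.9 * math.sin(2 * math.pi * 440 * n / 8000) for n in range(8000)]
quiet = [0.1 * math.sin(2 * math.pi * 440 * n / 8000) for n in range(8000)]
print(max(rms_loudness(loud)) > max(rms_loudness(quiet)))  # True
```

A real analyzer would work on decoded PCM frames and probably convert to a dB scale, but the idea is the same: louder material yields higher block RMS, which could then feed a mood heuristic together with BPM.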

Any objective measurables to correlate with the purity of voice of, say, Joan Baez (her first album still thrills me the way it did, gulp, 45 years ago), and the dirty whisky vocals of, say, Tom Waits? Naively, I think of the pure voice as being "simpler" in waveform, but I guess it's not that easy.

Almost everything is possible, but there are many things that determine whether it's going to happen or not: how useful is it? How easy is it to implement? How well is it going to work? And the answers to these questions don't look favorable.

Why not? It needs a user interface of course, but it should be simple, just like my proposal above. (Maybe it could already work with "facets" for foobar2000 if this could be stored in a tag.)

Making a tool that properly analyzes the 'mood' of a song is many times more complex than any fancy UI.

Besides, the same song could produce different feelings in different people. The most sensible solution (based on the questions I stated above) would be a quick way to tag ('set mood' option in the context menu, perhaps) and to add items with a certain mood to a playlist.

Besides, the same song could produce different feelings in different people. The most sensible solution (based on the questions I stated above) would be a quick way to tag ('set mood' option in the context menu, perhaps) and to add items with a certain mood to a playlist.

I'm not looking for a perfect solution, but a theory of how this can be estimated. Also, I think the different feelings different people get are related more to rating than to "mood".

I think the ideal way to do this would be to use an online database based on music fingerprints and a cloud-like tagging system (similar to last.fm), plus a set of guidelines to make people understand the different types of music (like Ishkur did for EDM).

I tend to agree that determining the mood of a song is a subjective thing. A database might work for the more obvious examples of mood (i.e. slow songs, or really fast ones); it's the songs that lie in the middle that would be argued over. Perhaps an ideal solution would be an online database where people submitted a song's tags with a mood tag as well. The majority tag would be the default mood for that song, but you could have the option of seeing what other people had tagged too. This would give people who couldn't decide for themselves a somewhat reliable alternative, which would be easy enough to change if they decided to do so.
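The "majority tag wins" scheme described above is easy to sketch; the mood names below are made up for illustration:

```python
from collections import Counter

def default_mood(submissions):
    """Majority vote over user-submitted mood tags (ties break by first seen)."""
    return Counter(submissions).most_common(1)[0][0]

votes = ["sad", "romantic", "sad", "energetic", "sad"]
print(default_mood(votes))  # sad
```

The real work in such a system would be fingerprint matching and moderation, not the vote itself; per-user overrides could simply shadow the majority value locally.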

This is definitely a topic worthy of discussion in my opinion. As someone who has a growing music collection, but very little in the way of classification tags, I think varied ways of classification outside of genre and rating would be a very good thing. This to me seems like one of the next steps in libraries of digital music which continue to grow. It creates something of a problem when you have over 10,000 songs at your fingertips but can't decide what you want to listen to at the moment. This may not be a problem for some, but the fact that there are already mood tags and mood indicators in the wild should say that it is a growing interest.

Done improperly, I think this could do more harm than good. You get to a certain point, or at least I do, when you ask yourself: "How much data do I really need about my data?"

The MusicIP mixer is supposedly able to generate "good" playlists based on a small selection of songs that fits your mood. -- I never tried it though.

IIRC they use a big database that stores the songs' "genes", and the program requires an analysis stage where the "genetic meta info" is retrieved for all your songs (= scanning your collection's metadata and requesting the "genes" from the server). Once you've done that, you could at least theoretically create playlists of "similar" songs using "gene distance metrics".
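To illustrate what a "gene distance metric" could mean in the simplest case (this is an assumption for illustration, not MusicIP's actual algorithm), here is a plain Euclidean distance over invented feature vectors:

```python
import math

def gene_distance(a, b):
    """Euclidean distance between two equal-length feature ('gene') vectors."""
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

# Hypothetical feature vectors: (tempo, loudness, brightness), scaled to 0..1.
scooter = (0.95, 0.90, 0.85)
enya    = (0.30, 0.35, 0.40)
ballad  = (0.35, 0.30, 0.45)

# A quiet ballad should land closer to Enya than Scooter does.
print(gene_distance(enya, ballad) < gene_distance(enya, scooter))  # True
```

A real system would use many more dimensions and probably weight or normalize them, but nearest-neighbour search over some such distance is the basic idea behind "similar songs" playlists.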

The problem is that this service provider learns something about you, because the server gets all the gimme-genes-for-tags requests (yay, data mining!).

GJay (Gtk+ DJ) generates playlists across a collection of music (ogg, mp3, wav) such that each song sounds good following the previous song. It is ideal for home users who want a non-random way to wander large collections or for DJs planning a set list. You can generate playlists from within the application, or run GJay as a standalone command-line utility.

Playlist matches are based on:

* Song characteristics that don't change
  o Frequency fingerprint
  o Beats per minute
  o Location in file system
* Song attributes that you set
  o Rating
  o Color (whatever that means to you)

I did use this for a while, although it annoyed me by always using xmms to play everything. And it's quite interesting the way it matches up songs. Certainly the frequency spread and BPM are somewhat indicative of mood. I'm not aware of anything that works better than this, since you can apply your own colours to each song to tweak its results.
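A plausible (entirely hypothetical, not GJay's actual code) way to combine those criteria into a single transition score is a weighted sum of per-feature distances; the keys and weights below are invented for illustration:

```python
def transition_score(a, b, weights=(0.5, 0.3, 0.2)):
    """Lower is better: weighted distance over (frequency profile, BPM, colour hue).

    Each song is a dict with hypothetical keys; the weights are arbitrary.
    """
    w_freq, w_bpm, w_hue = weights
    freq_d = sum(abs(x - y) for x, y in zip(a["freq"], b["freq"])) / len(a["freq"])
    bpm_d = abs(a["bpm"] - b["bpm"]) / 200.0           # normalise to a 0..1-ish range
    hue_d = min(abs(a["hue"] - b["hue"]), 360 - abs(a["hue"] - b["hue"])) / 180.0
    return w_freq * freq_d + w_bpm * bpm_d + w_hue * hue_d

fast  = {"freq": [0.2, 0.3, 0.5],  "bpm": 150, "hue": 0}
calm  = {"freq": [0.6, 0.3, 0.1],  "bpm": 70,  "hue": 240}
calm2 = {"freq": [0.55, 0.3, 0.15], "bpm": 75, "hue": 230}

# Two calm tracks should make a smoother transition than calm -> fast.
print(transition_score(calm, calm2) < transition_score(calm, fast))  # True
```

Greedily picking the lowest-scoring unplayed song after each track gives exactly the "each song sounds good following the previous song" behaviour described above.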

There's currently no scientific basis for musical mood analysis - there is just not enough known scientifically to develop what you are after. Read Musicophilia by Oliver Sacks - it's a big best seller and a good read.

The closest you can expect to come is some kind of group rating system by people trained to your standards, sort of like the Music Genome Project. In fact, you should try out your concept with their database - pick some songs that you think have a certain mood and feed them into the MGP's front-end (Pandora) and see if the songs it comes up with also have that mood.

2. Slow song, low BPM, low, growling notes, sung by some sad-sounding blues artist. Very blues, down sound. The mood would be low and depressing.

To us, the songs are very different, but it might be damn hard to program something that can reliably determine the mood. Or to make it worse, they could both be in the same octave, have the same type of vocalist, but have totally different lyrics. Pearl Jam's Last Kiss sounds uplifting at first until you realize it's about a guy who lost his girlfriend in a car crash. Or Jeremy, a song about a kid shooting up his classroom. Not very easy to determine mood automatically.

I made a MediaMonkey script that has a DJ mode which uses last.fm related tracks/artists feeds to choose the next track to play, along with giving you a node view that displays related tracks/artists for your favorite artists/tags/users/groups/locations.

It's not quite finished (doesn't save the nodes you add), but otherwise is quite functional.

What a HUGE topic! Take the time to do a Google Search for "music by mood" or something to that effect and you'll find hundreds upon hundreds of threads all over the web on the topic, with just as many applications to accomplish said task. As such, I find it somewhat amazing that the 'big players' on the market (WinAmp / Windows Media Player / FooBar etc.), don't 'ship' with some kind of contextual plug-in part and parcel right off the bat.

The MusicIP mixer is supposedly able to generate "good" playlists based on a small selection of songs that fits your mood. -- I never tried it though.

IIRC they use a big database that stores the songs' "genes", and the program requires an analysis stage where the "genetic meta info" is retrieved for all your songs (= scanning your collection's metadata and requesting the "genes" from the server). Once you've done that, you could at least theoretically create playlists of "similar" songs using "gene distance metrics".

The problem is that this service provider learns something about you, because the server gets all the gimme-genes-for-tags requests (yay, data mining!).

Cheers, SG

I use MusicIP with my Squeezebox and it generates a playlist from a "seed track" of my choice. There are adjustable parameters for how similar or different the playlist tracks will be compared to the seed track. They also have a "Mood" function - you mark several songs representing the mood you want and it generates a playlist accordingly - haven't tried this as I don't think it works with the SB.

The MusicIP mixer is supposedly able to generate "good" playlists based on a small selection of songs that fits your mood. -- I never tried it though.

IIRC they use a big database that stores the songs' "genes", and the program requires an analysis stage where the "genetic meta info" is retrieved for all your songs (= scanning your collection's metadata and requesting the "genes" from the server). Once you've done that, you could at least theoretically create playlists of "similar" songs using "gene distance metrics".

Actually their playlists are built on acoustic analysis and have nothing to do with genres, etc... And yes, they do a very nice job of creating playlists. I would think they could adapt their technology to include moods.

The "music genome project" was based on manual analysis of a large music database. However, I imagine that identifying your songs, then doing a database lookup to such a database is a lot easier and more precise.

For automatic characterization, I tend to agree with what is suggested:
1. BPM analysis
2. "Loudness" analysis
3. Perhaps some frequency analysis indicating the number of "harmonic" sources at any time

The "music genome project" was based on manual analysis of a large music database. However, I imagine that identifying your songs, then doing a database lookup to such a database is a lot easier and more precise.

For automatic characterization, I tend to agree with what is suggested:
1. BPM analysis
2. "Loudness" analysis
3. Perhaps some frequency analysis indicating the number of "harmonic" sources at any time

-k
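The frequency-analysis idea (point 3 above) could be sketched with a naive DFT: count how many spectral bins rise well above the floor as a rough proxy for the number of harmonic sources. This is only a toy on synthetic tones; real instruments have overtones that would inflate the count, and the threshold is an arbitrary assumption:

```python
import cmath, math

def dft_magnitudes(samples):
    """Naive DFT magnitude spectrum (fine for a short analysis window)."""
    n = len(samples)
    return [abs(sum(samples[t] * cmath.exp(-2j * math.pi * k * t / n)
                    for t in range(n))) / n for k in range(n // 2)]

def count_harmonic_peaks(samples, threshold=0.05):
    """Rough 'number of sources': count spectral bins well above the floor."""
    return sum(1 for m in dft_magnitudes(samples) if m > threshold)

# 256-sample window at 8 kHz: one pure tone vs. a three-tone chord.
sr, n = 8000.0, 256
tone = [math.sin(2 * math.pi * 1000 * t / sr) for t in range(n)]
chord = [sum(math.sin(2 * math.pi * f * t / sr) for f in (1000, 1500, 2000)) / 3
         for t in range(n)]
print(count_harmonic_peaks(chord) > count_harmonic_peaks(tone))  # True
```

In practice you would use an FFT with windowing and proper peak-picking, but even this crude bin count separates "sparse" from "dense" spectra, which is one plausible energy cue.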

Am I the only person in this conversation who has actually read up on the science on this topic? N.B. that any automatic mood-sensing scheme must be able to detect transitions from major to minor chords and vice versa.

Am I the only person in this conversation who has actually read up on the science on this topic? N.B. that any automatic mood-sensing scheme must be able to detect transitions from major to minor chords and vice versa.

It's something like that I was thinking of. There are already techniques for detecting the overall key of a song, and I would assume that it's possible to do finer recognition like this.
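One standard approach to the major/minor question is template matching on a chroma (pitch-class energy) vector. The sketch below uses a hand-built chroma vector rather than one extracted from audio, so the hard part (computing chroma from the signal) is assumed away:

```python
def triad_score(chroma, root, minor=False):
    """Sum of chroma energy at the root, third, and fifth of a triad."""
    third = 3 if minor else 4
    return chroma[root % 12] + chroma[(root + third) % 12] + chroma[(root + 7) % 12]

def classify_triad(chroma):
    """Label of the best-matching major or minor triad template."""
    names = ["C", "C#", "D", "D#", "E", "F", "F#", "G", "G#", "A", "A#", "B"]
    root, minor = max(((r, m) for r in range(12) for m in (False, True)),
                      key=lambda rm: triad_score(chroma, rm[0], rm[1]))
    return names[root] + (" minor" if minor else " major")

# Hypothetical 12-bin chroma vector (pitch-class energy, C..B),
# with energy concentrated on A, C and E -> an A minor chord.
chroma = [0.0] * 12
for pc in (9, 0, 4):   # A, C, E
    chroma[pc] = 1.0
print(classify_triad(chroma))  # A minor
```

Running this classifier over successive analysis windows would reveal major-to-minor transitions over time, which is exactly the cue argued for above; robust chroma extraction from real recordings is, of course, the genuinely difficult step.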

Should we call this "melodic-mood" from now on, to prevent messages like "It's impossible to detect mood since lyrics are not taken into account"? I'm fully aware of that, and I know that people have different ways of listening to music. Personally I listen to the melody more than to the lyrics, and I would rather base my playlist mixing on a melodic-mood than on lyrical similarities.

Also if the audio is analyzed this way, you could probably get things like "energy" and maybe even detect probabilities of "Rock", "Celtic", "Electronic" songs by looking at the characteristics of different instruments.

(By the way, creating a lyrics scanner could be a nice idea as well, but I have no idea where we are in something like that ...and I really don't want to mix it into this topic.)

I think that Tweet Web is a nice idea, and it follows the last.fm principle with users defining a cloud, but I would like to stay on automatic tagging in this topic.