Wednesday, 29 August 2007

[UPDATE]
Phew, so we’ve received >1 million fingerprints so far.. not bad for the first 24hrs. The most fingerprints submitted by a single user is 12,203. I’m sure that record won’t stand for long tho :)
The ‘server overloaded’ message should be silenced now. We are currently receiving ~42 fingerprints per second.
More news to follow tomorrow.[/UPDATE]

The veteran Scrobblers amongst you will probably remember our “moderation system” – this was a user-voting system that let you propose and merge artists, ultimately fixing misspelled artists by creating aliases to the correct version.

We are planning to bring this back in a big way, addressing not only artists, but albums and tracks too.

We don’t want to have to vote on the really obvious stuff (“01 – Radiohead”), so we are going to do as much as possible automatically, with various algorithms and data mining tricks. The entries we can’t be 100% sure about, and the remaining stuff, will again be thrown open to a public vote.

Phase 1 is now underway with the first public “beta” release of our new fingerprinting technology. This will mature into a nice sexy (free) API that lets you grab clean metadata based on an audio fingerprint. For now, all that it does is send the fingerprint data to bootstrap the moderation system. This doesn’t change any MP3 files on your computer. It does send useful fingerprint data to our moderation system so we can get the ball rolling. If you have a big MP3 collection, it will take a while… Thankfully it remembers where it got to, so you don’t have to do it all in one session.

What we’ll do next is figure out all the popular (mis)spellings for tracks with the same fingerprints. We will publish lots of stats, example data and graphs showing our progress as the fingerprint database grows in the coming weeks. We need people with MP3 collections (of any size/quality) to download and run the fingerprinter to make this work, so spread the word.

Remember, you don’t need to clean up your ID3 tags before running the fingerprint app: This time round, people with imperfect tags are actually going to be of some use to us, and don’t deserve all the terrible things we normally wish on them ;)

Download the app, and watch this space for lots of stats and graphs detailing our findings in the coming days and weeks!

@Nectar_Card: The fingerprinting client only submits the fingerprint and your metadata, nothing else. This is pretty much the same as scrobbling, privacy-wise. All we know is that you have that particular file.

@christiand: The client won’t be able to read DRMed files, so we won’t be able to fingerprint them.

However, this release for OSX doesn’t appear to allow me to select a drive other than the boot disk. Symbolic linked folders don’t work either so I can’t scan my MP3 collection which is on an external drive…

Brad: I’ll promise to fix it for the next release if you’ll promise to fingerprint your music. And that you also promise to think of me, hard at work at my London desk, desperately compiling, holding back frustration at shiny strange music laden devices, longing for a cool, light ale, but ignoring the urge so I can finish up the work, and make iPod scrobbling work properly. For the good of mankind.

@Macca – The Windows client is the one that has recently gone completely haywire for a lot of people when it comes to iPod plays.

http://www.last.fm/forum/34905/_/315141/1

Since shortly before my old 4G died, the Last.FM client only sporadically would detect new plays. For whatever reason the last update seemed to screw it all up. Now, with my new 5.5G iPod, I think I’ve only submitted something like five out of 300 tracks I’ve listened to. Starting yesterday, every time I sync my iPod, the Last.FM client detects it as a new iPod and asks me if I’d like to scrobble it… and yet nothing gets scrobbled. ;(

Sorry, that is off-topic, but I am sad.

Fingerprinter went from 64 hours remaining to saying 55 hours. Now it’s back to 64 hours.

It’s also only using 20-50% of my CPU time. Make it threaded, use both my cores!

How is this fingerprinting/metadatabase going to relate to MusicBrainz? I know last.fm recently came to an agreement with MusicBrainz. Is this data ultimately going to find its way to musicbrainz? Will the soon to be released web services allow me to resolve a track to an MBID? Is this using the MusicIP fingerprinter? I’ll fingerprint my 1.5TB of music if you answer these questions ;)

Paul, we’ve done some testing, and we weren’t really happy with MusicIP’s fingerprinting service. We want to work together more closely with MusicBrainz, and maybe at some point MusicBrainz might even want to switch to the fingerprinter we’re using? It really does a pretty good job, the source code is already out there, and the web service will be open. I’m not sure how we will map MusicBrainz IDs to ours, but I’m guessing we could run MusicIP’s fingerprinter on our music.

Thanks Elias – Kimko is spot on … it’d be nice to understand how this is going to fit in with MB up front. I’m guessing there are some folks who wouldn’t mind contributing this kind of data if there’s a guaranteed path to a Musicbrainz ID resolver. If there’s no guarantee, then it starts to feel a bit like CDDB. I may contribute lots of data that someone else gets to use and make money with. Certainly there’s value in cleaning up metadata, but please don’t overlook the tremendous opportunity to create a single, ubiquitous, track-level song ID. Last.fm is now in a very influential position in the industry. I really hope that you will explicitly adopt the MBID and support it in your web services.

Are you going to use the MusicBrainz information when verifying the data? Gathering data isn’t really a democratic process (well, you know that already) and it would be a shame if the amount of work done on the MB regarding wrongly attributed tracks/albums, wrong names, misspellings etc. wasn’t used here.

What about multiple libraries hosted on different computers (laptop and desktop)? Not identical, but there is certainly overlap. Could have different metadata for the same tracks even (artist, the vs. the artist for instance)

About 5 minutes into my rather large collection it started crashing. Oddly I traced it down to a particular file which was also causing problems with another app recently (Simplify Media). In that case it was due to the ID3 tag library they were using. Hope it doesn’t keep choking on the meta data every so often. Well, only 262 hours left to go now ;)

We use taglib, could it be the same lib? These tag-reading libraries never seem particularly stable :( For the client, I reckon we should separate it into another process, just to be more stable.

I’m wondering, though: does your app send the ID3v1 or ID3v2 tag? Or both? I clean up the v2 tags but often leave the v1 tags set to whatever eMusic or whomever set them to, to assist in tracing where they came from…

Speaking of TagLib… the Windows installer and the source code tarball include binaries of my old port of TagLib to Windows, but not the source code. For the sake of LGPL, I think it would be better to fix this.

And while you are on it, you could probably remove the .dmg file from the source tarball. :)

This is great! Hah, I can’t clearly remember if this re-incarnation of the older checks & balances was part of the old AudioScrobbler when I joined [I was a bit daunted by the layout then, but quickly overcame it].

However, knowing music collections as large as mine [& ever growing], I’m quite thankful for it!
I’m positive there’s sure to be sooommme thing that I’ve missed. xP

Thanks much for all the hard work!
I’m still loving AS/Last.FM ;] for well more than 2 years, & those to come.

Overloaded Servers/Network Error message is received about every thirty seconds for the four times I tried fingerprinting my entire collection.. Trying single albums/folders at a time does the same thing.. Maybe server-side issues could be fixed/amplified before releasing the next client-side fingerprinting upgrade?

Also, along with Jester.NL, a minimization to task bar or system tray option would be great so it can run in the background.. A memory allowance would be nice too down the road.. I understand this is the initial “quick” release and beta.. Great overall idea though.. I’ll try it again another night..

Pointless stuff for developer(s) in case it matters: Using Windows Vista Ultimate with over 50,000 mp3 files (Raid-0/500GB/2 Hard drives + three external drives).. ID3v1 and ID3v2 tagged initially by CDex then re-tagged with TagScanner as are the ones I “share” and “borrow” from SLSK and from free Last.fm mp3s..

BTW, to anyone wondering what to do if they have multiple collections (laptop/desktop, home/work etc), I would recommend to just fingerprint it all. Don’t worry about overlap, that’s what the fingerprints are eventually supposed to fix :)

A quick note on the algorithm: as we specify in the source, it is based on the work of Yan Ke, Derek Hoiem, and Rahul Sukthankar (see http://www.cs.cmu.edu/~yke/musicretrieval/).

Their idea is rather simple: use a computer vision algorithm to identify “patterns” in a (noisy) visual representation of the audio (i.e. a spectrogram). The patterns that best discriminate between same/different pairs of songs are found by training a smart machine learning algorithm, and they are codified into binary thresholds, which are then applied to overlapping snippets of the song. Since we have 32 of those thresholds, each snippet (2048 frames at ~5kHz) can be summarized into an integer (a key). For about 30 seconds of audio we get about 2500 of those keys, which are sent to our searching service! And.. voilà! Fingerprint is served! ;)
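As a rough illustration of the shape of that computation, here is a toy sketch in Python (with numpy). The 32 learned classifiers from Ke et al. are replaced by random spectral weightings, so the resulting keys are placeholders, not real Last.fm fingerprints; the function name `fingerprint_keys` and all parameters besides the frame size and bit count mentioned above are made up:

```python
import numpy as np

def fingerprint_keys(samples, frame=2048, hop=64, n_bits=32, seed=0):
    # Toy stand-in for the trained classifiers: each "bit" is a random
    # spectral weighting compared against a threshold of zero. The real
    # system learns these tests from same/different song pairs.
    rng = np.random.default_rng(seed)
    weights = rng.standard_normal((n_bits, frame // 2 + 1))
    keys = []
    for start in range(0, len(samples) - frame + 1, hop):
        spectrum = np.abs(np.fft.rfft(samples[start:start + frame]))
        bits = weights @ spectrum > 0          # 32 binary tests per snippet
        key = 0
        for b in bits:                         # pack the bits into one integer key
            key = (key << 1) | int(b)
        keys.append(key)
    return keys

# ~30 seconds of mono audio at ~5kHz yields a couple thousand keys
audio = np.sin(np.linspace(0, 2000 * np.pi, 150_000))
keys = fingerprint_keys(audio)
```

Each overlapping 2048-sample snippet thus collapses into a single 32-bit integer, and the sequence of those integers is what gets shipped to the search service.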

@Lukáš: Was it your port? Wow, thank you very much, it saved us a lot of troubles (even though, the compilation was kinda tricky)! :)
Can you provide us with an updated link to the source? We would like not to include the source of ALL the libraries we use. :)

What about FLAC? I have about 1/4 of my music in FLAC. Also, will the fingerprints be able to tell that different formats are the same song, i.e. ogg and mp3, if you ever add more than mp3? The reason I ask is I have most of one artist in 5.1 24-bit, so I figure the channels will have different fingerprints, but it would still be the same song?

@David: FLAC and AAC will follow (among others), it’s just about getting the wav data out of the format and that’s why we will NOT support DRM. Regarding different formats, the algorithm is good enough to be format agnostic. Clearly if there is some difference in the music (i.e. a drum or some rearrangements) this will generate different outputs.

@MaastrichtBiker: we rejected openIP for several reasons:
1. It’s proprietary ;)
2. It needs two minutes of audio data. This is a lot of decoding+processing! Ours will (during detection) only need about 30 seconds of data.
3. It does not seem to be very reliable. Musicbrainz is full of duplicate ids for the same audio.
4. The actual fingerprint recognition is quite slow. It seems fast because they collected a huge database of file hashes!
5. Others.. :)

Don’t get us wrong, we love MusicBrainz and the MusicIP guys. It’s just that we feel our solution works better for what we need.

You mention it can pick up where it left off, but if I’m adding stuff all the time, does it go back and pick up ones I’ve added? Or should I wait until I have it all ripped from cd to mp3 to start fingerprinting?

6 seconds? Dang, I’m averaging about 3.6, peaking at 4.5. Is this because all of my stuff already has MBIDs or is my 2.5 year old computer magic or something?

I think the problem with Picard is that it freaks out if my file isn’t exactly what it expects. Well when I rip stuff Audiograbber automatically trims off silence, so my files are often a second or two shorter than the track on the CD, and Picard can’t seem to get past that to realize it’s still the same song.

Also, iPod people, I’ve been using iSproggler for years now and it scrobbles just fine. The only thing it has issues with is that it doesn’t recognize anything with a playcount of 0, so you either have to listen to each track once in iTunes before playing it on your iPod or be willing to be off by one for each new song.

Sounds great, except the program doesn’t wanna find any of my mp3’s on my computer! Using the explorer built into it, the only folder it thinks is on my c: is the Documents & Settings one – not very useful when I store all my mp3’s in c:\music, not in the My Music folder inside My Documents! Yet oddly, it can see all the folders fine on my other hard drives.

So yeah, tried and failed… but I suppose that’s the issue with beta software :P

Craig, I thought I had the same problem as you (I use C:\mp3 and C:\new) but I realized that my folders were all there, just not in the right order. For some reason the program doesn’t show the folders in alphabetical order, they’re just all jumbled up, which made it look like some were missing.

I ran the fingerprinter for an hour last night, then stopped it to do other things. I just started it up this morning, it remembered where I left off (altho I did have to reselect the folders with my music), and it’s chugging away again today!

We will post to the MB mailing list soon about how this will impact the Lastfm->Musicbrainz relationship.

The gist will be that although we will use a different fingerprint system, it will still be possible to use it to resolve MBIDs, and once we’ve cleaned up our metadata a bit, we will endeavour to match our catalogue with Musicbrainz. Matching our catalogue to MB atm is hard, because it is so messy.

This sounds great!! I have 12 mp3 files from Brazilian artists and I don’t know the names of the songs nor their artists….would this tool be able to help me out in such a way that I’ll be able to get the names of the songs?

Still overloaded… I’ve had enough of restarting it now, this is the last time. I’ve left my Mac on for the last 18 hours and it quit out after 2, AGAIN.
Half measures…..
I hope you’ve offloaded this to a different server farm or is this just going to be yet another slowdown on the last.fm servers?
(off topic I know but last.fm website is really really slow, even as a subscriber)

I just looked around and didn’t see any reference to this; please let me know if what I’m describing is in use.

If many listeners have ripped their own CDs using the default settings of their player, then we have many files that are exactly the same.

To leverage all of the CPU work that’s already happening out there, couldn’t a music-profiling app perform a quick MD5 of the file, and if the file has already been profiled, skip the expensive analysis?

Perhaps it could ignore the metadata and only MD5 the music data itself, to improve the chances of finding a hit.

What would be wrong with this approach? Somebody must have already thought of this — is it just not worth the effort? Even if there were only 5% overlap, it might save a lot of time, and would only get more useful with time.
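The suggestion above, hashing only the audio bytes and ignoring the metadata, could be sketched like this (an illustration of the commenter’s idea, not Last.fm’s actual code, which, as noted later in the thread, hashes the whole file; the function name `audio_md5` and its tag-skipping logic are my own assumptions about typical MP3 layout, where an ID3v2 tag sits at the front and a 128-byte ID3v1 tag at the end):

```python
import hashlib

def audio_md5(path):
    """Hash only the MP3 audio stream, skipping ID3v1/ID3v2 tags, so two
    rips that differ only in their metadata can still produce the same hash."""
    with open(path, "rb") as f:
        data = f.read()
    start, end = 0, len(data)
    if data[:3] == b"ID3" and len(data) >= 10:
        # ID3v2 tag size is a 4-byte "syncsafe" integer: 7 bits per byte
        size = 0
        for b in data[6:10]:
            size = (size << 7) | (b & 0x7F)
        start = 10 + size          # skip the 10-byte header plus the tag body
    if end - start >= 128 and data[end - 128:end - 125] == b"TAG":
        end -= 128                 # drop the trailing ID3v1 tag
    return hashlib.md5(data[start:end]).hexdigest()
```

Two files with identical audio frames but different tags would then hash the same, improving the odds of a cache hit on the server.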

@Royce: I am not a last.fm developer but… The way I see it, it’s the disk read that’s expensive, not the CPU time or bandwidth. You’d still have to read the file from disk to hash it, and you’d still have to communicate with the server to check the hash, so I can’t see that you’d save all that much.

Plus, I wouldn’t be surprised if there were subtle differences that would throw off the hash. Not everyone uses Exact Audio Copy to rip their files…

@RJ: Regarding Russ’s comments, I can certainly understand that there may be good reasons for not giving all your IP away; and offering an API to a song identification service is not a bad compromise. I guess I’d ask, though, which business last.fm is in (or wants to be in): metadata (narrowly defined) or recommendations and social networking? Assuming the latter, I can’t really see how keeping the metadata piece proprietary helps. Isn’t better song metadata better for last.fm, regardless of how it’s generated and who makes money off of it?

Anyway, just a thought. And, FWIW, I’m also in favor of a league table. The distributed.net RC5 project just doesn’t waste enough time each day—I need more ways to procrastinate.

Why not make the fingerprinter able to run without needing to connect to the last.fm server… my computer is just sitting here doing nothing because it can’t connect to the server. Let me store the fingerprints temporarily on my machine and upload them later.

@Royce: an equivalent of MD5 is performed and stored in your local machine (it’s a superfast operation rob :) ) to avoid re-fingerprinting the same stuff.
But this solution cannot be applied in general, since even the slightest variation in the parameters of the encoder generate a different file hash. If we really want to be able to correctly tag files, we have to look at the audio.

Rob Szarka has a good point … free and readily available clean song metadata, along with a widely used song ID will help all and embed last.fm even further in the center of a thriving, non-riaa-dominated music ecosystem.

@Rob Szarka: I’m not sure how much of the per-song time is server lag, but one of my CPUs is pegged doing the analysis right now, and my average time across the first 300 songs was 15.6 seconds … so I don’t think that I/O is the bottleneck. By contrast, I just ran MD5 against 300 songs and averaged 1.2 seconds each. It’s relatively inexpensive, from my perspective.

@Norman: Understood; I do see the need for actual audio analysis. I was only advocating using MD5 as a quick first pass. Sounds like you’re using something similar anyway, and I’m just advocating reordering some of the sequence and using it in a slightly enhanced way.

If the Fingerprinter is checksumming just the audio, does it have to reanalyze my music if I change my tags? If you’re checksumming just the audio, then I would love to see some statistics on how many matches there are so far. It would either refute or support my argument. :)

If 80% of the MP3s in the entire world are entirely unique — even when leaving out the metadata — that’s 20% of them that match somebody else’s, which could turn my 10-hour pass into an 8-hour pass as you accumulate more data.

If you’re computing an audio-only checksum anyway, and you’re doing it after you’ve done the analysis, you could move it to happen before the audio analysis instead.

To get the CPU savings, you’d have to connect to the server prior to analysis, which might be expensive from a network perspective, though there are ways to minimize that cost.

Even if you didn’t want to go to the trouble of trying for the CPU savings, you might as well upload the checksum and use it for statistical purposes, IMO. You might discover patterns that you hadn’t considered before.

Only 45 hours to go before all of my 12726 tracks are fingerprinted. I have no idea what this means but I assume it will help someone, hopefully me, in the future. I’m assuming if my tracks are stolen, I can use the fingerprinting feature to find the culprit, right?

Sorry if this was used earlier in the thread. I started to see the words “thread”, “CPU”, “makefile” and “MD5”, dozed off, hit my head on the space bar, and ended up here.

Yeah(!) for whatever it is you’re providing us. I’m sure in 6 months it will make perfect sense to me…just like Twitter. I’m slow, but I’m assuming this is something kick ass and worth 2 days of 100% CPU time that I would have spent viewing pr0n or something…

Someone mentioned earlier that the app shows the folders in a jumbled-up order. Not entirely true. Looks like it’s alphabetical with a twist: it orders capitalized folders first, then lowercase folders, and then foreign-letter folders. Kind of messy, but logical in its own crazy way?

mll: the main motivation for asking for user/pass at this stage is for stats. I can imagine the FP process being an optional part of the scrobbling protocol in future tho, and there it would be tied to the username for security reasons, so we could wipe out all data from malicious users/spammers and so on. We didn’t anticipate any privacy issues because it’s more or less the same data that is sent when scrobbling.

Phil: the algorithm only looks at the audible data, so it ignores watermarks.

Royce: we only checksum the entire file atm, not the audio part. This may change. We took the easy option to start with, and it will still be a good shortcut. (Seems to work ok in p2p apps that do multi-source download based on file hash ;)

Most FPs from a single user atm is 16,409.
I’m in the midst of moving house atm so haven’t managed to fingerprint my collection yet.. Maybe i’ll pester max/norman to make the trunk work on multi-cores so i can catch up :D

Hi, I’m doing this little experiment in making a repository for Ubuntu Linux in which I try to collect as many third-party .deb packages as I can find. Would you mind if I included this .deb in that repository?

By the way, I’d like to ask the same question for the official Last.fm Player.

I’m slowly working my way through – I’m on the C’s in my library of artists. Thanks for the stop-restart feature. Otherwise, my poor little laptop would probably burn up; I can almost hear the teeny weeny fan crying from exhaustion. ;-)

The fingerprinting takes much longer than a normal MD5-style hash because it’s looking at the actual sound of the MP3 files, rather than the compressed MP3 data in the file (which would be pretty much useless for something like this). This means that the fingerprinter has to decode the whole file, as if it were playing it, then create what sounds like a quite complicated set of data from it. Usually, no-one notices the processor power needed to decode an MP3, because it’s played in realtime, but it’s not exactly a simple thing to do – I’m only 20 but I can remember the days when you couldn’t do anything else on a computer while playing MP3s! Don’t get me started on how long it took to rip anything…

I’ve just fingerprinted 1300 tracks from my MacBook – not a large collection, but I doubt there’s many people with Durwood Douche’s Big Banned and Blue! Including such musical gems as “Everybody’s Fucking But Me,” “I Just Can’t Keep My Mitts Off Your Tits” and “Just A Little Christmas Blowjob,” I felt that it was an album just dying to be fingerprinted!

As of 3:28 eastern US time (and for the last 12 hours), I can’t submit a single fingerprint without getting a “servers are overloaded” message. I’d sure like to help, but you seem to be lacking in the resources department.

It looks like your servers have a limited number of inbound connections, like a cell phone tower, and each song submission looks like a unique session. So if too many people try to submit at once, the server gets clogged.

Hello Last.fm people. Thanks for the great sites. May I ask of you to make statistics on Last.fm more diverse? Like, I dunno, just more diverse. Anyhoo, thanks for everything. I am looking forward to all the new updates and upgrades and what have you.

I see a problem. The program processes the audio files in alphabetical order, and processing seems to take a couple of days for large music collections. There will probably be quite a number of people who won’t let the program process their whole collection when it takes that long.
Because of this, there will be less and less data for artists whose names fall towards the end of the alphabet.

Wouldn’t it make some sense to either shuffle or start with a random offset?

@Graham: Yup, this is part of the plan! :)
@Johannes: Don’t worry. That’s just phase 1. Soon we will add it to the client, and since a query to our FP servers needs only 30 seconds of audio, it will be much faster too!

To all: yes we will support other formats! It’s just a matter of finding the right library! :)

I have very little experience with MusicBrainz, but I think it’s important to be more dynamic when interpreting mp3 tags. I format my albums in the “<year> – <albumtitle>” format – and I know a lot of other people do too – so in any list of albums, they are displayed chronologically. This in turn means that not a single album has been sent to my last.fm profile. Pretty average. For either last.fm or MusicBrainz to dictate what is simply ‘correct’ or not, is too simplistic.

I hope there is some kind of support for tags using a different (read: better) syntax like this. Thanks.

Well, one could argue that displaying albums chronologically should be done via the year part of the ID3 tag.

But that’s what fingerprinting is about. The tags don’t matter so much anymore, we suddenly know what the song/album is, even if you put the year in the album title field and hence can display it ‘correctly’ on Last.fm.

@Anonymous: The problem with your theory is, it isn’t the correct title for the album. It might be useful for personal sorting, but that is it – the ID3 tags have a year entry for that reason otherwise.

Hello Last.fm people. Say I fingerprinted my collection a bit and then stopped, added a few hundred tracks or so, and then started fingerprinting again: how will the program know that I added music, and which music I added?


This was exactly my point; tag info needs to be sent regardless of whether it’s deemed “correct”. Measures should be taken to interpret “incorrect” tags like the syntax I mentioned.

Just thought I’d try and ask this again, I’ve noticed someone else asking the same question (and hoping I haven’t stupidly missed the answer!)….is the stop/restart feature of the fingerprinter smart enough to pick up anything it hasn’t done already, or does it just work top to bottom and not go back and pick up anything that has been added in the meantime? (So if I have a Pink Floyd directory which is scanned, then I stop the fingerprinter, add a new bunch of mp3s under that folder and then restart, will the fingerprinter pick the new files up at any stage, or just continue on to the next directory?)
I ask ‘cos I’m right in the middle of converting my cd collection to mp3 and just wondering if I should start fingerprinting now, or wait until I’m done (a long way off)….

@pocketmumble (and all the others with the same question):
The fingerprinter will remember where it was before you stopped it (even after a reboot) and will do a quick scan to see if any new files have been added in a previously scanned directory, or alphabetically before the last scanned track.
So, all your music will be scanned, without any double work.
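The resume behaviour described above might look something like this sketch (my own guess at the logic, not the client’s actual code; `files_to_scan` and `already_done` are invented names):

```python
import os

def files_to_scan(root, already_done):
    """Walk the library in alphabetical order and yield any .mp3 not yet
    fingerprinted, so files dropped into already-scanned folders still get
    picked up. 'already_done' stands in for the set of paths the client
    persists between runs."""
    for dirpath, dirnames, filenames in os.walk(root):
        dirnames.sort()                      # alphabetical traversal
        for name in sorted(filenames):
            if name.lower().endswith(".mp3"):
                path = os.path.join(dirpath, name)
                if path not in already_done:
                    yield path
```

Because the walk always revisits every directory, a new file added under an already-scanned folder is still found on the next run, while anything recorded in `already_done` is skipped, which matches the “no double work” behaviour described above.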

I think it has to do with how you store your music. If it is stored in one large folder, then it will be fingerprinted alphabetically. However, if you use some sort of hierarchical directory structure, the algorithm appears to move through the higher-level folders in alphabetical order, then the lower-level folders, down to the actual folder containing the music. For me, this means that my folder containing “Blues” is fingerprinted before my folder containing “Classical” and so on.

I (and I suspect others too) would be done already if the damn fingerprinter wouldn’t give up every time the damn server overload comes back. I mean, sure it’s beta, but even betas can have basic features like fingerprinting to a cache?

But then again, if we fingerprinted to cache, we would most likely kill the server with the cache submissions.

After all, it has happened before with the scrobbling, so I guess the lack of a cache might be a feature that has been left out purposely for the time being. Inconvenient for us, but necessary I suppose.

maybe the folks who have tons of music stored elsewhere changed the path of the “my music” special folder to lead to that directory. it’s very useful in this kind of situation. :)
btw. i still can’t fingerprint. the servers have been overloaded for like 20 hours now..

I agree with Amr Hassan; I have my music stored on a dedicated server, but used TweakUI to change the “My Music” folder (actually “Music” folder, Vista dropped the “My”) to this server. Now everything that tries to access “My Music” automatically will find all my music there, even though it is physically elsewhere… Works great…

For all the guys that are experiencing overloaded messages — wait a few minutes and try again.

I was getting those messages a lot, granted I was also maxing out my connection (with a sister and mom who love Skype :D )

I just finished doing roughly 26k tracks. Most of them are perfectly tagged so that should help out a ton. Took me a while to do all the tracks considering I could only analyze during the evening while I was home.

that would be pointless as well as time-consuming. processing takes 6 seconds per track at the most. multiply that by the number of users currently fingerprinting and the number of users accessing said page, and the last.fm servers would certainly crack under the strain.

I have many tracks by POLYSICS, as well as some other artists with foreign-alphabet track names.

A lot of their tracks have been submitted in both the original Japanese and translated into English/Romaji.

I’d prefer if they were displayed in the original Japanese (as that’s what’s on the back of the album case), but others will want it in English. Will this be given special treatment with some sort of preference for display?

Will old submissions with bad metadata finally be fused with the correct versions?

<@Rj>… news update coming today … is there a special section at my profile where I’ll find fingerprint news, or even a live graph? All this sounds very interesting to me; looking forward to what comes out, and I appreciate your work on this. Thanks, marc (still fingerprinting, 1% idle… whoo…) Also, is there an email list for interested people/teams?</@Rj>

However, I hate to say it, but your source tree is a bit of a mess. You included a big mess of binaries in the source tree! Eww!

Since I’m one of those pedantic folks, I decided to try to build the entire thing from scratch. After a little bit of mucking about I managed to figure out that I could do so by removing the original bin/ directory, and then running make in src/libLastFmTools, src/libFingerprint, and finally in the top-level directory. Hope this helps anyone else who wants to rebuild it.

The application had been working fine for a while, but now it has started to crash frequently. I have 91’738/131’516 of my collection fingerprinted, but now I am stuck due to crashes. I use version 1.0.3.2 on Mac OS X; my mp3 collection is on an external FireWire 2TB RAID.

Hmm, probably brilliant, but it’s a bit difficult to get it going on my main music player, a Samsung Q1 with a 640×800 screen res. Any chance you could make the window resizable to fit in with smaller screens?

I salute this effort, and i support the requests for a command line version (preferably with a -silent option) and ability to query files located on a local network. My current avg is 11.2 sec, and about 8hrs remaining. For this, i expect the full office tour (swag and free booze included) when i come to London :D Cheers

I know I’m a bit late to chip in with this (and perhaps it has been suggested already… I stopped reading after the 100th comment or so), but how about a version that does the fingerprinting in a ‘cache’ mode? That way the process isn’t interrupted every time the servers get overloaded, and the client can just keep retrying until the server is able to process… should speed things up for a lot of people?

This is very interesting. What I’m most interested in learning, though, is this: will it correct old tracks that have been scrobbled with incorrect tags, or will it only affect tracks scrobbled after this system goes live?

All my music is stored on a remote server and accessed via a network share which the fingerprinting program will not recognize. Any chance for this to be updated or should I leave this to all the users who store their music locally?