Facing the music with Perl

My digital music libraries were a mess. Spread across several devices and a couple of flirtations with iTunes Match and iCloud, my collection was, ironically, never all in one place. Not only that, but Apple had replaced some files with what it considered better versions. Although I don’t want to perform the experiment to confirm it, I’m sure the new files had different metadata. I needed to sort it all out before I could start on a better system. I thought the task would be arduous, and it was, until I settled on a simpler problem that a couple of Perl modules solved quickly.

For my first step, I needed to find all the music I had. I had backed up my files before I let Apple replace them with its better versions, but I seemed to have made several backups, each with a different subset of my music. One backup would have most of the Led Zeppelin but none of the Beatles, while another had no Zeppelin and some of the Beatles. A third had all of the Beatles but no Cat Stevens.

I started by collecting all the unique files from the directories in which I had found music. This program has some of my favorite things about Perl, especially since I still have the wounds from moving files around during my C phase.

File::Find provides the code to traverse the file structure for me. I give find the list of starting directories, in this case those in @ARGV, and a callback subroutine as a reference. The meat of my program is in that $wanted subroutine. The hardest part of this code is remembering that $File::Find::name is the full path and $_ is the filename only. I put those into variables to remind me which is which.

File::Map allows me to access a file’s data directly from disk as a memory map rather than reading it into memory. I don’t need to change the file to get its digest (using Digest::MD5), so memory mapping is a big win across tens of thousands of music files. If I have seen that digest before, I move on to the next file. Otherwise I do some string manipulation to create new file paths, putting the pieces together with the cross-platform File::Spec. I copy the file to the new location with File::Copy; I specifically make a copy so that the original files stay where they are for now, since I anticipate messing up at least a couple of times. The new path is four levels deep, with each level named after the next two characters of the file’s digest. That way, no directory grows large enough to slow down directory operations.
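The original listing isn’t reproduced here, but from the description it might look something like this sketch. The digested target directory name and the choice to name the copies after their digests are my assumptions, not the original code:

```perl
#!/usr/bin/perl
use strict;
use warnings;

use Digest::MD5;
use File::Copy qw(copy);
use File::Find qw(find);
use File::Map  qw(map_file);
use File::Path qw(make_path);
use File::Spec::Functions qw(catdir catfile rel2abs);

# absolute path, because find() changes directory as it works
my $target = rel2abs( 'digested' );

my %seen;  # digests I've already copied

my $wanted = sub {
    my $path = $File::Find::name;  # full path
    my $file = $_;                 # filename only; we're chdir-ed into its directory
    return unless -f $file and -s _;

    # memory-map the file instead of reading it into memory
    map_file my $map, $file, '<';
    my $digest = Digest::MD5->new->add( $map )->hexdigest;
    return if $seen{$digest}++;    # seen this content before, move on

    # four levels deep, two hex characters of the digest per level
    my @levels = unpack '(A2)4', $digest;
    my ($ext)  = $file =~ /(\.[^.]+)\z/;
    my $dir    = catdir( $target, @levels );
    make_path( $dir );
    copy( $file, catfile( $dir, $digest . ($ext // '') ) )
        or warn "Could not copy $path: $!";
};

find( $wanted, @ARGV );
```

The digest of the file’s contents, not its name, decides uniqueness, which is exactly what lets identical files from different backups collapse into one copy.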

Some rough calculations showed me that no particular music library was more than 85% complete. This was where the real fun began, but also my embarrassing tales of woe. Out of the newly copied files, I needed to select the ones I wanted to keep.

First, I simply cleaned out my iTunes library and reimported everything to see what I was working with. Most of my music existed in duplicate, and some in triplicate. iTunes Match had upgraded MP3 files to M4A (encoded in Apple’s AAC codec) and had done the same for M4P files, the DRM-ed versions of music I had purchased. Each version had a different digest, so several versions of the same content survived.

I struggled with the next part of the problem because I have too much computing power at my disposal. I could collect all of the metadata for each file and store it in a database. I could throw it into a NoSQL thingy. I even thought about redis. Any one of these technologies is a fun diversion, but they all require too much work. I started and abandoned several approaches, including a brief attempt to use AppleScript to interact with iTunes directly. Oh, the insanity.

Working from the digested directory each time was a bad decision. I’d have to collect the metadata, then group files by album or artist. iTunes had already done that for me, although I didn’t realize it for a week. When I imported the music, it copied the files into folders named after the artist and album (something I could have done myself instead of using the digests). Most of my work would be limited to the files in a single directory. I didn’t need a data structure to hold all of that, and I certainly didn’t need a database.

If I could enter a directory, examine each file in it, then process those files on the way out, removing the duplicates would become much easier. I remembered that File::Find has a postprocess option that lets me do exactly that, although I hadn’t used it in years:
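A sketch of how that might look. The process_directory routine is a stand-in for the real per-directory work, and collecting filenames in a hash keyed by directory is my way of keeping a parent’s files separate from a subdirectory’s:

```perl
use strict;
use warnings;
use File::Find qw(find);

my %files_in;  # files collected per directory, keyed by directory

sub process_directory {
    my ( $dir, @files ) = @_;
    # stand-in for the real work: choosing which duplicates to keep
    print "$dir: @files\n";
}

find( {
    wanted => sub {
        push @{ $files_in{$File::Find::dir} }, $_ if -f;
    },
    postprocess => sub {
        # called just before find() leaves $File::Find::dir
        my $dir = $File::Find::dir;
        process_directory( $dir, @{ delete $files_in{$dir} // [] } );
    },
}, @ARGV );
```

Note that the option key is spelled postprocess, with no underscore, and that the callback runs with the directory about to be left as the current working directory.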

While I was in each directory, I could collect information on each file. The files were already grouped by artist and album, but I still needed to choose which of the duplicates to keep. After a bit of thought, the solution turned out to be simple. I could sort on file extension, looking up the ordering in a hash. When two files had the same extension, I’d choose the one with the higher bitrate. When the bitrates matched, I’d choose the one with the shortest filename. Among the various music libraries, I had files like Susie Q.m4a and Susie Q 1.m4a: essentially the same file except for some slight metadata differences. I used Music::Tag to get the metadata since it automatically delegates to plugins for the various file formats.
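That three-tier comparison might be sketched like this. The extension ranking here is my guess at a sensible ordering, and the bitrate values stand in for what Music::Tag’s bitrate method would report after loading a file’s tags:

```perl
use strict;
use warnings;

# preferred extensions, best first; this ordering is my assumption,
# not lifted from the original program
my %rank = ( '.m4a' => 0, '.m4p' => 1, '.mp3' => 2 );

sub extension {
    my ($name) = @_;
    $name =~ /(\.[^.]+)\z/ ? lc $1 : '';
}

# each file is a hashref: { name => ..., bitrate => ... };
# in the real program the bitrate would come from Music::Tag
sub order_files {
    sort {
           ( $rank{ extension( $a->{name} ) } // 99 )
       <=> ( $rank{ extension( $b->{name} ) } // 99 )
        or $b->{bitrate} <=> $a->{bitrate}                # higher bitrate first
        or length( $a->{name} ) <=> length( $b->{name} )  # shortest name first
    } @_;
}
```

Each tier only breaks ties left by the one above it, which is exactly what chaining comparisons with or inside a sort block gives you.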

After sorting, I mark everything except the first element in the list for deletion. I don’t delete those files right away; I print the list to a file that I can use later to do the deleting. I’ve been around too long to delete files immediately.
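Marking the extras might look like this; the mark_extras helper and the list filename are my inventions for the sketch:

```perl
use strict;
use warnings;

# given the sorted candidates (best first), keep the first and append
# the rest to a list file for later review
sub mark_extras {
    my ( $list_file, @sorted ) = @_;
    my ( $keep, @extras ) = @sorted;

    open my $fh, '>>', $list_file
        or die "Could not append to $list_file: $!";
    print { $fh } "$_\n" for @extras;
    close $fh;

    return $keep;  # the one file to hold on to
}
```

Later, after eyeballing the list, something like perl -nle unlink to-delete.txt can do the actual deleting, since unlink with no arguments operates on $_.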

And that was it. This left behind a couple of problems, such as messed-up metadata, but I wasn’t going to be able to solve those programmatically anyway. Getting a complete set of files with no duplicates solved most of the problem and left me with the joy of flipping through physical albums that only we greybeards remember.
