I'm a music informatics researcher, and I work as an Applied Researcher in industry, but I devote some of my spare time to being a Visiting Academic at Queen Mary University of London (see my Queen Mary web page).

Past workplaces include the Internet music platform Last.fm, where I worked as a Research Fellow, the Japanese research centre AIST in Tsukuba, and, as a research student, the Centre for Digital Music. Find more info on my biography page.

I had the privilege of co-presenting a tutorial on “Why singing is interesting” with Simon Dixon and Masataka Goto on Monday (26th October 2015) at the ISMIR 2015 conference. Check out the slides here. I include our abstract below.
The proposed tutorial aims to introduce to the ISMIR community the exciting world of singing styles and the mechanisms of the singing voice, and to provide a guide to representations, engineering tools, and methods for analysing and leveraging it. The singing voice is arguably the most expressive of all musical instruments, and all popular music cultures around the world use singing. Across disciplines, a lot is known about singing culture and the intricate physiological and psycholo…

Publication authored by Jiajie Dai, Matthias Mauch and Simon Dixon.
We present a new dataset for singing analysis and modelling, and an exploratory analysis of pitch accuracy and pitch trajectories. Shortened versions of three pieces from The Sound of Music were selected: “Edelweiss”, “Do-Re-Mi” and “My Favourite Things”. 39 participants sang three repetitions of each excerpt without accompaniment, resulting in a dataset of 21,762 notes in 117 recordings. To obtain pitch estimates we used the Tony software’s automatic transcription and manual correction tools. Pitch accuracy was measured in terms of pitch error and interval error. We show that singers’ pitch accuracy correlates significantly with self-reported singing skill and musical training. Larger intervals led to larger errors, a…
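As a rough sketch of the two measures mentioned above (not necessarily the paper’s exact definitions): a common way to express pitch error is the deviation of each sung note from its score pitch in cents, and interval error as the deviation of each sung interval from the corresponding score interval. The function names and toy frequencies below are my own illustration.

```python
import math

def cents(f_hz: float, ref_hz: float) -> float:
    """Distance in cents between a frequency and a reference frequency."""
    return 1200.0 * math.log2(f_hz / ref_hz)

def pitch_errors(sung_hz, target_hz):
    """Per-note pitch error in cents: sung pitch vs. score pitch."""
    return [cents(s, t) for s, t in zip(sung_hz, target_hz)]

def interval_errors(sung_hz, target_hz):
    """Error of each sung interval relative to the score interval, in cents."""
    sung_iv = [cents(b, a) for a, b in zip(sung_hz, sung_hz[1:])]
    target_iv = [cents(b, a) for a, b in zip(target_hz, target_hz[1:])]
    return [s - t for s, t in zip(sung_iv, target_iv)]

# Toy example (hypothetical data): target A4-B4-C#5, sung slightly off
target = [440.0, 493.88, 554.37]
sung = [442.0, 490.0, 556.0]    # a few cents sharp, flat, sharp
note_err = pitch_errors(sung, target)
iv_err = interval_errors(sung, target)
```

Note that interval error is independent of an overall transposition: a singer who starts flat but sings every interval correctly scores zero interval error.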

My TEDx talk at Goodenough House is the latest in a long list of outreach “things” that I’ve had the chance to do over the last few weeks, all connected to the paper that Armand and I worked on, with help from Bob and Mark: The Evolution of Popular Music: USA 1960–2010.
It’s been quite a ride. Articles about us have appeared in over 400 print media outlets worldwide, including the Economist, Guardian, Telegraph, Süddeutsche Zeitung, Neue Zürcher Zeitung, and so forth. I gave lots of interviews, my main occupation for about two weeks around the publication. It gave me the opportunity to appear on national TV (Sky, Channel 5) and on national, even worldwide radio (Click on BBC World).
I can’t deny that it’s been a blast. I particularly enjoyed being chauffeured around in cabs through London, from one interview to the next. And I’m very happy I’ve got this TEDx talk now, which I can show off to folks.
Yet it’s clearly out of proportion. Sure, we worked very hard to make that paper, and it took us a long, long time. Still, there’s other research that I’m quite proud of, which took nearly as long as this, and it didn’t receive any media attention whatsoever. This is, of course, equally the case for the fantastic research carried out by many of my fellow researchers in the field.
Perhaps it’s a consolation for those who would sometimes like that sort of attention too that it does have its downsides: for several weeks I really didn’t manage to work on what I would consider my actual job, and after a while that felt a bit wrong. Luckily, press attention has faded now (though I’ll do an interview with a radio station in Utah later today), so I’ve resumed work almost as usual. You also get some bad reactions: the press aren’t always in favour of your work, and I’ve even appeared on a website whose only purpose is to shame people. Interesting.
And then there’s the question: what is this media attention actually good for? I assume Queen Mary are happy because it might attract new students. I assume that more researchers around the world will know of the work and will cite it — but realistically, there are not so many people who write papers for which our study might be relevant. Perhaps the media will have the power to reach researchers who work on music and culture who would not normally read music informatics papers. Perhaps that’s the greatest hope I have.
So, uhm, what shall we conclude? Ah, nothing. I’ll leave it to you, dear reader.

pYIN v1.1 is released! Following the launch of the lovely Tony melody transcription software a couple of weeks ago, we have now released the automatic pitch and note tracker pYIN as a Vamp plugin in its own right. It’s a slightly refined version with improved note tracking. Best to go to the project page, which explains everything, or directly to the downloads page for compiled binaries and source code. Enjoy!

We present Tony, a software tool for the interactive annotation of melodies from monophonic audio recordings, and evaluate its usability and the accuracy of its note extraction method. The scientific study of acoustic performances of melodies, whether sung or played, requires the accurate transcription of notes and pitches. To achieve the desired transcription accuracy for a particular application, researchers manually correct results obtained by automatic methods. Tony is an interactive tool directly aimed at making this correction task efficient. It provides (a) state-of-the-art algorithms for pitch and note estimation, (b) visual and auditory feedback for easy error-spotting, (c) an intelligent graphical user i

Publication authored by Matthias Mauch and Robert M MacCallum and Mark Levy and Armand M Leroi.
[Paper page at Royal Society Open Science] In modern societies, cultural change seems ceaseless. The flux of fashion is especially obvious for popular music. While much has been written about the origin and evolution of pop, most claims about its history are anecdotal rather than scientific in nature. To rectify this we investigate the US Billboard Hot 100 between 1960 and 2010. Using Music Information Retrieval (MIR) and text-mining tools we analyse the musical properties of ~17,000 recordings that appeared in the charts and demonstrate quantitative trends in their harmonic and timbral properties. We then use these properties to produce an audio-based classification of musical styles and study the evolution of musical div…

Why not listen to Dancing Queen again? A masterpiece of pop, by one of the most prolific and successful songwriting partnerships the world has ever seen: Benny Andersson and Bjoern Ulvaeus from ABBA. Get your headphones out.
So what is Dancing Queen about? It’s about getting nerdy boys onto the dance floor. How does it do that?
Firstly, through very clever, suggestive lyrics; secondly, through an onslaught of never-ending, interlocking melody madness. Well, yes, and there are some other bits going on as well. But let’s listen together first, and then I’ll tell you more.

Listen-through

0:00 Straight in! Instantly recognisable piano [zoom], then into the intro with that instantly recognisable synth tune. Not any old borin…

Publication authored by Tian Cheng, Simon Dixon and Matthias Mauch.
We investigate piano acoustics and compare the theoretical temporal decay of individual partials to recordings of real-world piano notes from the RWC Music Database. We first describe the theory behind double decay and beats, known phenomena caused by the interaction between strings and soundboard. Then we fit the decay of the first 30 partials to a standard linear model and two physically-motivated non-linear models that take into account the coupling of strings and soundboard. We show that the use of non-linear models provides a better fit to the data. We use these estimated decay rates to parameterise the characteristic decay response (decay rates along frequencies) of the piano under investigation. The results also show…
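The model comparison described above can be sketched in code. The snippet below is illustrative only, using synthetic data rather than the RWC recordings, and a generic sum-of-two-exponentials shape for “double decay” rather than the paper’s exact physically-motivated models: the standard linear model fits a straight line to the log-amplitude (dB) of a partial, while string–soundboard coupling produces a fast initial decay followed by a slower aftersound that a single straight line cannot capture.

```python
import numpy as np

def linear_decay_fit(t, amp):
    """Standard linear model: fit a straight line to log-amplitude (dB),
    yielding a single decay rate in dB per second."""
    db = 20.0 * np.log10(amp)
    slope, intercept = np.polyfit(t, db, 1)  # highest-degree coefficient first
    return slope, intercept

def double_decay(t, a1, k1, a2, k2):
    """Sum of two exponentials: a generic 'double decay' shape
    (fast initial decay plus slower aftersound)."""
    return a1 * np.exp(-k1 * t) + a2 * np.exp(-k2 * t)

# Synthetic partial amplitude with double decay (hypothetical parameters)
t = np.linspace(0.0, 4.0, 400)
amp = double_decay(t, a1=1.0, k1=6.0, a2=0.05, k2=0.5)

slope, intercept = linear_decay_fit(t, amp)
# The straight-line fit leaves a systematic residual, since the true
# log-amplitude curve is convex (fast decay bending into slow decay).
residual = 20.0 * np.log10(amp) - (slope * t + intercept)
```

A non-linear fit of the two-exponential model itself (e.g. via least squares) removes that systematic residual, which is the sense in which the non-linear models provide a better fit.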

Publication authored by Rachel Bittner, Justin Salamon, Mike Tierney, Matthias Mauch, Chris Cannam and Juan Bello.
We introduce MedleyDB: a dataset of annotated, royalty-free multitrack recordings. The dataset was primarily developed to support research on melody extraction, addressing important shortcomings of existing collections. For each song we provide melody f0 annotations as well as instrument activations for evaluating automatic instrument recognition. The dataset is also useful for research on tasks that require access to the individual tracks of a song such as source separation and automatic mixing. In this paper we provide a detailed description of MedleyDB, including curation, annotation, and musical content. To gain insight into the new challenges presented by the dataset, we run a set of experiments using a state-of…