Machine Learning for Bird Song Learning

Abstract

Songbirds, including familiar species like chaffinches and great tits, share an unusual ability with us: vocal learning. Like us, birds need to hear and imitate others in order to develop their vocal communication signals. Most mammals, and indeed most vertebrates, cannot do this, including all primate species apart from us. In recent years, research into the development, neurobiology, and genetics of song learning has revealed ever deeper links between human speech and bird song - so much so that bird song currently represents the best animal model we have for understanding the biology of speech.

In order to study bird song, researchers need to measure accurately how different songs are from each other. These measures are needed to assess whether one bird really did imitate another, and how precisely it did so. Developing computer algorithms to make such measurements is difficult, however, for many of the same reasons that speech recognition is a difficult task for computers. In this project, we will take a new approach to this problem, inspired by developments in speech recognition. First we will train birds to peck buttons at a feeder to get a food reward, and then train them further to discriminate between different "notes" within bird songs. Then we will train "machine learning" computer algorithms to replicate the birds' decisions. We will thus develop a computer algorithm that we can use to compare bird songs in a way that is biologically validated.

We will then use our algorithm to investigate how birds learn their songs. To do this, we will make use of data-sets where researchers have simply recorded the different songs sung by birds within a population. These data contain a signature of how the birds actually learned their songs, in much the same way that our genomes contain signatures of our evolutionary history. We will exploit this by using a statistical technique in combination with simulation models to infer how birds learn their songs: how frequently they generate new song types due to errors or innovations; who they prefer to learn from; and which songs they prefer to learn. We will do this for 15 different species and populations, allowing us, for the first time, to compare how different groups learn their songs.

Technical Summary

Bird song learning research has been built on our ability to judge the similarity between song syllables, but current methods have not been validated against birds' own perception. In order to carry out the next generation of studies of song learning, we need to develop more accurate methods, rooted in biology. And to do that, we first need comprehensive data-sets of how birds themselves perceive differences in song syllables.

Objective 1: Generate data-sets of how birds perceive differences between song syllables, using operant conditioning with an AXB task, for three unrelated species: zebra finch, great tit and jackdaw. We will generate around 150,000 trials.

Objective 2: Develop and train machine learning algorithms to measure song syllable similarity. Recent developments in machine learning provide powerful methods for fitting algorithms to complex time series data, like bird song syllables. We will develop and train algorithms using the results from Objective 1. We will compare the performance of our algorithm against current methods, and will host a data tournament for the machine learning field to further search for optimal solutions.
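As an illustration of the kind of fitting Objective 2 involves, the minimal sketch below (our own toy construction, not the project's actual pipeline) learns perceptual feature weights from simulated AXB choices: given triplets where a bird judged stimulus X more similar to A or to B, it fits a weighted distance by gradient descent so that the metric reproduces those judgments. The feature dimensions, model form, and all numbers are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical setup: each syllable is summarised by a feature vector
# (e.g. duration, peak frequency, bandwidth, ...). The "true" perceptual
# weights are unknown to the fitting procedure; we recover them from
# simulated AXB choices.
n_features = 5
true_w = np.array([4.0, 0.1, 2.0, 0.1, 1.0])

def dist(w, x, y):
    """Weighted squared Euclidean distance between feature vectors."""
    return ((x - y) ** 2 * w).sum(axis=-1)

# Simulate AXB trials: the bird reports whether X sounds more like A or B.
n_trials = 2000
A = rng.normal(size=(n_trials, n_features))
B = rng.normal(size=(n_trials, n_features))
X = rng.normal(size=(n_trials, n_features))
choice_A = dist(true_w, X, A) < dist(true_w, X, B)  # simulated bird choices

def accuracy(w):
    """Fraction of trials where the metric agrees with the bird's choice."""
    return np.mean((dist(w, X, A) < dist(w, X, B)) == choice_A)

# Fit weights by gradient descent on a logistic loss over the distance
# margin (unchosen stimulus minus chosen stimulus).
w = np.ones(n_features)
lr = 0.05
for _ in range(300):
    chosen = np.where(choice_A[:, None], A, B)
    other = np.where(choice_A[:, None], B, A)
    margin = dist(w, X, other) - dist(w, X, chosen)
    grad_margin = (X - other) ** 2 - (X - chosen) ** 2
    grad = -(1.0 / (1.0 + np.exp(margin)))[:, None] * grad_margin
    w -= lr * grad.mean(axis=0)
    w = np.clip(w, 0.0, None)  # keep the metric's weights non-negative

print(f"baseline accuracy: {accuracy(np.ones(n_features)):.2f}")
print(f"fitted accuracy:   {accuracy(w):.2f}")
```

In practice the project would fit far more flexible models (e.g. neural networks over spectrogram time series) to real trial data from Objective 1, but the principle is the same: the birds' own discrimination judgments supply the training signal.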

Objective 3: Apply the machine learning algorithms developed in Objective 2 to a fundamental problem in bird song learning: we lack quantitative estimates of how precisely birds learn songs. Without this information, it is impossible to take advantage of the diversity of bird song learning styles in different species and gain a comparative understanding of how song learning behaviour evolves. For this objective, we will (a) collate patterns of song sharing in populations of birds of 15 different taxa; (b) compare syllable structure of all songs within each of the populations using our algorithm; (c) use Approximate Bayesian Computation to fit the results to cultural evolutionary simulations, and thus estimate underlying parameters of learning - in particular the precision of syllable imitation.
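To make step (c) concrete, the toy example below (a sketch under our own simplifying assumptions, not the project's actual model) applies rejection-style Approximate Bayesian Computation to a minimal neutral model of song-type transmission: a uniform prior over the innovation rate is filtered by comparing a summary statistic (the number of distinct song types) between simulated and "observed" populations. Population size, generation count, and tolerance are all illustrative.

```python
import numpy as np

rng = np.random.default_rng(1)

def simulate_song_types(n_birds, n_generations, mu, rng):
    """Neutral cultural-transmission model: each bird copies a random
    tutor's song type; with probability mu it innovates a new type."""
    songs = np.zeros(n_birds, dtype=int)
    next_type = 1
    for _ in range(n_generations):
        tutors = rng.integers(0, n_birds, size=n_birds)
        songs = songs[tutors]
        innovators = rng.random(n_birds) < mu
        k = int(innovators.sum())
        songs[innovators] = np.arange(next_type, next_type + k)
        next_type += k
    return songs

def summary(songs):
    # Summary statistic: number of distinct song types in the population.
    return len(np.unique(songs))

# An "observed" population generated with a known innovation rate,
# so we can check whether ABC recovers something sensible.
true_mu = 0.05
obs = summary(simulate_song_types(200, 50, true_mu, rng))

# Rejection ABC: draw mu from a uniform prior, simulate, and keep draws
# whose summary statistic lands close to the observed one.
accepted = []
for _ in range(3000):
    mu = rng.uniform(0.0, 0.3)
    sim = summary(simulate_song_types(200, 50, mu, rng))
    if abs(sim - obs) <= 2:
        accepted.append(mu)

posterior_mean = float(np.mean(accepted))
print(f"observed distinct song types: {obs}")
print(f"posterior mean innovation rate: {posterior_mean:.3f} (true: {true_mu})")
```

The real analyses would replace the discrete song-type labels with continuous syllable similarities from our algorithm, and would estimate richer parameters (imitation precision, tutor biases, content biases) rather than a single innovation rate.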

Planned Impact

We will generate a state-of-the-art method for comparing the similarity of bird songs, and a data-set for other researchers to use when developing their own methods. Our method will be incorporated into a song-analysis program (Luscinia) that will be readily usable by members of the research field. Research that will benefit from these methods has the following impacts:

(a) Biomonitoring. Bird song is often the best record we have of avian biodiversity - especially in tropical forests, where biodiversity is highest and visibility of birds very limited. Processing hours of song recordings manually is a difficult and skilled task, and interest has recently grown in computational methods that can automate it. Our project will add to this by developing the first method validated against avian perception itself. Both R-Co-I Stowell (developer of Warblr) and PI Lachlan (developer of Luscinia) have a proven track record of implementing computational bioacoustic techniques for a broader audience.

(b) Biodiversity. Song often provides one of the critical phenotypic cues needed to identify new species; in some cases, it is the only clear and unambiguous character. Using song features to distinguish taxa requires an accurate way to compare songs quantitatively, which we will create and make available to the field via the Luscinia software. The less sophisticated measures already implemented in Luscinia have already been used for this purpose, helping to identify the Gran Canarian Blue Chaffinch as a separate species from the Tenerife Blue Chaffinch - and in so doing, revealing it to be the rarest, and one of the most endangered, bird species in the E.U. Other labs are currently carrying out similar studies in Colombia and Tanzania, among other places.

(c) Bird song neuroscience. Bird song is an established model system for speech at the neurobiological and genomic level, and genes involved in bird song learning have been implicated in human disease. Research in this field requires accurate assessments of song structure and song similarity, which we will deliver. Through PI Lachlan's work on Luscinia, and co-PI Clayton's senior position in the bird song neurobiology field, we again have a clear plan for making our methods available to a broader field and advertising them.